<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Daria Kovachevich</title>
    <description>The latest articles on DEV Community by Daria Kovachevich (@kovachevich).</description>
    <link>https://dev.to/kovachevich</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3134061%2F34cfcf8b-c405-4b94-8ab2-eafd8bcba759.png</url>
      <title>DEV Community: Daria Kovachevich</title>
      <link>https://dev.to/kovachevich</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kovachevich"/>
    <language>en</language>
    <item>
      <title>How We Cut Our Azure Cloud Costs by 3x — Solda.Ai’s Experience</title>
      <dc:creator>Daria Kovachevich</dc:creator>
      <pubDate>Wed, 07 May 2025 15:32:33 +0000</pubDate>
      <link>https://dev.to/solda/how-we-cut-our-azure-cloud-costs-by-3x-soldaais-experience-2ao4</link>
      <guid>https://dev.to/solda/how-we-cut-our-azure-cloud-costs-by-3x-soldaais-experience-2ao4</guid>
      <description>&lt;p&gt;At Solda.Ai, we build voice AI agents that handle high-volume outbound sales calls. Our platform operates entirely in the cloud, with Kubernetes at the core of our production stack — from call processing to API handling and analytics. As our call volume grows, it’s essential for us to keep infrastructure costs as low as possible to maintain a sustainable and scalable business.&lt;/p&gt;

&lt;p&gt;I’m &lt;a href="https://dev.to/igor_yermakov"&gt;Igor&lt;/a&gt;, CTO at Solda.Ai. Together with &lt;a href="https://dev.to/dsamirov"&gt;Dmitrii&lt;/a&gt;, our Head of Development, I’ve spent the last few months optimizing our infrastructure for cost-efficiency and scalability. What follows is a breakdown of how we cut our Azure bill by more than 3x — while our traffic was actually growing.&lt;/p&gt;

&lt;p&gt;Our infrastructure handles hundreds of thousands of outbound calls each month. At one point, our monthly Azure bill was around &lt;strong&gt;€25,000&lt;/strong&gt;, which put serious pressure on our budget. In just &lt;strong&gt;a few months&lt;/strong&gt;, we brought it down to &lt;strong&gt;€8,000&lt;/strong&gt; — here’s exactly how we did it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xmzama7yzxgu3ifnb3x.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xmzama7yzxgu3ifnb3x.webp" alt="How We Cut Our Azure Cloud Costs by 3x — Solda.Ai’s Experience" width="800" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Step 0: Fixing requests and limits (~€ 850)&lt;/h2&gt;

&lt;p&gt;The very first thing we did — and it may sound boring — was align &lt;code&gt;resources.requests&lt;/code&gt; and &lt;code&gt;resources.limits&lt;/code&gt; in our Deployments. Without sensible values, the autoscaler keeps extra capacity around "just in case." Once we tuned the values for our API, call services, admin panel, and analytics, we immediately saw &lt;strong&gt;around 3.4% savings&lt;/strong&gt;: Kubernetes finally knew how much each pod actually needed.&lt;/p&gt;
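
&lt;p&gt;For illustration, this is the shape of the change in a Deployment spec — the numbers here are made-up examples, not our production values:&lt;/p&gt;

```yaml
# Deployment container fragment: explicit requests (what the scheduler
# reserves) and limits (the hard cap). Values are illustrative only.
resources:
  requests:
    cpu: "250m"
    memory: "256Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"
```

&lt;p&gt;The key is that requests reflect what the pod actually uses under normal load; requests set too high are what make the autoscaler hold idle capacity.&lt;/p&gt;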

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; Don’t even think about scaling until your requests and limits make sense.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Step 1: Enabling nodepool autoscaling (~€ 1,700)&lt;/h2&gt;

&lt;p&gt;Next, we turned on the &lt;strong&gt;Managed Kubernetes Cluster Autoscaler&lt;/strong&gt; for all nodepools. Previously, several VMs were always up, even during idle periods. Now, if there’s nothing to schedule, the autoscaler brings the node count down to zero. That alone gave us &lt;strong&gt;~6.8% savings&lt;/strong&gt;.&lt;/p&gt;
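
&lt;p&gt;On AKS this is a per-nodepool setting. A sketch of the CLI call — resource group, cluster, and pool names are placeholders, and scale-to-zero (&lt;code&gt;--min-count 0&lt;/code&gt;) applies to user node pools, not the system pool:&lt;/p&gt;

```shell
# Enable the cluster autoscaler on a user node pool and allow it
# to scale all the way to zero. Names are placeholders.
az aks nodepool update \
  --resource-group my-rg \
  --cluster-name my-cluster \
  --name calls \
  --enable-cluster-autoscaler \
  --min-count 0 \
  --max-count 10
```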

&lt;h2&gt;Step 2: Our Go‑based scale‑to‑zero operator (~€ 3,400)&lt;/h2&gt;

&lt;p&gt;We noticed that the Kubernetes HPA never scales a workload below 1 replica — even when there are no calls queued. We initially experimented with CPU-based metrics to drive scaling decisions, but found them too noisy and inconsistent for our use case. We shifted instead to a business-level metric — the length of the outbound call queue — which gave us a much more reliable and actionable signal. This let us scale down aggressively when the system was idle, without sacrificing responsiveness.&lt;/p&gt;

&lt;p&gt;So we wrote a &lt;strong&gt;custom operator&lt;/strong&gt; in Go. It watches the &lt;strong&gt;outbound call queue&lt;/strong&gt; and sets &lt;code&gt;replicas: 0&lt;/code&gt; when the queue is empty. Once pods drop to zero, Azure's autoscaler shuts down the node — &lt;strong&gt;saving us ~13.6%&lt;/strong&gt;. It took some time to iron out race conditions and logging, but it was totally worth it.&lt;/p&gt;
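
&lt;p&gt;The operator’s core decision is simple enough to sketch as a pure function. This is an illustrative Go sketch, not our production code — the function name and the &lt;code&gt;callsPerWorker&lt;/code&gt; and &lt;code&gt;maxReplicas&lt;/code&gt; parameters are made up for the example:&lt;/p&gt;

```go
package main

import "fmt"

// desiredReplicas sketches the operator's core rule: zero replicas when
// the outbound queue is empty, otherwise one worker per callsPerWorker
// queued calls (rounded up), capped at maxReplicas. Assumes queueLen
// is never negative.
func desiredReplicas(queueLen, callsPerWorker, maxReplicas int) int {
	if queueLen == 0 {
		// Empty queue: scale to zero so the cluster autoscaler can
		// release the underlying node.
		return 0
	}
	// Ceiling division: one worker per callsPerWorker queued calls.
	n := (queueLen + callsPerWorker - 1) / callsPerWorker
	if n > maxReplicas {
		return maxReplicas
	}
	return n
}

func main() {
	for _, q := range []int{0, 1, 75, 10000} {
		fmt.Printf("queue=%d -> replicas=%d\n", q, desiredReplicas(q, 50, 20))
	}
}
```

&lt;p&gt;In the real operator, a value like this is written to the Deployment’s scale subresource via the Kubernetes API; the queue polling and API wiring are omitted here.&lt;/p&gt;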

&lt;p&gt;If you’re looking to implement a similar pattern, it’s worth noting that tools like &lt;a href="https://keda.sh/" rel="noopener noreferrer"&gt;KEDA&lt;/a&gt; can help you scale workloads based on event sources such as queue length. In our case, we decided to write a custom operator instead — mainly to retain full control and avoid relying on KEDA availability or support in our specific cloud provider setup.&lt;/p&gt;
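
&lt;p&gt;For reference, a queue-driven KEDA setup looks roughly like this — the Deployment name, queue name, trigger type, and thresholds below are hypothetical, and the right trigger depends on where your queue lives:&lt;/p&gt;

```yaml
# Illustrative KEDA ScaledObject: scale a hypothetical call-worker
# Deployment on queue length, down to zero when the queue is empty.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: call-worker-scaler
spec:
  scaleTargetRef:
    name: call-worker        # the Deployment to scale
  minReplicaCount: 0         # KEDA handles the scale-to-zero case HPA cannot
  maxReplicaCount: 20
  triggers:
    - type: azure-servicebus # example trigger; pick the one matching your queue
      metadata:
        queueName: outbound-calls
        messageCount: "50"   # target queued messages per replica
      authenticationRef:
        name: servicebus-auth
```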

&lt;h2&gt;Step 3: Splitting workloads across 6 nodepools (~€ 850)&lt;/h2&gt;

&lt;p&gt;We avoided the “everything in one bucket” trap by splitting workloads across &lt;strong&gt;six dedicated nodepools&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;calls&lt;/code&gt; (voice bots, D-series)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;api&lt;/code&gt; (HTTP API, B-series)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;admin&lt;/code&gt; (admin panel, B-series)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;analysis&lt;/code&gt; (real-time analysis, E-series)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;classification&lt;/code&gt; (batch jobs)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;misc&lt;/code&gt; (auxiliary services)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This allowed for more accurate autoscaling, giving us another &lt;strong&gt;~3.4% savings&lt;/strong&gt;.&lt;/p&gt;
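
&lt;p&gt;Pinning a workload to its pool is a matter of a node selector (AKS labels each node with its pool name under &lt;code&gt;agentpool&lt;/code&gt;), optionally plus a taint/toleration pair to keep other workloads out. The taint key and value below are examples, not our actual configuration:&lt;/p&gt;

```yaml
# Pod spec fragment: pin a workload to the "calls" node pool.
spec:
  nodeSelector:
    agentpool: calls          # AKS sets this label per node pool
  tolerations:
    - key: "workload"         # example taint applied to the pool
      operator: "Equal"
      value: "calls"
      effect: "NoSchedule"
```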

&lt;h2&gt;Step 4: Spot VMs for non-critical jobs (~€ 10,200)&lt;/h2&gt;

&lt;p&gt;Our biggest win came from switching &lt;strong&gt;non-real-time workloads&lt;/strong&gt; — call classification, background analysis, and even parts of the API — to &lt;strong&gt;Spot VMs&lt;/strong&gt;. These are significantly cheaper but can be evicted at any time. We built in retry logic and increased API replicas so the system could continue even if Spot VMs got pulled. That alone brought &lt;strong&gt;~40.8% savings&lt;/strong&gt; on eligible workloads.&lt;/p&gt;
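
&lt;p&gt;On AKS, Spot node pools carry a dedicated taint, so eligible pods need a matching toleration; a preferred node affinity nudges them onto Spot capacity when it exists without making it a hard requirement. A sketch of the pod-spec fragment:&lt;/p&gt;

```yaml
# Pod spec fragment for Spot-eligible workloads on AKS. The taint key
# and value below are the ones AKS applies to Spot node pools.
tolerations:
  - key: "kubernetes.azure.com/scalesetpriority"
    operator: "Equal"
    value: "spot"
    effect: "NoSchedule"
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
            - key: "kubernetes.azure.com/scalesetpriority"
              operator: In
              values: ["spot"]
```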

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; Always check the &lt;strong&gt;pricing and availability&lt;/strong&gt; of Spot VMs in your region. Some aren’t much cheaper than standard VMs; others live only 5–10 minutes — not ideal if your job takes longer.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Summary: From €25,000 to €8,000 in a few months&lt;/h2&gt;

&lt;p&gt;Each step built on the previous one; the per-step figures below are expressed as a share of the original bill, so they add up to the overall reduction. &lt;strong&gt;Here’s a simplified view of where the savings came from:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Step 0: Fixing requests/limits — ~3.4%&lt;/li&gt;
&lt;li&gt;Step 1: Cluster autoscaling — ~6.8%&lt;/li&gt;
&lt;li&gt;Step 2: Scale-to-zero operator — ~13.6%&lt;/li&gt;
&lt;li&gt;Step 3: Splitting nodepools — ~3.4%&lt;/li&gt;
&lt;li&gt;Step 4: Spot VMs — ~40.8%&lt;/li&gt;
&lt;li&gt;Total estimated savings: ~68%&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What’s even more impressive is that &lt;a href="https://www.solda.ai/" rel="noopener noreferrer"&gt;Solda.Ai’s&lt;/a&gt; total outbound call volume grew during this time — we managed to cut costs despite handling more traffic.&lt;/p&gt;


&lt;p&gt;We brought our bill down from €25,000 to €8,000 without compromising SLA or system stability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The key:&lt;/strong&gt; monitor everything, scale smart, and automate aggressively.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use IaC (Terraform / ARM)&lt;/li&gt;
&lt;li&gt;Simulate Spot VM interruptions in staging&lt;/li&gt;
&lt;li&gt;Set up custom monitoring and alerting for idle infrastructure&lt;/li&gt;
&lt;/ul&gt;
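
&lt;p&gt;For the second bullet, Azure lets you trigger a Spot eviction on demand, which is handy for staging drills. Resource group and VM names below are placeholders; for scale-set-backed nodes (as in AKS) the &lt;code&gt;vmss&lt;/code&gt; variant applies:&lt;/p&gt;

```shell
# Force a Spot eviction on a standalone VM in staging (names are placeholders).
az vm simulate-eviction --resource-group staging-rg --name spot-vm-01

# For a VM scale set instance (the AKS case), target the instance id.
az vmss simulate-eviction --resource-group staging-rg --name spot-vmss --instance-id 0
```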

&lt;p&gt;Hope this helps your team too!&lt;/p&gt;

&lt;p&gt;✍️ Want to learn more about our custom Kubernetes operator or multi-zone setup? Drop us a comment!&lt;/p&gt;

</description>
      <category>azure</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>infrastructure</category>
    </item>
  </channel>
</rss>
