<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sid Redjini</title>
    <description>The latest articles on DEV Community by Sid Redjini (@sidredjini).</description>
    <link>https://dev.to/sidredjini</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1954031%2F5291dc2c-d90d-4263-9361-cf5a82bf1ef2.png</url>
      <title>DEV Community: Sid Redjini</title>
      <link>https://dev.to/sidredjini</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sidredjini"/>
    <language>en</language>
    <item>
      <title>A Practical FinOps Pipeline for Azure SQL: From Azure CLI to Power BI Without Log Analytics</title>
      <dc:creator>Sid Redjini</dc:creator>
      <pubDate>Thu, 24 Jul 2025 16:47:31 +0000</pubDate>
      <link>https://dev.to/sidredjini/a-practical-finops-pipeline-for-azure-sql-from-azure-cli-to-power-bi-without-log-analytics-41df</link>
      <guid>https://dev.to/sidredjini/a-practical-finops-pipeline-for-azure-sql-from-azure-cli-to-power-bi-without-log-analytics-41df</guid>
      <description>&lt;h1&gt;
  
  
  Azure SQL Metrics Export – A FinOps Pipeline Without Log Analytics
&lt;/h1&gt;

&lt;p&gt;As a FinOps consultant working with mid-sized and large organizations, I often land in complex environments where access to production systems, logs, or advanced monitoring tools is limited, especially for external contractors. Despite that, I’m expected to produce actionable insights fast.&lt;/p&gt;

&lt;p&gt;This article explains how I built a self-contained metrics pipeline, using only Azure CLI and Python, to collect, transform, and prepare Azure SQL performance data for analysis, without relying on premium monitoring features or costly third-party tools.&lt;/p&gt;

&lt;p&gt;This method is part of a broader effort to evaluate whether DTU-based databases could be migrated to more cost-effective vCore serverless models: a scenario many organizations face when scaling cloud usage.&lt;/p&gt;

&lt;p&gt;In this article, I also share my hands-on experience with performance rationalization projects, where these metrics have been used to justify migrating SQL databases from DTU pricing to the vCore serverless model. The specific choice of metrics and aggregation levels discussed throughout reflects real-world scenarios where cost optimization decisions are based on a blend of average usage patterns and performance peaks.&lt;/p&gt;

&lt;p&gt;This guide will show you how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Collect Azure SQL performance metrics via the Azure Monitor API using Bash scripts.&lt;/li&gt;
&lt;li&gt;Transform and structure this data using Python into a CSV aligned with Azure billing reports.&lt;/li&gt;
&lt;li&gt;Build a cost-aware monitoring strategy integrated with tools like Power BI, even without Log Analytics.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can find the full source code on GitHub:&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/sidmechant/azure-sql-metrics-finops" rel="noopener noreferrer"&gt;sidmechant/azure-sql-metrics-finops&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;1. Why This Approach&lt;/li&gt;
&lt;li&gt;2. Architecture Overview&lt;/li&gt;
&lt;li&gt;3. Bash Script: Exporting Metrics&lt;/li&gt;
&lt;li&gt;4. Sample Output (Raw JSON)&lt;/li&gt;
&lt;li&gt;5. Python Script: Transforming Metrics&lt;/li&gt;
&lt;li&gt;6. Practical Extensions&lt;/li&gt;
&lt;li&gt;7. Conclusion&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  1. Why This Approach
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Why These Metrics Matter in FinOps&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The selection of specific metrics in this guide is driven by real-world FinOps objectives:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CPU Percent&lt;/strong&gt; helps evaluate compute saturation and identify chronic underutilization or periodic spikes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DTU Consumption Percent&lt;/strong&gt; provides a normalized measure of the overall capacity consumption in DTU-based databases, useful when comparing across different service tiers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Storage Percent&lt;/strong&gt; tracks how close databases are to their storage limits, which can influence decisions on scaling or changing service tiers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Log Write Percent&lt;/strong&gt; and &lt;strong&gt;Workers Percent&lt;/strong&gt; are valuable for detecting backend bottlenecks and query concurrency issues, often overlooked in cost reports.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deadlock Count&lt;/strong&gt; serves as a performance health indicator, signaling contention issues that may require scaling up or query optimization.&lt;/p&gt;

&lt;p&gt;This carefully selected set of metrics supports actionable FinOps decisions such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Justifying a shift from DTU to vCore or serverless models based on consistent underutilization.&lt;/li&gt;
&lt;li&gt;Detecting high-peak workloads that may be better suited for scaling up or elasticity mechanisms like serverless compute.&lt;/li&gt;
&lt;li&gt;Aligning technical performance data with billing reports for transparent reporting to stakeholders.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;No additional costs&lt;/strong&gt;: It uses native Azure CLI APIs, avoiding Log Analytics and other premium services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-subscription support&lt;/strong&gt;: The script collects data across multiple Azure subscriptions in a single run.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Customizable output&lt;/strong&gt;: Flexible configuration for metrics selection, aggregation type, and data retention period.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integration-ready&lt;/strong&gt;: The data output format aligns with Azure billing reports, simplifying cross-analysis between usage and costs.&lt;/p&gt;

&lt;p&gt;This is not just a cost-avoidance strategy; it's also about autonomy. In large organizations, access requests can take days or weeks to be approved. By building my own collection pipeline through native CLI APIs, I removed dependency on internal platform teams while ensuring full reproducibility of the data used in decision-making.&lt;/p&gt;

&lt;p&gt;This approach is ideal for FinOps practitioners who want to monitor real usage patterns to guide cost optimization efforts.&lt;/p&gt;
&lt;h2&gt;
  
  
  2. Architecture Overview
&lt;/h2&gt;

&lt;p&gt;Here is the lightweight FinOps pipeline I used for this project:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02hcjj6t1o793e0zd7cn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02hcjj6t1o793e0zd7cn.png" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This structure allows for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Platform-agnostic data collection (Bash / Azure CLI)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reproducible transformation via Python&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Output that matches billing data structure&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Easy integration into tools like Power BI&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  3. Bash Script: Exporting Metrics
&lt;/h2&gt;

&lt;p&gt;View the Bash script on GitHub:&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/sidmechant/azure-sql-metrics-finops/blob/main/scripts/export_metrics.sh" rel="noopener noreferrer"&gt;&lt;code&gt;scripts/export_metrics.sh&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This script collects Azure SQL Database metrics across multiple subscriptions while handling API rate limits and excluding irrelevant databases (e.g., vCore or 'master'). It is designed to be flexible, adaptable, and efficient for large-scale environments.&lt;/p&gt;
&lt;h3&gt;
  
  
  Key Features
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Multi-subscription handling&lt;/strong&gt;: Automatically iterates through all active Azure subscriptions using the Azure CLI, ensuring visibility across environments (e.g., development, staging, production). This is crucial in FinOps to avoid isolated analysis limited to a single subscription.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Targeted database selection&lt;/strong&gt;: Filters out irrelevant databases like &lt;code&gt;master&lt;/code&gt; and optionally excludes databases running on vCore models. This prevents wasting API calls on databases where DTU-specific metrics are not applicable.&lt;/p&gt;
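&lt;p&gt;&lt;em&gt;A minimal Python sketch of that filtering step. The field names assume the JSON shape returned by &lt;code&gt;az sql db list&lt;/code&gt; (a list of objects with &lt;code&gt;name&lt;/code&gt; and &lt;code&gt;sku.tier&lt;/code&gt;); adapt it to your actual output:&lt;/em&gt;&lt;/p&gt;

```python
# Keep only DTU-model databases and drop the system database "master".
# The JSON shape below is an assumption based on `az sql db list` output.
import json

DTU_TIERS = {"Basic", "Standard", "Premium"}  # vCore tiers are excluded

def select_databases(az_output):
    dbs = json.loads(az_output)
    return [
        db["name"]
        for db in dbs
        if db["name"] != "master" and db["sku"]["tier"] in DTU_TIERS
    ]

sample = json.dumps([
    {"name": "master", "sku": {"tier": "Standard"}},
    {"name": "orders", "sku": {"tier": "Standard"}},
    {"name": "analytics", "sku": {"tier": "GeneralPurpose"}},  # vCore
])
print(select_databases(sample))  # ['orders']
```

&lt;p&gt;Filtering before calling Azure Monitor keeps the API call count proportional to the databases you actually care about.&lt;/p&gt;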

&lt;p&gt;&lt;strong&gt;Custom metric selection&lt;/strong&gt;: Allows users to define exactly which metrics to collect via the &lt;code&gt;METRICS&lt;/code&gt; variable. For my own analyses, I specifically select &lt;code&gt;cpu_percent&lt;/code&gt;, &lt;code&gt;dtu_consumption_percent&lt;/code&gt;, and &lt;code&gt;storage_percent&lt;/code&gt; because they provide a holistic view of compute and storage pressure. I also include &lt;code&gt;log_write_percent&lt;/code&gt;, &lt;code&gt;workers_percent&lt;/code&gt;, and &lt;code&gt;deadlock&lt;/code&gt; to assess query workload saturation and contention issues. This approach is tailored to justify decisions like shifting from DTU to vCore (especially serverless), based on actual workload patterns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Granularity flexibility&lt;/strong&gt;: Supports different time intervals via the &lt;code&gt;INTERVAL&lt;/code&gt; parameter. Common choices are &lt;code&gt;PT5M&lt;/code&gt; (5 minutes), &lt;code&gt;PT30M&lt;/code&gt; (30 minutes), &lt;code&gt;PT1H&lt;/code&gt; (1 hour), or &lt;code&gt;P1D&lt;/code&gt; (1 day). The choice of interval is driven by the expected variability of workload. For highly transactional databases, I recommend &lt;code&gt;PT5M&lt;/code&gt; for peak detection; for cost baseline reports, &lt;code&gt;PT1H&lt;/code&gt; or &lt;code&gt;P1D&lt;/code&gt; are sufficient.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Aggregation customization&lt;/strong&gt;: Enables retrieval of multiple aggregation types such as &lt;code&gt;Average&lt;/code&gt;, &lt;code&gt;Maximum&lt;/code&gt;, and &lt;code&gt;Minimum&lt;/code&gt; to provide a more comprehensive view of performance trends. This helps distinguish between chronic underutilization (average) and risk peaks (maximum). For example, databases averaging 25% CPU but peaking at 90% may require different scaling decisions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;API error handling and rate-limiting protection&lt;/strong&gt;: Incorporates checks to exclude databases with no available metrics and introduces pauses between API calls to respect Azure Monitor rate limits. For environments exceeding 500 databases, I recommend extending sleep intervals and optionally batching requests per resource group.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Core configuration example:&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--aggregation "Average" "Maximum" "Minimum"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is important to explicitly define aggregation types; otherwise, Azure defaults to Average only, potentially masking performance outliers.&lt;/p&gt;
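&lt;p&gt;&lt;em&gt;A tiny illustration of why Average alone can hide outliers (synthetic CPU samples, not real data):&lt;/em&gt;&lt;/p&gt;

```python
# Synthetic cpu_percent samples over a few intervals (illustrative only).
samples = [22, 25, 24, 90, 23, 26]

average = sum(samples) / len(samples)
peak = max(samples)

# The average looks healthy while the maximum tells a different scaling story.
print(f"Average={average:.1f}%  Maximum={peak}%")  # Average=35.0%  Maximum=90%
```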

&lt;p&gt;&lt;em&gt;Retention and date flexibility:&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;START_DATE="$(date -u -d '-7 days' +"%Y-%m-%dT%H:%M:%SZ")"
END_DATE="$(date -u +"%Y-%m-%dT%H:%M:%SZ")"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Dynamic date calculation enables rolling window reporting, useful for weekly optimization reviews.&lt;/p&gt;
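&lt;p&gt;&lt;em&gt;If you prefer to drive the export from Python (or run on macOS, where GNU &lt;code&gt;date -d&lt;/code&gt; is unavailable), the same rolling window can be computed like this:&lt;/em&gt;&lt;/p&gt;

```python
# Equivalent of the two GNU date calls above: a rolling UTC window
# formatted as the ISO-8601 timestamps Azure Monitor expects.
from datetime import datetime, timedelta, timezone

def rolling_window(days=7):
    """Return (start, end) UTC timestamps for a rolling window of `days` days."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=days)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return start.strftime(fmt), end.strftime(fmt)

start, end = rolling_window(7)
print(start, end)
```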

&lt;p&gt;This Bash script represents the first component of a lightweight monitoring pipeline, facilitating data-driven FinOps analysis without additional Azure monitoring costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Sample Output (Raw JSON)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff83m9l1yp5hwsx1qf5nh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff83m9l1yp5hwsx1qf5nh.png" alt=" " width="797" height="753"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Python Script: Transforming Metrics
&lt;/h2&gt;

&lt;p&gt;View the Python script on GitHub:&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/sidmechant/azure-sql-metrics-finops/blob/main/scripts/transform_metrics.py" rel="noopener noreferrer"&gt;&lt;code&gt;scripts/transform_metrics.py&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once metrics are collected, a Python script converts the raw JSON into a structured CSV, transforming scattered time series data into a Power BI-compatible format.&lt;/p&gt;
&lt;h3&gt;
  
  
  Key elements
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Parses resource identifiers into structured fields (subscription, server, database).&lt;/li&gt;
&lt;li&gt;Pivots metrics so each metric becomes a dedicated column.&lt;/li&gt;
&lt;li&gt;Aligns column names to match Azure billing exports.&lt;/li&gt;
&lt;/ul&gt;
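&lt;p&gt;&lt;em&gt;A sketch of the first of those steps: splitting an Azure resource ID into structured fields. The helper name is illustrative (the real logic lives in &lt;code&gt;transform_metrics.py&lt;/code&gt;); the ID format itself is the standard Azure SQL database resource path:&lt;/em&gt;&lt;/p&gt;

```python
# Azure resource IDs alternate segment names and values, e.g.
# /subscriptions/{sub}/resourceGroups/{rg}/providers/Microsoft.Sql/servers/{srv}/databases/{db}
def parse_resource_id(resource_id):
    parts = resource_id.strip("/").split("/")
    fields = dict(zip(parts[::2], parts[1::2]))  # pair names with values
    return {
        "subscription": fields["subscriptions"],
        "resource_group": fields["resourceGroups"],
        "server": fields["servers"],
        "database": fields["databases"],
    }

rid = ("/subscriptions/0000-1111/resourceGroups/rg-prod"
       "/providers/Microsoft.Sql/servers/sql-prod/databases/orders")
print(parse_resource_id(rid))
```

&lt;p&gt;These structured fields are what make the CSV joinable against billing exports, which identify resources the same way.&lt;/p&gt;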

&lt;p&gt;&lt;em&gt;Example transformation step:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnldm0dcxt1g1rtp4s1ay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnldm0dcxt1g1rtp4s1ay.png" alt="Example of pivoted output" width="712" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This pipeline produces a structured output easily consumable in Power BI for reporting and cost analysis.&lt;/p&gt;


&lt;h2&gt;
  
  
  6. Practical Extensions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Azure Automation&lt;/strong&gt;: Schedule recurring data collection weekly or monthly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Blob Storage Integration&lt;/strong&gt;: Archive historical performance data for trend analysis.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Power BI Reports&lt;/strong&gt;: Create dashboards aligning technical performance with financial costs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom Metrics&lt;/strong&gt;: Modify the Bash script to include specific metrics relevant to your workloads.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Example customization:&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;METRICS="cpu_percent,storage_percent,workers_percent"
INTERVAL="PT1H"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  7. Conclusion
&lt;/h2&gt;

&lt;p&gt;This article focused on a core building block of any effective FinOps practice: establishing independent, cost-aware access to usage data. Without reliable metrics — and without being tied to expensive tools or internal platform dependencies — optimization efforts often remain superficial.&lt;/p&gt;

&lt;p&gt;By using a lightweight combination of Azure CLI and Python, I was able to implement a solution that scales across subscriptions, respects enterprise constraints, and delivers visibility where it’s most needed — at the intersection of performance and cost.&lt;/p&gt;

&lt;p&gt;This approach is not just about collecting data; it's about regaining control. It allowed me to identify low-usage DTU-based databases, highlight peak performance risks, and prepare clear migration paths to vCore serverless — all backed by evidence.&lt;/p&gt;

&lt;p&gt;In the next article, I’ll show how this data connects to Azure billing exports, how I created actionable KPIs, and how it all comes together in Power BI to support FinOps decision-making with clarity and impact.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/rest/api/monitor/metrics/list" rel="noopener noreferrer"&gt;Azure Monitor Metrics API Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/cost-management-billing/" rel="noopener noreferrer"&gt;Azure Cost Management and Billing Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/sidmechant/azure-sql-metrics-finops" rel="noopener noreferrer"&gt;GitHub Repository – azure-sql-metrics-finops&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Author&lt;/strong&gt;: Sid Ahmed Redjini · FinOps Consultant&lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
      <category>architecture</category>
      <category>microsoft</category>
    </item>
    <item>
      <title>Thanks for the support. I will edit this with more details https://dev.to/sid_rdj_bc998504b31f86326/why-your-azure-sql-dtu-database-might-be-charging-you-for-more-than-24-hours-a-day-5ddg</title>
      <dc:creator>Sid Redjini</dc:creator>
      <pubDate>Wed, 16 Jul 2025 09:57:50 +0000</pubDate>
      <link>https://dev.to/sidredjini/thanks-for-the-support-i-will-edit-this-with-more-details-268p</link>
      <guid>https://dev.to/sidredjini/thanks-for-the-support-i-will-edit-this-with-more-details-268p</guid>
      <description></description>
      <category>azure</category>
      <category>sqlserver</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Why Your Azure SQL DTU Database Might Be Charging You for More Than 24 Hours a Day</title>
      <dc:creator>Sid Redjini</dc:creator>
      <pubDate>Wed, 09 Jul 2025 15:09:21 +0000</pubDate>
      <link>https://dev.to/sidredjini/why-your-azure-sql-dtu-database-might-be-charging-you-for-more-than-24-hours-a-day-5ddg</link>
      <guid>https://dev.to/sidredjini/why-your-azure-sql-dtu-database-might-be-charging-you-for-more-than-24-hours-a-day-5ddg</guid>
      <description>&lt;p&gt;In cloud cost optimization, assumptions can be expensive. Azure’s Database Transaction Unit (DTU) pricing model for SQL Databases is often considered a simple solution for provisioning compute, memory, and IOPS as a bundled service. Yet, behind its simplicity hides a subtle — and surprisingly costly — inefficiency.&lt;/p&gt;

&lt;p&gt;As a FinOps consultant analyzing production environments, I uncovered a pattern of overbilling in Azure SQL’s DTU model. Specifically, I observed multiple cases where a single database was charged for more than 24 hours in a single day. This article documents the root cause, demonstrates how to detect it, and recommends migration paths toward better cost control.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
1. Understanding the DTU Model
&lt;/li&gt;
&lt;li&gt;
2. The Anomaly: More Than 24 Hours Charged in a Day
&lt;/li&gt;
&lt;li&gt;
3. Why This Happens: Technical Breakdown
&lt;/li&gt;
&lt;li&gt;
4. Visualization in Power BI: Detecting the Drift
&lt;/li&gt;
&lt;li&gt;
5. Implications and Recommendations
&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  1. Understanding the DTU Model
&lt;/h1&gt;

&lt;p&gt;Azure DTUs combine CPU, RAM, and I/O into a single opaque metric. Customers select a tier (e.g., S0 to S12), each with a fixed DTU value. Billing is typically expressed as a cost per hour, for example:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;S7 (800 DTUs) = 1.3777 €/hour&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;S4 (200 DTUs) = 0.3445 €/hour&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;While Microsoft states that any database “active for a fraction of an hour is billed for the full hour,” in practice this behavior is more nuanced — especially when service tiers change during the same billing period.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A key nuance is this:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Azure does not apply the same hourly billing principle to tier changes as it does to resource creation or deletion.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When you create or delete a SQL Database, Azure applies hourly rounding — even a few minutes of activity are billed as one full hour. However, when a tier change occurs (e.g., from S4 to S7), the pricing engine converts usage to a base unit of 10 DTU per day rather than by hour. This means overlapping or sequential tier allocations within the same day can be computed independently and summed in a way that exceeds 24 hours, even though no restart or redeployment took place. This is a subtle but important divergence from standard VM billing logic.&lt;/p&gt;

&lt;h1&gt;
  
  
  2. The Anomaly: More Than 24 Hours Charged in a Day
&lt;/h1&gt;

&lt;p&gt;In one case study, a production SQL database (ResourceName = A418) was billed as follows on April 1st:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dbl6begpsfiy3esjppr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dbl6begpsfiy3esjppr.png" alt=" " width="800" height="286"&gt;&lt;/a&gt;&lt;br&gt;
 Despite a calendar day being 24 hours long, Azure computed a total of 26.02 billed hours. This discrepancy can accumulate unnoticed in environments that scale tiers dynamically.&lt;/p&gt;

&lt;p&gt;Interestingly, one might expect Azure to round each tier usage to the nearest full hour — similar to how VM creation or deletion incurs a one-hour charge — especially since running a VM for five minutes still incurs a full-hour cost. However, when we recompute the estimated hours billed from the real EffectiveCost and hourly prices without rounding, we match Azure's billed result precisely. This proves that Azure does not apply rounding to tier changes: it uses exact usage durations internally, then sums those durations for the day — which can lead to more than 24 hours billed.&lt;/p&gt;

&lt;p&gt;Below is the actual Power BI output for a single day of activity for the same resource:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8pd27hilm5vs4r0t1w3o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8pd27hilm5vs4r0t1w3o.png" alt=" " width="800" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This table confirms that billing logic does not round up to the next full hour, but calculates an exact fractional hour total per tier, which is then aggregated — often surpassing the daily limit of 24h.&lt;/p&gt;

&lt;h1&gt;
  
  
  3. Why This Happens: Technical Breakdown
&lt;/h1&gt;

&lt;p&gt;This overbilling stems from two structural behaviors in the DTU model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each service tier change generates a separate billing line. Even within the same day, Azure treats each tier period as an independent unit.&lt;/li&gt;
&lt;li&gt;Azure does not reconcile overlapping allocations. Instead, it applies a conversion ratio based on the 10 DTU/day unit, regardless of time overlap.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In many environments, these tier switches are triggered by internal scripts or autoscaling logic based on Azure Monitor metrics (e.g., spikes in &lt;code&gt;dtu_consumption_percent&lt;/code&gt;). This mechanism allows quick scaling between S4, S7, or S9 in the same day — but introduces unintended cost multiplication.&lt;/p&gt;

&lt;h1&gt;
  
  
  4. Visualization in Power BI: Detecting the Drift
&lt;/h1&gt;

&lt;p&gt;To quantify this anomaly, I created a custom DAX measure:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;EstimatedBilledHours = EffectiveCost / HourlyRatePerTier&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This measure converts the billed cost back into implied usage time, tier by tier. When aggregated per resource per day, it reveals total hours billed.&lt;/p&gt;
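&lt;p&gt;&lt;em&gt;The same back-calculation in Python, with illustrative rates and costs (not real billing data), showing how two independently billed tier segments can imply more than 24 hours in one day:&lt;/em&gt;&lt;/p&gt;

```python
# Reverse-engineer billed hours from cost lines, mirroring the DAX measure.
# Rates below are illustrative examples, not authoritative Azure prices.
TIER_HOURLY_RATE = {"S4": 0.3445, "S7": 1.3777}  # €/hour

def estimated_billed_hours(cost_lines):
    """Sum EffectiveCost / HourlyRatePerTier across one day's billing lines."""
    return sum(cost / TIER_HOURLY_RATE[tier] for tier, cost in cost_lines)

# A day with one tier change: each segment is billed for its exact duration,
# and the segments are summed independently.
day = [("S4", 0.3445 * 14.5), ("S7", 1.3777 * 11.5)]
hours = estimated_billed_hours(day)
print(round(hours, 2))  # 26.0 — more than a calendar day
```

&lt;p&gt;Flagging any resource-day where this sum exceeds 24 is a simple, defensible KPI for spotting the drift.&lt;/p&gt;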

&lt;p&gt;This method reveals clear overbilling patterns, especially when correlated with metrics like &lt;code&gt;dtu_consumption_percent&lt;/code&gt; or &lt;code&gt;cpu_percent&lt;/code&gt;, which often show 0% activity for multiple hours.&lt;/p&gt;

&lt;h1&gt;
  
  
  5. Implications and Recommendations
&lt;/h1&gt;

&lt;p&gt;This behavior can result in significant waste:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tier changes generate unoptimized billing granularity.&lt;/li&gt;
&lt;li&gt;Databases with spiky workloads are penalized with high DTU tiers.&lt;/li&gt;
&lt;li&gt;Usage at 0% DTU/CPU is still charged at full rate.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Recommended solutions (with caution):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Switch to the vCore model, which bills per second and offers greater transparency — but beware of storage costs. Unlike the DTU model (where storage is included), vCore charges separately for data storage (~0.11 €/GB/month) and backup storage (~0.06 €/GB/month).&lt;/li&gt;
&lt;li&gt;Use serverless where appropriate for automatic pausing.&lt;/li&gt;
&lt;li&gt;Create KPIs in Power BI: BilledHours &amp;gt; 24, DTU usage = 0%, EffectiveCost spikes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This tradeoff must be considered carefully, especially when migrating small DTU workloads (like S0 or S1), where the cost of vCore + storage might exceed the current flat-rate DTU cost.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;While Azure’s DTU model appears predictable, it carries hidden complexity that can lead to silent overbilling. By monitoring EffectiveCost / HourlyRate, tracking tier changes, and visualizing billed time per day, we uncovered a FinOps opportunity to optimize cost — without sacrificing performance.&lt;/p&gt;

&lt;p&gt;If you rely on the DTU model in production, it may be time to ask: &lt;strong&gt;How many hours is your database actually billing you each day?&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Sid Ahmed Redjini&lt;/strong&gt;, FinOps Consultant at &lt;strong&gt;FinOps &amp;amp; Co&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Specializing in Azure cost optimization, Power BI analysis, and FinOps automation.&lt;/p&gt;

&lt;p&gt;Let's connect: &lt;a href="https://fr.linkedin.com/in/sid-ahmed-redjini-349001303" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; | &lt;a href="https://github.com/sidmechant" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>finops</category>
      <category>devops</category>
      <category>cloudpractitioner</category>
    </item>
  </channel>
</rss>
