In cloud cost optimization, assumptions can be expensive. Azure’s Database Transaction Unit (DTU) pricing model for SQL Databases is often considered a simple solution for provisioning compute, memory, and IOPS as a bundled service. Yet, behind its simplicity hides a subtle — and surprisingly costly — inefficiency.
As a FinOps consultant analyzing production environments, I uncovered a pattern of overbilling in Azure SQL’s DTU model. Specifically, I observed multiple cases where a single database was charged for more than 24 hours in a single day. This article documents the root cause, demonstrates how to detect it, and recommends migration paths toward better cost control.
Table of Contents
- 1. Understanding the DTU Model
- 2. The Anomaly: More Than 24 Hours Charged in a Day
- 3. Why This Happens: Technical Breakdown
- 4. Visualization in Power BI: Detecting the Drift
- 5. Implications and Recommendations
- Conclusion
1. Understanding the DTU Model
Azure DTUs combine CPU, RAM, and I/O into a single opaque metric. Customers select a tier (e.g., S0 to S12), each with a fixed DTU value. Billing is typically expressed as a cost per hour, for example:
S7 (800 DTUs) = 1.3777 €/hour
S4 (200 DTUs) = 0.3445 €/hour
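To make these rates concrete, here is a minimal Python sketch that turns an hourly rate into a daily cost, using only the two example prices above (rates are illustrative and vary by region):

```python
# Hourly rates from the tier examples above (EUR/hour)
HOURLY_RATE = {"S7": 1.3777, "S4": 0.3445}

def daily_cost(tier: str, hours: float) -> float:
    """Cost of running a given tier for a number of hours in one day."""
    return round(HOURLY_RATE[tier] * hours, 4)

print(daily_cost("S4", 24))  # 8.268
print(daily_cost("S7", 24))  # 33.0648
```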
While Microsoft states that any database “active for a fraction of an hour is billed for the full hour,” in practice this behavior is more nuanced — especially when service tiers change during the same billing period.
A key nuance is this:
Azure does not apply the same hourly billing principle to tier changes as it does to resource creation or deletion.
When you create or delete a SQL Database, Azure applies hourly rounding — even a few minutes of activity are billed as one full hour. However, when a tier change occurs (e.g., from S4 to S7), the pricing engine converts usage to a base unit of 10 DTU per day rather than by hour. This means overlapping or sequential tier allocations within the same day can be computed independently and summed in a way that exceeds 24 hours, even though no restart or redeployment took place. This is a subtle but important divergence from standard VM billing logic.
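Under this logic, each tier period within a day is metered on its own and the durations are simply added. A small Python sketch illustrates the effect; the split between the two tiers is hypothetical, chosen to reproduce the 26.02 h total discussed below:

```python
# Hypothetical tier periods for one calendar day (tier, hours).
# Each tier change produces its own billing line, and the lines are
# metered independently and then summed for the day, with no
# reconciliation of overlaps.
periods = [("S4", 14.52), ("S7", 11.50)]

total_billed_hours = sum(hours for _, hours in periods)
print(round(total_billed_hours, 2))  # 26.02 -- more than the 24 hours in a day
```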
2. The Anomaly: More Than 24 Hours Charged in a Day
In one case study, a production SQL database (ResourceName = A418) was billed as follows on April 1st:
Despite a calendar day being 24 hours long, Azure computed a total of 26.02 billed hours. This discrepancy can accumulate unnoticed in environments that scale tiers dynamically.
Interestingly, one might expect Azure to round each tier's usage up to the nearest full hour, just as VM creation or deletion incurs a full-hour charge even for five minutes of activity. However, when we recompute the estimated billed hours from the real EffectiveCost and the hourly prices, without any rounding, we match Azure's billed result precisely. This shows that Azure applies no rounding to tier changes: it meters the exact usage duration of each tier period internally, then sums those durations for the day, which can exceed 24 billed hours.
Below is the actual Power BI output for a single day of activity for the same resource:
This table confirms that the billing logic does not round up to the next full hour; it calculates an exact fractional-hour total per tier and then aggregates them, often surpassing the 24 hours available in a day.
3. Why This Happens: Technical Breakdown
This overbilling stems from two structural behaviors in the DTU model:
- Each service tier change generates a separate billing line. Even within the same day, Azure treats each tier period as an independent unit.
- Azure does not reconcile overlapping allocations. Instead, it applies a conversion ratio based on the 10 DTU/day unit, regardless of time overlap.
In many environments, these tier switches are triggered by internal scripts or autoscaling logic based on Azure Monitor metrics (e.g., spikes in dtu_consumption_percent). This mechanism allows quick scaling between S4, S7, or S9 in the same day — but introduces unintended cost multiplication.
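Such autoscaling logic is often just a threshold rule. The sketch below is hypothetical (the tier ladder, thresholds, and function name are illustrative, not an Azure API), but it shows how a script reacting to dtu_consumption_percent can produce several tier switches, and therefore several billing lines, in a single day:

```python
# Hypothetical threshold-based autoscale rule of the kind described above.
def next_tier(current: str, dtu_consumption_percent: float) -> str:
    ladder = ["S4", "S7", "S9"]
    i = ladder.index(current)
    if dtu_consumption_percent > 90 and i < len(ladder) - 1:
        return ladder[i + 1]  # scale up on a spike
    if dtu_consumption_percent < 20 and i > 0:
        return ladder[i - 1]  # scale down; each switch creates a new billing line
    return current

print(next_tier("S4", 95))  # S7
print(next_tier("S7", 10))  # S4
```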
4. Visualization in Power BI: Detecting the Drift
To quantify this anomaly, I created a custom DAX measure:
```dax
EstimatedBilledHours = EffectiveCost / HourlyRatePerTier
```
This measure converts the billed cost back into implied usage time, tier by tier. When aggregated per resource per day, it reveals total hours billed.
This method reveals clear overbilling patterns, especially when correlated with metrics like dtu_consumption_percent or cpu_percent, which often show 0% activity for multiple hours.
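The same check can be reproduced outside Power BI. Below is a minimal Python sketch of the aggregation; the input rows are illustrative values chosen to match the 26.02 h example, not real billing data:

```python
from collections import defaultdict

# Hypothetical billing export rows: (resource, date, tier, effective_cost)
rows = [
    ("A418", "2025-04-01", "S4", 5.0022),
    ("A418", "2025-04-01", "S7", 15.8436),
]
hourly_rate = {"S4": 0.3445, "S7": 1.3777}  # EUR/hour, example rates

# EstimatedBilledHours = EffectiveCost / HourlyRatePerTier, summed per resource/day
billed_hours = defaultdict(float)
for resource, date, tier, cost in rows:
    billed_hours[(resource, date)] += cost / hourly_rate[tier]

for key, hours in billed_hours.items():
    flag = "OVERBILLED" if hours > 24 else "ok"
    print(key, round(hours, 2), flag)
```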
5. Implications and Recommendations
This behavior can result in significant waste:
- Tier changes generate unoptimized billing granularity
- Databases with spiky workloads are penalized with high DTU tiers
- Usage at 0% DTU/CPU is still charged at full rate
Recommended solutions (with caution):
- Switch to the vCore model, which bills per second and offers greater transparency, but beware of storage costs. Unlike the DTU model (where storage is included), vCore charges separately for:
  - Data storage (~0.11 €/GB/month)
  - Backup storage (~0.06 €/GB/month)
- Use Serverless where appropriate for automatic pausing
- Create KPIs in Power BI: BilledHours > 24, DTU usage = 0%, EffectiveCost spikes
This tradeoff must be considered carefully, especially when migrating small DTU workloads (like S0 or S1), where the cost of vCore + storage might exceed the current flat-rate DTU cost.
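A rough break-even sketch can make that tradeoff explicit. The storage rates come from the list above; the compute rates below are placeholders to be replaced with your region's actual prices:

```python
# Storage rates from the article (EUR/GB/month); compute rates are
# hypothetical placeholders for illustration only.
DATA_STORAGE_EUR_GB_MONTH = 0.11
BACKUP_STORAGE_EUR_GB_MONTH = 0.06
VCORE_EUR_HOUR = 0.12   # placeholder; check the Azure pricing page
S0_EUR_HOUR = 0.02      # placeholder flat DTU rate for a small tier
HOURS_PER_MONTH = 730

def monthly_vcore_cost(data_gb: float, backup_gb: float) -> float:
    """Estimated monthly vCore bill: always-on compute plus separate storage."""
    compute = VCORE_EUR_HOUR * HOURS_PER_MONTH
    storage = (data_gb * DATA_STORAGE_EUR_GB_MONTH
               + backup_gb * BACKUP_STORAGE_EUR_GB_MONTH)
    return compute + storage

# For a small workload, vCore + storage can cost more than the flat DTU rate
print(monthly_vcore_cost(10, 10) > S0_EUR_HOUR * HOURS_PER_MONTH)  # True
```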
Conclusion
While Azure’s DTU model appears predictable, it carries hidden complexity that can lead to silent overbilling. By monitoring EffectiveCost / HourlyRate, tracking tier changes, and visualizing billed time per day, we uncovered a FinOps opportunity to optimize cost — without sacrificing performance.
If you rely on the DTU model in production, it may be time to ask: How many hours is your database actually billing you each day?
Sid Ahmed Redjini, FinOps Consultant at FinOps & Co
Specializing in Azure cost optimization, Power BI analysis, and FinOps automation.