DEV Community

Evgenii Konkin

The Engineering Math Behind Battery Runtime: Decoding the DC Load Duration Formula

Did you know that a 100Ah battery rated for 80% depth of discharge doesn't actually give you 80Ah of usable capacity? Once a typical 80% system efficiency is factored in, the real deliverable capacity drops to 64Ah, a further 20% reduction that could mean the difference between your backup system lasting through a power outage and failing prematurely.

The Formula

The battery runtime formula `batteryLife = batteryCapacity × (depthOfDischarge / 100) × (efficiency / 100) / loadCurrent` might look simple, but each term represents a critical physical constraint in energy storage systems. Battery capacity in ampere-hours (Ah) quantifies the total charge a battery can theoretically deliver; it's the fundamental energy reservoir. The depth of discharge (DoD) term converts the percentage limit to a decimal fraction, representing how much of that total capacity we're allowed to use before damaging the battery. This isn't just a safety factor; it's a material science constraint based on battery chemistry degradation.

The efficiency term accounts for energy losses that occur between the battery terminals and the actual load. This includes inverter losses (if converting to AC), wiring resistance, charge controller inefficiencies, and self-discharge. The division by load current transforms the energy quantity (Ah) into a time duration (hours) through the fundamental relationship time = charge / current. What makes this formula particularly elegant is how it separates the battery's inherent characteristics (capacity, DoD) from the system implementation factors (efficiency) and the external demand (load current).

In code terms, this formula translates to a straightforward calculation, but understanding why each variable exists is crucial for proper implementation. The DoD conversion from percentage to decimal ensures dimensional consistency, while the efficiency adjustment reflects real-world physics where energy transformations are never 100% perfect. The linear relationship with capacity and inverse relationship with current follows directly from charge conservation principles—more stored charge means longer runtime, while higher current drains that charge faster.
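The paragraph above maps directly into code. Here's a minimal Python sketch of the formula (the function name and signature are my own, not from any particular library):

```python
def battery_runtime_hours(capacity_ah: float,
                          dod_percent: float,
                          efficiency_percent: float,
                          load_current_a: float) -> float:
    """Estimate runtime in hours for a constant DC load.

    capacity_ah        -- nameplate capacity in ampere-hours
    dod_percent        -- allowed depth of discharge, 0-100
    efficiency_percent -- end-to-end system efficiency, 0-100
    load_current_a     -- average load current in amperes
    """
    if load_current_a <= 0:
        raise ValueError("load current must be positive")
    # Percentage terms become decimal fractions for dimensional consistency
    usable_ah = capacity_ah * (dod_percent / 100) * (efficiency_percent / 100)
    # time = charge / current
    return usable_ah / load_current_a

# The opening example: 100Ah, 80% DoD, 80% efficiency, 1A load
print(battery_runtime_hours(100, 80, 80, 1))  # 64.0 hours
```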

Worked Example 1

Let's calculate the runtime for a residential solar backup system. We have a battery bank with 400Ah capacity (typical for a medium-sized home system), powering a critical load drawing 8A (representing refrigeration, lighting, and communication equipment). The battery manufacturer specifies a 70% maximum depth of discharge to ensure longevity, and our system efficiency is measured at 85% accounting for inverter and wiring losses.

First, we convert percentages to decimals: DoD = 70/100 = 0.7, Efficiency = 85/100 = 0.85

Now apply the formula: Runtime = 400 × 0.7 × 0.85 / 8

Calculate stepwise: 400 × 0.7 = 280Ah (usable capacity before efficiency)
280 × 0.85 = 238Ah (effective capacity after losses)
238 / 8 = 29.75 hours

This 29.75-hour runtime would be interpreted by the Result Intelligence System as LONG, indicating sufficient duration for most overnight or weekend power outages. The calculation reveals that despite having 400Ah of nameplate capacity, only 238Ah actually reaches the load—a 40.5% reduction from the theoretical maximum.
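The same arithmetic can be checked step by step in a few lines of Python (variable names are illustrative):

```python
# Residential backup example, step by step
capacity_ah = 400
dod = 0.70         # 70% depth of discharge as a fraction
efficiency = 0.85  # 85% system efficiency as a fraction
load_a = 8

usable_ah = capacity_ah * dod          # 280 Ah usable before losses
effective_ah = usable_ah * efficiency  # 238 Ah actually reaching the load
runtime_h = effective_ah / load_a      # 29.75 hours

# Fraction of nameplate capacity lost to the DoD limit plus inefficiency
reduction = 1 - effective_ah / capacity_ah  # ~0.405, i.e. 40.5%
print(runtime_h, reduction)
```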

Worked Example 2

Consider an industrial IoT sensor node with a much smaller battery. We have a 12Ah lithium battery powering a sensor that draws 0.15A during active transmission. The battery chemistry allows 90% depth of discharge, but the power management circuit brings overall efficiency down to 75% due to voltage regulation and sleep mode overhead.

Convert: DoD = 90/100 = 0.9, Efficiency = 75/100 = 0.75

Runtime = 12 × 0.9 × 0.75 / 0.15

Stepwise: 12 × 0.9 = 10.8Ah
10.8 × 0.75 = 8.1Ah
8.1 / 0.15 = 54 hours

This 54-hour runtime falls into the MODERATE category. Notice how the low current (0.15A) dramatically extends runtime despite the modest battery capacity and significant efficiency losses. This example illustrates why low-power design is so critical in battery-operated devices: because runtime is inversely proportional to current, halving the current doubles the runtime.
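A quick sketch makes the inverse relationship concrete; `runtime_h` here is just the article's formula with DoD and efficiency already expressed as fractions:

```python
def runtime_h(capacity_ah, dod, eff, load_a):
    # runtime = capacity × DoD × efficiency / current
    return capacity_ah * dod * eff / load_a

base = runtime_h(12, 0.90, 0.75, 0.15)   # ~54 hours, as computed above
half = runtime_h(12, 0.90, 0.75, 0.075)  # half the current: ~108 hours
print(base, half)
```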

What Engineers Often Miss

First, engineers frequently overlook temperature effects on both capacity and efficiency. Battery capacity typically decreases by about 1% per degree Celsius below 25°C, while efficiency can drop significantly in cold conditions due to increased internal resistance. The formula assumes room temperature operation—for outdoor or extreme environment applications, you need to derate both capacity and efficiency values.
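As a rough illustration of that derating, here's a sketch assuming the ~1% per °C rule of thumb mentioned above (the linear model and function name are my own simplification; real derating curves come from the battery datasheet):

```python
def derated_capacity_ah(capacity_ah: float, temp_c: float) -> float:
    """Cold-weather capacity derating: roughly 1% per °C below 25°C.

    A first-order rule of thumb only; consult the datasheet for the
    actual temperature/capacity curve of your chemistry.
    """
    if temp_c >= 25:
        return capacity_ah
    derate = max(0.0, 1 - 0.01 * (25 - temp_c))
    return capacity_ah * derate

print(derated_capacity_ah(100, 5))  # ~80 Ah at 5°C
```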

Second, the assumption of constant current is often violated in real systems. Many loads have pulsed or variable current profiles. While the formula uses average current, peak currents can cause voltage sag that triggers low-voltage cutoffs prematurely, effectively reducing usable capacity. Always check if your average current calculation truly represents the load profile, especially for motors or devices with high inrush currents.
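A duty-cycled load shows why the averaging step matters. This hypothetical sensor profile (the numbers are invented for illustration) weights each current by its share of the cycle:

```python
# Hypothetical duty-cycled sensor: 120 mA active for 2 s out of every
# 60 s period, 2 mA asleep for the remaining 58 s.
active_a, active_s = 0.120, 2
sleep_a, sleep_s = 0.002, 58
period_s = active_s + sleep_s

# Time-weighted average current for use in the runtime formula
avg_a = (active_a * active_s + sleep_a * sleep_s) / period_s
print(avg_a)  # ~0.0059 A, yet peaks are still 120 mA: check voltage sag
```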

Third, engineers sometimes forget that depth of discharge isn't just a number—it's a trade-off between runtime and battery lifespan. A 100% DoD might give you maximum runtime initially, but it could reduce battery cycle life by 80% compared to 50% DoD. The formula doesn't capture this long-term cost, so you need to consider both immediate runtime needs and total cost of ownership when selecting your DoD value.

Try the Calculator

While manual calculations help build intuition, practical engineering requires quick, accurate screening. The Battery Life Calculator automates these calculations while ensuring proper unit handling and percentage conversions. It's particularly useful for comparing multiple design alternatives or performing sensitivity analysis on different parameters. Try it with your own numbers at Battery Life Calculator.
