If you've ever tried to build a cellular IoT tracker that lasts more than a year in the field, you know the power budget is the whole problem. Getting a location fix is the easy part. Keeping a modem attached to the network without draining the battery in six weeks is where hardware and firmware engineers earn their pay.
I've been watching this category since 2G GPRS was the default. Here's what's actually changed in the last few years that makes 5-8 year field life realistic on a 24,000 mAh primary cell — not as a marketing number, but as a number you can defend in a design review.
The power budget that used to fail
A typical 2015-era GPRS tracker power profile looked roughly like this:
- Idle listening (paging): ~5 mA continuous
- GPS fix acquisition: ~40 mA for 30-60 s
- GPRS transmission: ~250 mA peak, ~100 mA average for 10-30 s
- Deep sleep (modem off): ~50 µA
The problem is the first line. If your tracker is "idle but reachable" — meaning the modem is registered on a 2G network and listening for pages — you burn roughly 5 mA continuously. On a 6000 mAh battery, that's 1200 hours or about 50 days of idle alone, before you've sent a single message or taken a single GPS fix.
The workaround was brutal: keep the modem completely off between scheduled wake events, then fully re-attach every time you wanted to report. Attachment itself costs power (the handshake can run 5-15 seconds at 150+ mA), and — worse — if the modem can't find network immediately, it'll burn itself into the ground retrying.
What Release 13+ actually changed
The features themselves span 3GPP Releases 12 and 13: PSM (Power Saving Mode) arrived in Release 12, and Release 13, which defined LTE Cat-M1 and NB-IoT (refined further in Release 14), added eDRX (extended Discontinuous Reception). Together they changed the math.
PSM is the headline. A device tells the network "I'm going to sleep. Hold my context until my periodic TAU timer (T3412) expires." The modem drops to something close to power-off (modern modules spec 3-15 µA in PSM state) but the network retains the device's registration. When the device wakes, on the T3412 schedule or because the application has data to push, it doesn't re-attach. It transmits, stays reachable only for the short T3324 active time, and drops back into sleep.
The concrete number that matters: modern LTE Cat-1bis and Cat-M1 modules sit at 3-15 µA in PSM, versus 5 mA in legacy "idle listening" on 2G. That's a 300-1600x reduction in the dominant power state.
Here's what that does to the budget, assuming a once-per-day reporting cadence on a 24,000 mAh Li-MnO2 cell:
- Time in PSM (per day): ~86,390 s
- PSM current: ~10 µA
- PSM energy (Ah/day): 10e-6 * 86,390 / 3600 = 0.00024 Ah
- Wake + GPS + TX (per day): ~10 s active
- Average active current: ~100 mA
- Active energy (Ah/day): 0.1 * 10 / 3600 = 0.00028 Ah
- Total daily: ~0.00052 Ah (0.52 mAh)
- 24,000 mAh / 0.52 mAh per day = ~46,000 days, a best-case theoretical ceiling
In practice you derate for self-discharge (~1% per year on good Li-MnO2), temperature variance, occasional emergency-mode activations, and module quirks. Real-world deployments land in the 5-8 year range. The theoretical ceiling is much higher, but the honest number is the one you'd put in a contract.
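To keep that arithmetic defensible in a design review, it helps to have it in code rather than in a spreadsheet cell. A minimal sketch in C that just reproduces the numbers above; every input is a placeholder to be replaced with currents you've actually measured on your hardware:

```c
#include <stdio.h>

/* Reproduces the back-of-envelope math above. All inputs are assumptions;
 * swap in values measured on your own board. */
static double daily_mah(double psm_ua, double active_ma, double active_s)
{
    double psm_mah    = (psm_ua / 1000.0) * (86400.0 - active_s) / 3600.0;
    double active_mah = active_ma * active_s / 3600.0;
    return psm_mah + active_mah;
}

int main(void)
{
    double per_day = daily_mah(10.0, 100.0, 10.0);   /* once-a-day heartbeat */
    double days    = 24000.0 / per_day;              /* 24 Ah primary cell */

    /* Prints ~0.52 mAh/day and a ~46,000-day theoretical ceiling; the
     * contract number lands far lower once you derate as described above. */
    printf("%.2f mAh/day -> %.0f days theoretical\n", per_day, days);
    return 0;
}
```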
The lithium chemistry disclaimer
Battery datasheets are where multi-year claims go to die. Two chemistries dominate in this space, and they behave differently:
| Chemistry | Nominal V | Energy Density | Self-Discharge | Temp Behavior |
|---|---|---|---|---|
| Li-MnO2 (CR series) | 3.0V | ~270 Wh/kg | ~1%/yr | Decent cold behavior |
| Li-SOCl2 | 3.6V | ~500 Wh/kg | <1%/yr | Excellent cold, passivation risk |
Li-SOCl2 has roughly double the energy density and is the chemistry you see in water meters, gas meters, and industrial sensors rated for 10-20 years. The catch is passivation: after long idle periods, Li-SOCl2 develops an internal resistance layer that can drop voltage under sudden load — exactly what happens when your modem needs 150 mA for a TX burst. You need a hybrid approach (Li-SOCl2 + hybrid layer capacitor) or a depassivation pulse before transmissions.
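If you do commit to Li-SOCl2 without the hybrid capacitor, the firmware-side mitigation is the depassivation pulse: pull a light load ahead of the TX burst, confirm the voltage recovers under that load, and only then fire the modem. A rough sketch, with the hardware hooks (load switch, ADC read, delay) as hypothetical stand-ins for whatever your board support package exposes, and the threshold and timeout as values to tune against the cell datasheet:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical board-support hooks; replace with your own HAL calls. */
extern void     load_switch_set(bool on);  /* enables a small resistive load */
extern uint16_t battery_mv(void);          /* ADC read of cell voltage */
extern void     delay_ms(uint32_t ms);

/* Burn through the passivation layer before the real 150 mA TX burst. */
static bool depassivate(uint16_t min_mv, uint32_t max_ms)
{
    uint32_t elapsed = 0;
    load_switch_set(true);
    while (elapsed < max_ms) {
        if (battery_mv() >= min_mv) {
            load_switch_set(false);
            return true;            /* voltage holds under load: safe to TX */
        }
        delay_ms(100);
        elapsed += 100;
    }
    load_switch_set(false);
    return false;                   /* still sagging: defer the transmission */
}
```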
Li-MnO2 doesn't have passivation issues but has lower energy density. For pallet-class trackers with moderate duty cycles, Li-MnO2 at 24,000 mAh is a clean fit and the chemistry I'd default to unless you're chasing every last month of field life.
Modem-side gotchas
PSM works on paper. In the field, three things bite.
1. Carrier-side PSM timer negotiation. When your modem requests T3412 = 86400s (24h), the carrier might grant you 10800s (3h). Some carriers don't honor long PSM timers at all. You have to read the actual granted values back in the +CGREG or +CEREG response — don't assume your requested timer is what you got.
```
# Pseudo AT sequence for LTE Cat-M1 / Cat-1bis PSM entry
AT+CPSMS=1,,,"00111000","00000000"   # Request T3412=24h, T3324=0s
AT+CEREG=5                           # Unsolicited registration URCs, including granted PSM timers
# Read the actual granted T3324/T3412 back from the +CEREG URC
AT+COPS?                             # Confirm registration state
# Trigger TX
AT+QIOPEN=...                        # Open socket, send, close (module-specific command set)
# Modem auto-enters PSM when T3324 expires
```
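The granted timers come back in the same 8-bit encoding the request uses (three unit bits, five value bits, per 3GPP TS 24.008). A small decoder for the quoted bit-strings in the +CEREG URC; the URC parsing itself is left out, and the unit tables are my reading of the spec, not anything module-specific:

```c
#include <stdint.h>
#include <stdlib.h>

/* Decode an 8-character bit-string such as "00111000" into seconds.
 * Returns -1 if the timer is deactivated. */
static long decode_t3412(const char *bits)   /* GPRS Timer 3: periodic TAU */
{
    uint8_t v = (uint8_t)strtoul(bits, NULL, 2);
    uint8_t unit = v >> 5, value = v & 0x1F;
    static const long mult[] = {600, 3600, 36000, 2, 30, 60, 1152000, -1};
    return (unit == 7) ? -1 : (long)value * mult[unit];
}

static long decode_t3324(const char *bits)   /* GPRS Timer 2: active time */
{
    uint8_t v = (uint8_t)strtoul(bits, NULL, 2);
    uint8_t unit = v >> 5, value = v & 0x1F;
    static const long mult[] = {2, 60, 360, 60, 60, 60, 60, -1};
    return (unit == 7) ? -1 : (long)value * mult[unit];
}
```

If decode_t3412() says three hours when you asked for twenty-four, that's the carrier telling you how it really treats PSM. Log it.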
2. eDRX vs PSM mode confusion. eDRX keeps the modem reachable for downlink; PSM makes it unreachable until the next wake. For a tracker that only needs to push data uplink (most pallet use cases), PSM is what you want. If you need server-initiated commands (remote emergency-mode switch, for example), you need eDRX — and you pay for it in power.
3. Network re-attach cost. If PSM is not actually honored and the modem drops context, every wake becomes a full attach sequence. 5-10 seconds at 100+ mA, plus the RRC connection setup. Do this every 15 minutes for a year and your "8-year" tracker is dead in 8 months. Field telemetry on actual PSM effectiveness is worth more than any datasheet claim.
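The cheapest way to get that telemetry is to count how each wake actually resolved and ship the counters alongside the regular payload. A sketch of the sort of counters I mean; the names are mine, not from any particular modem SDK:

```c
#include <stdbool.h>
#include <stdint.h>

/* Per-device counters reported with every uplink, so the platform can see
 * whether PSM is actually being honored in the field. */
struct psm_stats {
    uint32_t psm_resumes;     /* wakes that resumed the held context */
    uint32_t full_attaches;   /* wakes that needed a full re-attach */
    uint32_t attach_failures; /* wakes that never found the network */
    uint32_t attach_ms_total; /* cumulative time spent attaching */
};

/* Call once per wake, after the registration attempt settles. */
static void psm_stats_record(struct psm_stats *s, bool resumed, bool failed,
                             uint32_t attach_ms)
{
    if (failed)       s->attach_failures++;
    else if (resumed) s->psm_resumes++;
    else              s->full_attaches++;
    s->attach_ms_total += attach_ms;
}
```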
The duty cycle matrix
Here's the table I use when scoping a deployment. Different use cases want different cadences, and the same hardware can land anywhere on this curve depending on how it's configured:
| Cadence | Idle draw | Active per day | Expected life (24 Ah) |
|---|---|---|---|
| 1x per day heartbeat | 10 µA | ~0.5 mAh | 5-8 years |
| Every 6h | 10 µA | ~2 mAh | 2-3 years |
| Hourly | 10 µA | ~12 mAh | 10-14 months |
| Every 15 min | 10 µA | ~48 mAh | 3-5 months |
| Continuous (live) | full power | huge | hours to days |
The takeaway: the hardware doesn't cap your field life. Your reporting frequency does. A well-designed tracker should let you configure cadence per device or per geofence, so high-value in-transit shipments can burn budget during transit and go back to heartbeat once they're stationary.
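In firmware that usually reduces to a small, remotely updatable policy rather than anything clever. A sketch of the shape of it; the field names are illustrative, not from a real product, and geofence evaluation and flash persistence are out of scope here:

```c
#include <stdbool.h>
#include <stdint.h>

/* Per-device reporting policy, pushed from the platform and persisted. */
struct report_config {
    uint32_t heartbeat_s;     /* stationary cadence, e.g. 86400 for daily */
    uint32_t in_transit_s;    /* cadence while motion is detected */
    uint32_t emergency_s;     /* live-track cadence, budget-burning */
    uint32_t emergency_max_s; /* hard cap before auto-reverting to heartbeat */
};

static uint32_t next_report_interval(const struct report_config *cfg,
                                     bool moving, bool emergency,
                                     uint32_t emergency_elapsed_s)
{
    if (emergency && emergency_elapsed_s < cfg->emergency_max_s)
        return cfg->emergency_s;   /* bounded in firmware, not operator memory */
    if (moving)
        return cfg->in_transit_s;
    return cfg->heartbeat_s;
}
```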
Sensor-triggered wake is the unlock
The real trick for pallet-class devices isn't "report once a day." It's "report once a day and wake on events of interest." Accelerometer interrupt lines, light sensor thresholds, and temperature excursions can fire hardware interrupts that wake the MCU without waking the modem unless the event actually merits a transmission.
Pseudocode for the interrupt handler:
```c
// MCU wakes from deep sleep on GPIO interrupt
void wake_handler(void) {
    event_t e = classify_event();
    switch (e) {
    case MOTION_START:
        // Debounce: was this a forklift or a real move?
        if (sustained_motion(30 /* seconds */)) {
            wake_modem_and_report(E_MOTION);
        }
        break;
    case LIGHT_SENSOR_TRIGGER:
        // Container opened
        wake_modem_and_report(E_TAMPER);
        break;
    case TEMP_EXCURSION:
        if (temp_out_of_range(config.high, config.low)) {
            wake_modem_and_report(E_TEMP);
        }
        break;
    default:
        go_back_to_sleep();
        break;
    }
}
```
The key is the debounce logic. A forklift nudging a pallet should not trigger a transmission. A pallet being loaded onto a truck should. Getting this right in firmware is what separates a tracker that reports usefully from a tracker that spams the platform and dies in six months.
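For what it's worth, the debounce itself doesn't need to be clever. One way to do it, assuming the accelerometer latches a motion flag the MCU can poll at 1 Hz; the sensor hook, the window, and the 80% threshold are all assumptions to tune:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical hooks; replace with your accelerometer driver and RTOS delay. */
extern bool accel_motion_flag(void);   /* latched motion bit from the sensor */
extern void sleep_ms(uint32_t ms);

/* True only if motion persists for most of the window. A forklift nudge
 * clears within a second or two; a pallet on a moving truck keeps firing. */
bool sustained_motion(uint32_t window_s)
{
    uint32_t hits = 0;
    for (uint32_t i = 0; i < window_s; i++) {   /* 1 Hz polling */
        if (accel_motion_flag())
            hits++;
        sleep_ms(1000);
    }
    return hits * 100 >= window_s * 80;         /* >=80% of samples show motion */
}
```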
Where this stack fails
I'll name four failure modes so you don't learn them the expensive way.
Indoor GPS is not a solved problem. Wi-Fi scan assist helps, but in a warehouse with inconsistent AP coverage you'll get either no fix or wildly inaccurate ones. For pallet tracking in a DC, you pair cellular with RFID or BLE beacons at known reference points.
The 2G sunset is uneven globally. Carriers in North America and Australia have largely shut down 2G. Parts of Africa, Southeast Asia, and Latin America still depend on it as fallback. A global tracker for the next 3-5 years still benefits from 2G fallback in the modem stack. A tracker for 2030+ probably shouldn't.
Certification takes calendar time, not money. FCC, CE, PTCRB, and carrier-specific approvals (Verizon, AT&T, Telstra) each run 2-4 months if everything goes right. If you're scoping a global deployment, scope the cert timeline from day one.
The platform is half the product. A tracker that lasts 8 years but feeds into a proprietary platform with no API is dead on arrival for any serious customer. Open TCP/UDP protocols, documented payload formats, and webhook support matter as much as the hardware specs.
Closing
The power engineering here isn't exotic anymore. PSM works, eDRX works, Li-MnO2 at 24 Ah is available, and LTE Cat-1 modules are cheap. The difference between a tracker that hits its datasheet number and one that doesn't is almost entirely in the duty cycle logic — how conservatively the firmware sleeps, how smartly it wakes, and how honest the reporting cadence is.
If you're designing in this space or scoping a hardware selection, the numbers that matter are:
- Quiescent current in actual PSM state (measured, not spec'd) — target under 20 µA
- Cold-start GPS TTFF — target under 35s, ideally with A-GNSS assist
- Self-reported "battery remaining" telemetry from the device — critical for fleet operations
- Emergency-mode auto-revert logic — must be firmware-enforced, not operator-remembered
What approaches are you using for long-life cellular trackers? Curious whether anyone's had luck with Cat-M1 at this power envelope or sticking mostly with Cat-1bis.
This article was written with AI assistance for research and drafting.