
Muskan

Posted on • Originally published at zop.dev

Closed-Loop FinOps: Detect, Decide, Act, Verify in 5 Minutes


A FinOps team produces a recommendation report on Monday morning. It identifies $185,000 of monthly waste across 240 cloud resources. By Friday, 12 of those 240 are remediated. By the end of week 4, another 6. By month 3, the remaining 222 have been quietly dropped, because the engineer who would have owned each fix has shipped two sprints of features since the report was generated. The recommendation isn't wrong. The handoff is broken.

This is not a tooling problem. It is a process problem with a predictable decay curve: roughly 30% action rate in week 1, 5% by week 4, and effectively 0% by month 3 on the same recommendations. The fix is structural: close the loop. Detection feeds decision feeds action feeds verification, end to end in under 5 minutes, with no human in the critical path for low-blast-radius remediations.

FinOps is the engineering practice of bringing financial accountability to variable cloud spend by aligning engineering, finance, and product on continuous cost decisions, per the FinOps Foundation. Applied as a control loop instead of a report queue, FinOps stops decaying.

Why Reports Don't Save Money

The action-rate decay curve is the central problem. A typical recommendation sits in a backlog while the engineer who would address it ships features, attends incidents, and forgets the original context.

| Time since report | Typical action rate | What's happening |
| --- | --- | --- |
| Week 1 | 30% | Report fresh; easy ones get done first |
| Week 2-3 | 8% | Sprint pressure crowds out non-urgent work |
| Week 4 | 5% | Original context cold; engineer not sure why this was flagged |
| Month 2-3 | <2% | Recommendation effectively dead; new report supersedes it |

The decay is not laziness. It is the cost of context-switching. Reading a recommendation, verifying it still applies, mapping it to the team that owns the resource, opening a ticket, scheduling the change, executing, and verifying takes 30-90 minutes per recommendation. Multiplied across 240 recommendations, that is 120-360 engineer-hours of work that nobody has on their calendar.

The closed-loop alternative collapses the same workflow into 5 minutes by eliminating context-switching for the safe-tier remediations. The report-and-ticket flow stays in place for the human-tier work. The middle tier (approval-required) keeps a human in the loop but pre-fills the context so the decision takes 30 seconds instead of 30 minutes.

This pattern works when the safe-tier classification is conservative enough that nobody fears the auto-action. It breaks when the classification is sloppy and the loop touches resources it shouldn't, because one bad auto-action damages trust for the next twenty good ones.

The Four-Stage Pipeline

The architecture has four stages with explicit contracts. Each stage has a specific input shape, a specific output shape, and a specific failure mode. The end-to-end target for the safe tier is under 5 minutes from detection to verification complete.

*(Diagram: the Detect → Decide → Act → Verify pipeline)*

The signal flowing through the pipeline carries the resource ID, the proposed change, the classification tier, the snapshot of pre-state, and a reverse-action definition. A row in this signal is everything needed to execute, verify, and roll back. Every stage either advances the signal or kicks it back with a reason.
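As a minimal sketch, that signal shape could look like the following Python dataclass. The field names are my own invention, not any specific tool's schema:

```python
from dataclasses import dataclass

# Hypothetical signal record; field names are illustrative. One row carries
# everything needed to execute, verify, and roll back a remediation.
@dataclass
class Signal:
    resource_id: str       # e.g. an instance or volume identifier
    proposed_change: dict  # action name plus parameters
    tier: str              # "auto_safe" | "approval_required" | "human_only"
    pre_state: dict        # snapshot of the resource before the change
    reverse_action: dict   # how to undo the change if verification fails

    def is_executable(self) -> bool:
        # A stage may only advance a signal that can still be rolled back.
        return bool(self.pre_state) and bool(self.reverse_action)

signal = Signal(
    resource_id="i-0abc123",
    proposed_change={"action": "stop_instance"},
    tier="auto_safe",
    pre_state={"state": "running", "type": "m5.2xlarge"},
    reverse_action={"action": "start_instance"},
)
print(signal.is_executable())  # → True
```

The `is_executable` guard encodes the contract from the paragraph above: a signal missing its pre-state snapshot or reverse-action gets kicked back, never advanced.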

Detection: Anomaly + Threshold + Drift

Three input streams feed the loop. Each has a different latency, false-positive rate, and waste pattern it catches.

| Detection method | Latency | False positive rate | Catches |
| --- | --- | --- | --- |
| Threshold rules (Cloud Custodian, AWS Config) | Minutes | Low | Known waste patterns: idle resources, missing tags, oversized instances |
| Anomaly detection (Datadog Cost, OpenCost) | Hours | Medium | Sudden spikes, behavior changes, runaway workloads |
| Drift detection (Terraform refresh, AWS Config) | Hours-days | Low | One-off manual changes that bypass IaC |

Cloud Custodian is the most widely adopted open-source policy-as-code engine for AWS / Azure / GCP cost remediation. Policies are YAML, run on a schedule, and support three escalating modes: report-only, notify, and action. Most teams stop at notify; the savings come from switching select policies to action mode behind a defined blast-radius classification.
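As a sketch, a Custodian policy for one of the known waste patterns above might look like this. Check the filter and action names against the Custodian docs for your provider version, and note the `env` tag is an assumed convention:

```yaml
# Sketch of a Cloud Custodian policy; names should be verified against
# the Custodian documentation before running.
policies:
  - name: stop-idle-nonprod
    resource: aws.ec2
    filters:
      - "tag:env": nonprod     # assumes an env tagging convention
      - type: instance-age
        days: 30
    actions:
      - stop                   # switch on only after report-only review
```

Running the same policy with no `actions` block is the report-only mode the article recommends starting with.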

*(Diagram: the three detection streams feeding the unified signal queue)*

False positives go to the same queue but get a "needs review" tag. Novel anomalies (not seen in the last 30 days) automatically classify as approval-required, never as auto-safe. This is how the loop tolerates noisy detection without breaking trust: detection precision can be imperfect; classification is what protects production.
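A sketch of that novelty override in Python, with invented names; the 30-day window mirrors the rule above:

```python
from datetime import datetime, timedelta

# Illustrative routing rule: any anomaly signature not seen in the last
# 30 days is forced into the approval-required tier, regardless of what
# the detector proposed. Names are assumptions, not a real tool's API.
NOVELTY_WINDOW = timedelta(days=30)

def route_tier(proposed_tier: str, signature: str,
               last_seen: dict, now: datetime) -> str:
    seen_at = last_seen.get(signature)
    is_novel = seen_at is None or now - seen_at > NOVELTY_WINDOW
    if is_novel and proposed_tier == "auto_safe":
        return "approval_required"  # never auto-act on a novel anomaly
    return proposed_tier

now = datetime(2024, 6, 1)
history = {"idle-ec2-nonprod": now - timedelta(days=3)}
print(route_tier("auto_safe", "idle-ec2-nonprod", history, now))  # → auto_safe
print(route_tier("auto_safe", "gpu-spike", history, now))  # → approval_required
```

The override only ever demotes a signal toward more human oversight, never the other way, which is what keeps noisy detectors from eroding trust.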

Decision: Blast-Radius Classification

The safety architecture has three tiers with clear membership criteria. The Decide stage is a policy-as-code engine evaluating each signal and routing to the right action path.

| Tier | Coverage | Examples | Action |
| --- | --- | --- | --- |
| Auto-safe | 70-80% of value | Idle non-prod termination, log retention reduction, disk class downgrade with rollback | Execute without human approval |
| Approval-required | 15-20% of value | Production VM right-size, reserved instance purchase, schedule change | Pre-filled ticket; one-click approve |
| Human-only | 5-10% of value | Architecture changes, multi-tenant resource modifications | Report and route to owner |

Open Policy Agent Rego rules encode the classification declaratively. A rule like "auto-allow termination of non-prod resources older than 30 days with no traffic in the last 7 days" executes deterministically every cycle without re-asking humans. The Rego rule is the source of truth for what counts as auto-safe.
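For illustration, here is that same rule expressed as plain Python; in the loop itself it would live in Rego, and the field names are my own assumptions:

```python
# Python mirror of the auto-safe rule described above, for illustration
# only; the loop keeps the authoritative version in OPA/Rego.
def classify(resource: dict) -> str:
    auto_safe = (
        resource.get("environment") != "prod"
        and resource.get("age_days", 0) > 30
        and resource.get("requests_last_7d", 1) == 0
        and resource.get("proposed_action") == "terminate"
    )
    # Sketch collapses the non-auto-safe tiers; a real policy would also
    # distinguish approval_required from human_only.
    return "auto_safe" if auto_safe else "approval_required"

print(classify({"environment": "staging", "age_days": 45,
                "requests_last_7d": 0, "proposed_action": "terminate"}))
# → auto_safe
```

Every condition is a concrete, queryable fact about the resource, which is what makes the rule deterministic from cycle to cycle.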

*(Diagram: blast-radius tiers routing signals to their action paths)*

The classification rules need to be reviewed quarterly. Workloads change, new resource types appear, and the line between auto-safe and approval-required moves. Treating the Rego rules as code (versioned, tested, reviewed) is the only sustainable model.

Action: Idempotent Automation

The Act stage executes the change. The technical floor is idempotency: running the same action twice produces the same result as running it once. Without idempotency, retries amplify rather than recover. With idempotency, the loop tolerates network failures, partial executions, and operator restarts.

Idempotent automation has three preconditions. The source of truth (Terraform / Pulumi / kubectl) is updated, not the live resource directly. The action records a snapshot ID and a reverse-action definition before executing. The action is wrapped in a verification check that confirms the resource state matches expectation post-execution.

*(Diagram: snapshot → execute → verify action flow)*

The wrapper layer that handles snapshot/reverse-action is the operational glue that makes auto-action defensible. Without it, "we made the change" is a leap of faith. With it, "we made the change, here is the snapshot ID to roll back, here is the reverse-action definition" is auditable.
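A minimal Python sketch of that wrapper, using an in-memory dict as a stand-in for a real provider API or Terraform state:

```python
# Illustrative snapshot/reverse-action wrapper. All names are invented;
# a real implementation would call the cloud provider or update IaC state.
def snapshot(resource: dict) -> dict:
    return dict(resource)  # pre-state copy, kept for rollback

def resize(resource: dict, target_size: str, audit: list) -> dict:
    pre = snapshot(resource)
    # Record the snapshot and reverse-action BEFORE mutating anything,
    # so "we made the change" is auditable and reversible.
    audit.append({"pre_state": pre,
                  "reverse_action": {"set_size": pre["size"]}})
    # Idempotent: assert the desired end state rather than apply a delta,
    # so running this twice equals running it once.
    resource["size"] = target_size
    return resource

audit: list = []
vm = {"id": "i-0abc", "size": "m5.2xlarge"}
resize(vm, "m5.large", audit)
resize(vm, "m5.large", audit)  # retry after a network blip is harmless
print(vm["size"])  # → m5.large
```

Because the action asserts an end state instead of applying a relative change ("shrink one size"), retries converge instead of compounding.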

Verification and Rollback

Verification compares the metric the change was meant to affect (cost, utilization, response time) over a 5-15 minute post-action window. A statistically significant regression triggers automatic rollback. The window length is workload-specific.

| Workload type | Verification window | Success criteria | Rollback timeout |
| --- | --- | --- | --- |
| Stateless service (right-size) | 5-10 minutes | p95 latency unchanged, error rate unchanged | <60 seconds |
| Batch job (downgrade compute) | 15-30 minutes | Job completion time within 1.2x baseline | <5 minutes |
| Stateful system (storage class change) | 30-60 minutes | Read latency unchanged, no replication lag | <15 minutes |
| Cost-only (log retention reduction) | 24 hours | No incident reports requiring deeper logs | N/A (revert via re-enable) |

Rollback is the safety mechanism that makes auto-action acceptable. The pattern: when verification fails, the loop reads the recorded reverse-action and executes it. The rollback path must complete in under 60 seconds for stateless workloads, under 5 minutes for stateful. If rollback itself fails, the on-call gets paged with full context: original signal, action taken, verification failure, rollback failure, current resource state.
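A sketch of the verify-then-rollback step in Python, using a simple tolerance check as a stand-in for a proper statistical test; the metric source and reverse-action executor are illustrative:

```python
# Illustrative verify-then-rollback step. A real loop would pull the
# baseline and post-action metrics from monitoring and run a significance
# test; here a fixed tolerance stands in for that.
def verify_and_maybe_rollback(baseline_p95: float, post_p95: float,
                              reverse_action, tolerance: float = 0.10) -> str:
    regressed = post_p95 > baseline_p95 * (1 + tolerance)
    if regressed:
        reverse_action()  # execute the recorded reverse-action
        return "rolled_back"
    return "verified"

rolled = []
result = verify_and_maybe_rollback(120.0, 180.0, lambda: rolled.append(True))
print(result)  # → rolled_back
print(verify_and_maybe_rollback(120.0, 121.0, lambda: rolled.append(True)))
# → verified
```

The key property is that the rollback path consumes only what the Act stage already recorded; verification never has to reconstruct how to undo the change.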

This pattern works when there is a clear metric to verify against. It breaks when the change has no measurable signal in the verification window (e.g. cost reduction that takes a billing day to surface), in which case verification has to run on a longer cycle with explicit rollback approval gates rather than auto-rollback.

A 90-Day Closed-Loop Adoption Plan

Closed-loop adoption sequences cleanly. Each phase produces measurable safety wins, and the data from each phase informs the next.

| Phase | Weeks | Action | Effort | Verification criterion |
| --- | --- | --- | --- | --- |
| Detection-only | 1-4 | Deploy Cloud Custodian / OpenCost in report-only mode. Build the unified signal queue. Tag every detection with proposed tier and proposed action. | 2 engineer-weeks | 100% of detections have a tier and a reverse-action recorded |
| Classification | 5-6 | Write OPA Rego rules for auto-safe / approval / human tiers. Review with platform team. Deploy in shadow mode (predicts but doesn't act). | 1 engineer-week | Shadow predictions match human classification on 95%+ of historical signals |
| Auto-safe execution | 7-10 | Turn on action mode for the top 3 auto-safe rules (idle non-prod, log retention, disk class). Verification window per workload. Auto-rollback on regression. | 2 engineer-weeks | Zero verified regressions over 14-day rolling window |
| Approval-required pipeline | 11-12 | Pre-fill approval tickets with full context (resource, proposed change, snapshot, reverse-action). Slack-bot approve workflow. | 1 engineer-week | Median approval-to-action time under 30 minutes |
| Drift detection layer | 13 | Add drift detection to fill the gap between known-pattern threshold rules and statistical anomaly detection. Route most drift to approval-required. | 3 days | Drift backlog drains within 7 days of detection |

A team starting with 240 unaddressed FinOps recommendations typically lands at 0-15 unaddressed at any given time after 90 days, because the auto-safe tier catches 70-80% of the value before a human ever sees the signal. The remaining 20-30% flows through the pre-filled approval pipeline in days, not weeks.

To get started, run Cloud Custodian in report-only mode for one week against your production AWS account. The report itself is illuminating: 60-80% of recommendations will be obvious enough to classify auto-safe on the spot. Pair the loop with a chargeback / showback layer so the auto-actions are visible to the teams whose resources they touch, and the recommendation backlog stops growing while you build the rest of the pipeline.
