
Rotifer Protocol

Originally published at rotifer.dev

API Apocalypse: What Happens When Every API Breaks and Your Agent Has to Evolve

API Apocalypse Demo — Rotifer Agent vs Baseline

APIs break. They change response formats without warning, rate-limit you into oblivion, and go offline at 3 AM. Every developer has written code that worked Monday and failed Friday because a third-party endpoint decided temperature should now be nested inside weather.temp_celsius.

The usual fix: a human notices, updates the parser, deploys, and hopes nothing else broke. What if an agent could fix itself?

We ran an experiment to find out.


The Setup: API Apocalypse

We created a controlled chaos environment — three mock weather API sources that progressively degrade over 30 seconds:

  Time   Source A (JSON)      Source B (XML)     Source C (CSV)
  0–6s   Normal               Normal             Normal
  6s     Format change        Normal             Normal
  12s    Changed              Format change      Normal
  18s    Changed              Changed            Format change
  24s    Rate limited (429)   Changed            Changed
  28s    Rate limited         Offline (503)      Changed
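The degradation schedule can be sketched as a pure function of elapsed time. This is an illustrative model only — the function name, response shapes, and thresholds below are assumptions, not the actual rotifer-playground code:

```typescript
// Source A's degradation schedule as a pure function of elapsed time.
// (Hypothetical sketch; shapes and names are assumptions.)
type SourceAResponse =
  | { temperature: number }                 // original flat format
  | { weather: { temp_celsius: number } }   // post-change nested format
  | { error: string; status: 429 };         // rate-limited phase

function sourceA(elapsedMs: number): SourceAResponse {
  if (elapsedMs >= 24_000) return { error: "Too Many Requests", status: 429 };
  if (elapsedMs >= 6_000) return { weather: { temp_celsius: 23.5 } };
  return { temperature: 23.5 };
}
```

Modeling the schedule as a function of elapsed time (rather than wall-clock state) is what makes the experiment deterministic and reproducible.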

Two agents compete in the same environment:

  • Rotifer Agent: 6 genes (2 per API source), auto-failover based on runtime fitness
  • Baseline Agent: 3 fixed parsers, no adaptation

Both agents aggregate weather data from all three sources every 2 seconds. When a source breaks, the Baseline agent fails permanently for that source. The Rotifer agent tries alternative genes.
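A gene pool with two genes per domain might be shaped roughly like this. The interface and class below are a hedged sketch for orientation — the real Rotifer types may differ:

```typescript
// Hypothetical shape of a gene and its per-domain pool (illustrative sketch).
interface Gene {
  name: string;
  domain: string;                            // e.g. "weather-source-a"
  fitness: number;                           // running score in [0, 1]
  express(input: unknown): Promise<number>;  // parse a raw response into °C
}

class GenePool {
  private genes: Gene[] = [];
  private active = new Map<string, string>(); // domain -> active gene name

  register(gene: Gene): void {
    this.genes.push(gene);
    // First gene registered for a domain starts as the active parser
    if (!this.active.has(gene.domain)) this.active.set(gene.domain, gene.name);
  }

  getGenesByDomain(domain: string): Gene[] {
    return this.genes
      .filter(g => g.domain === domain)
      .sort((a, b) => b.fitness - a.fitness); // fitness-descending
  }

  activeName(domain: string): string | undefined {
    return this.active.get(domain);
  }
}
```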


The Core Innovation: Fitness-Ordered Failover

The Rotifer Agent maintains a pool of genes per domain. Each gene has a running fitness score that increases on success and decreases on failure. When the active gene fails, the agent:

  1. Penalizes the failed gene's fitness
  2. Sorts remaining genes in the same domain by fitness (descending)
  3. Tries each candidate until one succeeds
  4. Promotes the successful candidate to active

// When the primary gene fails, try alternatives by fitness rank
const candidates = this.getGenesByDomain(domain)
  .filter(g => g.name !== activeName);

let recovered = false;
for (const candidate of candidates) {
  try {
    const result = await candidate.express(input);
    this.updateFitness(candidate, true);
    this.activeGenes.set(domain, candidate.name);
    recovered = true;
    break;
  } catch {
    this.updateFitness(candidate, false);
  }
}

This isn't retry logic. It's selection pressure. Failed genes don't get retried blindly — they drop in fitness rank and get replaced by alternatives that can handle the new reality.
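The article doesn't show updateFitness itself. One plausible sketch is an exponential moving average toward 1 on success and 0 on failure — note that ALPHA and the update rule here are assumptions, not the confirmed Rotifer implementation:

```typescript
// Assumed fitness update: exponential moving average toward the outcome.
// ALPHA is a hypothetical smoothing factor; larger values react faster.
const ALPHA = 0.3;

function updateFitness(fitness: number, success: boolean): number {
  const target = success ? 1 : 0;
  return fitness + ALPHA * (target - fitness);
}
```

An EMA-style rule has the property the article describes: a single failure knocks a gene down the ranking immediately, while a long success streak asymptotically restores trust.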


The Results

  t   │ ROTIFER (auto-switch)        │ BASELINE (fixed code)
 ─────┼──────────────────────────────┼──────────────────────
  0s  │  ✓ ✓ ✓          23.5°C      │  ✓ ✓ ✓          23.5°C
  6s  │  ✓ ✓ ✓          23.4°C      │  ✓ ✓ ✓          23.4°C
  8s  │  ✓⟳ ✓ ✓         23.3°C      │  ✗ ✓ ✓          23.2°C
 14s  │  ✓ ✓⟳ ✓         23.1°C      │  ✗ ✗ ✓          23.0°C
 20s  │  ✓ ✓ ✓⟳         22.9°C      │  ✗ ✗ ✗         NO DATA
 28s  │  ✗ ✓ ✓          23.0°C      │  ✗ ✗ ✗         NO DATA

The ⟳ markers show automatic gene switches — moments where the Rotifer Agent detected a parsing failure and swapped to an alternative gene within the same tick.

Head-to-Head

  Metric                Rotifer                               Baseline
  Source uptime         83.3%                                 33.3%
  Data continuity       100% (always had at least 1 source)   50% (lost all sources at t=20s)
  Auto gene switches    3                                     N/A
  Human interventions   0                                     Would need manual fix

The Rotifer Agent maintained 2.5x higher source availability while requiring zero human intervention. When Source A got rate-limited at t=24s, both agents lost it — that's a network-level failure no parser gene can fix. But by that point, the Baseline had already lost all three sources to format changes it couldn't handle.


What Makes This Different from Retry Logic

A retry loop repeats the same operation hoping for a different result. The Rotifer Agent does something fundamentally different:

It switches the implementation, not the attempt.

When json-v1 (which expects { temperature: 23.5 }) encounters { weather: { temp_celsius: 23.5 } }, retrying json-v1 will never work. But json-v2 was designed for exactly that format. The agent doesn't know why the API changed — it just observes that json-v1's fitness dropped and json-v2 succeeds, so it promotes json-v2.
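To make the mismatch concrete, here is a minimal sketch of the two parser genes. The gene names and expected formats come from the article; the function bodies are illustrative, not the actual repo code:

```typescript
// json-v1: expects the original flat format { temperature: 23.5 }
function jsonV1(raw: string): number {
  const data = JSON.parse(raw);
  if (typeof data.temperature !== "number") {
    throw new Error("json-v1: missing flat temperature field");
  }
  return data.temperature;
}

// json-v2: expects the new nested format { weather: { temp_celsius: 23.5 } }
function jsonV2(raw: string): number {
  const t = JSON.parse(raw)?.weather?.temp_celsius;
  if (typeof t !== "number") {
    throw new Error("json-v2: missing nested weather.temp_celsius");
  }
  return t;
}
```

Once the API switches formats, jsonV1 throws on every call no matter how many times it's retried, while jsonV2 succeeds on its first attempt — which is exactly the signal the fitness ranking picks up.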

This is the biological parallel: organisms don't debug their environment. They carry genetic variation, and selection pressure promotes whatever works.


Gene Fitness Dynamics

After the experiment, the fitness scores tell the story:

  weather-source-a-v1     █████████████████░░░ 0.850  (15✓ 1✗)
  weather-source-a-v2     ████████████████████ 1.000  (45✓ 0✗)
  weather-source-b-v1     █████████████████░░░ 0.850  (30✓ 1✗)
  weather-source-b-v2     ████████████████████ 1.000  (45✓ 0✗)
  weather-source-c-v1     █████████████████░░░ 0.850  (45✓ 1✗)
  weather-source-c-v2     ████████████████████ 1.000  (45✓ 0✗)

Each v1 gene started as the active parser and served well during the stable phase. When the API format changed, the v1 gene failed once, took a fitness penalty, and the v2 gene was promoted. The v2 genes then ran flawlessly until the experiment ended or the source went down entirely.

No configuration file was updated. No human was paged. The agent adapted.


Try It Yourself

The full experiment is self-contained and reproducible:

git clone https://github.com/rotifer-protocol/rotifer-playground.git
cd rotifer-playground
npm install
npx tsx experiments/api-apocalypse/run.ts

The 30-second demo version (optimized for screen recording):

npx tsx experiments/api-apocalypse/demo.ts

Everything runs locally — mock server, genes, agents, metrics collection. No external dependencies, no API keys, no cloud services.


What This Proves (and What It Doesn't)

This experiment demonstrates that fitness-based gene selection with automatic failover can maintain service continuity through API disruptions without human intervention, achieving 2.5x better source uptime than fixed-code approaches.

This experiment does not claim to prove that evolutionary approaches are universally superior to traditional engineering. The scenario is deliberately constrained — single agent, local execution, known failure modes with pre-built alternative genes.

The honest question for the next iteration: what happens when the agent must discover or generate new genes at runtime, not just select from a pre-registered pool? That's the multi-agent HLT (Horizontal Logic Transfer) experiment we're planning for v0.9, when P2P networking makes cross-agent gene sharing possible.


Why This Matters Beyond Rotifer

Every production system that depends on third-party APIs faces the same problem. The current industry answer is defensive engineering: timeouts, retries, circuit breakers, fallback values. These are necessary but reactive — they manage failure, they don't evolve past it.

The alternative this experiment demonstrates is carrying genetic variation — maintaining multiple implementation strategies for the same capability and letting runtime performance determine which one leads. It's a pattern that works at any scale, from a single API parser to a fleet of autonomous agents.
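Stripped of the Rotifer machinery, the pattern is small enough to sketch generically. Everything below is illustrative (names, reward/penalty constants are assumptions), but it captures the core move — rank variants by observed fitness, try them in order, and let outcomes reshuffle the ranking:

```typescript
// A generic "carry variation, let performance lead" runner.
// All names and constants here are illustrative.
type Variant<I, O> = { name: string; fitness: number; run: (input: I) => O };

function runWithSelection<I, O>(variants: Variant<I, O>[], input: I): O {
  // Try variants best-first; reward the winner, penalize failures.
  const ranked = [...variants].sort((a, b) => b.fitness - a.fitness);
  for (const v of ranked) {
    try {
      const out = v.run(input);
      v.fitness = Math.min(1, v.fitness + 0.1);
      return out;
    } catch {
      v.fitness = Math.max(0, v.fitness - 0.2);
    }
  }
  throw new Error("all variants failed");
}
```

The same dozen lines apply whether the "variants" are JSON parsers, retry strategies, or entire service backends.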

The code is open source. Run it, break it, extend it.


Rotifer Protocol — Code as Gene, Evolution as Runtime
