DEV Community

Matheus de Camargo Marques


Why CRUD bleeds (and what you feel)


Series: Part 1 of 5 — CQRS and architecture for AI agents.

Reading time: ~6 min.


Summary: Backends built on a single data model (CRUD) bleed when load grows, integrations multiply, and AI agents read and write in parallel. This article (first in the series) focuses on the anatomy of the problem: the symptoms you feel and why CRUD degrades when it meets AI agent orchestration.

In this article:

  1. Do you feel these symptoms?
  2. Why CRUD bleeds
  3. When CRUD meets AI agents

Contemporary software engineering is in a critical state of transition. The development ecosystem operates in a perpetual dynamic balance, where the continuous emergence of new paradigms, frameworks, and infrastructure demands renders rigid architectural foundations obsolete. Observing systems in production reveals a phenomenon known as "software aging": even when the physical codebase remains unchanged, the ecosystem around it evolves, increasing the effort required to keep it in productive operation and causing gradual architectural degradation.

Historically, the vast majority of systems were conceived under the premise of the CRUD (Create, Read, Update, Delete) pattern, supported by monolithic relational databases that process read and write operations in the same structural model. Although this approach offers conceptual simplicity and high development speed in the early phases of a project, it imposes severe limits as applications scale. This architectural exhaustion scenario reaches a breaking point with the introduction of multi-agent systems based on Artificial Intelligence (AI).

As AI moves beyond conversational text generation to take on "agentic" workflows — autonomous reasoning, long-horizon planning, and parallel tool execution (tool use) — traditional backend infrastructures collapse under concurrency contention, processing bottlenecks, and state inconsistencies. The question remains: how do we stem this bleeding without rewriting everything?

I will soon go deep into Evolutionary Architectures here. But before taking that step, we need to be honest about the current state of our backends: many of you are still using patterns, methods, and architectures conceived decades ago.

Yes, it "works". Until it has to work with everything at once: concurrency, integrations, changing requirements, too many logs, and a curious AI agent (who, of course, always wants to "just test one more thing").

As a system grows, mixing read and write operations in the same data model is a recipe for disaster. The code becomes coupled, the database suffers from lock bottlenecks, and dealing with concurrency becomes an absolute nightmare.

Do you feel these symptoms?

If you feel any of these 4 symptoms in your project today, what follows is not a luxury — it is survival (and, in many cases, emergency care before the migration surgery).

1. Highly collaborative environments (humans or AIs)

Imagine dozens of users — or multiple AI agents working in parallel — trying to access and modify the same data at once. At some point it becomes a game of musical chairs with locks: everyone wants to sit, no one wants to get up, and the database starts to sigh. In a traditional (CRUD) model they sabotage each other; CQRS isolates intentions (Commands) and processes them one at a time, avoiding collisions in the database.
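The "one intention at a time" idea can be sketched in a few lines. This is a hypothetical, minimal in-process command bus (the names `CommandBus` and `deposit` are illustrative, not a real library): many threads enqueue commands concurrently, but a single worker applies them sequentially, so no two writers ever fight over the same row.

```python
import queue
import threading

class CommandBus:
    """Single-writer command bus: intentions queue up, one applies at a time."""

    def __init__(self, state):
        self.state = state              # the write model
        self._commands = queue.Queue()  # intentions, not table rows
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def send(self, command):
        self._commands.put(command)

    def _run(self):
        while True:
            command = self._commands.get()
            command(self.state)         # applied serially: no lock fights
            self._commands.task_done()

    def drain(self):
        self._commands.join()           # wait until every command is applied

state = {"balance": 0}
bus = CommandBus(state)

def deposit(amount):
    def command(s):
        s["balance"] += amount          # read-modify-write, but serialized
    return command

# Dozens of "agents" firing commands in parallel:
threads = [threading.Thread(target=lambda: bus.send(deposit(1))) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
bus.drain()
print(state["balance"])  # 50: no lost updates despite 50 concurrent senders
```

With a naive shared CRUD model, those 50 concurrent read-modify-write cycles would either lose updates or serialize on database locks; here the serialization is explicit and cheap.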

2. Friction in separation of responsibilities

The frontend (or the AI’s context reading) demands ultra-fast queries; the transactional core needs heavy validation before writing. CQRS physically separates writing from reading — you stop treating "read" as "negotiating a contract".
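A tiny sketch of that asymmetry, with illustrative names: the Command side runs all the heavy validation before mutating, while the Query side is a plain lookup against a precomputed view with no rules and no locks.

```python
inventory = {"sku-1": 3}          # write model, guarded by business rules
stock_view = {"sku-1": "3 left"}  # read model, shaped for display

def reserve(sku, qty):
    """Command: full validation before mutating the write model."""
    if qty <= 0:
        raise ValueError("quantity must be positive")
    if inventory.get(sku, 0) < qty:
        raise ValueError("insufficient stock")
    inventory[sku] -= qty
    stock_view[sku] = f"{inventory[sku]} left"  # keep the view in sync

def stock_label(sku):
    """Query: no rules, no locks, just a lookup."""
    return stock_view[sku]

reserve("sku-1", 2)
print(stock_label("sku-1"))  # 1 left
```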

3. Systems in constant evolution

Display rules change much faster than business rules. With CQRS you can change the read database (or add a vector store just for agents) on the Query side without breaking the write core (Commands).
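One way to picture this independence, under the assumption of an event-driven write core (all names below are illustrative): each read model is just a projection over the event stream, so adding a new one for agents never touches the command side.

```python
events = []  # append-only log owned by the write side

def record(event):
    events.append(event)
    for project in PROJECTIONS:  # fan out to every read model
        project(event)

# Read model 1: what the UI needs (denormalized, display-shaped)
ui_view = {}
def project_ui(event):
    if event["type"] == "order_placed":
        ui_view[event["order_id"]] = (
            f"Order {event['order_id']}: {event['total']} USD"
        )

# Read model 2: added later, just for agents (plain-text context lines),
# without changing record() or any command logic.
agent_context = []
def project_agent(event):
    agent_context.append(f"{event['type']} {event['order_id']}")

PROJECTIONS = [project_ui, project_agent]

record({"type": "order_placed", "order_id": 1, "total": 42})
print(ui_view[1])        # Order 1: 42 USD
print(agent_context[0])  # order_placed 1
```

Swapping `agent_context` for a vector store follows the same shape: it is just one more projection.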

4. Acute pain in system integration

In distributed architectures, isolating commands allows graceful degradation: if a third-party API goes down, the command queues and waits; reading continues with the last state.
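A sketch of that degradation path, with `charge_via_gateway` standing in for a real third-party API: a failed command is parked for retry instead of failing the whole request, and queries keep serving the last materialized state.

```python
last_known = {"status": "paid:0"}   # read model: always available
retry_queue = []                    # parked commands awaiting replay

def charge_via_gateway(amount, gateway_up):
    """Stand-in for a flaky third-party payment API."""
    if not gateway_up:
        raise ConnectionError("gateway down")
    return amount

def handle_charge(amount, gateway_up):
    try:
        charged = charge_via_gateway(amount, gateway_up)
        last_known["status"] = f"paid:{charged}"
    except ConnectionError:
        retry_queue.append(amount)  # command waits; nothing is lost

handle_charge(10, gateway_up=True)
handle_charge(25, gateway_up=False)  # API down: command queues instead of erroring

print(last_known["status"])  # paid:10  (reads still see the last good state)
print(retry_queue)           # [25]     (ready to replay when the API recovers)
```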

To stem this sustainably, we first need to understand the anatomy of the problem.

The Anatomy of Degradation in Data-Centric Architectures (CRUD)

The traditional CRUD paradigm uses a single data model to satisfy two fundamentally opposed business requirements: modifying system state and displaying information. This architecture ignores the inherent asymmetry between reading and writing, which triggers a series of structural and operational pathologies in high-demand systems.

The first symptom of this degradation appears through schema optimization conflicts. Write operations require a strictly normalized database schema to ensure referential integrity, avoid data duplication, and allow consistent application of complex business rules. In contrast, read operations demand denormalized schemas to avoid costly join operations, optimizing response time and reducing latency perceived by clients. Forcing both operations to share the same model forces suboptimal tradeoffs, resulting in slow writes due to excessive indexing and inefficient reads due to high structural normalization.
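The two schema shapes can coexist once they are split. This is an illustrative SQLite sketch (table names are made up): a normalized write side next to a denormalized read table kept in sync by the write path, so the hot read path never joins.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- Write side: normalized, integrity-friendly
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id),
                         total REAL NOT NULL);
    -- Read side: denormalized, join-free
    CREATE TABLE order_summaries (order_id INTEGER,
                                  customer_name TEXT,
                                  total REAL);
""")

def place_order(order_id, customer_id, name, total):
    db.execute("INSERT OR IGNORE INTO customers VALUES (?, ?)",
               (customer_id, name))
    db.execute("INSERT INTO orders VALUES (?, ?, ?)",
               (order_id, customer_id, total))
    # Projection: copy into the read table so queries never join
    db.execute("INSERT INTO order_summaries VALUES (?, ?, ?)",
               (order_id, name, total))

place_order(1, 10, "Ada", 99.0)
row = db.execute("SELECT customer_name, total FROM order_summaries "
                 "WHERE order_id = 1").fetchone()
print(row)  # ('Ada', 99.0) -- no JOIN on the hot read path
```

The cost, of course, is keeping the projection in sync; that tradeoff is the subject of the rest of this series.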

The second symptom concerns asymmetric scalability and lock contention. In real production environments, the ratio between reads and writes is rarely balanced; modern platforms often show ratios of a thousand reads per write. In the traditional model, complex read queries can lock records in the database, paralyzing crucial write operations, and vice versa. In collaborative scenarios where multiple actors try to change the same state, the CRUD architecture resorts to basic optimistic concurrency or pessimistic locks at the database level, which erodes transactional throughput.
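For concreteness, here is the basic optimistic-concurrency check such systems fall back on, as a minimal sketch: every update carries the version it read, and a stale version is rejected, forcing a retry loop that erodes throughput under contention.

```python
record = {"value": "draft", "version": 1}

def update(new_value, expected_version):
    """Apply the update only if the caller's read is still current."""
    if record["version"] != expected_version:
        return False                 # someone else won the race: retry needed
    record["value"] = new_value
    record["version"] += 1
    return True

a = update("edit by user A", expected_version=1)   # succeeds
b = update("edit by agent B", expected_version=1)  # stale read: rejected
print(a, b, record)  # True False {'value': 'edit by user A', 'version': 2}
```

At a thousand reads per write this check is cheap; with many parallel writers on the same row, the rejected half of every race turns into retries or pessimistic locks.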

The Impact of CRUD on AI Agent Orchestration

The fragility of the single-model architecture is exponentially worsened by the introduction of AI Agents. Unlike human users, who interact sequentially through graphical interfaces, AI agents are high-cadence systems that interact programmatically through APIs, often firing dozens of tasks in parallel.

When agent orchestrators implement concurrent patterns — where multiple agents run searches, synthesis, and mutation calls simultaneously — the backend architecture is put under extreme stress. If a decision-making agent (planner) and several execution agents (workers) try to read and update the same monolithic record, the system will succumb to race conditions or fail to maintain a coherent audit trail. The lack of separation between data exploration (read) and execution of tools that change reality (write) makes AI behavior unpredictable and unsafe, opening the door to executions that violate business policies.
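The audit-trail point can be made concrete with a sketch (command names and fields are illustrative): explicit commands record *why* each agent changed state before the mutation happens, which a generic UPDATE cannot express.

```python
audit_log = []
account = {"limit": 1000}

def handle(command):
    audit_log.append(command)            # intention captured before mutation
    if command["type"] == "RaiseLimit":
        account["limit"] += command["by"]
    elif command["type"] == "FreezeAccount":
        account["limit"] = 0

handle({"type": "RaiseLimit", "by": 500,
        "issued_by": "planner-agent",
        "reason": "customer upgraded plan"})
handle({"type": "FreezeAccount",
        "issued_by": "fraud-worker",
        "reason": "velocity check tripped"})

print(account["limit"])        # 0
print(audit_log[1]["reason"])  # velocity check tripped
```

After the fact, the log answers "which agent froze the account, and why?" directly; a row that merely reads `limit = 0` cannot.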

| Architectural pathology | Symptom in the CRUD model | Impact on systems with AI agents |
| --- | --- | --- |
| Schema impedance | Competition between normalization (write) and denormalization (read) needs | Delayed context retrieval (RAG), degrading the agent's ability to reason in real time |
| Lock contention | Database locks reduce the throughput of concurrent transactions | Failures in parallel agent execution loops; API timeouts that break the LLM's reasoning |
| Intention opacity | Generic operations (UPDATE) mask the true underlying business intention | Inability to effectively audit "why" an agent made a critical decision, harming governance |
| Failure coupling | A slow write operation brings down read query availability | Failure in an external tool completely paralyzes the agent's ability to observe the world |

If the content helped, you can buy me a coffee.

Next: CQRS: fundamentals and practice in Elixir — Command/Query segregation fundamentals and code.
