Something is happening in software design that nobody organized.
Practitioners from different languages, different domains, different traditions are arriving at the same conclusion — independently, without coordination, often without knowing about each other's work.
The conclusion: business processes, not data entities, are the natural unit of software decomposition.
This isn't a manifesto. There was no conference keynote. No working group. Just a growing body of work from people solving real problems who kept ending up in the same place.
The Standard Starting Point
For two decades, the dominant approach to backend design has been data-first. Identify the entities — User, Order, Product. Define their attributes and relationships. Attach behavior. Build services that operate on shared objects.
This approach, formalized in Domain-Driven Design's tactical patterns, produces shared entity models that serve multiple contexts. The coupling is structural: change a shared entity, and every consumer is affected. Add a field to Order, and every service that touches orders must accommodate it — even services that don't care about the new field.
The pattern works. It has built enormous systems. But practitioners working at scale report the same friction points:
- Entities grow into God objects because every feature needs something different from the same concept
- Mapping layers accumulate (DTO to entity to DTO) because the entity doesn't fit any single feature perfectly
- Architecture discussions become debates about aggregate boundaries — who owns what, how big should the aggregate be, where does this behavior belong
- The "where does domain logic live?" question never settles — rich domain model, anemic domain model, transaction scripts, domain services. Each team picks differently, each project mixes approaches, and the answer changes depending on who you ask. The debate persists because entity-first design doesn't have a natural home for behavior that spans multiple entities
- Coupling increases with every new feature because features share entities instead of owning their own types
These aren't implementation failures. They're structural consequences of starting with data.
Convergent Evolution
In biology, convergent evolution describes species that develop the same trait independently — wings in birds, bats, and insects. Different lineages, different mechanisms, same solution to the same problem.
Something similar is happening in software design. Practitioners from different ecosystems are converging on the same structural adaptation.
Scott Wlaschin (F#) models domains as workflows with typed inputs and outputs, composing small functions into complete use cases. His phrase "make illegal states unrepresentable" captures the type-driven approach. His book Domain Modeling Made Functional demonstrates how types replace defensive coding and how workflows replace entity models.
Rico Fritzsche (Rust/TypeScript) models domains as contextual decisions, not shared entities. Each feature slice owns its own state reconstruction. In his framing, entities are not fixed structures — they are "flexible, context-dependent manifestations". A "Seat" is a row and number in booking, a reservation status in availability, a price category in pricing. Three different types, three different processes, no shared entity.
Roman Weis (Java) proposes focusing "100% on behavior — the commands" instead of finding the perfect aggregate root. Business logic belongs in scoped, task-based commands.
Sandro Mancuso (Java/Craftsmanship) starts from external usage. His Interaction-Driven Design lets the domain model emerge from actual needs — use cases first, internal structure second. The domain isn't modeled in advance; it's discovered through implementation.
Jimmy Bogard (.NET) organizes code by features, not layers. His Vertical Slice Architecture minimizes coupling between slices and maximizes coupling within a slice. Each feature is self-contained. Shared abstractions are extracted only when proven necessary.
Debasish Ghosh (Scala) expresses behavior as pure function compositions rather than object methods, with immutable types and explicit side effects. His book Functional and Reactive Domain Modeling shows how algebraic types and composition replace the entity-service-repository pattern.
Six practitioners. Five languages. Different continents, different communities, different audiences. None of them cites the others as primary inspiration. They arrived at the same place because they were solving the same problem.
What They Share
Strip away the language-specific details and the individual vocabulary, and the shared structure becomes clear:
Processes over entities. The primary decomposition unit is a business operation with a trigger, inputs, outputs, and failure modes — not a data entity with attributes and relationships.
Per-context types. Data structures are shaped by the process that uses them. A "User" in registration has different fields than a "User" in authentication or billing. Each process owns its own types.
Functional composition. Small, pure, testable operations composed into larger workflows. The composition itself is the design — not a layer on top of a design.
No shared domain model. Domain knowledge is distributed across processes, not centralized in entity classes. Shared types exist only for validated domain concepts (email addresses, monetary amounts) that genuinely mean the same thing across contexts.
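The "per-context types" principle can be sketched in a few Java records. All type names here are illustrative, not taken from any of the cited authors:

```java
// Per-context "User" types: each process owns exactly the shape it needs.

// Registration cares about credentials and consent.
record RegistrationRequest(String email, String rawPassword, boolean termsAccepted) {}

// Authentication cares only about identity plus a credential to verify.
record LoginAttempt(String email, String rawPassword) {}

// Billing never sees a password at all.
record BillingAccount(String customerId, String email, String paymentMethodId) {}
```

None of the three is "the" User entity. A change to billing's shape cannot ripple into registration, because nothing is shared.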
What's Driving the Convergence
Independent convergence implies shared environmental pressure. Several forces are pushing practitioners toward process-first design simultaneously:
Distributed systems demand it. Microservices and serverless architectures naturally align with process-based decomposition. Each service IS a process. Trying to maintain shared entity models across service boundaries creates the exact coupling that microservices were supposed to eliminate.
Scale reveals entity-model friction. Small systems don't feel the pain of shared entities. At 5 services, a shared User entity is manageable. At 50, it's a coordination bottleneck. At 500, it's impossible. Teams that grow past a certain size independently discover that process boundaries work better than entity boundaries.
Functional programming went mainstream. Rust brought algebraic types, pattern matching, Result<T, E>, and immutability by default to systems programming — proving these aren't academic preferences but engineering necessities. Java added records, sealed interfaces, and pattern matching. C# added records and pattern matching. TypeScript refined literal types and discriminated unions. The languages now support typed composition natively, without framework overhead. What was theoretical in 2010 is practical in 2025.
AI-assisted development rewards deterministic patterns. When the design process is mechanical — ask these questions, the structure follows — AI can participate reliably. Process-first design constrains the design space in ways that make AI-generated code structurally correct more often. This wasn't a design goal for any of the practitioners cited. It's an emergent benefit.
From Observation to Practice
Observing convergence is interesting. Formalizing it is useful.
If processes are the natural decomposition unit, the design activity becomes identifying processes and their boundaries. A process has:
- Typed input — what triggers it and what data it needs
- Typed output — what success looks like
- Typed failures — what can go wrong
- Steps — sub-processes with their own typed boundaries
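In Java 17+, those four elements map onto a handful of constructs. A minimal sketch, with every name invented for illustration:

```java
// Typed input: the trigger and the data it carries.
record RegisterUser(String email, String rawPassword) {}

// Typed output: what success looks like.
record Registered(String userId) {}

// Typed failures: an exhaustive, compiler-checked list of what can go wrong.
sealed interface RegisterFailure permits EmailTaken, WeakPassword {}
record EmailTaken(String email) implements RegisterFailure {}
record WeakPassword(String reason) implements RegisterFailure {}

// Steps: typed sub-processes that compose into the full process.
interface Step<I, O> {
    O apply(I input);
}
```

Because the failure list is sealed, a switch over RegisterFailure is exhaustive: adding a new failure mode forces every caller to handle it.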
These aren't abstract categories. They're concrete questions you can ask about any feature requirement:
- What triggers this process?
- What data does it need?
- What does success look like?
- What can go wrong?
- What are the steps?
- Which steps depend on each other?
- Are there conditional paths?
- Is there collection processing?
The answers determine the code structure. Not guidelines — deterministic mapping. Independent steps become parallel operations. Sequential dependencies become chains. Conditional paths become branches. The developer doesn't invent the structure; they discover it from the answers.
Consider an order placement process. The questions yield:
- Input: customer, items, address, payment method
- Output: order confirmation with estimated delivery
- Failures: invalid items, insufficient inventory, payment declined
- Steps: validate, check inventory, process payment, create order, send confirmation
- Dependencies: inventory check and payment are independent; order creation depends on both
The dependency analysis tells you the composition pattern: validate first (sequential), then inventory and payment in parallel (fork-join), then create order (sequential), then confirm (sequential). The code writes itself:
validate → (check inventory ∥ process payment) → create order → confirm
No architecture meeting required. No debate about aggregate boundaries. No class diagram. The process structure IS the architecture, derived mechanically from the requirements.
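That arrow diagram translates almost mechanically into Java. A minimal sketch using CompletableFuture for the fork-join; every step body is a stub, and all names are illustrative:

```java
import java.util.concurrent.CompletableFuture;

public class PlaceOrderFlow {

    record OrderRequest(String customerId) {}
    record Inventory(boolean available) {}
    record Payment(boolean approved) {}
    record Confirmation(String orderId) {}

    // Step stubs: real implementations would call inventory, payments, etc.
    static OrderRequest validate(OrderRequest r) { return r; }
    static Inventory checkInventory(OrderRequest r) { return new Inventory(true); }
    static Payment processPayment(OrderRequest r) { return new Payment(true); }
    static Confirmation createOrder(OrderRequest r, Inventory i, Payment p) {
        return new Confirmation("order-1");
    }
    static Confirmation sendConfirmation(Confirmation c) { return c; }

    static Confirmation placeOrder(OrderRequest request) {
        var validated = validate(request);  // sequential: everything depends on validation
        // fork: inventory and payment are independent, so they run in parallel
        var inventory = CompletableFuture.supplyAsync(() -> checkInventory(validated));
        var payment = CompletableFuture.supplyAsync(() -> processPayment(validated));
        // join: order creation needs both results; confirmation follows sequentially
        return inventory
                .thenCombine(payment, (i, p) -> createOrder(validated, i, p))
                .thenApply(PlaceOrderFlow::sendConfirmation)
                .join();
    }

    public static void main(String[] args) {
        System.out.println(placeOrder(new OrderRequest("c-42")).orderId());
    }
}
```

The method body is the dependency analysis, line by line: nothing in it was a design decision beyond answering the questions.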
Context-Specific Types
One consequence of process-first design deserves emphasis: types belong to processes, not to the domain.
In entity-first design, you model "Seat" once and every feature uses that model. In process-first design, each process models exactly what it needs:
Booking — a location to select (row, number)
Reservation — a time-limited hold (id, reserved until)
Pricing — a cost input (id, category, base price)
Three different types, three different processes. No shared entity, no conflict, no coupling. Change the pricing model — only the pricing process changes. Add reservation expiry logic — only the reservation process is affected.
Shared types emerge only when genuinely needed: an email address means the same thing in registration and login, so it becomes a shared value object. But the sharing is discovered from evidence, not designed from speculation.
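In Java, the three manifestations and the one genuinely shared concept might look like this; the field choices are illustrative:

```java
import java.time.Instant;

// Three processes, three "Seat" types: no shared entity.
record BookingSeat(int row, int number) {}                                // booking: a location to select
record SeatReservation(String seatId, Instant reservedUntil) {}           // reservation: a time-limited hold
record PricedSeat(String seatId, String category, long basePriceCents) {} // pricing: a cost input

// Shared only because the concept is provably identical in every context.
record EmailAddress(String value) {
    EmailAddress {
        if (value == null || !value.contains("@")) {
            throw new IllegalArgumentException("not an email address: " + value);
        }
    }
}
```

The value object validates once, in its compact constructor, so every process that receives an EmailAddress can trust it without re-checking.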
What This Changes
When processes own their types and composition follows from dependency analysis:
The language footprint shrinks. Most language features serve entity-model infrastructure — inheritance hierarchies, mutable state, reflection. Process-first design uses a small subset: records for data, sealed interfaces for alternatives, lambdas for composition. The rest becomes unnecessary.
Code reads like business documentation. A process method reads as a sequence of named business operations. New team members learn the domain by reading the code, not the framework.
Design evolves mechanically. New step? Add a step interface and insert it in the chain. Steps become independent? Change sequential to parallel. Process grows too large? Extract a sub-process. No "refactoring to patterns" — the patterns are used from the start.
Business stakeholders can validate structure. When code maps directly to process descriptions, a business analyst can look at the composition and verify it matches the business process. The gap between specification and implementation narrows to zero.
What Replaces Entity Modeling
A natural objection: if we don't start with entities, what happens to data modeling?
It doesn't disappear. It transforms.
Every backend process is fundamentally an act of knowledge gathering. Check inventory — now you know availability. Process payment — now you know if funds cleared. Each step acquires a piece of knowledge. The process ends — successfully or not — when enough knowledge has accumulated to formulate an answer.
This reframes data modeling entirely. Instead of asking "what data exists in the system?" (entity diagram), you ask "what does this process need to know?" (dependency graph). The data model becomes a data dependency graph scoped to each process:
PlaceOrder = Transform(ALL(InventoryStatus, PaymentResult))
ALL means "I need both pieces of knowledge — they're independent." Sequential chaining means "I need this knowledge before I can gather the next." ANY means "I can get this knowledge from multiple sources — the first success is enough."
These operators map directly to composition patterns in code. ALL is a fork-join. Sequential chaining is a flatMap. ANY is a fallback. The code structure mirrors the knowledge dependency structure — not because of a design framework, but because gathering knowledge to produce answers is what the code actually does.
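The three operators can be sketched as ordinary Java functions. The names follow the text above; the implementations are one possible rendering, not a reference library:

```java
import java.util.Optional;
import java.util.concurrent.CompletableFuture;
import java.util.function.BiFunction;
import java.util.function.Function;
import java.util.function.Supplier;

public class KnowledgeOps {

    // ALL: two independent pieces of knowledge, gathered as a fork-join.
    static <A, B, R> R all(Supplier<A> left, Supplier<B> right, BiFunction<A, B, R> combine) {
        var a = CompletableFuture.supplyAsync(left);
        var b = CompletableFuture.supplyAsync(right);
        return a.thenCombine(b, combine).join();
    }

    // Sequential: the next step needs the previous step's knowledge, i.e. a flatMap.
    static <A, B> Optional<B> then(Optional<A> knowledge, Function<A, Optional<B>> next) {
        return knowledge.flatMap(next);
    }

    // ANY: several possible sources; the first success is enough, i.e. a fallback.
    @SafeVarargs
    static <A> Optional<A> any(Supplier<Optional<A>>... sources) {
        for (var source : sources) {
            var result = source.get();
            if (result.isPresent()) {
                return result;
            }
        }
        return Optional.empty();
    }
}
```

With these in place, the PlaceOrder expression from the text reads as `all(inventoryCheck, paymentStep, transform)`: the code structure is the knowledge dependency graph.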
The consequence: data types are scoped to the knowledge a process needs, not to what exists in the database. A "Seat" in the booking process carries row and number. A "Seat" in the pricing process carries category and base price. They're different knowledge, gathered for different answers. No shared entity needed.
Entity modeling asks: "What is a Seat?" — and produces one answer that fits no process perfectly.
Process modeling asks: "What does this process need to know about seats?" — and produces exactly the right answer for each process.
The Relationship to DDD
This isn't a rejection of Domain-Driven Design. DDD's most enduring contributions — bounded contexts, ubiquitous language, the insistence that software should model the domain — remain essential.
What's being reconsidered is the starting point. DDD's tactical patterns start with entities and aggregates, then attach behavior. Process-first design starts with behavior, then derives the types. The strategic patterns (bounded contexts, context mapping) are fully compatible — in fact, process boundaries often align with context boundaries more naturally than entity boundaries do.
The ubiquitous language still matters. But in process-first design, it emerges from use case identification and type definition rather than from separate modeling sessions. When a domain expert says "we need to check inventory before processing payment," that sentence maps directly to step interfaces and their ordering. The code reads like the conversation that produced it.
The Quiet Part
Nobody organized this convergence. There's no foundation, no standard, no certification program. Just practitioners solving problems and publishing what they found.
That's what makes it credible. When one person proposes a new methodology, it's an opinion. When six people from different ecosystems independently arrive at the same methodology, it's a signal. The environmental pressures — distributed systems, team scaling, AI-assisted development, functional language features — are producing the same structural adaptation across the industry.
The consensus is quiet because it doesn't need to be loud. It's not replacing anything overnight. It's just that every year, more teams try process-first design, find that it works, and don't go back.
The convergence described here is formalized in JBCT (Java Backend Coding Technology) — a methodology with patterns, tooling, and implementation guidance. The approach has been validated against a 326,000-line distributed runtime built entirely with process-first design.
Further reading: Scott Wlaschin, Domain Modeling Made Functional. Debasish Ghosh, Functional and Reactive Domain Modeling. Rico Fritzsche, How to Model Domain Logic Without Shared Entities. Roman Weis, Alternative Approach to DDD. Sandro Mancuso, Interaction-Driven Design. Jimmy Bogard, Vertical Slice Architecture.