rinat kozin
We built an enterprise integration stack for .NET from scratch: EAV + DSL + runtime

Three-layer stack diagram: redb.Tsak at top (hot-reload runtime and cluster coordination via EAV), redb.Route in the middle (22 transports, 30+ EIP patterns, compiled expressions, .ProcessWithRedb() writes to EAV), redb.Core at the bottom (typed EAV, full LINQ, Postgres and MSSQL). Arrows illustrate that Route reads and writes EAV directly, and Tsak stores all cluster state — leader election, distributed locks, node registry — in the same EAV database.
Three open-source libraries. One coherent stack. All Apache 2.0.

This is the story of what we built, why, and how the three pieces fit together
in a way that surprised even us.


The problem

Enterprise .NET integration in 2026 still looks like this:

  • EF Core for data — plus a migration file every time a field changes
  • MassTransit or NServiceBus for messaging — great if you only need message brokers, but IBM MQ, SFTP, MQTT, SQL polling? Custom adapters, every time
  • Kubernetes + etcd for clustering — because "that's just how you do it"

We needed all three. We built all three. And then we noticed they could share
the same storage layer — and everything clicked.


Layer 1: redb.Core — typed EAV without the pain

github.com/redbase-app/redb · Apache 2.0


EAV (Entity–Attribute–Value) has a bad reputation — and for good reason.
Raw EAV means losing types, losing LINQ, losing your mind.

redb.Core is typed EAV. You define a schema as a plain C# class:

[RedbScheme("Employee")]
public class EmployeeProps
{
    public string  Department { get; set; } = "";
    public decimal Salary     { get; set; }
    public int     Level      { get; set; }
}

That attribute is the entire schema definition. No DbContext. No Add-Migration.
No SQL files. Call SyncSchemeAsync<EmployeeProps>() once at startup — done.

// Save
await redb.SaveAsync(new RedbObject<EmployeeProps>
{
    name  = "Alice",
    Props = new() { Department = "Engineering", Salary = 95000m, Level = 3 }
});

// Query — full LINQ, compiled to SQL
var seniors = await redb.Query<EmployeeProps>()
    .Where(e => e.Level >= 3 && e.Salary > 80000m)
    .OrderByDescending(e => e.Salary)
    .ToListAsync();

GroupBy, window functions, tree queries (CTE-based), aggregations — all there.
Add a field to EmployeeProps → call SyncSchemeAsync → it's live. No migration.
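That add-a-field flow is worth seeing end to end. A sketch using the API shown above — the `HiredAt` field and the `redb` service instance are illustrative, not from the real codebase:

```csharp
// EmployeeProps with one new property — no migration file, no SQL script.
[RedbScheme("Employee")]
public class EmployeeProps
{
    public string    Department { get; set; } = "";
    public decimal   Salary     { get; set; }
    public int       Level      { get; set; }
    public DateTime? HiredAt    { get; set; }   // the new field (illustrative)
}

// At startup, the only "migration" step:
await redb.SyncSchemeAsync<EmployeeProps>();

// The new attribute is immediately queryable:
var recentHires = await redb.Query<EmployeeProps>()
    .Where(e => e.HiredAt != null)
    .OrderByDescending(e => e.HiredAt)
    .ToListAsync();
```

Existing objects simply have no value for the new attribute until one is written, which is the usual EAV semantics for sparse data.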

Why EAV? Because the schema is runtime data. You can add attributes, create
new object types, and restructure relationships without touching the database schema.
This matters a lot for the other two layers.


Layer 2: redb.Route — Apache Camel for .NET

github.com/redbase-app/redb-route · Apache 2.0

If you know Apache Camel, you know the model: From → Process → To.
redb.Route brings that model to .NET — in pure C#, no XML, no JVM.

// Kafka → filter → enrich → RabbitMQ
From("kafka://orders?groupId=svc&brokers=localhost:9092")
    .Filter(Header("type").isEqualTo("new"))
    .Retry(3)
    .To("rabbitmq://events?host=localhost");

22 transports out of the box: Kafka, RabbitMQ, IBM MQ, gRPC, HTTP, Redis,
MQTT, S3, SFTP, FTP, SQL, TCP, WebSocket, SignalR, Azure Service Bus,
Elasticsearch, Firebase, LDAP, Mail, AMQP 1.0, Quartz, File.

30+ EIP patterns as first-class DSL: Content-Based Router, Splitter,
Aggregator, WireTap, Multicast, Recipient List, Dynamic Router, Resequencer,
Scatter-Gather, Claim Check, Idempotent Consumer, Saga, Circuit Breaker,
Throttle, Retry, Dead Letter, and more.

Compiled expression engine. ${header.x}, ${header.x++}, arithmetic,
JSONPath, XPath — all compiled to Func<IExchange, T> via
System.Linq.Expressions at route-build time. No interpreter overhead,
results cached per route.
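The mechanism is plain System.Linq.Expressions, nothing exotic. A self-contained sketch of the technique — `Exchange` here is a stand-in for the real IExchange, and `HeaderCompiler` is not redb.Route's actual code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq.Expressions;

// Stand-in for redb.Route's IExchange: headers as a dictionary.
public class Exchange
{
    public Dictionary<string, object> Headers { get; } = new();
}

public static class HeaderCompiler
{
    // Build a Func<Exchange, T> once, at "route-build" time; call it per
    // message with no parsing or interpretation on the hot path.
    public static Func<Exchange, T> CompileHeaderAccessor<T>(string name)
    {
        var exchange = Expression.Parameter(typeof(Exchange), "exchange");
        var headers  = Expression.Property(exchange, nameof(Exchange.Headers));
        // Indexer access: exchange.Headers[name]
        var item     = Expression.Property(headers, "Item", Expression.Constant(name));
        var cast     = Expression.Convert(item, typeof(T));
        return Expression.Lambda<Func<Exchange, T>>(cast, exchange).Compile();
    }
}

public class Program
{
    public static void Main()
    {
        var getPriority = HeaderCompiler.CompileHeaderAccessor<int>("priority");
        var msg = new Exchange();
        msg.Headers["priority"] = 7;
        Console.WriteLine(getPriority(msg)); // prints 7
    }
}
```

The real engine layers arithmetic, JSONPath, and XPath on top of the same idea: parse `${...}` once, emit an expression tree, compile, cache.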

A real pipeline from production — HTTP entry → validate → dedup →
Choice by priority → RabbitMQ RPC → AMQP RPC → gRPC RPC →
SQL INSERT + SELECT → WireTap(Kafka, File) → response:

From("http:0.0.0.0:5088/api/demo?inOut=true")
    .ConvertBody<string>()
    .Throttle(10)
    .ValidateJsonSchema(messageSchema)
    .IdempotentConsumer(e => e.In.GetHeader<string>("traceId"), repo)
    .Choice()
        .When(Header("priority").isEqualTo("high"))
            .SetHeader("fastTrack", "true")
        .Otherwise()
            .SetHeader("fastTrack", "false")
    .EndChoice()
    .To("direct://pipeline");

The integration point: Route reads and writes EAV

This is where it gets interesting.

The Route DSL has a .ProcessWithRedb(...) step that gives you a named
IRedbService instance directly inside a pipeline step:

// Inside a route — full EAV access without leaving the pipeline
.ProcessWithRedb("pg-test", async (redb, exchange, ct) =>
{
    var id = await redb.SaveAsync(new RedbObject<DemoItemProps>
    {
        name  = $"Demo-{DateTime.UtcNow:HHmmss}",
        Props = new() { Title = "Pipeline result", Priority = 7 }
    });
    exchange.In.Headers["saved-id"] = id.ToString();
})

// Load what we just saved
.ProcessWithRedb("pg-test", async (redb, exchange, ct) =>
{
    var id   = long.Parse(exchange.In.GetHeader<string>("saved-id")!);
    var item = await redb.LoadAsync<DemoItemProps>(id);
    exchange.In.Headers["title"] = item?.Props?.Title;
})

// LINQ query inside a pipeline step
.ProcessWithRedb("pg-test", async (redb, exchange, ct) =>
{
    var results = await redb.Query<DemoItemProps>()
        .Where(p => p.Priority > 3)
        .Take(5)
        .ToListAsync();
    exchange.In.Headers["count"] = results.Count.ToString();
})
.Log("Found ${header.count} high-priority items");

The transport doesn't matter. Swap Kafka for IBM MQ for an HTTP webhook —
the EAV read/write logic inside .ProcessWithRedb(...) stays unchanged.


Layer 3: redb.Tsak — the runtime container

github.com/redbase-app/redb-tsak · Apache 2.0

Drop a .dll or a .tpkg (ZIP with manifest + DLLs) into a folder.
Tsak loads it, wires the DI container, starts the routes. Hot-reload:
replace the file while running — zero downtime.

REST API + CLI + Blazor dashboard for management.

The cluster has no ZooKeeper, no etcd, no Consul.

Leader election, distributed locks, and node registry live entirely in EAV:

RedbLeaderElection
  → TTL-based row lock in EAV: "{groupName}:leader"
  → SELECT FOR UPDATE (atomic CAS)
  → epoch counter — stale leaders rejected by fencing

RedbDistributedLock
  → TryAcquireAsync: check existing lock → if expired → AtomicTakeover
  → AtomicTakeover: transaction + row-level lock, no races

RedbNodeRegistry
  → node records in EAV (TsakNodeProps)
  → registration protected by "{groupName}:node-register-lock"
  → serializes simultaneous startups
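The takeover logic above can be sketched in a few lines. This is not redb.Tsak's actual code; the names and shapes are illustrative, and the persistence step (SELECT FOR UPDATE plus UPDATE in one transaction) is elided:

```csharp
using System;

// Illustrative sketch of TTL-based leader election with epoch fencing.
public record LeaderRow(string Owner, long Epoch, DateTime ExpiresAt);

public static class Election
{
    // Decide whether "me" may take (or keep) leadership; "next" is the row
    // the caller would persist under a row-level lock.
    public static bool TryBecomeLeader(
        LeaderRow? current, string me, TimeSpan ttl, DateTime now, out LeaderRow next)
    {
        if (current is null || current.ExpiresAt <= now)
        {
            // Lock absent or expired: take over and bump the epoch, so any
            // write still carrying the old epoch is rejected (fencing).
            next = new LeaderRow(me, (current?.Epoch ?? 0) + 1, now + ttl);
            return true;
        }
        if (current.Owner == me)
        {
            // Already leading: renew the TTL, keep the epoch.
            next = current with { ExpiresAt = now + ttl };
            return true;
        }
        next = current; // someone else holds a live lock
        return false;
    }
}
```

The epoch bump is the key detail: a stale leader that wakes up after its TTL expired still holds the old epoch, and every fenced write it attempts is rejected.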

All three sit behind interfaces (ILeaderElection, IDistributedLock, INodeRegistry), so you can swap in a Kubernetes Lease implementation without changing anything else.
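For orientation, the three abstractions might look roughly like this — plausible shapes only, not the actual signatures; check the redb-tsak repo for the real definitions:

```csharp
public interface ILeaderElection
{
    Task<bool> TryAcquireLeadershipAsync(string groupName, CancellationToken ct);
    bool IsLeader { get; }
}

public interface IDistributedLock
{
    Task<bool> TryAcquireAsync(string key, TimeSpan ttl, CancellationToken ct);
    Task ReleaseAsync(string key, CancellationToken ct);
}

public interface INodeRegistry
{
    Task RegisterAsync(string groupName, string nodeId, CancellationToken ct);
    Task<IReadOnlyList<string>> GetNodesAsync(string groupName, CancellationToken ct);
}
```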

The cluster is the EAV database. No extra infrastructure.


Why it fits together

┌─────────────────────────────────────────────────────┐
│                   redb.Tsak                         │
│  (hot-reload runtime, cluster, REST/CLI/Blazor UI)  │
│         cluster state stored in redb.Core EAV       │
├─────────────────────────────────────────────────────┤
│                  redb.Route                         │
│  (22 transports, 30+ EIP, compiled expressions)     │
│       .ProcessWithRedb() ── reads/writes EAV        │
├─────────────────────────────────────────────────────┤
│                  redb.Core                          │
│  (typed EAV, LINQ, no migrations, Postgres/MSSQL)   │
└─────────────────────────────────────────────────────┘
  • The storage layer (EAV) is also the cluster coordination layer
  • The pipeline engine (Route) speaks directly to the data layer (EAV) without a separate service, separate DB, or separate connection
  • The runtime (Tsak) hosts Route assemblies with hot-reload and manages cluster membership through the same EAV instance

One PostgreSQL (or SQL Server) database. No Redis for caching, no ZooKeeper
for coordination, no separate message store for pipeline state.


Numbers from production

Running at EWS, an e-commerce platform:

  • ~550 daily active users, ~150,000 orders/month
  • 3-node Tsak cluster
  • Pipelines: Kafka → filter/enrich → RabbitMQ, SQL outbox → HTTP webhooks, SFTP polling → CSV → SQL upsert, Timer → SQL → Mail

Zero-downtime deploys: replace .tpkg → Tsak reloads the module → routes
restart → in-flight messages complete on the old instance.


Links

All Apache 2.0. Issues and PRs welcome.


Next in the series: deep dive into the compiled expression engine —
how ${header.x++} becomes a Func<IExchange, T> at build time.
