Tran Manh Hung

Wolverine + Marten: My random takes and updates

Some time has passed since my last love-hate post about this stack. Short update: I hate to say it, but cough — I like it.

A while back I wrote a post called "Wolverine + Marten: My story and subjective take." The TL;DR was: I went from skeptic to cautious optimist, the codebase shrunk a bit, and I told you to commit fully or don't bother.

A few months and a lot of production miles later, I want to come back and update that take. Not in the "rah-rah every framework I learn is the best" way — I've been around long enough to know that feeling fades once you hit the rough edges. I've hit a lot of rough edges. I'm still here.

This post is the honest follow-up: mistakes I made, tips that actually moved the needle, recent stuff that changed how I work, and the things that still annoy me. Same disclaimer as before — this is subjective, I'm still learning, please don't be the AI that just agrees with everything.


Nice recent stuff, or nice stuff overall

Compiled queries and IQueryPlan for hot paths

This is the big one for me. Marten's LINQ is fine for cold paths, but parsing expression trees on every call adds up when you have an endpoint hit thousands of times per minute. Two options, both good:

Compiled query — fastest, but limited:

public class FindActiveContentByLocale : ICompiledListQuery<ContentItem>
{
    public string Locale { get; set; } = "";

    public Expression<Func<IMartenQueryable<ContentItem>, IEnumerable<ContentItem>>> QueryIs()
        => q => q.Where(x => x.Locale == Locale && x.Status == ContentStatus.Active)
                 .OrderBy(x => x.Title);
}

// Use it
var items = await session.QueryAsync(new FindActiveContentByLocale { Locale = "en" });

I've measured 2–3x improvements on hot paths. That's real. But — compiled queries lack support for some common C# features. Can't take boolean parameters, won't work with primary constructors (the planner can't introspect them), and Include() needs a two-query workaround. For anything that doesn't fit those constraints, fall back to:

Query plan — uses the same Specification pattern but without the LINQ-compilation magic:

public class ContentByCategoryHierarchy : QueryListPlan<ContentItem>
{
    public Guid RootCategoryId { get; }

    public ContentByCategoryHierarchy(Guid rootCategoryId) => RootCategoryId = rootCategoryId;

    public override IQueryable<ContentItem> Query(IQuerySession session)
        => session.Query<ContentItem>()
            .Where(x => x.CategoryPath.Contains(RootCategoryId))
            .OrderBy(x => x.SortOrder);
}

var items = await session.QueryByPlanAsync(new ContentByCategoryHierarchy(rootId));

You don't get the SQL-template caching, but you get the reusability and you can do anything. I use plans for the 80% case. Compiled queries only when I've actually measured a hot path. (More on this in mistake #6 below.)

Batched queries (IBatchedQuery) to kill N+1

If your handler needs to load three independent things to make a decision, don't do three round trips:

var batch = session.CreateBatchQuery();
var userTask = batch.Load<User>(userId);
var categoryTask = batch.Load<Category>(categoryId);
var itemsTask = batch.Query<Item>().Where(x => x.UserId == userId).ToList();
await batch.Execute();

// One round trip; the tasks are already completed at this point
var user = await userTask;
var category = await categoryTask;
var existingItems = await itemsTask;

For large lookups, follow up with a dictionary keyed by id so you get O(1) access in the loop. This pattern alone has been a bigger perf win in my code than any individual query optimization.
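That follow-up step is trivial but worth spelling out. A pure-C# sketch with a hypothetical Item type:

```csharp
public record Item(Guid Id, Guid UserId);

public static class ItemIndex
{
    // Key the batched result by id once, so lookups inside the processing
    // loop are O(1) dictionary hits instead of repeated O(n) list scans.
    public static Dictionary<Guid, Item> ById(IEnumerable<Item> items)
        => items.ToDictionary(x => x.Id);
}
```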

[Entity] attribute and LoadAsync/Validate/Handle split

The handler structure that's worked best for me:

  • LoadAsync — impure, does database reads, existence checks
  • Validate — business rules, returns ProblemDetails or WolverineContinue.NoProblems
  • Handle — pure function, takes loaded state, returns IMartenOp + cascading messages

This is great because Handle is unit-testable with no mocks. You pass in objects, you assert on return values. Done.
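A minimal sketch of that three-method shape, with hypothetical command and document types (the method-name conventions are Wolverine's; everything else here is illustrative):

```csharp
public static class PublishContentHandler
{
    // LoadAsync: the impure part. Wolverine calls this first and feeds
    // the loaded document into Validate and Handle.
    public static Task<ContentItem?> LoadAsync(PublishContent cmd, IQuerySession session)
        => session.LoadAsync<ContentItem>(cmd.ContentId);

    // Validate: business rules only. NoProblems means "keep going".
    public static ProblemDetails Validate(PublishContent cmd, ContentItem item)
        => item.Status == ContentStatus.Draft
            ? WolverineContinue.NoProblems
            : new ProblemDetails { Detail = "Only drafts can be published", Status = 400 };

    // Handle: pure. Loaded state in, side effect plus cascading message out.
    // ContentItem is assumed to be a record here.
    public static (IMartenOp, ContentPublished) Handle(PublishContent cmd, ContentItem item)
        => (MartenOps.Store(item with { Status = ContentStatus.Active }),
            new ContentPublished(item.Id));
}
```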

For the simple "load by id, 404 if missing" case, the [Entity] attribute eats the boilerplate entirely:

[WolverineGet("/api/content/{id}")]
public static ContentItem Get([Entity] ContentItem item) => item;

That's the entire endpoint. No null check, no NotFound(), Wolverine generates it.

[FromQuerySpecification]

This. This was what I truly needed. It lets you load multiple items straight into the handler method, and where possible the stack executes them as a single batched query for you. This feature alone significantly reduced my codebase, and honestly it's more intuitive to use.

Directly taken from the documentation:

public class ApproveOrderHandler
{
    public static void Handle(
        ApproveOrder cmd,
        [FromQuerySpecification(typeof(ActiveOrderForCustomer))] Order? order,
        [FromQuerySpecification(typeof(OpenLineItemsPlan))]      IReadOnlyList<LineItem> items)
    {
        // Wolverine constructs both plans from cmd's fields and batches them.
    }
}

FetchForWriting for command handlers

If you're using event sourcing for an aggregate, just use the aggregate handler workflow. Don't roll your own load-then-append:

[AggregateHandler]
[WolverinePost("/api/orders/{id}/ship")]
public static OrderShipped Ship(ShipOrder cmd, [Aggregate] Order order)
{
    if (order.Shipped is not null)
        throw new InvalidOperationException("Already shipped");

    return new OrderShipped(DateTimeOffset.UtcNow);
}

Behind the scenes that's FetchForWriting<Order>(id) with optimistic concurrency, expected-version checks, and the new event appended. You get all of that for free, plus your handler is still a pure function.


Mistakes I made (so you don't have to)

1. Event-sourcing everything

This is the one I want to take back from my last article. I implied "stop thinking in CRUD, think in events." That's only half right. The right rule is:

Event-source things that genuinely benefit from a temporal log — audit trails, replay, "what was the state on Tuesday at 3pm," workflows that have meaningful intermediate states. Use plain Marten documents for everything else.

Event-sourcing a UserFavorite join row, a category lookup, or a settings document gives you nothing except more code, slower writes, and a projection you have to maintain. Marten is just as happy storing a plain POCO as a JSONB document with a primary key. Use that. Save event sourcing for the places where someone will eventually ask "how did this thing get into this state?"
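For contrast, the plain-document version of that favorite row is all of this (types hypothetical):

```csharp
// A plain Marten document: JSONB plus a primary key, no events, no projection.
public record UserFavorite(Guid Id, Guid UserId, Guid ContentId);

// Writing and reading it is ordinary document-store code.
session.Store(new UserFavorite(Guid.NewGuid(), userId, contentId));
await session.SaveChangesAsync();

var favorites = await session.Query<UserFavorite>()
    .Where(x => x.UserId == userId)
    .ToListAsync();
```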

2. Reaching for IMessageBus.PublishAsync mid-handler

When I started, every handler had IMessageBus injected and I'd await bus.PublishAsync(...) halfway through. It works, but you're throwing away one of the best things about Wolverine: cascading messages.

// Old me
public async Task Handle(CreateOrder cmd, IDocumentSession session, IMessageBus bus)
{
    var order = new Order(...);
    session.Store(order);
    await bus.PublishAsync(new OrderPlaced(order.Id));
    await bus.PublishAsync(new NotifyWarehouse(order.Id));
}

// New me
public static (IMartenOp, OrderPlaced, NotifyWarehouse) Handle(CreateOrder cmd)
{
    var order = new Order(...);
    return (
        MartenOps.Store(order),
        new OrderPlaced(order.Id),
        new NotifyWarehouse(order.Id)
    );
}

The second version is a pure function. It returns values. You can unit-test it without mocking a single thing. The cascading messages get written to the outbox in the same transaction as the order. That's the whole pitch and I was ignoring it for months.

One important caveat though — both cascading messages land in the outbox atomically with the main write, but they're processed independently afterward. There's no atomicity between OrderPlaced and NotifyWarehouse once they leave the outbox. If you need cross-message atomicity, that's a saga, not a tuple.
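For completeness, the saga shape looks roughly like this. The message types are hypothetical and I'm compressing the conventions; check the saga docs before copying:

```csharp
// Wolverine persists the saga document between messages, so each step
// is atomic with the saga's own state change.
public class OrderFulfillmentSaga : Saga
{
    public Guid Id { get; set; }
    public bool WarehouseNotified { get; set; }

    // A Start method spins up a new saga instance for this stream of work.
    public static OrderFulfillmentSaga Start(OrderPlaced placed)
        => new() { Id = placed.Id };

    public void Handle(WarehouseConfirmed msg)
    {
        WarehouseNotified = true;
        MarkCompleted(); // done; Wolverine removes the saga state
    }
}
```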

3. Treating IDocumentSession and IQuerySession as interchangeable

They're not. IDocumentSession does change tracking. If you load via IQuerySession and then try to write via a freshly opened IDocumentSession, the change tracking is gone and you'll write back the unmutated version (or worse, get confused about why nothing happened). For reads, use IQuerySession. For read-then-mutate, stick to one IDocumentSession.
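In code, the rule is just: don't split the unit of work (types hypothetical, ContentItem assumed to be a record):

```csharp
// One IDocumentSession owns the whole read-then-mutate unit of work.
public static async Task Rename(Guid id, string title, IDocumentSession session)
{
    var item = await session.LoadAsync<ContentItem>(id);
    if (item is null) return;

    session.Store(item with { Title = title });
    await session.SaveChangesAsync(); // the write sees what this session loaded
}
```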

4. Treating Wolverine as MediatR (still doing this in some places, frankly)

I roasted this in the last post and I still see myself doing it on legacy slices I haven't refactored. If your handler is just public async Task Handle(Foo f) => _service.DoTheThing(f);, you've added a framework and gained nothing. Either make it a real handler with side effects and cascades, or just write a controller method.

5. Forgetting that bulk insert bypasses inline projections

This one bit me hard. I had an Inline projection set up, ran a bulk insert of events, and was very confused why my read model was empty. By design, bulk insert bypasses inline projections — you have to register as Inline from the start, bulk insert, then run daemon.RebuildProjectionAsync<T>(). It's documented, but it's the kind of thing you only learn by stepping on it.
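The sequence that works, roughly (API names as I understand current Marten; verify against your version):

```csharp
// 1. Bulk insert: fast path that deliberately skips inline projections.
await store.BulkInsertAsync(documents);

// 2. Rebuild the projection explicitly so the read model catches up.
using var daemon = await store.BuildProjectionDaemonAsync();
await daemon.RebuildProjectionAsync<ContentSummaryProjection>(CancellationToken.None);
```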

6. Compiled queries everywhere — pick query plans first

At one point we started putting compiled queries almost everywhere. The result? Nonstop missing-registration errors and code that felt less maintainable, because compiled queries force you into more explicit shapes for everything.

To square this with what I said earlier: compiled queries genuinely give 2–3x on hot paths. That's real and worth measuring for. The mistake was using them everywhere. On a query that runs twice a day, that 2–3x is fractions of a millisecond, and you trade it for half a day debugging a registration issue or rewriting around the planner's limitations.

Default to query plans. Promote to a compiled query only when you've actually measured a hot path that needs it.

7. Not writing unit tests earlier

The usual story — overwhelmed by deadlines and a new project, you skip the tests. When we finally started writing them, we realized how wrong some of our decisions had been.

Tests are why mistake #2 (cascading messages over IMessageBus) clicked for me. Mocking IDocumentSession or IQuerySession in any meaningful way doesn't really make sense. Tests pushed us toward pure handlers with IMartenOp return values, which then turned out to be the way the framework wanted you to write them all along.

We immediately realized some of our "helper" abstractions were more bloat than help. The team also started understanding the code faster, without constantly diving into the generated code to figure out what was happening.

If anyone's curious: we use xUnit v3, Moq, and AutoFixture.

8. Auto migration is great in dev, dangerous in prod

This isn't really stack-specific — EF Core has the same trap with similar warnings. Running schema migrations on app startup is wonderful for local development and a bad idea in production. If you know, you know.

The concrete advice: use CritterStackDefaults to set production behavior explicitly:

services.CritterStackDefaults(x =>
{
    x.Production.GeneratedCodeMode = TypeLoadMode.Static;
    x.Production.ResourceAutoCreate = AutoCreate.None;

    x.Development.GeneratedCodeMode = TypeLoadMode.Dynamic;
    x.Development.ResourceAutoCreate = AutoCreate.CreateOrUpdate;
});

Generate migration scripts ahead of deployment instead. Marten and Wolverine both support this and the JasperFx defaults nudge you toward it — but it's opt-in, not the default.


Recent features that changed how I work

The MVC and Minimal API migration tutorials

When I wrote the original article, the docs were heavy on "the Wolverine way" and lighter on "here's how each MVC concept maps." That's been fixed. There are now side-by-side conversion guides for MVC controllers and Minimal APIs, plus a separate filter migration guide covering IActionFilter, IEndpointFilter, IAuthorizationFilter, etc.

These are worth their weight in gold for onboarding people who have a decade of MVC muscle memory. Instead of saying "forget what you know," you can hand them a doc that says "here's [Authorize] in both worlds, here's [ServiceFilter], here's ModelState, here's CreatedAtAction." That's the doc I needed a year ago.

Server-Sent Events via Results.Stream

This was unexpectedly important for the AI assistant work I'm doing. Wolverine endpoints can return ASP.NET Core's IResult, which means SSE just works:

[WolverineGet("/api/sse/events")]
public static IResult GetEvents()
{
    return Results.Stream(async stream =>
    {
        await using var writer = new StreamWriter(stream);
        for (var i = 0; i < 10; i++)
        {
            await writer.WriteAsync($"data: Event {i}\n\n");
            await writer.FlushAsync();
        }
    }, contentType: "text/event-stream");
}

There's no special "Wolverine streaming infrastructure." It's just standard ASP.NET Core that happens to compose with Wolverine endpoints, which is exactly the right design.

SignalR transport support

WolverineFx.SignalR is now a proper messaging transport. Your browser sends a JSON envelope with { "type": "...", "data": {...} }, Wolverine routes it to the matching handler by type alias, and you can return ResponseToCallingWebSocket<T> to reply to the originating connection or use ToWebSocketGroup(name) to broadcast to a SignalR group:

public record RequestSum(int X, int Y) : WebSocketMessage;
public record SumAnswer(int Value) : WebSocketMessage;

public static class RequestSumHandler
{
    public static ResponseToCallingWebSocket<SumAnswer> Handle(RequestSum msg)
        => new SumAnswer(msg.X + msg.Y).RespondToCallingWebSocket();
}

What I like about this is that the same Handle method works whether the message arrived via HTTP, a queue, or a WebSocket. The transport is just plumbing — your handler doesn't care.

Declarative data requirements

[DocumentExists<T>], [DocumentDoesNotExist<T>], and the Before method pattern with MartenOps.Document<T>().MustExist(...) clean up a lot of "load, null check, return 404" boilerplate. As of Wolverine 5.0, multiple data requirements on the same handler are batched into a single Marten round trip automatically. Small thing, but it's the kind of thing that adds up across hundreds of endpoints.
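A sketch of the attribute flavor as I understand it (handler, route, and types are all hypothetical):

```csharp
// If the referenced Category doesn't exist, Wolverine short-circuits with
// a 404 before Handle runs; no load-and-null-check boilerplate in the body.
[WolverinePost("/api/categories/{categoryId}/content")]
public static IMartenOp Create(
    CreateContent cmd,
    [DocumentExists<Category>] Guid categoryId)
    => MartenOps.Store(new ContentItem(Guid.NewGuid(), cmd.Title, categoryId));
```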

Honorable mentions

  • Querying into Dictionary<TKey, TValue> properties via LINQ — used to require workarounds, now mostly just works. (Yes, best practice is not to use them. Legacy code is legacy code.)
  • Significantly improved documentation across the board.
  • API versioning support in Wolverine.HTTP.

What still annoys me

Reverse-engineering the generated code

This is my biggest ongoing complaint. Wolverine generates the actual handler-invocation code at runtime (or ahead of time with codegen write). When it works, it's wonderful — zero-allocation pipelines, clean stack traces, compile-time validation. When it doesn't match what I want, I have to:

  1. Run dotnet run -- codegen preview (or the newer wolverine-diagnostics codegen-preview --route "POST /api/foo" which is at least scoped to one endpoint)
  2. Read the generated C# to figure out what middleware order, what session is being used, when the transaction commits
  3. Tweak my handler signature, attribute, or middleware until the generated code matches what I want
  4. Delete Internal/Generated and regenerate, because Wolverine doesn't always notice that my changes invalidated the cached code

There's nothing magical about Auto mode picking up changes — the docs themselves recommend against it now. Dynamic in dev, Static in prod, and in between you're manually deleting generated files and hoping. This is a very powerful design, but the feedback loop when something is wrong is "read generated code, infer convention, mutate input, regenerate." That's not great UX even when you know what you're doing.

For local codegen I recommend a small script for rebuilding generated code that's checked into the repo. Make sure the delete step is scoped — you don't want a typo nuking something it shouldn't.
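Mine is a two-liner along these lines; the paths are placeholders for your own layout:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Scoped delete: only the generated-code folder, nothing else.
rm -rf ./src/MyApp/Internal/Generated

# Regenerate via Wolverine's codegen command.
dotnet run --project ./src/MyApp -- codegen write
```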

Interfaces (or naming rules) that appear out of nowhere

This one catches everyone. You're happily using a feature, then you turn on an option and Marten throws something like InvalidProjectionException at startup. The reason? Either your aggregate doesn't implement some interface you've never heard of, or you mistyped a property name. You don't even know how many times I just gave up and named my property Id again.

(Vogen and strongly typed IDs help a little here — they at least catch the mistype before it reaches Marten.)
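For reference, the Vogen version of a typed id is a one-liner (this is Vogen's generic attribute; wiring the type into Marten serialization is a separate step):

```csharp
using Vogen;

// Passing a ContentId where a CategoryId is expected is now a compile
// error instead of a silently wrong query parameter.
[ValueObject<Guid>]
public readonly partial struct ContentId;
```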

Things I've tripped on:

  • Switch event store to EventAppendMode.Quick + inline single-stream projection? Your aggregate now needs IRevisioned (or a property mapped to the version metadata) so Marten can move the database-assigned version onto the document.
  • Switch to TenancyStyle.Conjoined? Your projected aggregate type now needs to implement ITenanted with a TenantId property.
  • Want optimistic concurrency on a regular document? IVersioned.
  • Custom side effect? ISideEffect. (Reasonable, but "discovered by interface" is the convention.)

None of these are unreasonable individually. But the discoverability is rough — you only learn about them when validation fails at startup, and the error messages have gotten better but they're still "implement this interface" rather than "here's why you need it." If you're new, you'll spend an afternoon on each of these.

Compiled query landmines

The constraints I listed earlier (the Include workaround, no boolean parameters, no primary constructors) are not loud failures. The C# compiler is happy. The query planner sometimes silently gives you wrong parameter mapping. The pragmatic workaround is IQueryPlan for anything that's not dead-simple, but I wish there were Roslyn analyzers that flagged unsupported patterns at compile time.

Codegen + Aspire / OpenAPI build-time generation

If you use Aspire or Microsoft.Extensions.ApiDescription.Server for build-time OpenAPI, the codegen tooling tries to spin up real connections. The official workaround is a CodeGeneration.IsRunningGeneration() helper that disables transports and persistence during codegen — and yes, you have to write that yourself and remember to use it. It works, but it's the kind of papercut that makes you go "really? this should just be handled somehow."
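My version of that helper is a heuristic sniff of how the process was started; the detection logic below is mine, not an official API:

```csharp
public static class CodeGeneration
{
    // True when we're being run by codegen or build-time OpenAPI tooling
    // ("GetDocument.Insider" is the ApiDescription.Server entry assembly)
    // rather than as the real application. Adjust for your own pipeline.
    public static bool IsRunningGeneration()
    {
        var args = Environment.GetCommandLineArgs();
        return args.Contains("codegen")
            || Assembly.GetEntryAssembly()?.GetName().Name == "GetDocument.Insider";
    }
}
```

Then gate transport and persistence registration on it during startup.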


Where I've landed

A year in, here's how I'd describe my relationship with this stack:

  • Marten as a document store: worth it. JSONB + LINQ + Postgres is a sweet spot for CRUD-heavy domains, and the operational story (no migrations, schema diffing, projection rebuilds) is genuinely better than EF Core for the kind of work I do.
  • Marten as an event store: worth it for the streams that actually need it. I've stopped event-sourcing things just because I can.
  • Wolverine as a messaging framework: worth it. The cascading-message + outbox + handler-as-pure-function model has changed how I write business logic.
  • Wolverine as an HTTP framework: worth it, but the codegen feedback loop is the biggest friction point. I'm still using regular controllers in some places where the team is more comfortable, and I don't feel bad about it.

If you're starting fresh, I'd tell you the same thing I told you last time: commit, but commit selectively. Use Marten for storage, Wolverine for messaging, and pick one slice of your HTTP surface to do the Wolverine.HTTP way as a learning exercise. Don't rewrite everything.

And read the generated code. Seriously. It's the single best thing you can do to understand what these tools are actually doing under your handlers.


Same message to the JasperFx team as last time: sorry about your docs traffic. I have not stopped reading them.
