<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Pengdows LLC</title>
    <description>The latest articles on DEV Community by Pengdows LLC (@pengdows).</description>
    <link>https://dev.to/pengdows</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3284737%2Fdf691ca5-fd7f-4a73-8452-a8fa17d594ab.png</url>
      <title>DEV Community: Pengdows LLC</title>
      <link>https://dev.to/pengdows</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/pengdows"/>
    <language>en</language>
    <item>
      <title>Dapper Has 464 Open Issues. I Had My pengdows.crud Codebase Audited Against Every One of Them.</title>
      <dc:creator>Pengdows LLC</dc:creator>
      <pubDate>Tue, 21 Apr 2026 22:55:08 +0000</pubDate>
      <link>https://dev.to/pengdows/dapper-has-464-open-issues-i-had-my-pengdowscrud-codebase-audited-against-every-one-of-them-25dk</link>
      <guid>https://dev.to/pengdows/dapper-has-464-open-issues-i-had-my-pengdowscrud-codebase-audited-against-every-one-of-them-25dk</guid>
      <description>&lt;p&gt;When I built pengdows.crud, I wanted every line to be testable. That meant building a fake provider — a full in-process ADO.NET implementation that lets you run tests without a real database. pengdows.crud ships with 94%+ line coverage as a result.&lt;/p&gt;

&lt;p&gt;That same instinct led me to look at Dapper's test coverage: 0.61%. So I wrote 775 unit tests and submitted PR #2199, bringing their line coverage to 86.1%. I know that codebase.&lt;/p&gt;

&lt;p&gt;Dapper currently has 464 open issues. Issue triage has stalled — the &lt;code&gt;needs-triage&lt;/code&gt; label has become a holding state rather than part of an active triage pipeline. Releases still occur, but they don't materially reduce the issue backlog. The maintainers have moved on to DapperAOT, a build-time code generation successor. Those 464 issues are not a backlog being worked down. They are the permanent state of that codebase.&lt;/p&gt;

&lt;p&gt;I'm not writing this to pile on a library that has done genuinely useful work. I'm writing this because I wanted to know, precisely, whether pengdows.crud makes any of the same mistakes — the same classes of bugs, the same structural patterns. We're both doing a lot of the same things at the ADO.NET level. Similar mistakes are possible. I went into this asking "am I doing this wrong too?" — and had the codebase audited against every theme in Dapper's backlog to find out.&lt;/p&gt;

&lt;p&gt;Before I get to results, the most important context: &lt;strong&gt;pengdows.crud and Dapper aren't competing solutions that made different tradeoffs. They're answers to different questions, independently designed around different constraints.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The core architecture — connection lifecycle ownership as the foundation, &lt;code&gt;SqlContainer&lt;/code&gt; as the execution unit, &lt;code&gt;TableGateway&lt;/code&gt; as the SQL-generation layer — was designed from scratch around the constraint that connection lifetime, parameter naming, and SQL construction must never be the caller's responsibility.&lt;/p&gt;

&lt;p&gt;Dapper asked: &lt;em&gt;How do I make raw ADO.NET less painful at the call site?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;pengdows.crud asked: &lt;em&gt;How do I make connection lifecycle and SQL construction safe and explicit at scale?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Dapper's bugs flow directly from its question. When you optimize for call-site convenience, you push lifetime management, parameter naming, and composition discipline onto the caller. The 464-issue backlog is the accumulated cost of that trade — not a failure of execution, but a consequence of the original design goal.&lt;/p&gt;

&lt;p&gt;pengdows.crud doesn't share those bugs because it never made that trade. The safety properties aren't retrofitted. They're load-bearing, baked in from the start, and that matters — you can't patch your way to a different architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The caller never owns connection lifecycle under any execution path.&lt;/strong&gt; That's the core invariant. Everything that follows is a consequence of it.&lt;/p&gt;

&lt;p&gt;With that established, here's the actual breakdown across every issue cluster in Dapper's backlog.&lt;/p&gt;

&lt;p&gt;Issue classification was performed with a reproducible script against the GitHub Issues API. The script separates bug-like issues from feature requests and questions, then assigns each a primary category by heuristic matching. The script and the generated CSVs are in the repository, so the classification can be audited and sampled.&lt;/p&gt;

&lt;p&gt;Of Dapper's 464 open issues, 270 classify as bug-like under this analysis (as of April 21, 2026). Here's how they map to pengdows.crud's architecture:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Bug Cluster&lt;/th&gt;
&lt;th&gt;Open Bugs&lt;/th&gt;
&lt;th&gt;% of Bugs&lt;/th&gt;
&lt;th&gt;Outcome in pengdows.crud&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Parameters / type handling&lt;/td&gt;
&lt;td&gt;123&lt;/td&gt;
&lt;td&gt;45.6%&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Eliminated&lt;/strong&gt; — no global handler registry; explicit per-instance construction&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mapping / materialization&lt;/td&gt;
&lt;td&gt;68&lt;/td&gt;
&lt;td&gt;25.2%&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Fail-fast controlled&lt;/strong&gt; — explicit column mapping; throws on bad input&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Async / cancellation / lifetime&lt;/td&gt;
&lt;td&gt;35&lt;/td&gt;
&lt;td&gt;13.0%&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Eliminated&lt;/strong&gt; — caller never owns connection under any path&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Provider compatibility / dialect&lt;/td&gt;
&lt;td&gt;17&lt;/td&gt;
&lt;td&gt;6.3%&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Mitigated&lt;/strong&gt; — dialect layer centralizes; CI tests 11 databases&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Performance / caching / concurrency&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;2.6%&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Eliminated&lt;/strong&gt; — bounded caches; finite key spaces; no global state&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Diagnostics / docs / usability&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;0.4%&lt;/td&gt;
&lt;td&gt;Not applicable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Uncategorized&lt;/td&gt;
&lt;td&gt;19&lt;/td&gt;
&lt;td&gt;7.0%&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Nineteen bug-like issues (7.0%) fall outside these categories and are left uncategorized; they don't change the overall distribution.&lt;/p&gt;

&lt;p&gt;Roughly 86% of Dapper's open bugs fall into categories that are structurally eliminated or fail-fast controlled in pengdows.crud. Most of what remains is provider drift: issues that no abstraction layer can fully eliminate, only centralize and detect.&lt;/p&gt;




&lt;h2&gt;
  
  
  Structurally Eliminated
&lt;/h2&gt;

&lt;p&gt;These two categories cannot occur without violating pengdows.crud's invariants. They're not "handled well" — they're not expressible in the current design.&lt;/p&gt;

&lt;h3&gt;
  
  
  Connection / Reader Lifetime
&lt;/h3&gt;

&lt;p&gt;Dapper borrows your connection. Lifetime is your problem. When an async path throws mid-execution, what gets cleaned up depends on where the exception lands. Dapper cannot fix this without breaking its core contract — the extension method model assumes the caller owns the connection.&lt;/p&gt;

&lt;p&gt;In pengdows.crud, callers never work with a &lt;code&gt;DbConnection&lt;/code&gt; or &lt;code&gt;DbCommand&lt;/code&gt; directly — under any execution path.&lt;/p&gt;

&lt;p&gt;In normal execution, callers build SQL through &lt;code&gt;SqlContainer&lt;/code&gt; and call an execution method. Internally, the context acquires a &lt;code&gt;TrackedConnection&lt;/code&gt;, creates the command, executes, and runs cleanup in a &lt;code&gt;finally&lt;/code&gt; block. The caller never sees any of it. &lt;code&gt;SafeAsyncDisposableBase&lt;/code&gt; underlies every tracked type; &lt;code&gt;Interlocked.Exchange&lt;/code&gt; ensures idempotent disposal — double-dispose is a no-op, not a second cleanup pass.&lt;/p&gt;
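
&lt;p&gt;To make that concrete, here is a generic sketch of the acquire/execute/cleanup shape being described. This is not pengdows.crud's actual source, just the pattern: the provider objects live and die inside one method, and cleanup runs on every path.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;using System;
using System.Data.Common;
using System.Threading;
using System.Threading.Tasks;

// Generic illustration of library-owned connection lifetime (not the actual
// pengdows.crud internals): the DbConnection and DbCommand never escape this
// method, and disposal runs in finally regardless of how execution ends.
static async Task&amp;lt;int&amp;gt; ExecuteOwnedAsync(
    DbProviderFactory factory, string connectionString, string sql, CancellationToken ct)
{
    var conn = factory.CreateConnection()
               ?? throw new InvalidOperationException("Provider factory returned no connection.");
    conn.ConnectionString = connectionString;
    try
    {
        await conn.OpenAsync(ct);
        await using var cmd = conn.CreateCommand();
        cmd.CommandText = sql;
        return await cmd.ExecuteNonQueryAsync(ct);
    }
    finally
    {
        await conn.DisposeAsync();   // cleanup runs on every path, including cancellation
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;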

&lt;p&gt;The only externally visible streaming surface is &lt;code&gt;ITrackedReader&lt;/code&gt; — and even that is a controlled façade, not a raw provider object. &lt;code&gt;TrackedReader&lt;/code&gt; holds the connection lock for its entire read lifetime, owns command teardown, and auto-disposes when &lt;code&gt;Read()&lt;/code&gt; reaches EOF. The caller streams rows; pengdows.crud owns everything beneath.&lt;/p&gt;

&lt;p&gt;Transactions preserve the same invariant. &lt;code&gt;BeginTransactionAsync()&lt;/code&gt; returns an &lt;code&gt;ITransactionContext&lt;/code&gt;. Internally, a tracked connection is acquired, pinned, and held privately for the transaction's lifetime. The &lt;code&gt;ITransactionContext&lt;/code&gt; exposes commit, rollback, and savepoint semantics — not the connection. All SQL execution within the transaction still routes through &lt;code&gt;SqlContainer&lt;/code&gt; and the same internal acquisition path. On commit or rollback, cleanup runs in &lt;code&gt;finally&lt;/code&gt; regardless of outcome. The caller controls transaction outcome; pengdows.crud owns connection lifetime.&lt;/p&gt;
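
&lt;p&gt;At the call site that looks roughly like the sketch below. &lt;code&gt;BeginTransactionAsync&lt;/code&gt; comes from the description above; the container-creation, execution, commit, and rollback method names are assumptions for illustration, not confirmed API.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;using System.Threading;
using System.Threading.Tasks;
// plus the pengdows.crud namespace for DatabaseContext (exact namespace per the docs)

// Hedged sketch of the transaction surface described above. CreateSqlContainer,
// ExecuteNonQueryAsync, CommitAsync, and RollbackAsync are illustrative names;
// the point is that ITransactionContext never exposes a DbConnection.
static async Task RenameWidgetAsync(DatabaseContext context, CancellationToken ct)
{
    await using var tx = await context.BeginTransactionAsync();
    try
    {
        var sc = tx.CreateSqlContainer("UPDATE widgets SET name = 'x' WHERE id = 1");
        await sc.ExecuteNonQueryAsync(ct);
        await tx.CommitAsync(ct);
    }
    catch
    {
        await tx.RollbackAsync(ct);   // internal cleanup still runs in finally
        throw;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;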

&lt;p&gt;This invariant holds without exception: connection ownership is never representable at the API boundary. The class of bugs that requires the caller to be the connection lifecycle authority — leaked connections, orphaned commands, partial async cleanup, connection reuse after rollback — cannot occur due to caller misuse.&lt;/p&gt;

&lt;h3&gt;
  
  
  Parameter Naming / Collisions
&lt;/h3&gt;

&lt;p&gt;Dapper's parameter model is caller-controlled. You name parameters, you manage composition, you handle prefix conventions per provider. When you get that wrong — and composition bugs are easy — you get silent incorrect results or runtime errors with unhelpful messages.&lt;/p&gt;

&lt;p&gt;pengdows.crud uses deterministic, namespace-isolated naming for all generated parameters: &lt;code&gt;i0&lt;/code&gt;, &lt;code&gt;i1&lt;/code&gt; for INSERT values; &lt;code&gt;s0&lt;/code&gt;, &lt;code&gt;s1&lt;/code&gt; for UPDATE SET clauses; &lt;code&gt;w0&lt;/code&gt;, &lt;code&gt;w1&lt;/code&gt; for WHERE predicates; &lt;code&gt;v0&lt;/code&gt; for version columns. These namespaces don't collide by construction. Prefix stripping normalizes provider-specific prefixes (&lt;code&gt;@&lt;/code&gt;, &lt;code&gt;:&lt;/code&gt;, &lt;code&gt;?&lt;/code&gt;, &lt;code&gt;$&lt;/code&gt;) on input. Clone counters ensure copied containers get independent parameter sets.&lt;/p&gt;

&lt;p&gt;The parameter container is a custom &lt;code&gt;OrderedDictionary&amp;lt;string, DbParameter&amp;gt;&lt;/code&gt; — per-instance, ordered (critical for positional providers like older Oracle and ODBC drivers), not shared across threads. There is no global parameter state to corrupt.&lt;/p&gt;

&lt;p&gt;Composition collisions require the naming system to produce a collision. It cannot.&lt;/p&gt;




&lt;h2&gt;
  
  
  Controlled (Fail-Fast, Fully Tested)
&lt;/h2&gt;

&lt;p&gt;These categories aren't structurally impossible — provider behavior can still produce surprises — but pengdows.crud handles them explicitly with fail-fast semantics and comprehensive test coverage. The blast radius is contained.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cancellation Semantics
&lt;/h3&gt;

&lt;p&gt;Dapper's cancellation story is a retrofit. The synchronous-first design got async layered on top, and the seams show in open issues for missing &lt;code&gt;CancellationToken&lt;/code&gt; overloads and &lt;code&gt;OperationCanceledException&lt;/code&gt; being swallowed in certain paths.&lt;/p&gt;

&lt;p&gt;In pengdows.crud, cancellation tokens flow through both the semaphore acquisition layer and the execution layer. &lt;code&gt;OperationCanceledException&lt;/code&gt; is never swallowed. Every public async method has a &lt;code&gt;CancellationToken&lt;/code&gt; overload — this is a code review hard requirement, not a backlog item.&lt;/p&gt;
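
&lt;p&gt;A generic sketch of what "flows through both layers" means in practice. This is not the library's internals; it is the pattern: one token gates the slot wait and every provider call, and cancellation is rethrown rather than swallowed.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;using System.Data.Common;
using System.Threading;
using System.Threading.Tasks;

// Illustrative pattern only: the same CancellationToken is honored by the
// acquisition layer (the semaphore) and the execution layer (the provider),
// and OperationCanceledException is never caught and discarded.
static async Task ReadAllAsync(SemaphoreSlim slots, DbCommand cmd, CancellationToken ct)
{
    await slots.WaitAsync(ct);                     // acquisition honors ct
    try
    {
        await using var reader = await cmd.ExecuteReaderAsync(ct);
        while (await reader.ReadAsync(ct))         // execution honors ct
        {
            // map the row here
        }
    }
    finally
    {
        slots.Release();
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;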

&lt;p&gt;Provider behavior at the network level can still produce surprising cancellation timing (Npgsql and SqlClient behave differently under load). That becomes a provider problem, not an abstraction problem.&lt;/p&gt;

&lt;h3&gt;
  
  
  Null / Value Coercion
&lt;/h3&gt;

&lt;p&gt;Dapper has open issues where type coercion fails silently — a null becomes a default value, a boolean coerces to 0 or 1 depending on provider, and nothing throws. Silent defaults are the worst category of data bug because they corrupt data without raising an exception.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;TypeCoercionHelper&lt;/code&gt; throws on bad input. There are no silent defaults. The philosophy is fail-fast, not fail-silent.&lt;/p&gt;

&lt;p&gt;Edge cases remain: &lt;code&gt;DBNull&lt;/code&gt;, driver-specific structs, JSON column handling. These aren't eliminated, but they fail loudly so you know immediately where and why.&lt;/p&gt;

&lt;h3&gt;
  
  
  IN-List Expansion
&lt;/h3&gt;

&lt;p&gt;Dapper's &lt;code&gt;WHERE id IN (@ids)&lt;/code&gt; handling is one of its most-reported problem areas: empty collections generating invalid SQL, NULL semantics ambiguity, and query plan instability from variable-length parameter lists.&lt;/p&gt;

&lt;p&gt;Empty collections are rejected explicitly. NULL semantics are handled correctly. On PostgreSQL, expansion uses &lt;code&gt;ANY(@param)&lt;/code&gt; with a native array — one parameter, correct semantics, stable query plan. PostgreSQL's query planner caches plans by parameter count, so a 5-element list and a 6-element list produce different plan cache entries; &lt;code&gt;ANY(@param)&lt;/code&gt; sidesteps this entirely. For other providers, parameter lists use power-of-2 bucketing (round up to 1, 2, 4, 8, 16...) to limit plan cache pollution.&lt;/p&gt;
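
&lt;p&gt;For the PostgreSQL case, the difference is easy to see with plain Npgsql. A sketch, with a placeholder connection string:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;using Npgsql;

// One native array parameter, one stable plan, regardless of list length.
var connString = "Host=localhost;Database=demo";   // placeholder
await using var conn = new NpgsqlConnection(connString);
await conn.OpenAsync();

await using var cmd = new NpgsqlCommand(
    "SELECT id, name FROM widgets WHERE id = ANY(@ids)", conn);
cmd.Parameters.AddWithValue("ids", new[] { 3, 17, 42 });   // int[] maps to integer[]

await using var reader = await cmd.ExecuteReaderAsync();
while (await reader.ReadAsync())
{
    // consume rows
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;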

&lt;p&gt;Parameter limits are not an edge case left to the provider. Every dialect declares a hard ceiling as part of its contract — PostgreSQL at 32,767, SQLite at 999, MySQL/MariaDB, Oracle, and DuckDB at 65,535. During command materialization, pengdows.crud checks &lt;code&gt;_parameters.Count&lt;/code&gt; against &lt;code&gt;_context.MaxParameterLimit&lt;/code&gt;. If the limit is exceeded, execution is blocked and &lt;code&gt;InvalidOperationException&lt;/code&gt; is thrown, naming both the limit and the database product — before a connection is opened, before a single byte reaches the server.&lt;/p&gt;
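
&lt;p&gt;The guard itself is only a few lines. A sketch of the idea; the method wrapper and the product-name argument are illustrative rather than the library's real signature:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;using System;

// Illustrative sketch of the construction-time check described above:
// fail before a connection is opened, naming the limit and the database.
static void EnsureUnderParameterLimit(int parameterCount, int maxParameterLimit, string product)
{
    if (parameterCount &amp;gt; maxParameterLimit)
    {
        throw new InvalidOperationException(
            $"Query uses {parameterCount} parameters but {product} allows at most " +
            $"{maxParameterLimit}. Use a temp table, bulk insert, or join instead.");
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;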

&lt;p&gt;Dapper expands the list and lets the provider fail. pengdows.crud fails at construction time with a message that tells you exactly what went wrong and on which database.&lt;/p&gt;

&lt;p&gt;That said, "enforced" doesn't mean "efficient." A collection of 50,000 IDs still produces a bad query shape regardless of how cleanly the limit is handled. At that scale the right answer is a temp table, a bulk insert, or a join — not an expanded IN-list. pengdows.crud catches the limit violation; it doesn't rewrite your query strategy for you.&lt;/p&gt;




&lt;h2&gt;
  
  
  Mitigated (Isolated, Not Eliminated)
&lt;/h2&gt;

&lt;p&gt;These categories carry real residual risk. pengdows.crud centralizes and contains exposure, but cannot make external behavior deterministic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Provider-Specific Quirks and Version Drift
&lt;/h3&gt;

&lt;p&gt;No library eliminates provider bugs. pengdows.crud's &lt;code&gt;RemapDbType()&lt;/code&gt; handles type remapping per provider. &lt;code&gt;GuidStorageFormat&lt;/code&gt; handles the fact that Oracle, MySQL, and SQL Server all store GUIDs differently. &lt;code&gt;AdvancedTypeRegistry&lt;/code&gt; handles provider-specific type edge cases. &lt;code&gt;MakeParameterName()&lt;/code&gt; and &lt;code&gt;WrapObjectName()&lt;/code&gt; own their respective concerns rather than delegating to callers.&lt;/p&gt;

&lt;p&gt;The real win is centralization: when a provider changes behavior, one place needs to change. 11-database Testcontainers integration tests in CI make drift detectable. The TiDB dialect has a comment noting a &lt;code&gt;MySql.Data&lt;/code&gt; prepare-statement incompatibility with no version numbers or upstream issue link — that's the visible symptom of this category. Version drift happens; the question is whether you find it in CI or in production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The accurate claim: provider bugs are isolated and test-detectable, not impossible.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Metadata Caching
&lt;/h3&gt;

&lt;p&gt;Dapper's global static &lt;code&gt;ConcurrentDictionary&lt;/code&gt; caches compiled deserializers keyed by arbitrary SQL strings. Two problems: global scope means cross-query contamination, and the key space is unbounded.&lt;/p&gt;

&lt;p&gt;pengdows.crud uses a different architecture. &lt;code&gt;SqlContainer&lt;/code&gt; parameters use a per-instance custom &lt;code&gt;OrderedDictionary&lt;/code&gt; — nothing shared, nothing global. Query and parameter-name caches use &lt;code&gt;BoundedCache&lt;/code&gt; inside &lt;code&gt;ConcurrentDictionary&amp;lt;SupportedDatabase, BoundedCache&amp;lt;...&amp;gt;&amp;gt;&lt;/code&gt; — LRU eviction with 32–512 entry caps, keyed by a finite enum. Metadata registry uses &lt;code&gt;ConcurrentDictionary&amp;lt;Type, TableInfo&amp;gt;&lt;/code&gt; — keyed by entity &lt;code&gt;Type&lt;/code&gt;, which is finite in a loaded assembly, not by arbitrary SQL strings.&lt;/p&gt;

&lt;p&gt;Dapper's problem was global dictionaries keyed by arbitrary query strings. That pattern doesn't exist here. Unbounded growth isn't just "handled" — the key space design removes the growth vector.&lt;/p&gt;

&lt;p&gt;The residual risk is operational: &lt;code&gt;TypeMapRegistry&lt;/code&gt; entries live for the lifetime of the &lt;code&gt;DatabaseContext&lt;/code&gt; instance. If a schema changes during a rolling deploy, cached &lt;code&gt;TableInfo&lt;/code&gt; will not reflect it until the process restarts. There is no runtime invalidation. Each &lt;code&gt;DatabaseContext&lt;/code&gt; maintains its own isolated registry — there is no cross-context contamination — but within a context, pengdows.crud assumes schema stability for the process lifetime.&lt;/p&gt;

&lt;h3&gt;
  
  
  Observability
&lt;/h3&gt;

&lt;p&gt;Dapper's logging extensibility is one of its most-requested missing features. pengdows.crud has built-in structured observability. The notable design decision: parameter values are deliberately never logged.&lt;/p&gt;

&lt;p&gt;That's the right security default — logging parameter values is how credentials end up in log aggregators and PII ends up in SIEM systems. Command text, timing, and execution metadata are captured. Values stay out of the log.&lt;/p&gt;

&lt;p&gt;The tradeoff is real: debugging parameter-specific issues requires reproduction in a test harness, not log inspection. You cannot read a log and see what value was passed. That's the cost of the security boundary, and it's deliberate.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Accurate Summary
&lt;/h2&gt;

&lt;p&gt;Here's what the audit actually established:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;State&lt;/th&gt;
&lt;th&gt;Categories&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Eliminated&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Connection/reader lifetime ownership; parameter naming collisions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Controlled&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Cancellation semantics; null/value coercion; IN-list expansion&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Mitigated&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Provider quirks; version drift; metadata staleness; observability tradeoffs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The strongest claim — and it holds — is this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;pengdows.crud removes caller-induced failure modes. It does not remove provider-induced failure modes.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Dapper's design pushes connection lifetime, parameter naming, SQL construction discipline, and transaction scoping onto the developer. That was a deliberate choice in service of call-site elegance, and it was coherent. pengdows.crud was independently designed around the opposite constraint: those concerns belong to pengdows.crud, not the caller.&lt;/p&gt;

&lt;p&gt;Most of the bugs in Dapper's 464-issue backlog exist because the caller was handed responsibility the library didn't keep. When the caller owns connection lifetime, callers leak connections. When the caller names parameters, callers create collisions. When the library provides thin provider abstraction, provider differences become caller bugs.&lt;/p&gt;

&lt;p&gt;pengdows.crud owns those responsibilities. So those caller-induced bugs don't have a place to live.&lt;/p&gt;

&lt;p&gt;The database is still external. Providers still have bugs. Schema still changes. Those are real risks and this article doesn't pretend otherwise — the Mitigated category exists for exactly that reason.&lt;/p&gt;

&lt;p&gt;But Dapper's backlog is not pengdows.crud's backlog. The failure modes are different because the responsibilities were never handed to the caller in the first place.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;pengdows.crud is a SQL-first, strongly-typed data access layer for .NET 8+ supporting 12 databases with full connection lifecycle management, explicit parameter construction, and dialect-native SQL generation. &lt;a href="https://www.nuget.org/packages/pengdows.crud" rel="noopener noreferrer"&gt;NuGet&lt;/a&gt; | &lt;a href="https://github.com/pengdows/pengdows.crud" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>dapper</category>
      <category>adonet</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Your Data Access Layer Doesn't Understand Databases</title>
      <dc:creator>Pengdows LLC</dc:creator>
      <pubDate>Sun, 29 Mar 2026 19:14:24 +0000</pubDate>
      <link>https://dev.to/pengdows/your-data-access-layer-doent-understand-databases-33jc</link>
      <guid>https://dev.to/pengdows/your-data-access-layer-doent-understand-databases-33jc</guid>
      <description>&lt;p&gt;Here's what nobody in the data access space wants to admit: the tools built to simplify database work have quietly offloaded the hardest parts back onto your application. Not by accident — by design. They model a pleasant fiction of what a database is, and when reality diverges, you pay for it in conditionals, workarounds, retries, and production incidents.&lt;/p&gt;

&lt;p&gt;This is not a complaint about generated SQL. SQL quality is a separate argument, and an old one. The problem runs deeper.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Connection behavior is not a performance concern. It is a correctness concern.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most data access layers don't model that. Your application does — in feature flags, special cases, and debugging sessions you didn't budget for.&lt;/p&gt;

&lt;p&gt;Connection lifetime, concurrency, and identity are not independent concerns. They are the database.&lt;/p&gt;




&lt;h2&gt;
  
  
  Not all databases are the same machine
&lt;/h2&gt;

&lt;p&gt;Most data access libraries are built around one mental model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The server is already running&lt;/li&gt;
&lt;li&gt;You connect&lt;/li&gt;
&lt;li&gt;You execute commands&lt;/li&gt;
&lt;li&gt;You commit or roll back&lt;/li&gt;
&lt;li&gt;You disconnect&lt;/li&gt;
&lt;li&gt;The server waits for the next client&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;SQL Server, PostgreSQL, Oracle, and MySQL broadly fit that model. For those databases, the standard approach works well enough that the cracks stay hidden for a while.&lt;/p&gt;

&lt;p&gt;But not every database works that way.&lt;/p&gt;




&lt;h2&gt;
  
  
  In-memory embedded databases
&lt;/h2&gt;

&lt;p&gt;SQLite and DuckDB in &lt;code&gt;:memory:&lt;/code&gt; mode have a fundamental behavioral difference that most abstractions ignore entirely: &lt;strong&gt;the database is the connection&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Open one connection, you have a database. Open a second connection, you don't get another connection to the same database. You get a different database. Empty. Gone.&lt;/p&gt;

&lt;p&gt;That's not a quirk. That's the operational model.&lt;/p&gt;

&lt;p&gt;A library built around "open and close connections freely per operation" will silently destroy your in-memory database between calls. No error. No warning. Just an empty database where your data used to be.&lt;/p&gt;
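
&lt;p&gt;You can watch this happen in a dozen lines with Microsoft.Data.Sqlite. A sketch, but the behavior is real:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;using Microsoft.Data.Sqlite;

using var first = new SqliteConnection("Data Source=:memory:");
first.Open();
using (var create = first.CreateCommand())
{
    create.CommandText = "CREATE TABLE t (x INTEGER); INSERT INTO t VALUES (1);";
    create.ExecuteNonQuery();
}

// A second connection with the same connection string is NOT a second
// connection to the same database. It is a brand-new, empty database.
using var second = new SqliteConnection("Data Source=:memory:");
second.Open();
using var query = second.CreateCommand();
query.CommandText = "SELECT count(*) FROM t";
query.ExecuteScalar();   // throws SqliteException: no such table: t
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;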




&lt;h2&gt;
  
  
  File-based embedded databases
&lt;/h2&gt;

&lt;p&gt;File-based SQLite and DuckDB are not miniature SQL Servers. Their write behavior is fundamentally different:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One writer at a time, enforced at the engine level&lt;/li&gt;
&lt;li&gt;Concurrent write attempts result in lock contention, busy timeouts, and failures&lt;/li&gt;
&lt;li&gt;The journal mode and transaction settings interact with connection behavior in ways that bite you if you get them wrong&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The standard advice — "open a connection, run a command, close it, repeat" — actively works against you here. Under concurrent load you'll see lock errors. Under sequential load you'll see busy timeouts. The "simple" connection pattern creates exactly the contention the database can't handle.&lt;/p&gt;

&lt;p&gt;This is so commonly mishandled that most dedicated SQLite tooling gets it wrong too.&lt;/p&gt;




&lt;h2&gt;
  
  
  LocalDB
&lt;/h2&gt;

&lt;p&gt;Microsoft's LocalDB was a genuinely good idea: full SQL Server semantics, no server install, attach a file and go. Great for local development and testing.&lt;/p&gt;

&lt;p&gt;Until idle unload enters the picture.&lt;/p&gt;

&lt;p&gt;If no connection is held for a configurable period, LocalDB unloads the database. Not an error — an unload. Your next operation reconnects and reattaches, which adds latency and can cause failures during test runs where operations are spaced out.&lt;/p&gt;

&lt;p&gt;The fix is a sentinel connection: one persistent connection held open specifically to prevent idle unload. Not for running queries. Just to keep the database alive.&lt;/p&gt;
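
&lt;p&gt;In raw ADO.NET terms the sentinel is almost embarrassingly simple. A sketch, with placeholder server and database names:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;using Microsoft.Data.SqlClient;

// Sketch: one connection opened at startup and held for the life of the
// process. It never runs queries; it exists only so LocalDB's idle-unload
// timer never fires. Dispose it at shutdown, not before.
var sentinel = new SqlConnection(
    @"Server=(localdb)\MSSQLLocalDB;Database=MyTests;Integrated Security=true");
sentinel.Open();

// ... the rest of the app or test suite uses ordinary pooled connections ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;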

&lt;p&gt;No mainstream data access library models this. You find out about it from a Stack Overflow answer at 11pm.&lt;/p&gt;




&lt;h2&gt;
  
  
  What the libraries actually do
&lt;/h2&gt;

&lt;p&gt;This is not speculation. Here's what each major option in the .NET ecosystem actually models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Entity Framework Core&lt;/strong&gt; opens and closes connections per query and per &lt;code&gt;SaveChanges()&lt;/code&gt; call by default. It delegates pooling entirely to the ADO.NET provider — EF has no awareness of pool pressure or saturation. When you use an in-memory SQLite database with EF, the official workaround documented by Microsoft is to manually open the connection on the &lt;code&gt;DbContext&lt;/code&gt; and hold it open for the lifetime of the context. The developer absorbs the problem the library doesn't solve. There is no connection policy. There is a workaround in the docs.&lt;/p&gt;
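
&lt;p&gt;For reference, the documented workaround looks like this, with &lt;code&gt;MyDbContext&lt;/code&gt; standing in for your own context type:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;using Microsoft.Data.Sqlite;
using Microsoft.EntityFrameworkCore;

// The documented EF Core pattern for in-memory SQLite: the developer opens
// the connection and keeps it open for the lifetime of the context, because
// EF will not hold open a connection it did not create.
var connection = new SqliteConnection("Data Source=:memory:");
connection.Open();   // you own this now

var options = new DbContextOptionsBuilder&amp;lt;MyDbContext&amp;gt;()
    .UseSqlite(connection)
    .Options;

using var db = new MyDbContext(options);
db.Database.EnsureCreated();
// The connection must stay open until you are done with the database;
// dispose it yourself when the test or application ends.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;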

&lt;p&gt;&lt;strong&gt;NHibernate&lt;/strong&gt; ties connection lifetime to the &lt;code&gt;ISession&lt;/code&gt;. It has a more explicit unit-of-work model than EF, which helps with some lifetime issues, but the underlying assumption is unchanged: stable, server-style backend, many connections available, connection lifetime is a performance concern. No single-writer enforcement, no keepalive, no concept of connection-bound database identity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dapper&lt;/strong&gt; doesn't touch connection lifetime at all. You open it, you pass it in, Dapper runs the command, you close it. That's the design — deliberately minimal. It's not a flaw in Dapper. But it means connection policy is 100% on the caller, every time, with no structure and no guardrails.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Raw ADO.NET&lt;/strong&gt; is the same. The documented guidance is to use &lt;code&gt;using&lt;/code&gt; blocks and dispose promptly. That's the policy: a coding convention. Not a library feature, not an enforced invariant — a convention you either follow or don't.&lt;/p&gt;

&lt;p&gt;None of them model connection policy as a function of database behavior. They model lifetime. They do not model constraints. Connection policy is either delegated to the provider, left to convention, or documented as a workaround.&lt;/p&gt;




&lt;h2&gt;
  
  
  The connection-per-command trap
&lt;/h2&gt;

&lt;p&gt;Standard advice across almost every data access library is "use one connection per command." Open, execute, consume results, close. Your DBA will approve. Your server database will be healthier.&lt;/p&gt;

&lt;p&gt;That advice has real merit for server databases. It reduces leaks. It shortens connection lifetime. It keeps the pool healthy under normal load.&lt;/p&gt;

&lt;p&gt;But follow it everywhere and you create three new problems:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You can't use in-memory databases for testing.&lt;/strong&gt; One connection per command means a new empty database on every operation. Your tests pass against nothing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;File-based embedded databases become unreliable.&lt;/strong&gt; The write serialization problem doesn't go away because you're closing connections quickly — it gets worse, because you're now racing to reacquire write locks constantly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LocalDB unloads between operations.&lt;/strong&gt; No persistent connection means no sentinel. Your test suite reconnects and reattaches on every run. Or fails partway through when the unload window closes before your next operation.&lt;/p&gt;

&lt;p&gt;So you add feature flags. Database-specific code paths. If-SQLite-do-this, if-LocalDB-do-that. Your business logic now contains your database topology.&lt;/p&gt;

&lt;p&gt;That is the abstraction failing at its core job.&lt;/p&gt;




&lt;h2&gt;
  
  
  Pool saturation: when "doing it right" still breaks
&lt;/h2&gt;

&lt;p&gt;Say you got through all of the above. Feature flags in place. Connection handling tuned per database type. DBA is satisfied, cloud bill is down, everything works.&lt;/p&gt;

&lt;p&gt;Then you get a traffic spike.&lt;/p&gt;

&lt;p&gt;Requests pile up. Each one opens a connection. The pool has a maximum size — every pool does. Requests start waiting. Wait times compound. At this point you don't get slow queries — you get connection acquisition failures. SQL Server throws &lt;code&gt;Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool&lt;/code&gt;. PostgreSQL reports &lt;code&gt;remaining connection slots are reserved&lt;/code&gt;. SQLite escalates busy errors into lock failures.&lt;/p&gt;

&lt;p&gt;You need something that governs connections. Not just "open and close promptly" — actual enforcement: when the pool is full, make everything else wait. Bounded concurrency tied to pool size, not just etiquette.&lt;/p&gt;

&lt;p&gt;None of the libraries above have this. ADO.NET pooling will queue and timeout, but it doesn't expose the control surface you need to shape behavior under load. You find out your limits when production hits them.&lt;/p&gt;
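
&lt;p&gt;Here is what enforcement looks like in miniature: a gate sized to the pool, so a spike queues in-process instead of exhausting connections. A generic sketch, not pengdows.crud's governor:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;using System;
using System.Threading;
using System.Threading.Tasks;

// Generic sketch: bound in-flight database work to the pool size so excess
// requests wait in an orderly queue instead of hitting pool timeouts.
sealed class ConnectionGate
{
    private readonly SemaphoreSlim _slots;

    public ConnectionGate(int maxPoolSize) =&amp;gt; _slots = new SemaphoreSlim(maxPoolSize);

    public async Task&amp;lt;T&amp;gt; RunAsync&amp;lt;T&amp;gt;(Func&amp;lt;Task&amp;lt;T&amp;gt;&amp;gt; work, CancellationToken ct)
    {
        await _slots.WaitAsync(ct);   // waits, honoring cancellation, when the gate is full
        try { return await work(); }
        finally { _slots.Release(); }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;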




&lt;h2&gt;
  
  
  What you actually need: routing by intent
&lt;/h2&gt;

&lt;p&gt;Databases already differentiate reads and writes. Your access layer doesn't.&lt;/p&gt;

&lt;p&gt;That missing distinction has consequences. If the layer exposed it, it could enforce:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read-only sessions routed to a read replica, drawing from a separate pool&lt;/li&gt;
&lt;li&gt;Write operations serialized where the database requires it&lt;/li&gt;
&lt;li&gt;Each pool governed independently, sized to its workload&lt;/li&gt;
&lt;li&gt;For embedded databases, the write limit set to one — enforced by the layer, not scattered across application code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This isn't new capability. It's the modeling that's missing. Databases have always had these constraints. The access layer just never encoded them.&lt;/p&gt;




&lt;h2&gt;
  
  
  What modeling it correctly looks like
&lt;/h2&gt;

&lt;p&gt;Connection behavior is a property of the database. It belongs in the data access layer, encoded explicitly, not left to convention or absorbed by the application.&lt;/p&gt;

&lt;p&gt;Here's what that looks like in practice. pengdows.crud exposes connection policy as a first-class configuration decision through &lt;code&gt;DbMode&lt;/code&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Mode&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;th&gt;When to use it&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Standard&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Pool per operation, open late, close early&lt;/td&gt;
&lt;td&gt;Server databases: SQL Server, PostgreSQL, Oracle, MySQL&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;KeepAlive&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Holds a sentinel connection to prevent idle unload&lt;/td&gt;
&lt;td&gt;LocalDB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;SingleWriter&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Many concurrent readers, one serialized writer&lt;/td&gt;
&lt;td&gt;File-based SQLite, file-based DuckDB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;SingleConnection&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;One connection, period&lt;/td&gt;
&lt;td&gt;In-memory &lt;code&gt;:memory:&lt;/code&gt; databases&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Best&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Auto-selects the correct mode based on database type&lt;/td&gt;
&lt;td&gt;When you want the right answer without thinking about it&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;code&gt;Best&lt;/code&gt; maps automatically: SQLite/DuckDB &lt;code&gt;:memory:&lt;/code&gt; gets &lt;code&gt;SingleConnection&lt;/code&gt;; file-based SQLite/DuckDB gets &lt;code&gt;SingleWriter&lt;/code&gt;; LocalDB gets &lt;code&gt;KeepAlive&lt;/code&gt;; everything else gets &lt;code&gt;Standard&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Configuration looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// File-based SQLite — enforces single writer, concurrent readers&lt;/span&gt;
&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;DatabaseContext&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;DatabaseContextConfiguration&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;ConnectionString&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;connStr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;DbMode&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;DbMode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SingleWriter&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="n"&gt;SqliteFactory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Instance&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Or let the library decide&lt;/span&gt;
&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;DatabaseContext&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;connStr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"Microsoft.Data.SqlClient"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;DbMode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Best&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Correctness is invariant. Performance is tunable. No conditionals. No feature flags. No if-SQLite-else-SqlServer code paths. The layer absorbs the difference because it models the difference.&lt;/p&gt;

&lt;p&gt;Pool governance is built in separately from DbMode — a turnstile-based reader/writer governor with bounded permits, drain support, and a telemetry snapshot. When you're at pool capacity, operations wait in an orderly queue rather than failing. Under contention, the system degrades predictably instead of falling off a cliff.&lt;/p&gt;

&lt;h2&gt;
  
  
  The real failure
&lt;/h2&gt;

&lt;p&gt;Writing your own SQL does not fix a data access layer that has no opinion about connection policy.&lt;/p&gt;

&lt;p&gt;You can drop EF entirely. You can use Dapper. You can write raw ADO.NET. The connection problem follows you. Because the problem isn't the SQL layer — it's the operational model underneath it.&lt;/p&gt;

&lt;p&gt;The databases have rules. Different rules, depending on the database. Those rules affect correctness, not just performance. A library that doesn't model them pushes that complexity into your application, where it's harder to see, harder to test, and easier to get wrong.&lt;/p&gt;

&lt;p&gt;Most data access layers don't understand databases.&lt;/p&gt;

&lt;p&gt;They model the happy path of one class of database.&lt;/p&gt;

&lt;p&gt;Everything else is your problem.&lt;/p&gt;

</description>
      <category>database</category>
      <category>dotnet</category>
      <category>entityframework</category>
      <category>performance</category>
    </item>
    <item>
      <title>Hangfire Had a DB Support Problem. I Fixed It. You're Welcome.</title>
      <dc:creator>Pengdows LLC</dc:creator>
      <pubDate>Fri, 27 Mar 2026 19:35:02 +0000</pubDate>
      <link>https://dev.to/pengdows/hangfire-had-a-db-support-problem-i-fixed-it-youre-welcome-2hd9</link>
      <guid>https://dev.to/pengdows/hangfire-had-a-db-support-problem-i-fixed-it-youre-welcome-2hd9</guid>
      <description>&lt;p&gt;HangFire proper is hugely popular but the official implementation only supports Microsoft SQL Server. They could have made it much more database independent, but didn't. I suspect the original authors were scratching an itch. Hangfire feels to me like someone who either didn't want to use scheduled tasks/cron jobs, or didn't know how. So they wrote something that would act as a cron job inside their code. This led them to only support their environment — they solved their problem for them. Nothing wrong with that — pengdows.crud is also scratching an itch. That doesn't make software bad. It means it wasn't born of careful planning to support everything from day one, it is "I need this, it doesn't exist, let me write it." The difference is the itch. Hangfire's itch was "I need a job scheduler in my environment." pengdows.crud's itch was "nobody handles cross-database work in .NET correctly." One of those itches produces a single-database solution. The other one doesn't.&lt;/p&gt;

&lt;p&gt;Let's be honest about what the two biggest problems with Hangfire actually are. First, connection handling — pool exhaustion under spike load, connection leaks, and single-writer databases like SQLite falling apart under write contention. Second, lock contention — distributed lock implementations that produce violations, deadlocks, or overlapping ownership under burst load. I am going to tell you I solved both. Then I am going to show you the receipts.&lt;/p&gt;

&lt;p&gt;When other people started extending Hangfire to work with their databases, they had to re-solve problems that had already been solved elsewhere. The road to hell is paved with good intentions, and the Hangfire storage ecosystem is well-paved. Here is what the community has produced:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Database&lt;/th&gt;
&lt;th&gt;Package&lt;/th&gt;
&lt;th&gt;Status&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SQL Server&lt;/td&gt;
&lt;td&gt;Hangfire.SqlServer (official)&lt;/td&gt;
&lt;td&gt;✅ Maintained&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PostgreSQL&lt;/td&gt;
&lt;td&gt;Hangfire.PostgreSql&lt;/td&gt;
&lt;td&gt;✅ Maintained&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MySQL&lt;/td&gt;
&lt;td&gt;Hangfire.MySqlStorage&lt;/td&gt;
&lt;td&gt;⚠️ Known deadlock bugs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MySQL&lt;/td&gt;
&lt;td&gt;Hangfire.Storage.MySql&lt;/td&gt;
&lt;td&gt;⚠️ Fork created to fix the above, stuck in beta, author moved on&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SQLite&lt;/td&gt;
&lt;td&gt;Hangfire.Storage.SQLite&lt;/td&gt;
&lt;td&gt;⚠️ Alive but threadbare&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Firebird&lt;/td&gt;
&lt;td&gt;Hangfire.Firebird&lt;/td&gt;
&lt;td&gt;☠️ Dead since 2015&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Oracle&lt;/td&gt;
&lt;td&gt;Hangfire.FluentNHibernateStorage&lt;/td&gt;
&lt;td&gt;☠️ Via NHibernate, last real work 2022, SQLite/Access/SQL CE broken by its own README&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dameng&lt;/td&gt;
&lt;td&gt;DMStorage.Hangfire&lt;/td&gt;
&lt;td&gt;❓ Chinese ecosystem only&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;That isn't an edge case; it is systemic. MySQL has two packages because the first one had deadlock bugs bad enough that someone forked it. The fork has been stuck in beta for years. The author's own README says he got fed up and wrote a different scheduler entirely. The NHibernate-backed provider claims to support six databases — its own README admits that SQLite, MS Access, and SQL Server Compact "proved not to work" and there's no plan to fix them. As a side note, SQL CE needs exactly the kind of careful single-writer handling most people never get right; that is the problem my SingleWriter mode exists to solve. Firebird hasn't been touched since 2015. These packages still install. Whether they behave correctly against a current version of their target database is a question you get to answer in production.&lt;/p&gt;




&lt;h2&gt;
  
  
  The SQLite Detour
&lt;/h2&gt;

&lt;p&gt;So when I started readying pengdows.crud 2.0, I really wanted to show off my new SingleWriter mode. If you aren't familiar with pengdows.crud, that's OK, go check it out on GitHub (&lt;a href="https://github.com/pengdows/pengdows.crud" rel="noopener noreferrer"&gt;https://github.com/pengdows/pengdows.crud&lt;/a&gt;). It has five connection modes, reflecting the reality of database behavior:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SingleConnection&lt;/strong&gt; — all database work has to be done via a single connection. This is the mode that SQLite and DuckDB :memory: databases have to use. If you open a second connection, you literally get a new, empty database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SingleWriter&lt;/strong&gt; — these databases can have as many readers as you like (within reason), but all writes have to be serialized. If two connections try to write simultaneously you either end up with a corrupted database or it throws "database is busy" and one of them fails. Neither is acceptable in production.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;KeepAlive&lt;/strong&gt; — created for SQL Server LocalDB, to keep the server alive. I hold 1 connection open for the life of the app. That connection is never used for anything other than keeping the server alive.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Standard&lt;/strong&gt; — my default mode. Open a connection, run your SQL, consume the results, close the connection the moment it is available to be closed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Best&lt;/strong&gt; — choose the best one for the DB you are using.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;pengdows.crud lets you change how your app interacts with your database without changing your code. With 2.0 I reworked the internals, and that significantly improved SingleWriter mode — only one writer is allowed at a time, and it is not a pinned connection; writes are controlled through a governor. In short, this makes DuckDB and SQLite safe to use under moderate write load, and a turnstile prevents writer starvation.&lt;/p&gt;

&lt;p&gt;With this new feature, I was really just looking for a SQLite project to rewrite and show it off. Hangfire is notorious for pool exhaustion. If I can make SQLite work reliably under Hangfire that helps Hangfire users, helps developers, and shows off what pengdows.crud can do.&lt;/p&gt;

&lt;p&gt;When I started really digging in, though, I found that it was not written as I expected, and a lot of the other storage providers seem to be suffering from code rot and abandonment. I thought to myself — well, pengdows.crud supports 14 relational database systems, so let's write it once. I whipped out pengdows.poco.mint (available either as a NuGet command line tool or a self-contained WebUI Docker image), spun up a SQL Server container, ran the scripts to get POCOs based on the tables, and got to work.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Lock Problem
&lt;/h2&gt;

&lt;p&gt;As I said, Hangfire feels like a scratch-an-itch project meant for one environment. Evidence of this showed up right here. The SQL Server code uses a SQL Server-only stored procedure to keep other processes from grabbing a task. So every other storage implementation has to solve this problem in a new way. It wasn't even necessary for SQLite since it only allows one writer, but the author created a table named "Lock" to solve the problem anyway — so I borrowed that approach. "Lock" is a reserved word in several of my supported databases, so I renamed it to "hf_lock" and moved on. It is not as fast as the stored procedure on SQL Server, but it is uniform across all 13 supported databases, and the benchmarks show it.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's Supported and What Isn't
&lt;/h2&gt;

&lt;p&gt;pengdows.hangfire supports 13 databases. Here is how that stacks up against what existed before:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Database&lt;/th&gt;
&lt;th&gt;Had Support Before&lt;/th&gt;
&lt;th&gt;pengdows.hangfire&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SQL Server&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PostgreSQL&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MySQL&lt;/td&gt;
&lt;td&gt;✅ ⚠️&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SQLite&lt;/td&gt;
&lt;td&gt;✅ ⚠️&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Firebird&lt;/td&gt;
&lt;td&gt;✅ ☠️&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Oracle&lt;/td&gt;
&lt;td&gt;✅ ☠️&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MariaDB&lt;/td&gt;
&lt;td&gt;❌ Never&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CockroachDB&lt;/td&gt;
&lt;td&gt;❌ Never&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DuckDB&lt;/td&gt;
&lt;td&gt;❌ Never&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;YugabyteDB&lt;/td&gt;
&lt;td&gt;❌ Never&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TiDB&lt;/td&gt;
&lt;td&gt;❌ Never&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Aurora MySQL&lt;/td&gt;
&lt;td&gt;❌ Never&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Aurora PostgreSQL&lt;/td&gt;
&lt;td&gt;❌ Never&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Seven databases that have never had Hangfire storage support before. The ones that did exist either had known bugs, were forks of broken packages, or hadn't been touched in a decade.&lt;/p&gt;

&lt;p&gt;Two databases from the pengdows.crud supported list aren't in pengdows.hangfire. Dameng has no Docker image to test against and no modern .NET provider. When those exist, adding it is trivial — the door is open. Snowflake is a different problem entirely. pengdows.crud includes a Snowflake dialect, but Snowflake is designed for analytical workloads — columnar storage, warehouse-level concurrency, high per-query latency. Hangfire needs row-level locking, low-latency queue polling, high-frequency small writes and deletes, and reliable distributed lock semantics. These requirements are fundamentally at odds with Snowflake's architecture. It isn't a missing SQL feature, it's the wrong database for the workload. If that changes, the support will follow.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Receipts
&lt;/h2&gt;

&lt;p&gt;Here is what I said I would show you.&lt;/p&gt;

&lt;p&gt;111 abstract facts, instantiated across 11 databases via Testcontainers, producing 1,204 concrete test cases. Real engines, not mocks. Connection behavior, transaction mutations, expiration cleanup, counter aggregation, queue fetch/claim/ack — covered.&lt;/p&gt;

&lt;p&gt;That handles correctness. For contention there is a separate stress suite, and that is where it gets interesting.&lt;/p&gt;

&lt;p&gt;200 workers. One resource. All contending simultaneously. Required invariants: zero ownership violations, zero interval overlaps, max concurrent owners never exceeding 1. That test passes against SQL Server at full pool size. It passes again with pool size forced down to 40 — where timeouts and pool rejections are expected but successful acquisitions still cannot overlap. It passes with 200 workers spread across 20 resources with 80% of traffic on 2 hot keys. SQLite and DuckDB each have their own 200-worker SingleWriter variants. All pass.&lt;/p&gt;

&lt;p&gt;Correctness is verified three ways: live CAS-style ownership conflicts, max concurrent holders per resource, and post-run overlap analysis on recorded hold intervals. You cannot get a false pass on one check and slip through — all three have to agree.&lt;/p&gt;

&lt;p&gt;There is also a crash path. Process killed mid-lock, TTL set to 15 seconds — the dead lock was stolen and cleaned up in 14.8 seconds. Lock rows do not orphan.&lt;/p&gt;

&lt;p&gt;The MySQL fork in the ecosystem exists because the original had deadlock bugs under burst load. The fork's author eventually gave up and wrote a different scheduler. These tests are the direct answer to that problem. 40 passed. 0 failed.&lt;/p&gt;

&lt;p&gt;These are not one-off validation runs. The unit tests, integration tests, and stress suite are part of the open source library. They run on every change. If you pull the repo, you get the tests.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why I Built This
&lt;/h2&gt;

&lt;p&gt;So yeah, I wrote this for three reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Help Hangfire users — you now have 13 databases to choose from, connection handling that doesn't fall apart under load, and lock contention that has been stress tested to 200 concurrent workers.&lt;/li&gt;
&lt;li&gt;Help Hangfire developers — pool saturation and lock violations are solved problems if you use the right storage.&lt;/li&gt;
&lt;li&gt;Show off pengdows.crud and generate some interest.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This whole thing is me proving out my own scratch-an-itch software. pengdows.crud exists because nobody was handling cross-database work in .NET correctly. pengdows.hangfire exists because I needed a real-world, production-grade project to prove it works. Hangfire users get more database options, better connection handling, and a lock implementation that holds under pressure. The world gets a robust example of what pengdows.crud can actually do. That's not a bad outcome for an itch.&lt;/p&gt;

&lt;p&gt;The package is &lt;code&gt;pengdows.hangfire&lt;/code&gt;. It's on NuGet. Go kick the tires.&lt;/p&gt;
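
&lt;p&gt;Wiring it up should look roughly like this. The &lt;code&gt;UseStorage&lt;/code&gt; call is standard Hangfire; the storage type name and its constructor arguments below are placeholders, so check the package README for the real registration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;using System;
using Hangfire;

// Hedged sketch: GlobalConfiguration, UseStorage, and BackgroundJobServer are
// standard Hangfire; PengdowsStorage and its constructor are illustrative
// placeholders. See the pengdows.hangfire README for the actual type names.
var connectionString = "Host=localhost;Database=jobs";   // placeholder

GlobalConfiguration.Configuration
    .UseStorage(new PengdowsStorage(connectionString, "Npgsql"));

using var server = new BackgroundJobServer();
BackgroundJob.Enqueue(() =&amp;gt; Console.WriteLine("Hello from pengdows.hangfire"));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;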



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>hangfire</category>
      <category>multidatabase</category>
      <category>productivity</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
