<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Pengdows LLC</title>
    <description>The latest articles on DEV Community by Pengdows LLC (@pengdows).</description>
    <link>https://dev.to/pengdows</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3284737%2Fdf691ca5-fd7f-4a73-8452-a8fa17d594ab.png</url>
      <title>DEV Community: Pengdows LLC</title>
      <link>https://dev.to/pengdows</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/pengdows"/>
    <language>en</language>
    <item>
      <title>Your Data Access Layer Doesn't Understand Databases</title>
      <dc:creator>Pengdows LLC</dc:creator>
      <pubDate>Sun, 29 Mar 2026 19:14:24 +0000</pubDate>
      <link>https://dev.to/pengdows/your-data-access-layer-doent-understand-databases-33jc</link>
      <guid>https://dev.to/pengdows/your-data-access-layer-doent-understand-databases-33jc</guid>
      <description>&lt;p&gt;Here's what nobody in the data access space wants to admit: the tools built to simplify database work have quietly offloaded the hardest parts back onto your application. Not by accident — by design. They model a pleasant fiction of what a database is, and when reality diverges, you pay for it in conditionals, workarounds, retries, and production incidents.&lt;/p&gt;

&lt;p&gt;This is not a complaint about generated SQL. SQL quality is a separate argument, and an old one. The problem runs deeper.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Connection behavior is not a performance concern. It is a correctness concern.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most data access layers don't model that. Your application does — in feature flags, special cases, and debugging sessions you didn't budget for.&lt;/p&gt;

&lt;p&gt;Connection lifetime, concurrency, and identity are not independent concerns. They are the database.&lt;/p&gt;




&lt;h2&gt;
  
  
  Not all databases are the same machine
&lt;/h2&gt;

&lt;p&gt;Most data access libraries are built around one mental model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The server is already running&lt;/li&gt;
&lt;li&gt;You connect&lt;/li&gt;
&lt;li&gt;You execute commands&lt;/li&gt;
&lt;li&gt;You commit or roll back&lt;/li&gt;
&lt;li&gt;You disconnect&lt;/li&gt;
&lt;li&gt;The server waits for the next client&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;SQL Server, PostgreSQL, Oracle, and MySQL broadly fit that model. For those databases, the standard approach works well enough that the cracks stay hidden for a while.&lt;/p&gt;

&lt;p&gt;But not every database works that way.&lt;/p&gt;




&lt;h2&gt;
  
  
  In-memory embedded databases
&lt;/h2&gt;

&lt;p&gt;SQLite and DuckDB in &lt;code&gt;:memory:&lt;/code&gt; mode have a fundamental behavioral difference that most abstractions ignore entirely: &lt;strong&gt;the database is the connection&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Open one connection, you have a database. Open a second connection, you don't get another connection to the same database. You get a different database. Empty. Gone.&lt;/p&gt;

&lt;p&gt;That's not a quirk. That's the operational model.&lt;/p&gt;

&lt;p&gt;A library built around "open and close connections freely per operation" will silently destroy your in-memory database between calls. No error. No warning. Just an empty database where your data used to be.&lt;/p&gt;
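
&lt;p&gt;You can watch it happen in a few lines. A minimal sketch, assuming the &lt;code&gt;Microsoft.Data.Sqlite&lt;/code&gt; provider (any SQLite ADO.NET provider behaves the same way with a plain &lt;code&gt;:memory:&lt;/code&gt; connection string):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;using System;
using Microsoft.Data.Sqlite;

using var first = new SqliteConnection("Data Source=:memory:");
first.Open();
using (var create = first.CreateCommand())
{
    create.CommandText = "CREATE TABLE t(id INTEGER); INSERT INTO t VALUES (1);";
    create.ExecuteNonQuery();
}

// A second connection is a brand-new, empty database, not the one above.
using var second = new SqliteConnection("Data Source=:memory:");
second.Open();
using var query = second.CreateCommand();
query.CommandText = "SELECT COUNT(*) FROM sqlite_master WHERE name = 't';";
var tables = (long)query.ExecuteScalar()!;
Console.WriteLine(tables); // 0: the table does not exist in this database
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;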




&lt;h2&gt;
  
  
  File-based embedded databases
&lt;/h2&gt;

&lt;p&gt;File-based SQLite and DuckDB are not miniature SQL Servers. Their write behavior is fundamentally different:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One writer at a time, enforced at the engine level&lt;/li&gt;
&lt;li&gt;Concurrent write attempts result in lock contention, busy timeouts, and failures&lt;/li&gt;
&lt;li&gt;The journal mode and transaction settings interact with connection behavior in ways that bite you if you get them wrong&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The standard advice — "open a connection, run a command, close it, repeat" — actively works against you here. Under concurrent load you'll see lock errors, and even mostly-sequential workloads hit busy timeouts when one connection's write lock hasn't cleared before the next connection needs it. The "simple" connection pattern creates exactly the contention the database can't handle.&lt;/p&gt;

&lt;p&gt;This is so commonly mishandled that most dedicated SQLite tooling gets it wrong too.&lt;/p&gt;




&lt;h2&gt;
  
  
  LocalDB
&lt;/h2&gt;

&lt;p&gt;Microsoft's LocalDB was a genuinely good idea: full SQL Server semantics, no server install, attach a file and go. Great for local development and testing.&lt;/p&gt;

&lt;p&gt;Until idle unload enters the picture.&lt;/p&gt;

&lt;p&gt;If no connection is held for a configurable period, LocalDB unloads the database. Not an error — an unload. Your next operation reconnects and reattaches, which adds latency and can cause failures during test runs where operations are spaced out.&lt;/p&gt;

&lt;p&gt;The fix is a sentinel connection: one persistent connection held open specifically to prevent idle unload. Not for running queries. Just to keep the database alive.&lt;/p&gt;
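
&lt;p&gt;A minimal sketch of that sentinel (illustrative, not any library's actual API): one connection opened at startup, held until shutdown, used for nothing else. In a real app the factory would return a &lt;code&gt;SqlConnection&lt;/code&gt; pointed at your LocalDB instance.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;using System;
using System.Data;
using System.Data.Common;

// One connection opened at startup and held until shutdown, used for nothing
// except keeping LocalDB from reaching its idle-unload window.
public sealed class SentinelConnection : IDisposable
{
    private readonly DbConnection _sentinel;

    public SentinelConnection(Func&lt;DbConnection&gt; factory)
    {
        _sentinel = factory();
        _sentinel.Open(); // held open for the life of the app
    }

    public bool IsAlive =&gt; _sentinel.State == ConnectionState.Open;

    public void Dispose() =&gt; _sentinel.Dispose();
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Register one of these as a singleton at startup (for example &lt;code&gt;new SentinelConnection(() =&gt; new SqlConnection(localDbConnStr))&lt;/code&gt;) and dispose it at shutdown.&lt;/p&gt;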

&lt;p&gt;No mainstream data access library models this. You find out about it from a Stack Overflow answer at 11pm.&lt;/p&gt;




&lt;h2&gt;
  
  
  What the libraries actually do
&lt;/h2&gt;

&lt;p&gt;This is not speculation. Here's what each major option in the .NET ecosystem actually models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Entity Framework Core&lt;/strong&gt; opens and closes connections per query and per &lt;code&gt;SaveChanges()&lt;/code&gt; call by default. It delegates pooling entirely to the ADO.NET provider — EF has no awareness of pool pressure or saturation. When you use an in-memory SQLite database with EF, the official workaround documented by Microsoft is to manually open the connection on the &lt;code&gt;DbContext&lt;/code&gt; and hold it open for the lifetime of the context. The developer absorbs the problem the library doesn't solve. There is no connection policy. There is a workaround in the docs.&lt;/p&gt;
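
&lt;p&gt;That documented workaround looks like this in practice; a sketch, with &lt;code&gt;AppDbContext&lt;/code&gt; standing in for your own context type:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;using Microsoft.Data.Sqlite;
using Microsoft.EntityFrameworkCore;

// Open the connection yourself and keep it alive for the context's lifetime;
// EF Core will not close a connection it did not open.
var connection = new SqliteConnection("Data Source=:memory:");
connection.Open(); // dispose this and the database disappears

var options = new DbContextOptionsBuilder&lt;AppDbContext&gt;()
    .UseSqlite(connection)
    .Options;

using var context = new AppDbContext(options);
context.Database.EnsureCreated(); // schema lives only while the connection is open

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions&lt;AppDbContext&gt; options) : base(options) { }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;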

&lt;p&gt;&lt;strong&gt;NHibernate&lt;/strong&gt; ties connection lifetime to the &lt;code&gt;ISession&lt;/code&gt;. It has a more explicit unit-of-work model than EF, which helps with some lifetime issues, but the underlying assumption is unchanged: stable, server-style backend, many connections available, connection lifetime is a performance concern. No single-writer enforcement, no keepalive, no concept of connection-bound database identity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dapper&lt;/strong&gt; doesn't touch connection lifetime at all. You open it, you pass it in, Dapper runs the command, you close it. That's the design — deliberately minimal. It's not a flaw in Dapper. But it means connection policy is 100% on the caller, every time, with no structure and no guardrails.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Raw ADO.NET&lt;/strong&gt; is the same. The documented guidance is to use &lt;code&gt;using&lt;/code&gt; blocks and dispose promptly. That's the policy: a coding convention. Not a library feature, not an enforced invariant — a convention you either follow or don't.&lt;/p&gt;

&lt;p&gt;None of them model connection policy as a function of database behavior. They model lifetime. They do not model constraints. Connection policy is either delegated to the provider, left to convention, or documented as a workaround.&lt;/p&gt;




&lt;h2&gt;
  
  
  The connection-per-command trap
&lt;/h2&gt;

&lt;p&gt;Standard advice across almost every data access library is "use one connection per command." Open, execute, consume results, close. Your DBA will approve. Your server database will be healthier.&lt;/p&gt;

&lt;p&gt;That advice has real merit for server databases. It reduces leaks. It shortens connection lifetime. It keeps the pool healthy under normal load.&lt;/p&gt;

&lt;p&gt;But follow it everywhere and you create three new problems:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You can't use in-memory databases for testing.&lt;/strong&gt; One connection per command means a new empty database on every operation. Your tests pass against nothing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;File-based embedded databases become unreliable.&lt;/strong&gt; The write serialization problem doesn't go away because you're closing connections quickly — it gets worse, because you're now racing to reacquire write locks constantly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LocalDB unloads between operations.&lt;/strong&gt; No persistent connection means no sentinel. Your test suite reconnects and reattaches on every run. Or fails partway through when the unload window closes before your next operation.&lt;/p&gt;

&lt;p&gt;So you add feature flags. Database-specific code paths. If-SQLite-do-this, if-LocalDB-do-that. Your business logic now contains your database topology.&lt;/p&gt;

&lt;p&gt;That is the abstraction failing at its core job.&lt;/p&gt;




&lt;h2&gt;
  
  
  Pool saturation: when "doing it right" still breaks
&lt;/h2&gt;

&lt;p&gt;Say you got through all of the above. Feature flags in place. Connection handling tuned per database type. DBA is satisfied, cloud bill is down, everything works.&lt;/p&gt;

&lt;p&gt;Then you get a traffic spike.&lt;/p&gt;

&lt;p&gt;Requests pile up. Each one opens a connection. The pool has a maximum size — every pool does. Requests start waiting. Wait times compound. At this point you don't get slow queries — you get connection acquisition failures. SQL Server throws &lt;code&gt;Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool&lt;/code&gt;. PostgreSQL reports &lt;code&gt;remaining connection slots are reserved&lt;/code&gt;. SQLite escalates busy errors into lock failures.&lt;/p&gt;

&lt;p&gt;You need something that governs connections. Not just "open and close promptly" — actual enforcement: when the pool is full, make everything else wait. Bounded concurrency tied to pool size, not just etiquette.&lt;/p&gt;

&lt;p&gt;None of the libraries above have this. ADO.NET pooling will queue and time out, but it doesn't expose the control surface you need to shape behavior under load. You find out what your limits are when production hits them.&lt;/p&gt;
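
&lt;p&gt;For illustration, that kind of enforcement can be sketched with nothing but the BCL: a semaphore sized to the pool, so work arriving at capacity waits in line instead of failing. &lt;code&gt;MaxPoolSize&lt;/code&gt; here is a stand-in for your connection string's actual pool limit.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

const int MaxPoolSize = 4; // match this to Max Pool Size in your connection string
var gate = new SemaphoreSlim(MaxPoolSize, MaxPoolSize);
int current = 0, peak = 0;

async Task RunOperationAsync()
{
    await gate.WaitAsync(); // at capacity, wait in line instead of failing
    try
    {
        int now = Interlocked.Increment(ref current);
        int seen;
        while (now &gt; (seen = Volatile.Read(ref peak)))
            Interlocked.CompareExchange(ref peak, now, seen);
        await Task.Delay(10); // stand-in for open connection + execute command
    }
    finally
    {
        Interlocked.Decrement(ref current);
        gate.Release();
    }
}

await Task.WhenAll(Enumerable.Range(0, 50).Select(_ =&gt; RunOperationAsync()));
Console.WriteLine($"peak concurrent operations: {peak}"); // never exceeds MaxPoolSize
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;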




&lt;h2&gt;
  
  
  What you actually need: routing by intent
&lt;/h2&gt;

&lt;p&gt;Databases already differentiate reads and writes. Your access layer doesn't.&lt;/p&gt;

&lt;p&gt;That missing distinction has consequences. If the layer exposed it, it could enforce:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read-only sessions routed to a read replica, drawing from a separate pool&lt;/li&gt;
&lt;li&gt;Write operations serialized where the database requires it&lt;/li&gt;
&lt;li&gt;Each pool governed independently, sized to its workload&lt;/li&gt;
&lt;li&gt;For embedded databases, the write limit set to one — enforced by the layer, not scattered across application code&lt;/li&gt;
&lt;/ul&gt;
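
&lt;p&gt;A hedged sketch of what exposing that distinction could look like; the names here (&lt;code&gt;ExecutionType&lt;/code&gt;, &lt;code&gt;IntentRouter&lt;/code&gt;) are illustrative, not any library's actual API:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;using System;
using System.Threading;

public enum ExecutionType { Read, Write }

// Routes by declared intent: reads draw from the replica's pool, writes from
// the primary's, and writes pass through a gate sized to what the engine can
// actually handle (1 for embedded databases like SQLite or DuckDB).
public sealed class IntentRouter
{
    private readonly string _readConnStr;
    private readonly string _writeConnStr;
    public SemaphoreSlim WriteGate { get; }

    public IntentRouter(string readConnStr, string writeConnStr, int maxWriters)
    {
        _readConnStr = readConnStr;
        _writeConnStr = writeConnStr;
        WriteGate = new SemaphoreSlim(maxWriters, maxWriters);
    }

    public string ConnectionStringFor(ExecutionType intent) =&gt;
        intent == ExecutionType.Read ? _readConnStr : _writeConnStr;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;For a server database you'd size the gate to the write pool; for file-based SQLite you'd pass 1 and get single-writer enforcement in one place instead of scattered through application code.&lt;/p&gt;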

&lt;p&gt;This isn't new capability. It's the modeling that's missing. Databases have always had these constraints. The access layer just never encoded them.&lt;/p&gt;




&lt;h2&gt;
  
  
  What modeling it correctly looks like
&lt;/h2&gt;

&lt;p&gt;Connection behavior is a property of the database. It belongs in the data access layer, encoded explicitly, not left to convention or absorbed by the application.&lt;/p&gt;

&lt;p&gt;Here's what that looks like in practice. pengdows.crud exposes connection policy as a first-class configuration decision through &lt;code&gt;DbMode&lt;/code&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Mode&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;th&gt;When to use it&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Standard&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Pool per operation, open late, close early&lt;/td&gt;
&lt;td&gt;Server databases: SQL Server, PostgreSQL, Oracle, MySQL&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;KeepAlive&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Holds a sentinel connection to prevent idle unload&lt;/td&gt;
&lt;td&gt;LocalDB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;SingleWriter&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Many concurrent readers, one serialized writer&lt;/td&gt;
&lt;td&gt;File-based SQLite, file-based DuckDB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;SingleConnection&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;One connection, period&lt;/td&gt;
&lt;td&gt;In-memory &lt;code&gt;:memory:&lt;/code&gt; databases&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Best&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Auto-selects the correct mode based on database type&lt;/td&gt;
&lt;td&gt;When you want the right answer without thinking about it&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;code&gt;Best&lt;/code&gt; maps automatically: SQLite/DuckDB &lt;code&gt;:memory:&lt;/code&gt; gets &lt;code&gt;SingleConnection&lt;/code&gt;; file-based SQLite/DuckDB gets &lt;code&gt;SingleWriter&lt;/code&gt;; LocalDB gets &lt;code&gt;KeepAlive&lt;/code&gt;; everything else gets &lt;code&gt;Standard&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Configuration looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// File-based SQLite — enforces single writer, concurrent readers&lt;/span&gt;
&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;DatabaseContext&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;DatabaseContextConfiguration&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;ConnectionString&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;connStr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;DbMode&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;DbMode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SingleWriter&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="n"&gt;SqliteFactory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Instance&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Or let the library decide&lt;/span&gt;
&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;DatabaseContext&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;connStr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"Microsoft.Data.SqlClient"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;DbMode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Best&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Correctness is invariant. Performance is tunable. No conditionals. No feature flags. No if-SQLite-else-SqlServer code paths. The layer absorbs the difference because it models the difference.&lt;/p&gt;

&lt;p&gt;Pool governance is built in separately from DbMode — a turnstile-based reader/writer governor with bounded permits, drain support, and a telemetry snapshot. When you're at pool capacity, operations wait in an orderly queue rather than failing. Under contention, the system degrades predictably instead of falling off a cliff.&lt;/p&gt;

&lt;h2&gt;
  
  
  The real failure
&lt;/h2&gt;

&lt;p&gt;Writing your own SQL does not fix a data access layer that has no opinion about connection policy.&lt;/p&gt;

&lt;p&gt;You can drop EF entirely. You can use Dapper. You can write raw ADO.NET. The connection problem follows you. Because the problem isn't the SQL layer — it's the operational model underneath it.&lt;/p&gt;

&lt;p&gt;The databases have rules. Different rules, depending on the database. Those rules affect correctness, not just performance. A library that doesn't model them pushes that complexity into your application, where it's harder to see, harder to test, and easier to get wrong.&lt;/p&gt;

&lt;p&gt;Most data access layers don't understand databases.&lt;/p&gt;

&lt;p&gt;They model the happy path of one class of database.&lt;/p&gt;

&lt;p&gt;Everything else is your problem.&lt;/p&gt;

</description>
      <category>database</category>
      <category>dotnet</category>
      <category>entityframework</category>
      <category>performance</category>
    </item>
    <item>
      <title>Hangfire Had a DB Support Problem. I Fixed It. You're Welcome.</title>
      <dc:creator>Pengdows LLC</dc:creator>
      <pubDate>Fri, 27 Mar 2026 19:35:02 +0000</pubDate>
      <link>https://dev.to/pengdows/hangfire-had-a-db-support-problem-i-fixed-it-youre-welcome-2hd9</link>
      <guid>https://dev.to/pengdows/hangfire-had-a-db-support-problem-i-fixed-it-youre-welcome-2hd9</guid>
      <description>&lt;p&gt;HangFire proper is hugely popular but the official implementation only supports Microsoft SQL Server. They could have made it much more database independent, but didn't. I suspect the original authors were scratching an itch. Hangfire feels to me like someone who either didn't want to use scheduled tasks/cron jobs, or didn't know how. So they wrote something that would act as a cron job inside their code. This led them to only support their environment — they solved their problem for them. Nothing wrong with that — pengdows.crud is also scratching an itch. That doesn't make software bad. It means it wasn't born of careful planning to support everything from day one, it is "I need this, it doesn't exist, let me write it." The difference is the itch. Hangfire's itch was "I need a job scheduler in my environment." pengdows.crud's itch was "nobody handles cross-database work in .NET correctly." One of those itches produces a single-database solution. The other one doesn't.&lt;/p&gt;

&lt;p&gt;Let's be honest about what the two biggest problems with Hangfire actually are. First, connection handling: pool exhaustion under spike load, connection leaks, and single-writer databases like SQLite falling apart under write contention. Second, lock contention: distributed lock implementations that produce violations, deadlocks, or overlapping ownership under burst load. I am going to tell you I solved both. Then I am going to show you the receipts.&lt;/p&gt;

&lt;p&gt;When other people started extending Hangfire to work with their DBs, those people had to re-solve problems that had already been solved elsewhere. The road to hell is paved with good intentions, and the Hangfire storage ecosystem is well-paved. Here is what the community has produced:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Database&lt;/th&gt;
&lt;th&gt;Package&lt;/th&gt;
&lt;th&gt;Status&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SQL Server&lt;/td&gt;
&lt;td&gt;Hangfire.SqlServer (official)&lt;/td&gt;
&lt;td&gt;✅ Maintained&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PostgreSQL&lt;/td&gt;
&lt;td&gt;Hangfire.PostgreSql&lt;/td&gt;
&lt;td&gt;✅ Maintained&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MySQL&lt;/td&gt;
&lt;td&gt;Hangfire.MySqlStorage&lt;/td&gt;
&lt;td&gt;⚠️ Known deadlock bugs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MySQL&lt;/td&gt;
&lt;td&gt;Hangfire.Storage.MySql&lt;/td&gt;
&lt;td&gt;⚠️ Fork created to fix the above, stuck in beta, author moved on&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SQLite&lt;/td&gt;
&lt;td&gt;Hangfire.Storage.SQLite&lt;/td&gt;
&lt;td&gt;⚠️ Alive but threadbare&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Firebird&lt;/td&gt;
&lt;td&gt;Hangfire.Firebird&lt;/td&gt;
&lt;td&gt;☠️ Dead since 2015&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Oracle&lt;/td&gt;
&lt;td&gt;Hangfire.FluentNHibernateStorage&lt;/td&gt;
&lt;td&gt;☠️ Via NHibernate, last real work 2022, SQLite/Access/SQL CE broken by its own README&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dameng&lt;/td&gt;
&lt;td&gt;DMStorage.Hangfire&lt;/td&gt;
&lt;td&gt;❓ Chinese ecosystem only&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;That isn't an edge case; it's systemic. MySQL has two packages because the first had deadlock bugs bad enough that someone forked it. The fork has been stuck in beta for years, and its author's own README says he got fed up and wrote a different scheduler entirely. The NHibernate-backed provider claims to support six databases, yet its own README admits that SQLite, MS Access, and SQL Server Compact "proved not to work" and there's no plan to fix them. (As a side note, SQL CE requires exactly the kind of careful single-writer handling most people never get right; that's what my SingleWriter mode is for.) Firebird hasn't been touched since 2015. All of these packages still install. Whether they behave correctly against a current version of their target database is a question you get to answer in production.&lt;/p&gt;




&lt;h2&gt;
  
  
  The SQLite Detour
&lt;/h2&gt;

&lt;p&gt;So when I started readying pengdows.crud 2.0, I really wanted to show off my new SingleWriter mode. If you aren't familiar with pengdows.crud, that's okay; go check it out on GitHub (&lt;a href="https://github.com/pengdows/pengdows.crud" rel="noopener noreferrer"&gt;https://github.com/pengdows/pengdows.crud&lt;/a&gt;). The library has five connection modes, reflecting the reality of database behavior:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SingleConnection&lt;/strong&gt; — all database work goes through a single connection. This is the mode SQLite and DuckDB :memory: databases have to use: open a second connection and you literally get a new, empty DB.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SingleWriter&lt;/strong&gt; — these databases can have as many readers as you like (within reason), but all writes have to be serialized. If two connections try to write simultaneously, you either end up with a corrupted database or one of them fails with "database is busy." Neither is acceptable in production.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;KeepAlive&lt;/strong&gt; — created for SQL Server LocalDB, to keep the instance alive. I hold one connection open for the life of the app, and that connection is never used for anything other than keeping the server alive.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Standard&lt;/strong&gt; — my default mode. Open a connection, run your SQL, consume the results, close the connection the moment it is available to be closed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Best&lt;/strong&gt; — picks the right mode for the DB you are using.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;pengdows.crud lets you change how your app interacts with your DB without changing your code. In 2.0 I revamped some internals, and that had the side effect of really optimizing SingleWriter: only one writer is allowed at a time, and it is not a pinned connection; it is controlled through a governor. In short, this makes DuckDB and SQLite safe to use under moderate write load, and a turnstile prevents writer starvation.&lt;/p&gt;

&lt;p&gt;With this new feature, I was really just looking for a SQLite project to rewrite and show it off. Hangfire is notorious for pool exhaustion. If I can make SQLite work reliably under Hangfire that helps Hangfire users, helps developers, and shows off what pengdows.crud can do.&lt;/p&gt;

&lt;p&gt;When I really started digging in, though, I found that it was not written the way I expected, and a lot of the other storage providers are suffering from code rot and abandonment. So I thought: pengdows.crud supports 14 relational database systems, let's write it once. I whipped out pengdows.poco.mint (available as a NuGet command-line tool or as a self-contained WebUI Docker image), spun up a SQL Server container, ran the scripts to generate POCOs from the tables, and got to work.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Lock Problem
&lt;/h2&gt;

&lt;p&gt;As I said, Hangfire feels like a scratch-an-itch project built for one environment, and the evidence showed up right here. The SQL Server code uses a SQL Server-only stored procedure to keep other processes from grabbing a task, so every other storage implementation has to solve the same problem its own way. For SQLite it wasn't even necessary, since SQLite only allows one writer, but the author of the SQLite provider created a table named "Lock" to solve it anyway, and I borrowed that approach. "Lock" is a reserved word in several of my supported databases, so I renamed the table to "hf_lock" and moved on. It is not as fast as the stored procedure on SQL Server, but it is uniform across all 13 supported databases, and the benchmarks bear that out.&lt;/p&gt;
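
&lt;p&gt;The shape of that lock-table approach, sketched against an in-memory SQLite database (column names here are illustrative, not the package's actual schema): acquisition is an INSERT that either succeeds or hits the primary key, release is a DELETE.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;using System;
using System.Data.Common;
using Microsoft.Data.Sqlite; // assumed provider; any ADO.NET provider works the same way

using var conn = new SqliteConnection("Data Source=:memory:");
conn.Open();

using (var ddl = conn.CreateCommand())
{
    ddl.CommandText = "CREATE TABLE hf_lock (resource TEXT PRIMARY KEY, acquired_at TEXT NOT NULL)";
    ddl.ExecuteNonQuery();
}

bool TryAcquire(DbConnection c, string resource)
{
    using var cmd = c.CreateCommand();
    cmd.CommandText = "INSERT INTO hf_lock (resource, acquired_at) VALUES ($r, $t)";
    var r = cmd.CreateParameter(); r.ParameterName = "$r"; r.Value = resource; cmd.Parameters.Add(r);
    var t = cmd.CreateParameter(); t.ParameterName = "$t"; t.Value = DateTime.UtcNow.ToString("O"); cmd.Parameters.Add(t);
    try { cmd.ExecuteNonQuery(); return true; }
    catch (DbException) { return false; } // primary-key violation: someone else holds it
}

void Release(DbConnection c, string resource)
{
    using var cmd = c.CreateCommand();
    cmd.CommandText = "DELETE FROM hf_lock WHERE resource = $r";
    var r = cmd.CreateParameter(); r.ParameterName = "$r"; r.Value = resource; cmd.Parameters.Add(r);
    cmd.ExecuteNonQuery();
}

var firstTake = TryAcquire(conn, "recurring-jobs");  // true
var secondTake = TryAcquire(conn, "recurring-jobs"); // false: already held
Release(conn, "recurring-jobs");
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;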




&lt;h2&gt;
  
  
  What's Supported and What Isn't
&lt;/h2&gt;

&lt;p&gt;pengdows.hangfire supports 13 databases. Here is how that stacks up against what existed before:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Database&lt;/th&gt;
&lt;th&gt;Had Support Before&lt;/th&gt;
&lt;th&gt;pengdows.hangfire&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SQL Server&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PostgreSQL&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MySQL&lt;/td&gt;
&lt;td&gt;✅ ⚠️&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SQLite&lt;/td&gt;
&lt;td&gt;✅ ⚠️&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Firebird&lt;/td&gt;
&lt;td&gt;✅ ☠️&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Oracle&lt;/td&gt;
&lt;td&gt;✅ ☠️&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MariaDB&lt;/td&gt;
&lt;td&gt;❌ Never&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CockroachDB&lt;/td&gt;
&lt;td&gt;❌ Never&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DuckDB&lt;/td&gt;
&lt;td&gt;❌ Never&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;YugabyteDB&lt;/td&gt;
&lt;td&gt;❌ Never&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TiDB&lt;/td&gt;
&lt;td&gt;❌ Never&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Aurora MySQL&lt;/td&gt;
&lt;td&gt;❌ Never&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Aurora PostgreSQL&lt;/td&gt;
&lt;td&gt;❌ Never&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Seven databases that have never had Hangfire storage support before. The ones that did exist either had known bugs, were forks of broken packages, or hadn't been touched in a decade.&lt;/p&gt;

&lt;p&gt;Two databases from the pengdows.crud supported list aren't in pengdows.hangfire. Dameng has no Docker image to test against and no modern .NET provider. When those exist, adding it is trivial — the door is open. Snowflake is a different problem entirely. pengdows.crud includes a Snowflake dialect, but Snowflake is designed for analytical workloads — columnar storage, warehouse-level concurrency, high per-query latency. Hangfire needs row-level locking, low-latency queue polling, high-frequency small writes and deletes, and reliable distributed lock semantics. These requirements are fundamentally at odds with Snowflake's architecture. It isn't a missing SQL feature, it's the wrong database for the workload. If that changes, the support will follow.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Receipts
&lt;/h2&gt;

&lt;p&gt;Here is what I said I would show you.&lt;/p&gt;

&lt;p&gt;111 abstract facts, instantiated across 11 databases via Testcontainers, producing 1,204 concrete test cases. Real engines, not mocks. Connection behavior, transaction mutations, expiration cleanup, counter aggregation, queue fetch/claim/ack — covered.&lt;/p&gt;

&lt;p&gt;That handles correctness. For contention there is a separate stress suite, and that is where it gets interesting.&lt;/p&gt;

&lt;p&gt;200 workers. One resource. All contending simultaneously. Required invariants: zero ownership violations, zero interval overlaps, max concurrent owners never exceeding 1. That test passes against SQL Server at full pool size. It passes again with pool size forced down to 40 — where timeouts and pool rejections are expected but successful acquisitions still cannot overlap. It passes with 200 workers spread across 20 resources with 80% of traffic on 2 hot keys. SQLite and DuckDB each have their own 200-worker SingleWriter variants. All pass.&lt;/p&gt;

&lt;p&gt;Correctness is verified three ways: live CAS-style ownership conflicts, max concurrent holders per resource, and post-run overlap analysis on recorded hold intervals. You cannot get a false pass on one check and slip through — all three have to agree.&lt;/p&gt;
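
&lt;p&gt;The overlap analysis itself is simple. A sketch of the post-run check, assuming each worker recorded its acquire and release timestamps:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;using System;
using System.Collections.Generic;
using System.Linq;

// Sort recorded hold intervals by start time; any interval that begins before
// the previous one ended means two workers held the lock at once.
static bool HasOverlap(IReadOnlyList&lt;(DateTime Start, DateTime End)&gt; holds)
{
    var ordered = holds.OrderBy(h =&gt; h.Start).ToList();
    for (int i = 1; i &lt; ordered.Count; i++)
        if (ordered[i].Start &lt; ordered[i - 1].End)
            return true;
    return false;
}

var t0 = DateTime.UtcNow;
var clean = new[] { (t0, t0.AddSeconds(1)), (t0.AddSeconds(1), t0.AddSeconds(2)) };
var dirty = new[] { (t0, t0.AddSeconds(2)), (t0.AddSeconds(1), t0.AddSeconds(3)) };
Console.WriteLine(HasOverlap(clean)); // False: back-to-back holds are fine
Console.WriteLine(HasOverlap(dirty)); // True: two holders at once
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;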

&lt;p&gt;There is also a crash path. Process killed mid-lock, TTL set to 15 seconds — the dead lock was stolen and cleaned up in 14.8 seconds. Lock rows do not orphan.&lt;/p&gt;

&lt;p&gt;The MySQL fork in the ecosystem exists because the original had deadlock bugs under burst load. The fork's author eventually gave up and wrote a different scheduler. These tests are the direct answer to that problem: 40 passed, 0 failed.&lt;/p&gt;

&lt;p&gt;These are not one-off validation runs. The unit tests, integration tests, and stress suite are part of the open source library. They run on every change. If you pull the repo, you get the tests.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why I Built This
&lt;/h2&gt;

&lt;p&gt;So yeah, I wrote this for three reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Help Hangfire users — you now have 13 databases to choose from, connection handling that doesn't fall apart under load, and lock contention that has been stress tested to 200 concurrent workers.&lt;/li&gt;
&lt;li&gt;Help Hangfire developers — pool saturation and lock violations are solved problems if you use the right storage.&lt;/li&gt;
&lt;li&gt;Show off pengdows.crud and generate some interest.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This whole thing is me proving out my own scratch-an-itch software. pengdows.crud exists because nobody was handling cross-database work in .NET correctly. pengdows.hangfire exists because I needed a real-world, production-grade project to prove it works. Hangfire users get more database options, better connection handling, and a lock implementation that holds under pressure. The world gets a robust example of what pengdows.crud can actually do. That's not a bad outcome for an itch.&lt;/p&gt;

&lt;p&gt;The package is &lt;code&gt;pengdows.hangfire&lt;/code&gt;. It's on NuGet. Go kick the tires.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>hangfire</category>
      <category>multidatabase</category>
      <category>productivity</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
