Concurrent Transactions on Midnight: UTXO Race Conditions and How to Work Around Them
I spent an afternoon last month chasing a bug that made no sense. My TypeScript app was submitting two contract calls in quick succession — nothing fancy, just two independent operations that happened to fire close together. One would succeed. The other would fail with something like "transaction rejected: stale UTXO." Every time. Not randomly — always the second one.
The root cause turned out to be a fundamental property of how Midnight handles transaction fees. Once I understood it, the fix was straightforward. But the path to understanding it was not obvious from the docs, so this guide lays it out plainly.
Why UTXOs Make Concurrency Hard
Midnight uses a UTXO model for its fee token, DUST. UTXO stands for unspent transaction output — each unit of DUST in your wallet exists as a discrete coin, not a running balance. When you pay fees, you're not subtracting from a balance; you're consuming a specific coin and optionally creating a new one as change.
This matters for concurrency because of how wallet state works. Before your wallet can build a transaction, it needs to know which DUST coins you currently have available. It picks one (or more) from the unspent set, includes it as an input to the transaction, and marks it as spent.
The problem: if you build two transactions in parallel, both of them look at the wallet's UTXO set before either transaction has been submitted. Both pick the same DUST coin as an input. The first transaction makes it to the node and gets included in a block — that DUST coin is now spent. The second transaction arrives at the node referencing the same coin. The node rejects it because the coin is gone. Stale UTXO.
This is not a bug in the SDK or the node. It's the correct behavior of a UTXO system. The issue is that your application submitted two transactions that both depended on the same input coin.
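The race is easy to model without any SDK involved. The sketch below is a toy simulation, assuming nothing about Midnight's real coin-selection logic (the `NaiveCoinSelector` class is entirely hypothetical); it just shows why two builds from the same wallet snapshot pick the same coin:

```typescript
// Toy model of UTXO selection. Both "builds" read the same unspent set
// before either submission lands on-chain -- so they pick the same coin.
type Utxo = { id: string; value: bigint };

class NaiveCoinSelector {
  constructor(private unspent: Utxo[]) {}

  // Pick the first coin large enough to cover the fee. Selection does
  // NOT mark the coin as spent -- that only happens once a block confirms.
  select(fee: bigint): Utxo {
    const coin = this.unspent.find((u) => u.value >= fee);
    if (!coin) throw new Error('insufficient DUST');
    return coin;
  }

  // Called after confirmation: the coin leaves the unspent set.
  markSpent(id: string): void {
    this.unspent = this.unspent.filter((u) => u.id !== id);
  }
}

const selector = new NaiveCoinSelector([{ id: 'coin-1', value: 100n }]);

// Two transactions built "in parallel": both read the same state.
const inputA = selector.select(10n);
const inputB = selector.select(10n);
console.log(inputA.id === inputB.id); // true -- both picked coin-1

// Transaction A confirms; coin-1 is spent. Transaction B still
// references coin-1, so the node rejects it: stale UTXO.
selector.markSpent(inputA.id);
```

Had the second build waited for the first confirmation (and the `markSpent` bookkeeping), it would have seen a different unspent set, which is exactly what the fixes below arrange.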
What "Stale UTXO" Actually Means
The error surfaces in a few different ways depending on where in the submission pipeline things break down:
```
Error: transaction rejected: stale UTXO
Error: input UTXO already spent
Error: transaction validation failed: spent inputs
```
All of these mean the same thing: one of the UTXO inputs referenced in your transaction was already consumed by a previous transaction. The node has no way to apply your transaction because the coin it's trying to spend doesn't exist in the unspent set anymore.
The critical detail is timing. If both transactions are submitted far enough apart that the first one confirms and the wallet re-syncs before the second one is built, there's no conflict — the wallet knows the first coin is gone and picks a different one for the second transaction. The stale UTXO error only happens when the two transactions are built close enough together that the wallet doesn't know about the first one's impact on the UTXO set yet.
Reproducing the Problem
Here's the minimal version of the race:
```typescript
import { WalletBuilder } from '@midnight-ntwrk/wallet';
import { NetworkId } from '@midnight-ntwrk/zswap';
import { filter, firstValueFrom } from 'rxjs';

const wallet = await WalletBuilder.build(
  process.env.INDEXER_HTTP_URI!,
  process.env.INDEXER_WS_URI!,
  process.env.PROVER_SERVER_URI!,
  process.env.SUBSTRATE_NODE_URI!,
  process.env.WALLET_SEED_PHRASE!,
  NetworkId.TestNet,
  'info',
);

wallet.start();
await firstValueFrom(wallet.state().pipe(filter((s) => s.isSynced)));

// `contract` is a deployed contract instance, obtained elsewhere.
// This will fail — both transactions select the same DUST UTXO
const [tx1, tx2] = await Promise.all([
  contract.callTx.operationA(arg1),
  contract.callTx.operationB(arg2),
]);
```
Running these concurrently via Promise.all gives both transaction builders access to the same wallet state. Both see the same DUST coins available. Both select the same input. One wins, one fails.
Fix 1: Sequential Queuing
The simplest fix is to not submit transactions concurrently from the same wallet. Submit the first one, wait for confirmation, then submit the second one.
```typescript
async function waitForConfirmation(
  wallet: Wallet,
  submittedTxHash: string,
): Promise<void> {
  await firstValueFrom(
    wallet.state().pipe(
      filter((state) => {
        // The wallet re-syncs after each block. Once the tx hash
        // no longer appears in pending, it has been finalized.
        return !state.pendingTransactions?.includes(submittedTxHash);
      }),
    ),
  );
}

// Submit sequentially — each waits for the previous to confirm
const tx1Result = await contract.callTx.operationA(arg1);
await waitForConfirmation(wallet, tx1Result.txHash);

const tx2Result = await contract.callTx.operationB(arg2);
await waitForConfirmation(wallet, tx2Result.txHash);
```
This works reliably. The downside is throughput: if you have ten operations to submit, they must process one at a time, each waiting for the previous block confirmation. On Midnight's testnet, block times are a few seconds, so this can get slow.
For most applications, sequential queuing is the right choice. Parallel transaction submission is only worth the complexity if you have proven throughput requirements that sequential processing can't satisfy.
Fix 2: A Transaction Queue with Backpressure
If you need to handle bursts of transactions without waiting synchronously for each one, a queue with backpressure gives you the control flow you need while keeping submissions sequential at the wallet level.
```typescript
class TransactionQueue {
  private queue: Array<() => Promise<void>> = [];
  private running = false;

  enqueue<T>(task: () => Promise<T>): Promise<T> {
    return new Promise((resolve, reject) => {
      this.queue.push(async () => {
        try {
          resolve(await task());
        } catch (err) {
          reject(err);
        }
      });
      void this.drain();
    });
  }

  private async drain(): Promise<void> {
    if (this.running) return;
    this.running = true;
    while (this.queue.length > 0) {
      const next = this.queue.shift()!;
      await next();
    }
    this.running = false;
  }
}

const txQueue = new TransactionQueue();

// These calls return immediately and resolve when their transaction confirms.
// Internally they execute one at a time.
const resultA = txQueue.enqueue(() => contract.callTx.operationA(arg1));
const resultB = txQueue.enqueue(() => contract.callTx.operationB(arg2));
const [a, b] = await Promise.all([resultA, resultB]);
```
The queue serializes the actual wallet interactions while letting your application code stay async. Callers can submit work and await results without knowing that the underlying execution is sequential.
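If a full queue class feels heavy, the same guarantee can be had by chaining every task onto a single promise "tail". This `serialize` helper is a sketch of that lighter-weight variant, not an SDK utility:

```typescript
// Serialize async tasks by chaining each onto a single promise tail.
// Hypothetical helper -- same serialization guarantee as the queue class.
let tail: Promise<unknown> = Promise.resolve();

function serialize<T>(task: () => Promise<T>): Promise<T> {
  // Run the task after the previous one settles, succeeded or failed.
  const result = tail.then(task, task);
  // Swallow rejections on the tail so one failure doesn't poison the
  // chain; callers still see their own task's rejection via `result`.
  tail = result.catch(() => undefined);
  return result;
}

// Demonstration with plain timers instead of real contract calls:
void (async () => {
  const order: string[] = [];
  const first = serialize(async () => {
    order.push('first-start');
    await new Promise((r) => setTimeout(r, 20));
    order.push('first-end');
  });
  const second = serialize(async () => {
    order.push('second');
  });
  await Promise.all([first, second]);
  console.log(order.join(',')); // first-start,first-end,second
})();
```

The trade-off versus the class: no inspectable queue length and no way to cancel pending work, but for "never two in flight at once" it is all the mechanism you need.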
Fix 3: Multiple DUST Wallets for Real Parallelism
If you genuinely need parallel transaction throughput — you're building an automated service that processes hundreds of operations per hour and the sequential bottleneck is measurable — the real solution is multiple wallet instances, each with its own DUST.
The key insight: the stale UTXO problem only happens when two transactions share DUST inputs. If each wallet has its own distinct set of DUST coins, parallel submissions from different wallets don't conflict.
```typescript
import { WalletBuilder } from '@midnight-ntwrk/wallet';
import { NetworkId } from '@midnight-ntwrk/zswap';
import { filter, firstValueFrom } from 'rxjs';

async function buildSyncedWallet(seedPhrase: string): Promise<Wallet> {
  const wallet = await WalletBuilder.build(
    process.env.INDEXER_HTTP_URI!,
    process.env.INDEXER_WS_URI!,
    process.env.PROVER_SERVER_URI!,
    process.env.SUBSTRATE_NODE_URI!,
    seedPhrase,
    NetworkId.TestNet,
    'info',
  );
  wallet.start();
  await firstValueFrom(wallet.state().pipe(filter((s) => s.isSynced)));
  return wallet;
}

// Initialize a pool of wallets with separate seed phrases and separate DUST funding
const walletPool = await Promise.all([
  buildSyncedWallet(process.env.WALLET_SEED_1!),
  buildSyncedWallet(process.env.WALLET_SEED_2!),
  buildSyncedWallet(process.env.WALLET_SEED_3!),
]);
```
With a pool in place, you route transactions to wallets round-robin or by picking the wallet with the lowest pending load:
```typescript
class WalletPool {
  private wallets: Wallet[];
  private pendingCounts: number[];

  constructor(wallets: Wallet[]) {
    this.wallets = wallets;
    this.pendingCounts = new Array(wallets.length).fill(0);
  }

  acquire(): { wallet: Wallet; release: () => void } {
    // Pick the wallet with the fewest pending transactions
    let minIdx = 0;
    for (let i = 1; i < this.pendingCounts.length; i++) {
      if (this.pendingCounts[i] < this.pendingCounts[minIdx]) minIdx = i;
    }
    this.pendingCounts[minIdx]++;
    const wallet = this.wallets[minIdx];
    return {
      wallet,
      release: () => {
        this.pendingCounts[minIdx]--;
      },
    };
  }
}

const pool = new WalletPool(walletPool);

async function submitWithPool(
  operation: (wallet: Wallet) => Promise<TransactionResult>,
): Promise<TransactionResult> {
  const { wallet, release } = pool.acquire();
  try {
    return await operation(wallet);
  } finally {
    release();
  }
}

// Now these run in parallel without UTXO conflicts
const [resultA, resultB, resultC] = await Promise.all([
  submitWithPool((w) => contract.callTx.operationA(arg1, { wallet: w })),
  submitWithPool((w) => contract.callTx.operationB(arg2, { wallet: w })),
  submitWithPool((w) => contract.callTx.operationC(arg3, { wallet: w })),
]);
```
The operational cost: each wallet in your pool needs to be pre-funded with DUST. You'll need to monitor balances and top up periodically. This is manageable for a service you operate, but it's real infrastructure overhead.
What About DUST Regeneration?
Midnight has a DUST regeneration mechanism — after a transaction is included in a block, some DUST comes back as a reward. The regenerated amount is less than what was spent, so DUST has a net cost, but regeneration means a busy wallet won't drain to zero as fast as a naive accounting would suggest.
This regeneration does not help with the concurrent transaction problem. The regenerated DUST arrives after the block that included your transaction — it's a new UTXO that exists in a future block. Your second transaction, built before the first one confirms, has no knowledge of this future coin. The race condition exists at build time, not at submission time.
DUST regeneration matters for capacity planning (how often you need to top up wallets), not for concurrency (how you sequence transaction submissions).
Choosing the Right Fix
Sequential queuing is the right default. If your application isn't bottlenecked by transaction throughput, serialize your submissions and don't think about it again.
Queue with backpressure is the right upgrade when you have burst traffic — many operations arriving simultaneously that need to be handled without blocking your application layer. The queue serializes wallet interactions transparently.
Multiple wallets is the right answer when you have measured, sustained throughput requirements that sequential processing genuinely can't meet. Expect the operational overhead of funding and monitoring multiple wallet instances.
The stale UTXO error is not a bug you can work around with retries. Retrying a rejected transaction won't help — the input coin is gone. The fix is always at the architectural level: ensure that no two transactions built from the same wallet state are submitted to the network at the same time.
Quick Debugging Checklist
If you're hitting stale UTXO errors and aren't sure why:
- Check for `Promise.all` around contract calls. This is almost always the source. Replace with sequential awaits and see if the error goes away.
- Check for event-driven parallelism. If your app listens to events and fires transactions in response, two events arriving simultaneously can trigger the same race. A queue solves this.
- Check wallet sync timing. If you're building transactions before the wallet has fully synced, you might be working with a stale view of the UTXO set. Always wait for `isSynced` before submitting.
- Don't retry blindly. A stale UTXO transaction cannot be fixed by resubmitting it. The input is gone. Build a new transaction from fresh wallet state instead.
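That last point, rebuild rather than resubmit, can be packaged into a small wrapper. A sketch, assuming a caller-supplied `buildAndSubmit` that constructs a fresh transaction on every invocation and a caller-supplied `resync` that waits for the wallet to catch up; `isStaleUtxoError` matches the error strings listed earlier and may need adjusting for your node's exact messages:

```typescript
// Hypothetical predicate: match the stale-UTXO error variants. Adjust
// the pattern to whatever your node actually returns.
function isStaleUtxoError(err: unknown): boolean {
  const msg = err instanceof Error ? err.message : String(err);
  return /stale UTXO|already spent|spent inputs/i.test(msg);
}

// Rebuild-on-stale wrapper. Note this is NOT a blind retry: each attempt
// calls buildAndSubmit again, so a new transaction is constructed from
// the wallet's refreshed UTXO view after resync().
async function submitWithRebuild<T>(
  buildAndSubmit: () => Promise<T>,
  resync: () => Promise<void>,
  maxAttempts = 3,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await buildAndSubmit();
    } catch (err) {
      if (!isStaleUtxoError(err) || attempt >= maxAttempts) throw err;
      await resync(); // let the wallet learn which coins are gone
    }
  }
}
```

Even with this safety net in place, prefer the queue or pool fixes above: the wrapper recovers from an occasional race, but it should not be how your application handles concurrency in the normal path.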
The UTXO model is not an obstacle — it's what makes Midnight's shielded token system possible. Once you understand the sequencing requirement, you can build reliable, high-throughput applications on top of it.