Building Batch Transactions in Midnight: Multi-Recipient Settlements and Complex Flows
The error hit me at 2am during a hackathon. POOL_INVALID_TX. I'd been building a settlement contract that needed to pay out seven recipients from a single contract call, and the transaction kept getting rejected by the block author. Not a logic error. Not a proof failure. The transaction was too big.
Error 1010 on Midnight is POOL_INVALID_TX, composed as AUTHOR(1000) + 10. It means the block author rejected the transaction before it ever reached execution. The most common cause: you exceeded the block weight limit. One moment your transaction works fine in testing with three recipients, then you add two more and everything breaks in preprod.
This guide is about building transactions that survive that constraint—and about understanding the one mental model shift that changes how you architect everything else.
The Mental Model: Guaranteed vs. Fallible
Before touching code, you need to internalize something that isn't obvious from the docs: Midnight transactions don't execute as a single atomic unit. They execute in two phases.
The guaranteed segment runs first. Fee payment happens here. Operations in this segment are expected to be fast and deterministic—the node verifies them before the transaction enters the mempool. If the guaranteed segment fails, the entire transaction is rejected.
Fallible segments run after. Each fallible segment executes atomically but independently. If a fallible segment reverts, the guaranteed segment's effects still stand. Your fees are still taken.
Transaction execution:
[guaranteed segment] → fees deducted, always executes
↓
[fallible segment 1] → may revert independently
[fallible segment 2] → may revert independently
[fallible segment N] → may revert independently
This gives you three possible outcomes after submission:
type TransactionResult =
| { status: 'SucceedEntirely' } // everything ran
| { status: 'FailFallible' } // guaranteed ran, ≥1 fallible reverted
| { status: 'FailEntirely' } // rejected before execution
The counter-intuitive part: FailFallible isn't a clean rollback. The guaranteed portion executed. You paid fees. Any state changes from the guaranteed segment are permanent. Only the fallible segment(s) reverted.
This shapes every design decision you'll make for multi-party settlements.
Multi-Recipient Settlements: The Basic Pattern
Let's say you have a revenue-sharing contract that needs to distribute funds to multiple recipients. Here's the naive version people usually write first:
import { Transaction, ZswapOffer } from '@midnight-ntwrk/ledger';
import { submitTx } from '@midnight-ntwrk/midnight-js-contracts';
// Don't do this for large recipient sets
async function distributeRevenue(
providers: ContractProviders,
recipients: Array<{ address: string; amount: bigint }>
): Promise<void> {
await contract.callTx.distribute(recipients);
// Works for 3 recipients, blows up at 7
}
This works until it doesn't. Each recipient adds a ZswapOutput to the fallible offer, and transaction weight grows with every one. Eventually you hit the block weight ceiling.
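Before writing any contract logic, it's worth doing the back-of-envelope sizing. A small sketch, assuming weight grows roughly linearly per recipient; `baseWeight` and `perRecipientWeight` are placeholder numbers you'd measure yourself with mockProve() on zero- and one-recipient transactions:

```typescript
// Back-of-envelope: how many recipients fit in one transaction?
// All inputs are measured/assumed values, not Midnight constants.
function maxRecipientsPerTx(
  blockWeightLimit: number,
  baseWeight: number, // weight of the tx with zero recipients
  perRecipientWeight: number, // marginal weight per ZswapOutput
  headroom = 0.7 // stay under 70% of the limit
): number {
  const budget = blockWeightLimit * headroom - baseWeight;
  return Math.max(0, Math.floor(budget / perRecipientWeight));
}
```

Run this once against measured numbers and you know your batch size ceiling before architecting anything else.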
The correct approach splits recipient handling across the transaction's segment structure. Operations that must succeed (fee payment, internal state updates) belong in the guaranteed segment. Recipient payouts, which may need to be individually retryable, belong in fallible segments.
Here's how to compose this manually using the ledger API:
import {
Transaction,
type Proofish,
type Signaturish,
type Bindingish,
} from '@midnight-ntwrk/ledger';
async function buildSettlementTransaction(
providers: ContractProviders,
recipientBatch: RecipientBatch
): Promise<Transaction<Signaturish, Proofish, Bindingish>> {
// Build the core transaction with guaranteed segment
// (fee payment, contract state mutation)
const guaranteedTx = await buildGuaranteedPortion(providers, recipientBatch.total);
// Build recipient payout as fallible portion
const fallibleTx = await buildFalliblePayouts(providers, recipientBatch.recipients);
// Merge: guaranteed + fallible compose into one transaction
return guaranteedTx.merge(fallibleTx);
}
The merge() method on Transaction combines two transactions. One constraint: if both transactions have contract interactions, or they spend the same coins, merge throws. Structure your builds to avoid overlap.
For cleaner composition, use Transaction.fromPartsRandomized() when building components that will be merged later—it randomizes segment IDs to prevent collisions:
import {
Transaction,
ZswapOffer,
type ZswapOutput,
type Proofish,
type Signaturish,
type Bindingish,
} from '@midnight-ntwrk/ledger';
function buildRecipientPayout(
outputs: ZswapOutput[],
): Transaction<Signaturish, Proofish, Bindingish> {
const offer = ZswapOffer.fromOutputs(outputs);
// fromPartsRandomized avoids segment ID collisions when merging
return Transaction.fromPartsRandomized(
ZswapOffer.empty(), // no guaranteed offer for this portion
offer, // fallible: recipient payouts go here
new Map() // no contract intents in this sub-tx
);
}
Atomic Multi-Operation Execution
When you need multiple contract actions to execute as one atomic unit, addCalls() is your tool. It adds contract calls to a transaction and automatically places them in the appropriate segment based on the operation type.
import { Transaction, ZswapOffer } from '@midnight-ntwrk/ledger';
import { type ContractCall } from '@midnight-ntwrk/midnight-js-contracts';
async function atomicMultiOperation(
providers: ContractProviders,
operations: ContractCall[]
): Promise<string> {
// Start with empty transaction
let tx = Transaction.fromParts(
ZswapOffer.empty(),
ZswapOffer.empty(),
new Map()
);
// addCalls() manages segment placement automatically
for (const operation of operations) {
tx = await tx.addCalls([operation]);
}
// Cost check before we commit to proving (mockProve is cheap)
const mock = await tx.mockProve(providers.costModel);
const cost = mock.cost();
if (cost.weight > BLOCK_WEIGHT_LIMIT) {
throw new Error(`Transaction weight ${cost.weight} exceeds limit ${BLOCK_WEIGHT_LIMIT}`);
}
// Prove, bind, submit
const proven = await tx.prove(providers.prover, providers.costModel);
const bound = proven.bind();
return providers.midnight.submitTx(bound);
}
The key thing addCalls() does that manual construction doesn't: it correctly places Zswap components into the guaranteed or fallible section based on the call's characteristics. Write the ZswapOffer placement by hand and you'll likely get it wrong.
Block Weight Constraints and Error 1010
Error 1010—POOL_INVALID_TX—is the block author telling you the transaction is invalid before execution. The decomposition AUTHOR(1000) + 10 tells you where the rejection came from: AUTHOR (code 1000) is the block author policy, and 10 is the specific violation within that policy.
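That composition makes error codes mechanically decomposable, which is handy for logging. A small sketch (my own helper for diagnostics, not an official API):

```typescript
// Split a pool rejection code into its policy and violation parts,
// following the AUTHOR(1000) + 10 composition described above.
function decodePoolError(code: number): { policy: number; violation: number } {
  return {
    policy: Math.floor(code / 1000) * 1000, // e.g. 1000 = AUTHOR
    violation: code % 1000, // e.g. 10 = the specific rejection
  };
}
```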
Exceeding block weight is the most common trigger for violation 10, but not the only one. The block weight budget covers:
- Proof verification time: ZK proofs have variable verification cost
- State read/write operations: each ZswapInput, ZswapOutput, and ZswapTransient contributes
- Contract execution steps: runtime operations within your Compact contract
You can check weight before submitting:
async function checkTransactionFeasibility(
tx: Transaction<Signaturish, Proofish, Bindingish>,
providers: ContractProviders,
): Promise<{ feasible: boolean; weight: number; estimatedFee: bigint }> {
// mockProve gives a proven-like structure without full proof generation
// This is fast and cheap—use it for preflight checks
const mockProven = await tx.mockProve(providers.costModel);
const cost = mockProven.cost();
const fees = mockProven.fees(providers.costModel);
return {
feasible: cost.weight <= BLOCK_WEIGHT_LIMIT,
weight: cost.weight,
estimatedFee: fees,
};
}
mockProve() is the escape hatch here. Full proof generation is expensive—think seconds to minutes depending on circuit complexity. mockProve() creates a proof-like structure sufficient for fee estimation and weight checking, without the full ZK computation. Use it aggressively during development to catch weight violations early.
The practical weight budget I've found through testing: aim for transactions under 70% of the theoretical maximum. The limit isn't static—it depends on current block fill—so leaving headroom matters. With competitive mempool conditions, transactions near the ceiling get deprioritized.
Splitting Large Operations Across Multiple Transactions
When a single transaction can't fit your operation, you sequence. This sounds simple, but the execution model complicates it. Remember: guaranteed segments always execute. If you split a seven-recipient payout across two transactions and the second fails at the network level, the first is already confirmed. You have a partial settlement state.
Three strategies for handling this:
1. Idempotent State Machine
Design your contract to track settlement progress. Each transaction advances state; the contract knows who has been paid.
// In your Compact contract:
// paid_recipients: Map<Address, Boolean>
// settlement_id: Field (unique per settlement round)
// In TypeScript:
async function settleWithIdempotency(
providers: ContractProviders,
settlementId: string,
recipients: Recipient[]
): Promise<void> {
const batches = chunkByWeight(recipients, TARGET_WEIGHT_PER_TX);
for (const batch of batches) {
const alreadyPaid = await contract.query.getPaidStatus(settlementId, batch);
const unpaid = batch.filter(r => !alreadyPaid.has(r.address));
if (unpaid.length === 0) continue; // idempotent: skip if already done
await contract.callTx.settleRecipients(settlementId, unpaid);
}
}
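The chunkByWeight helper above is left undefined. A minimal self-contained sketch, assuming you've already measured a per-recipient weight estimate with mockProve() (the estimate constant here is a placeholder, not a Midnight value):

```typescript
interface Recipient {
  address: string;
  amount: bigint;
}

// Assumed: measured once via mockProve() on a one-recipient transaction.
const ESTIMATED_WEIGHT_PER_RECIPIENT = 10_000;

// Hypothetical helper: split recipients into batches whose estimated
// total weight stays under the per-transaction target.
function chunkByWeight(
  recipients: Recipient[],
  targetWeightPerTx: number
): Recipient[][] {
  const perBatch = Math.max(
    1,
    Math.floor(targetWeightPerTx / ESTIMATED_WEIGHT_PER_RECIPIENT)
  );
  const batches: Recipient[][] = [];
  for (let i = 0; i < recipients.length; i += perBatch) {
    batches.push(recipients.slice(i, i + perBatch));
  }
  return batches;
}
```

A fixed estimate is cruder than the mockProve-per-batch loop shown later, but it's cheap enough to run on every settlement.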
2. Two-Phase Commit Pattern
Reserve funds atomically, then release in batches. The reserve happens in a single guaranteed segment (can't be partially reverted). Releases are fallible and can be retried independently.
async function twoPhaseSettle(
providers: ContractProviders,
recipients: Recipient[]
): Promise<void> {
const totalAmount = recipients.reduce((sum, r) => sum + r.amount, 0n);
// Phase 1: Lock funds (guaranteed segment - atomic)
const reservationId = await contract.callTx.reserveFunds(totalAmount);
// Phase 2: Release per batch (fallible - each is independently retryable)
const batches = chunkByWeight(recipients, TARGET_WEIGHT_PER_TX);
for (const [idx, batch] of batches.entries()) {
await retryWithBackoff(() =>
contract.callTx.releaseBatch(reservationId, batch, idx)
);
}
}
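retryWithBackoff is also an assumed helper. One plausible implementation with exponential delay (the attempt count and base delay are illustrative defaults):

```typescript
// Hypothetical helper: retry an async operation with exponential backoff.
async function retryWithBackoff<T>(
  op: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        // Exponential delay: 500ms, 1000ms, 2000ms, ...
        await new Promise(res => setTimeout(res, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

Only use this around fallible-segment releases that are idempotent on the contract side; retrying a non-idempotent call is how you double-pay someone.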
3. Weight-Aware Batching
Compute weight dynamically per batch rather than using a fixed count:
const WEIGHT_BUFFER = 0.7; // stay at 70% of limit
async function buildWeightAwareBatches(
providers: ContractProviders,
recipients: Recipient[]
): Promise<Transaction[]> {
const transactions: Transaction[] = [];
let currentTx = Transaction.fromParts(
ZswapOffer.empty(), ZswapOffer.empty(), new Map()
);
let callsInCurrentTx = 0;
for (const recipient of recipients) {
const candidateTx = await currentTx.addCalls([
buildRecipientCall(recipient)
]);
const mock = await candidateTx.mockProve(providers.costModel);
const weight = mock.cost().weight;
if (weight > BLOCK_WEIGHT_LIMIT * WEIGHT_BUFFER && callsInCurrentTx > 0) {
// Current tx is full: flush it and start a new one with this recipient
transactions.push(currentTx);
currentTx = await Transaction.fromParts(
ZswapOffer.empty(), ZswapOffer.empty(), new Map()
).addCalls([buildRecipientCall(recipient)]);
callsInCurrentTx = 1;
} else {
currentTx = candidateTx;
callsInCurrentTx += 1;
}
}
if (callsInCurrentTx > 0) {
transactions.push(currentTx);
}
return transactions;
}
The iteration pattern—build, check weight, flush or continue—is tedious but reliable. The alternative (fixed recipient counts per batch) breaks the moment circuit complexity changes across a network upgrade.
Guaranteed vs. Fallible: The Design Trap
Here's the thing that burned me. I had a settlement contract where the guaranteed segment updated the contract's internal accounting (marking funds as distributed), and the fallible segment did the actual token transfers. Clean separation, right?
Wrong.
If the fallible segment reverts—maybe a recipient's address is invalid, maybe there's a transient network issue—the guaranteed segment's state update stands. The contract thinks the funds were distributed. The tokens are still sitting in the contract. I had to add a reconciliation mechanism to detect and correct this.
The rule I follow now: don't advance state in the guaranteed segment that assumes fallible success.
// Dangerous: state update in guaranteed, transfer in fallible
// If fallible reverts, state says "paid" but no tokens moved
contract.guaranteed.markAsPaid(recipientId); // BAD
contract.fallible.transferTokens(recipientId); // if this reverts...
// Safer: keep state and transfer in the same segment
contract.fallible.transferAndMark(recipientId); // atomic: both or neither
The guaranteed segment should handle what it was designed for: fee logic, authorization checks, nonce updates. State that implies successful completion of the fallible work should live in the fallible segment.
For complex flows where you genuinely need guaranteed-segment state (like locking a mutex to prevent concurrent settlements), design explicit recovery paths:
interface SettlementState {
locked: boolean;
lockId: string;
completedAt: bigint | null;
}
// guaranteed: acquire lock
// fallible: do work + release lock
// recovery: check if locked without completedAt → lock is orphaned, can clear
async function detectOrphanedLock(
contract: DeployedContract
): Promise<boolean> {
const state = await contract.query.getSettlementState();
return state.locked && state.completedAt === null;
}
Submitting the Transaction Sequence
When you've built a sequence of transactions, submission order matters. submitTxAsync() returns a handle you can await for finalization; submitTx() blocks until confirmed.
import {
submitTx,
submitTxAsync,
type TransactionResult,
} from '@midnight-ntwrk/midnight-js-contracts';
async function submitSequence(
providers: ContractProviders,
transactions: Transaction<Signaturish, Proofish, Bindingish>[]
): Promise<TransactionResult[]> {
const results: TransactionResult[] = [];
for (const tx of transactions) {
const proven = await tx.prove(providers.prover, providers.costModel);
const bound = proven.bind();
// submitTxAsync: fire and continue; await for finalization
const handle = await submitTxAsync(providers.midnight, bound);
const result = await handle.waitForFinalization();
if (result.status === 'FailEntirely') {
// Hard failure: transaction rejected. Don't continue sequence.
throw new TransactionRejectedError(result, tx.transactionHash());
}
if (result.status === 'FailFallible') {
// Soft failure: guaranteed ran, fallible reverted.
// Decision: retry fallible? log and continue? abort?
await handlePartialFailure(result, tx);
}
results.push(result);
}
return results;
}
The FailFallible branch is where most production bugs live. The transaction "succeeded" from a network perspective (it's in a block, fees are paid), but your business logic didn't complete. Build explicit handling for this case before you ship.
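The handlePartialFailure call in the sequence above is deliberately abstract, but its core is a policy decision you can isolate as a pure function. Everything here (the names, the idempotency flag, the retry budget) is an assumption of mine, not a Midnight API:

```typescript
type PartialFailureAction = 'retry' | 'reconcile' | 'abort';

// Hypothetical policy for a FailFallible result: guaranteed segment ran
// and fees were paid, but the fallible segment reverted.
function classifyPartialFailure(
  attempt: number,
  maxRetries: number,
  fallibleIsIdempotent: boolean
): PartialFailureAction {
  if (fallibleIsIdempotent && attempt < maxRetries) {
    // Safe to rebuild and resubmit only the fallible work
    return 'retry';
  }
  if (fallibleIsIdempotent) {
    // Retries exhausted: record state for offline reconciliation
    return 'reconcile';
  }
  // Non-idempotent fallible work: never blind-retry, stop the sequence
  return 'abort';
}
```

Pulling the decision out of the submission loop makes it unit-testable, which matters for the branch your integration tests will exercise least.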
Putting It Together: A Settlement Service
Here's a complete pattern combining the above:
class SettlementService {
constructor(
private providers: ContractProviders,
private contract: DeployedSettlementContract,
) {}
async settle(settlementId: string, recipients: Recipient[]): Promise<void> {
// 1. Preflight: check for orphaned state from previous attempt
if (await detectOrphanedLock(this.contract)) {
await this.contract.callTx.clearOrphanedLock(settlementId);
}
// 2. Build weight-aware transaction batches
const batches = await buildWeightAwareBatches(this.providers, recipients);
// 3. Preflight weight check on each batch
for (const [i, tx] of batches.entries()) {
const { feasible, weight } = await checkTransactionFeasibility(
tx, this.providers
);
if (!feasible) {
throw new Error(`Batch ${i} too heavy: ${weight}. Reduce batch size.`);
}
}
// 4. Submit in sequence
const results = await submitSequence(this.providers, batches);
// 5. Verify completion
const allSucceeded = results.every(r =>
r.status === 'SucceedEntirely'
);
if (!allSucceeded) {
// Log partial state for reconciliation
await this.logSettlementIncomplete(settlementId, results);
throw new PartialSettlementError(settlementId, results);
}
}
}
What I'd Do Differently
Three things I'd change if I were starting over on a multi-recipient settlement system:
Weight budget first. Before writing contract logic, estimate your per-recipient transaction weight using mockProve() with a single recipient. Everything else flows from that number.
Fallible-only for payouts. Never put token transfers in the guaranteed segment. Not for performance, not for "simplicity." The partial-revert behavior is hard to reason about under production conditions.
Sequence numbers on every batch. Give each batch a sequence number that's enforced by the contract. Out-of-order delivery becomes a recoverable state, not a corrupted one.
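A sketch of what that enforcement looks like on the client side; the contract mirrors the same check on-chain. Names here are illustrative:

```typescript
// A batch may only be applied if its sequence number is exactly one
// past the last batch the contract has recorded.
function isNextBatch(lastAppliedSeq: number, incomingSeq: number): boolean {
  return incomingSeq === lastAppliedSeq + 1;
}

// Re-ordering buffer: pick out the batch whose turn has come,
// holding out-of-order arrivals until their predecessors land.
function nextApplicable<T extends { seq: number }>(
  lastAppliedSeq: number,
  pending: T[]
): T | undefined {
  return pending.find(b => isNextBatch(lastAppliedSeq, b.seq));
}
```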
The block weight limit isn't the hard part. The hard part is designing around the fact that "transaction submitted" and "business operation complete" are two different things on Midnight.