The Proof Server and Indexer: How Midnight Actually Processes Your Transactions
When I first started building on Midnight, I treated the proof server like a magic black box: start it, point my app at port 6300, and hope it works. That approach lasted exactly two days before I hit my first invalid proof error and had no idea where to start debugging.
This guide is what I wish I'd had then. We're going to look at how the proof server generates zero-knowledge proofs from your circuit inputs, how the indexer turns raw blockchain data into queryable state, and—critically—how these two components stay synchronized with the ledger. Miss that last part and you'll waste an afternoon chasing errors that have nothing to do with your code.
What Actually Happens When You Submit a Transaction
Before touching Docker, it helps to trace a transaction from your dApp code to the chain.
When you call a Compact contract function, a few things happen in sequence:
Circuit execution — Your Compact contract compiles to a circuit. When the function runs, the circuit executes locally using your private inputs (which never leave your machine) and produces a witness.
Proof generation — The proof server takes that witness and generates a ZK proof. This proves the computation was executed correctly without revealing the private inputs. It's computationally heavy; proof times vary from seconds to minutes depending on circuit size.
Transaction assembly — Midnight.js assembles a transaction containing the proof, public outputs, and any token transfers (DUST or NIGHT).
Submission and verification — The transaction goes to the Midnight node. The node runs the Impact VM to verify the proof on-chain. If the proof is invalid, the transaction is rejected.
Indexing — Once the transaction is included in a block, the indexer picks it up, processes contract state changes, and makes the updated state queryable.
The proof server is involved in step 2. The indexer is involved in step 5. They're separate services that don't talk to each other—the node is what connects them.
Setting Up the Local Stack
The typical local development environment runs three containers: the Midnight node, the proof server, and the indexer. Here's the docker-compose.yml structure you'll encounter in official examples:
```yaml
services:
  node:
    image: midnightntwrk/node:7.0.0
    ports:
      - "9944:9944"
  proof-server:
    image: midnightntwrk/proof-server:7.0.0
    ports:
      - "6300:6300"
  indexer:
    image: midnightntwrk/midnight-indexer:7.0.0
    ports:
      - "8088:8088"
    environment:
      NODE_URL: ws://node:9944
    depends_on:
      - node
```
Start everything with:
```shell
docker compose up -d
```
Wait about 30 seconds after startup before hitting any endpoints. The indexer needs to connect to the node and do its initial sync before it'll return useful data.
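Rather than sleeping a fixed interval, you can poll the indexer until it answers a trivial query. A minimal sketch in TypeScript (Node 18+ for global fetch); the no-argument block query is an assumption about the schema, so adjust it to whatever cheap query your indexer version accepts:

```typescript
// Poll the indexer's GraphQL endpoint until it responds, instead of
// sleeping a fixed 30 seconds. INDEXER_URL matches the local
// docker-compose setup used in this guide.
const INDEXER_URL = 'http://localhost:8088/api/v4/graphql';

// Smallest useful readiness probe: ask for the latest block height.
// (That `block` accepts no arguments here is an assumption; substitute
// any cheap query your indexer version supports.)
function buildReadinessQuery(): string {
  return JSON.stringify({ query: '{ block { height } }' });
}

async function waitForIndexer(url: string, attempts = 30): Promise<void> {
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch(url, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: buildReadinessQuery(),
      });
      const body = await res.json();
      if (!body.errors) return; // indexer is up and answering queries
    } catch {
      // connection refused: container not listening yet, keep waiting
    }
    await new Promise((r) => setTimeout(r, 1000)); // back off one second
  }
  throw new Error(`indexer at ${url} not ready after ${attempts} attempts`);
}
```

Call `waitForIndexer(INDEXER_URL)` at the top of your dev scripts and the rest of your code can assume a synced indexer.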
The Version Number That Will Ruin Your Day
See those 7.0.0 tags? All three must match the ledger version your Midnight.js SDK expects. If they don't, you'll get failures that look completely unrelated to versioning.
When your proof server version doesn't match the ledger, proof generation either fails outright or—worse—produces an invalid proof that the node rejects. The error you'll see at the node level is something like:
```
Transaction rejected: proof verification failed
```
Not "version mismatch." Not "wrong proof server." Just proof verification failed, which could mean a dozen different things. What actually happened is the proof was generated against a different circuit format than the ledger expects.
The fix is mechanical: check the @midnight-ntwrk/ledger package version in your package.json; your Docker tags need to match that major version. If you're on @midnight-ntwrk/ledger: "^7.0.0", run midnightntwrk/proof-server:7.0.0. Don't mix a 4.x proof server with a 7.x ledger package.
To verify what's actually running:
```shell
docker compose logs proof-server | head -20
```
You'll see the server announce its version on startup.
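To catch the mismatch before it costs you an afternoon, a small check in your tooling can compare the ledger dependency against your Docker tag. A hypothetical helper, not part of any SDK; the version strings are illustrative:

```typescript
// Hypothetical pre-flight check: compare the major version of the
// @midnight-ntwrk/ledger dependency range against a Docker image tag.

function majorOf(version: string): number {
  // Strip range prefixes like ^ or ~ before parsing the major component.
  const cleaned = version.replace(/^[\^~>=<\s]+/, '');
  return Number(cleaned.split('.')[0]);
}

function tagsMatch(ledgerRange: string, dockerTag: string): boolean {
  return majorOf(ledgerRange) === majorOf(dockerTag);
}

console.log(tagsMatch('^7.0.0', '7.0.0')); // true
console.log(tagsMatch('^7.0.0', '4.0.0')); // false: the classic silent mismatch
```

Wiring this into CI (read package.json, grep the compose file) turns "proof verification failed" into a build-time error with an obvious message.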
The Proof Server in Detail
What it does
The proof server is an HTTP service. Your Midnight.js app connects to it and sends circuit inputs; the proof server returns a ZK proof. That's the entire interface from the outside.
Inside, it's running prover logic over the ZK circuits compiled from your Compact contracts. The proof server needs to have the right ZK parameters (prover keys, verifier keys) for the circuits your app uses. This is where another common failure mode lives.
ZK Parameters
ZK proof systems require large parameter files—prover and verifier keys specific to circuit size. The official midnightnetwork/proof-server Docker image downloads these at startup from Midnight's S3 bucket. This is fine in stable network conditions. It breaks when:
- You're in a CI environment without reliable outbound HTTPS
- The S3 download stalls halfway through (it can and does happen)
- You need offline development
When the parameter download fails, you'll see:

```
Error: Public parameters for k=16 not found in cache
```

or

```
FATAL: Failed to initialize ZK parameters
```
The recovery approach is to pre-bake the parameters into your Docker image rather than downloading at runtime. The community-maintained bricktowers/midnight-proof-server repository does exactly this with a multi-stage build:
```dockerfile
# Stage 1: fetch the ZK parameters at image build time
# (the fetch script comes from the repository; abridged here)
FROM alpine AS downloader
# Download zk params to /.cache/midnight/zk-params
ARG CIRCUIT_PARAM_RANGE="16 17"
RUN ./fetch-zk-params.sh $CIRCUIT_PARAM_RANGE

# Stage 2: layer the pre-fetched params over the official image
ARG PROOF_SERVER_VERSION
FROM midnightnetwork/proof-server:${PROOF_SERVER_VERSION}
COPY --from=downloader /.cache/midnight/zk-params /.cache/midnight/zk-params
```
One caveat: circuit 24 parameters exceed 3 GB. Only include the circuit sizes your application actually uses, or your image becomes unwieldy.
How Proof Generation Actually Works
When you send a transaction request, Midnight.js serializes your circuit inputs and POSTs them to http://localhost:6300/prove. The proof server receives the circuit identifier (derived from your Compact contract), the private inputs, and the public inputs. It runs the circuit through the prover, which executes the circuit logic and constructs a proof that the execution produced the claimed outputs.
The proof itself is a fixed-size blob regardless of how complex the computation inside was—that's one of the useful properties of ZK proofs. A simple counter increment and a complex auction settlement produce proofs of similar size. What scales with complexity is the time it takes to generate the proof.
Proof generation time depends on circuit size, measured in "constraints" (basically, the number of arithmetic gates in the circuit). Midnight uses BLS12-381 as of proof server 4.0.0 (it switched from Pluto-Eris in that release). Small circuits prove in seconds. Larger circuits with many constraints can take 30–60 seconds on a standard developer laptop. There's no progress indicator during this time; your request is just pending. If you're hitting timeouts in your HTTP client, increase them—proof generation is expected to be slow.
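Because proving can legitimately take a minute or more, give any hand-written HTTP call to the proof server a generous timeout. A sketch using the standard AbortSignal.timeout (Node 18+); the two-minute budget and the content type are assumptions to tune for your setup:

```typescript
// Proof generation is slow by design: a default 30s client timeout will
// abort legitimate requests. PROOF_TIMEOUT_MS is an assumption; size it
// to your largest circuit plus headroom.
const PROOF_TIMEOUT_MS = 120_000;

function proofRequestOptions(body: string) {
  return {
    method: 'POST',
    // Content type is illustrative; match whatever your client sends.
    headers: { 'Content-Type': 'application/json' },
    body,
    // AbortSignal.timeout aborts the fetch if no response arrives in time.
    signal: AbortSignal.timeout(PROOF_TIMEOUT_MS),
  };
}
```

The same principle applies when using Midnight.js providers: if your runtime or proxy enforces its own request timeout, raise it for the proof server route specifically.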
One counter-intuitive thing: the proof server is stateless between requests. Each proof request is independent. This means horizontal scaling is straightforward (just run more instances behind a load balancer), but it also means the proof server doesn't accumulate state that you need to worry about. If a container crashes, restart it and it'll be immediately ready.
Connecting from Midnight.js
In your application, the proof server connection is configured via httpClientProofProvider:
```typescript
import { httpClientProofProvider } from '@midnight-ntwrk/midnight-js-http-client-proof-provider';
import { FetchZkConfigProvider } from '@midnight-ntwrk/midnight-js-fetch-zk-config-provider';

const proofProvider = httpClientProofProvider(
  'http://localhost:6300',
  new FetchZkConfigProvider(
    'http://localhost:6300', // where to fetch ZK artifacts
    fetch
  )
);
```
For Node.js environments, swap FetchZkConfigProvider for the filesystem-based provider, which loads artifacts from a local path instead of fetching them over HTTP.
The proof server URL you pass here needs to be reachable from whatever environment your app runs in. When running locally, http://localhost:6300 works. When running in a container alongside the proof server, use the container name: http://proof-server:6300. This distinction matters—I've seen people deploy their app in Docker and forget to update the proof server URL from localhost, then spend an hour debugging why proofs are failing when everything else looks fine.
The Indexer
Where the proof server is synchronous and transactional (you call it, it responds), the indexer is a streaming service that follows the chain in real time. It reads every block, processes contract state changes, and exposes that data through GraphQL.
Endpoints
- HTTP: POST http://localhost:8088/api/v4/graphql
- WS: ws://localhost:8088/api/v4/graphql/ws
The WebSocket endpoint uses the graphql-ws protocol. Make sure your client sends Sec-WebSocket-Protocol: graphql-transport-ws in the handshake header—some generic WebSocket clients don't do this automatically and the connection will fail silently.
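With the standard WebSocket constructor (browsers, and Node 21+ globally), the subprotocol goes in the second argument; libraries such as graphql-ws set it for you, but if you're hand-rolling the connection, a sketch looks like this:

```typescript
// Open a raw WebSocket to the indexer with the graphql-transport-ws
// subprotocol. The second constructor argument is what puts
// Sec-WebSocket-Protocol into the handshake; omit it and the server
// side of the graphql-ws protocol will refuse the connection.
const GRAPHQL_WS_SUBPROTOCOL = 'graphql-transport-ws';

function openIndexerSocket(url: string) {
  const ws = new WebSocket(url, GRAPHQL_WS_SUBPROTOCOL);
  ws.addEventListener('open', () => {
    // graphql-ws handshake: connection_init must be the first message sent
    ws.send(JSON.stringify({ type: 'connection_init' }));
  });
  return ws;
}
```

In practice you'd use the graphql-ws client library rather than raw frames; the point of the sketch is where the subprotocol lives.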
Querying Blockchain State
The indexer's query API covers blocks, transactions, and contract actions. Here are the queries you'll reach for most often.
Get a block by height:
```graphql
query {
  block(offset: { height: 1000 }) {
    hash
    height
    protocolVersion
    timestamp
    transactions {
      id
      hash
    }
  }
}
```
Get contract state at a specific block:
```graphql
query {
  contractAction(address: "3031323334...", offset: { blockOffset: { height: 1000 } }) {
    __typename
    ... on ContractDeploy {
      address
      state
      zswapState
    }
    ... on ContractCall {
      address
      entryPoint
      state
    }
  }
}
```
The address field is the contract's on-chain address in hex. You get this when you deploy.
Check a wallet's DUST generation capacity:
```graphql
query {
  dustGenerationStatus(
    cardanoRewardAddresses: ["stake_test1uqtgpdz0chm6jnxx7erfd7rhqfud7t4ajazx8es8xk8x3ts06psdv"]
  ) {
    registered
    nightBalance
    generationRate
    currentCapacity
  }
}
```
WebSocket Subscriptions
For real-time updates, the WebSocket API follows the graphql-ws protocol. After the WebSocket handshake, you exchange JSON messages.
Initial handshake:
{ "type": "connection_init" }
You'll receive { "type": "connection_ack" } back. Then start a subscription:
Subscribe to new blocks:
```json
{
  "id": "1",
  "type": "subscribe",
  "payload": {
    "query": "subscription { blocks(offset: { height: 1000 }) { hash height timestamp transactions { id hash } } }"
  }
}
```
Monitor contract activity:
```json
{
  "id": "2",
  "type": "subscribe",
  "payload": {
    "query": "subscription { contractActions(address: \"3031323334...\", offset: { height: 1 }) { __typename ... on ContractCall { address entryPoint } ... on ContractDeploy { address state } } }"
  }
}
```
Real-time unshielded transaction monitoring:
```json
{
  "id": "3",
  "type": "subscribe",
  "payload": {
    "query": "subscription { unshieldedTransactions(address: \"mn_addr_test1...\") { __typename ... on UnshieldedTransaction { createdUtxos spentUtxos } } }"
  }
}
```
Shielded transactions require a session first. The connect mutation over HTTP establishes the session:

```graphql
mutation {
  connect(viewingKey: "mn_shield-esk1abcdef...")
}
```

The mutation returns a session ID; use that in the WebSocket subscription:

```json
{
  "id": "4",
  "type": "subscribe",
  "payload": {
    "query": "subscription { shieldedTransactions(sessionId: \"returned-session-id\", index: 0) { __typename ... on ViewingUpdate { index } } }"
  }
}
```
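Putting the two steps together from TypeScript: the payload builders below are a hedged sketch that mirrors the message shapes shown above, with the network plumbing and error handling left out:

```typescript
// Two-step shielded flow: (1) connect mutation over HTTP to register a
// viewing key and obtain a session ID, (2) subscribe over WebSocket
// with that session ID. Shapes mirror the raw messages above.

function buildConnectMutation(viewingKey: string): string {
  return JSON.stringify({
    query: `mutation { connect(viewingKey: "${viewingKey}") }`,
  });
}

function buildShieldedSubscription(sessionId: string, index = 0): string {
  return JSON.stringify({
    id: '4',
    type: 'subscribe',
    payload: {
      query: `subscription { shieldedTransactions(sessionId: "${sessionId}", index: ${index}) { __typename } }`,
    },
  });
}
```

POST the first payload to the HTTP endpoint, pull the session ID out of the response, then send the second payload over an already-acknowledged graphql-ws connection.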
Rate Limits and Query Constraints
The indexer enforces query constraints: max depth, max fields, timeout, and complexity cost. If you're building complex queries, you'll run into these during development. The error responses are descriptive—they tell you which constraint you exceeded. The solution is usually to paginate rather than fetch everything in one query.
The complexity cost is per-field, so deeply nested queries that join blocks → transactions → contractActions will hit limits faster than flat queries. If you need to enumerate a lot of historical data, use offset-based pagination on the blocks query and process in batches.
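For bulk historical reads, walk the chain in fixed-size batches of block heights, issuing one flat query per batch instead of one deep join. A sketch of the batch sequence (the batch size of 100 is an arbitrary starting point; shrink it if you still hit complexity limits):

```typescript
// Generate inclusive [start, end] height ranges for paginated reads.
// Each range becomes one flat `blocks` query instead of a single deep
// query joining blocks -> transactions -> contractActions.
function* blockBatches(fromHeight: number, toHeight: number, batchSize = 100) {
  for (let start = fromHeight; start <= toHeight; start += batchSize) {
    const end = Math.min(start + batchSize - 1, toHeight);
    yield { start, end };
  }
}

// Example: three batches covering heights 0 through 250.
for (const batch of blockBatches(0, 250, 100)) {
  console.log(batch); // { start: 0, end: 99 }, { start: 100, end: 199 }, { start: 200, end: 250 }
}
```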
What the Indexer Does Not Do
One thing that trips people up: the indexer doesn't decrypt shielded transaction content. It can tell you that a shielded transaction happened (you'll see it in the block), but the actual inputs and outputs are encrypted to the recipient's viewing key. The shieldedTransactions subscription decrypts in the client using the viewing key you provided to the connect mutation—the decryption happens in Midnight.js code running on your machine, not on the indexer server.
This is the correct behavior from a privacy perspective, but it means you can't build a "see all shielded transactions" explorer without having the relevant viewing keys. What you can build is a per-wallet transaction history by providing that wallet's viewing key.
indexerPublicDataProvider vs Direct Indexer Access
This is where a lot of confusion lives, and it's worth being explicit about the tradeoff.
indexerPublicDataProvider is a high-level abstraction in Midnight.js:
```typescript
import { indexerPublicDataProvider } from '@midnight-ntwrk/midnight-js-indexer-public-data-provider';

const provider = indexerPublicDataProvider(
  'http://localhost:8088/api/v4/graphql', // HTTP query URL
  'ws://localhost:8088/api/v4/graphql/ws' // WebSocket subscription URL
);
```
This is what you want for dApp frontends. It handles:
- Automatic reconnection on WebSocket drops
- Retry logic on query failures
- The specific query shapes Midnight.js needs for transaction finalization and contract state sync
You pass this provider into your contract's deployContract or findDeployedContract call, and it handles everything.
Direct indexer access is for when you're building something outside the Midnight.js framework—a block explorer, a backend analytics service, a contract monitoring tool. You write GraphQL queries yourself against the HTTP and WebSocket endpoints.
The two aren't mutually exclusive in the same application. A typical pattern: use indexerPublicDataProvider for your contract interactions (it's battle-tested), and add custom GraphQL queries for any analytics or monitoring you need beyond what the SDK provides.
One thing indexerPublicDataProvider doesn't expose: the raw block and transaction data. If you need to enumerate all contracts deployed to the chain, or track every call to a specific entry point, you'll need direct access to the blocks and contractActions queries.
Troubleshooting Reference
proof verification failed on node — Almost always a version mismatch between proof server and ledger. Check your Docker tags match your @midnight-ntwrk/ledger package version.
Public parameters for k=N not found in cache — ZK parameter download failed or incomplete. Either restart the proof server container (it'll retry the download), or switch to a pre-baked image.
Indexer returns empty for recent transactions — The indexer needs time to sync. Check its logs with docker compose logs indexer. You'll see it processing blocks; wait until it catches up.
WebSocket subscription connects but never receives events — Verify you're sending Sec-WebSocket-Protocol: graphql-transport-ws in the handshake. Also verify you sent connection_init and received connection_ack before starting subscriptions.
Transaction rejected: invalid state transition — This isn't a proof server issue; this is a contract logic error. Your circuit executed and proved something, but the resulting state transition violates a rule. Debug at the Compact layer.
Shielded transaction subscription never fires — You skipped the connect mutation. The session ID is required; shielded subscriptions don't work without it.
Putting It Together
The proof server and indexer handle opposite ends of the transaction lifecycle. The proof server is pre-submission—it generates the cryptographic proof that makes privacy-preserving transactions possible without sending your private data anywhere. The indexer is post-submission—it processes finalized blocks and makes contract state queryable in real time.
For most dApp development, you configure them once and they stay in the background. The times you'll need to think about them explicitly:
- Upgrading the SDK: bump your ledger package version, update your Docker tags to match, rebuild
- Moving to production: the indexer GraphQL endpoints change from localhost to the testnet/mainnet endpoints; indexerPublicDataProvider just takes different URLs
- Building backend tooling: reach past indexerPublicDataProvider to the raw GraphQL API
The version mismatch failure is the one that bites people hardest because the error messages don't point there. Keep your Docker tags in sync with your SDK version and you'll avoid the most common class of mysterious failures.
Moving to Testnet and Mainnet
Everything above is for local development. When you move to testnet or mainnet, you stop running your own proof server and indexer. Midnight's network operators run these services for you.
The proof server on testnet is public and accessible from browsers and backend services. Your httpClientProofProvider URL changes from http://localhost:6300 to the public testnet endpoint. Look up the current endpoint in Midnight's official documentation—it changes with major releases.
The indexer works the same way. Your indexerPublicDataProvider URLs swap from localhost to the public testnet indexer. The GraphQL schema is the same; only the host changes.
One important difference: the public testnet proof server may have longer queue times during periods of high activity, and proof generation on slower shared hardware takes longer than on your local machine. Build with generous timeouts from the start and don't assume sub-10-second proof times in production.
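One way to keep the local-to-testnet swap mechanical is to centralize every endpoint in a config object keyed by environment. The localhost values below match the local stack from this guide; the testnet hostnames are placeholders, not real endpoints, so substitute the current ones from Midnight's documentation:

```typescript
// Per-environment endpoint config. Local values match the compose
// stack above; testnet hostnames are PLACEHOLDERS (.invalid TLD) and
// must be replaced with the endpoints from the official docs.
type Env = 'local' | 'testnet';

interface Endpoints {
  proofServer: string;
  indexerHttp: string;
  indexerWs: string;
}

const endpoints: Record<Env, Endpoints> = {
  local: {
    proofServer: 'http://localhost:6300',
    indexerHttp: 'http://localhost:8088/api/v4/graphql',
    indexerWs: 'ws://localhost:8088/api/v4/graphql/ws',
  },
  testnet: {
    proofServer: 'https://proof-server.example.invalid', // placeholder
    indexerHttp: 'https://indexer.example.invalid/api/v4/graphql', // placeholder
    indexerWs: 'wss://indexer.example.invalid/api/v4/graphql/ws', // placeholder
  },
};

function endpointsFor(env: Env): Endpoints {
  return endpoints[env];
}
```

With this in place, switching environments is one value (say, from an environment variable) rather than a scavenger hunt through provider constructors.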