Usang Emmanuel

Proof server and Indexer: how Midnight processes transactions

A hands-on walkthrough of Midnight's two core infrastructure components: the proof server that generates zero-knowledge proofs locally, and the Indexer that makes on-chain data queryable via GraphQL.

Introduction

This tutorial is aimed at developers who are new to Midnight and want to understand how transactions are processed behind the scenes.

When you build a DApp on Midnight, two pieces of infrastructure do most of the heavy lifting behind the scenes: the proof server and the Indexer. One handles the privacy side, the other the data side, and you need both; understanding them is the difference between a DApp that works and one that fails in ways you can't explain.

The proof server is the reason your private data stays private on Midnight. It runs locally on your machine, takes the ZK circuits produced by your Compact contract, combines them with your private inputs, and produces a zero-knowledge proof. That proof is what gets submitted on-chain, not your data.

The Indexer handles the other direction. It watches the blockchain, parses every block and transaction, and exposes that data through a GraphQL API. Anything your DApp needs to read from the chain (contract state, transaction history, epoch info) flows through the Indexer.

In this tutorial we'll walk through what each component does, set both up with Docker, cover the version pinning that trips up most newcomers, and send real queries to the Indexer's GraphQL endpoint. By the end you'll have a working local stack and a mental model for how a Midnight transaction actually moves from your wallet to the chain and back. Walk with me, let's go.

Midnight transaction lifecycle: from user action through proof server, node, and Indexer back to DApp


Prerequisites

Before you begin, make sure you have:

  • A machine running Ubuntu (or another Linux distribution)
  • Docker installed and running
  • Basic familiarity with the command line
  • curl installed (for testing GraphQL queries)

What the proof server actually does

Midnight transactions are different from what you're used to on Ethereum or Solana. There's no signature in the usual sense. Instead, a transaction carries a zero-knowledge proof: a compact attestation that basically says "the computation described by this contract was executed correctly using valid private inputs" without revealing what those inputs actually were. Cool, right?

That proof doesn't just appear out of nowhere though. It requires the ZK circuits generated when your Compact contract is compiled, the verification keys that describe the circuit shape, and your actual witness data (balances, secrets, whatever the contract needs). The proof server is the process that takes all of that and produces the final zk-SNARK.

The important design choice here: it runs locally. Your private inputs never leave your machine. The server is a Docker container you run yourself, and the Midnight.js SDK talks to it over HTTP.

First-run behavior

The first time you start the proof server, it has to fetch some artifacts. You'll see logs like this:

INFO midnight_base_crypto::data_provider: Missing zero-knowledge verifying key
for Zswap inputs. Attempting to download from the host
https://midnight-s3-fileshare-dev-eu-west-1.s3.eu-west-1.amazonaws.com/

That's the server pulling down the ZK verification keys and ZKIR (Zero-Knowledge Intermediate Representation) source files from Midnight's S3 bucket. Integrity is checked before anything is used. If a file's hash doesn't match, the server refuses to start.
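The integrity check is simple in principle: hash the downloaded file and refuse to proceed on mismatch. Here's a minimal TypeScript sketch of the idea. This is illustrative only; the proof server does this internally in Rust, and `verifyArtifact` is a hypothetical name, not part of any Midnight package.

```typescript
import { createHash } from "node:crypto";

// Hash-check a downloaded artifact before trusting it.
// If the digest doesn't match the expected value, refuse to use the file.
function verifyArtifact(bytes: Buffer, expectedSha256Hex: string): boolean {
  const actual = createHash("sha256").update(bytes).digest("hex");
  return actual === expectedSha256Hex;
}

// Example: the well-known SHA-256 test vector for the string "abc"
const ok = verifyArtifact(
  Buffer.from("abc"),
  "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
);
console.log(ok); // true
```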

Once the download is done and caching is complete, you'll see:

INFO actix_server::builder: starting 4 workers
INFO actix_server::server: starting service: "actix-web-service-0.0.0.0:6300",
workers: 4, listening on: 0.0.0.0:6300

That's an Actix web server (Rust-based, very fast) spinning up with four worker threads on port 6300. This is the endpoint the SDK will hit when it needs a proof generated.

Proof server startup logs

Confirming it's working

A quick check against localhost:

curl http://localhost:6300

Returns:

{"status":"ok","timestamp":"2026-04-21 10:13:12.154677419 +00:00:00"}

If you get that response, the server is ready to accept proof requests.

Health check

Wiring it to your DApp

In your Midnight.js code, the SDK wrapper that talks to the proof server is httpClientProofProvider:

import { httpClientProofProvider } from
  '@midnight-ntwrk/midnight-js-http-client-proof-provider';

// Points to your local proof server
const proofProvider = httpClientProofProvider('http://localhost:6300');

That's it. From there, every time your DApp submits a transaction, the SDK bundles up the circuit + witness, sends it to localhost:6300, waits for the proof, and attaches it to the unsigned transaction before submission.
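One small habit that helps in practice is keeping the proof server URL next to the Indexer endpoints in a single config function, so switching networks is a one-line change. A sketch, where `makeEndpoints` is a hypothetical helper of my own, not part of the SDK (the hosted Indexer URLs are the ones listed later in this article):

```typescript
// Hypothetical helper (not part of the SDK): derive all three endpoints
// your providers need from a single network name.
type Network = "preview" | "preprod" | "mainnet";

interface Endpoints {
  proofServer: string; // always local: private inputs never leave your machine
  indexerHttp: string;
  indexerWs: string;
}

function makeEndpoints(
  network: Network,
  proofServer = "http://localhost:6300"
): Endpoints {
  const base = `indexer.${network}.midnight.network/api/v4/graphql`;
  return {
    proofServer,
    indexerHttp: `https://${base}`,
    indexerWs: `wss://${base}`,
  };
}

const eps = makeEndpoints("preview");
console.log(eps.proofServer); // "http://localhost:6300"
```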


Docker setup for local development

Both the proof server and the Indexer ship as Docker images. If you don't already have Docker on your machine, get that sorted first.

For production, you can run your own Indexer and proof server on dedicated infrastructure, or use Midnight's hosted endpoints for Preview, Preprod, and Mainnet.

On Ubuntu (I'm on 24.04):

sudo apt install -y docker.io
sudo usermod -aG docker $USER
# log out and back in, or:
newgrp docker

Verify with docker --version and docker run hello-world before going further. If docker run hello-world gives you a "permission denied" error, the group change hasn't taken effect yet. The newgrp docker above usually fixes it without a full logout.

Running the proof server

docker run -p 6300:6300 midnightntwrk/proof-server:8.0.3 midnight-proof-server -v

A few things worth calling out:

  • The -v flag on midnight-proof-server enables verbose logging. Keep it on while you're learning. When something goes wrong, the extra output tells you exactly where.
  • This command occupies your terminal. Open a new tab for everything else.
  • First run pulls the image and downloads the ZK artifacts, so it takes several minutes. Subsequent runs are fast because Docker caches the image, and the downloaded artifacts persist inside the container (or in a volume, if you mount one).

Running the Indexer

For a fully local setup, run the standalone Indexer image:

docker run -p 8088:8088 \
  -e APP__INFRA__SECRET=$(openssl rand -hex 32) \
  midnightntwrk/indexer-standalone:4.0.1

The APP__INFRA__SECRET is required. It is used to encrypt sensitive data the Indexer stores internally. Generating it with openssl rand -hex 32 gives you a clean 256-bit hex string.
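If you're scripting your setup from Node rather than the shell, the same secret can be generated with the built-in crypto module. A sketch equivalent to the openssl command above (`makeIndexerSecret` is just an illustrative name):

```typescript
import { randomBytes } from "node:crypto";

// Equivalent of `openssl rand -hex 32`: 32 random bytes rendered as
// 64 lowercase hex characters, i.e. a 256-bit secret.
function makeIndexerSecret(): string {
  return randomBytes(32).toString("hex");
}

const secret = makeIndexerSecret();
console.log(secret.length); // 64
```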

By default the standalone Indexer connects to a local Midnight node at ws://localhost:9944, so if you want a fully self-contained stack you'll also need a node running. For most DApp development that's overkill.

Use the hosted Indexer instead

For development work, the simplest approach is to skip the standalone Indexer entirely and hit Midnight's hosted endpoints:

  • Preview: https://indexer.preview.midnight.network/api/v4/graphql
  • Preprod: https://indexer.preprod.midnight.network/api/v4/graphql
  • Mainnet: https://indexer.mainnet.midnight.network/api/v4/graphql

These are the same Indexer code, just running against Midnight's test and production networks. You get a fully synced Indexer for free, which is great when you're prototyping.

Errors you'll actually hit

A few I've run into:

  • permission denied while trying to connect to the Docker daemon socket: your user isn't in the docker group yet. Run the usermod and newgrp commands above.
  • bind: address already in use on port 6300: something else is already bound to that port, or a previous container is still running. docker ps to find it, docker stop <container-id> to kill it.
  • Cannot connect to the Docker daemon: the daemon isn't running. sudo systemctl start docker.

Those are quick fixes to the issues I encountered while setting up.

Docker hello-world


Docker tags and version pinning

This is the single most important thing to get right, and also the easiest to get wrong.

The proof server tag MUST match the Ledger version.

Here's the current compatibility matrix at the time of writing:

| Component | Version |
| --- | --- |
| proof server | 8.0.3 |
| Ledger | 8.0.3 |
| Indexer | 4.0.1 |
| Node | 0.22.3 |
| Compact | 0.5.1 |

Why alignment matters

The proof server generates proofs against a specific circuit format. The Ledger (the on-chain state machine) defines how those proofs are verified. Both sides have to agree on the exact format, verification keys, and field layout. If they don't, one of two things might happen:

  1. The proof is rejected outright when your transaction hits the chain.
  2. Worse, the transaction silently fails in a way that's very hard to debug, because the proof itself looked structurally fine but encoded assumptions the Ledger no longer holds. You don't want to be debugging that at 2am XD. Just pin the versions.

How to check

Before pulling any image, check the official support matrix:

https://docs.midnight.network/relnotes/support-matrix

Then pin explicitly:

# Ledger is 8.0.3, so proof server must also be 8.0.3
docker pull midnightntwrk/proof-server:8.0.3   # ✓ Correct
docker pull midnightntwrk/proof-server:7.0.0   # ✗ Version mismatch
docker pull midnightntwrk/proof-server:latest  # ✗ Never do this

Best practices

  • Never use :latest. Your setup might work today and break tomorrow for no obvious reason, and you'll ship bugs that only appear on some machines.
  • Keep a note of your working combination in your repo's README. When a teammate clones the project six months from now, that one line saves them an afternoon of confusion.
  • Pin every component together. When you upgrade the Ledger, also upgrade the proof server, the Node, and your Compact compiler.
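One way to make the pinning rule hard to forget is to encode it in a tiny preflight check that your setup scripts run before pulling any image. A minimal sketch; `checkAlignment` is an illustrative name, and the version values are the ones from the matrix above, so re-check the support matrix before reusing them:

```typescript
// Pin every component together and fail fast on mismatch.
// Versions are from the support matrix at the time of writing; always
// re-check https://docs.midnight.network/relnotes/support-matrix first.
interface Pins {
  proofServer: string;
  ledger: string;
  indexer: string;
  node: string;
}

const pins: Pins = {
  proofServer: "8.0.3",
  ledger: "8.0.3",
  indexer: "4.0.1",
  node: "0.22.3",
};

function checkAlignment(p: Pins): void {
  // The hard rule from this section: the proof server tag MUST match the Ledger.
  if (p.proofServer !== p.ledger) {
    throw new Error(
      `proof server ${p.proofServer} does not match Ledger ${p.ledger}`
    );
  }
  // And never rely on :latest anywhere.
  for (const [name, version] of Object.entries(p)) {
    if (version === "latest") throw new Error(`${name} is pinned to :latest`);
  }
}

checkAlignment(pins); // passes with the matrix above
```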

Querying the Indexer with GraphQL

Now for the fun part. The Indexer's GraphQL API is where your DApp or a debugger reads on-chain data. Let's send some real queries to the Preview network endpoint and walk through what comes back.

Query 1: get the latest block

curl -s -X POST https://indexer.preview.midnight.network/api/v4/graphql \
  -H "Content-Type: application/json" \
  -d '{"query": "{ block { hash height timestamp protocolVersion author } }"}' \
  | python3 -m json.tool

Response:

{
  "data": {
    "block": {
      "hash": "2a4d888a...",
      "height": 293425,
      "timestamp": 1776161520001,
      "protocolVersion": 22000,
      "author": "3a8a798e..."
    }
  }
}

Fun fact: my first query used blocks instead of block and the API corrected me. The error messages are actually helpful.

Breaking down what you get back:

  • hash: the unique identifier for this block.
  • height: block number, basically a counter that keeps going up.
  • timestamp: Unix time in milliseconds.
  • protocolVersion: which version of the Midnight protocol this block was produced under. Useful for detecting upgrades.
  • author: the validator (stake pool operator) who produced the block.

Latest block
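If you'd rather issue the same query from TypeScript than from curl, Node 18+ ships a built-in fetch, so no client library is needed. A minimal sketch, where `buildRequest` and `latestBlock` are illustrative helpers of my own, not SDK functions:

```typescript
const INDEXER = "https://indexer.preview.midnight.network/api/v4/graphql";

// Wrap a GraphQL query string in the POST body the Indexer expects.
function buildRequest(query: string) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  };
}

// Fetch the latest block (requires network access to the Preview Indexer).
async function latestBlock() {
  const res = await fetch(
    INDEXER,
    buildRequest("{ block { hash height timestamp } }")
  );
  const { data } = await res.json();
  return data.block;
}
```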

Query 2: fetch a specific block by height

Pass an offset argument to target a specific block. Here's the genesis block:

curl -s -X POST https://indexer.preview.midnight.network/api/v4/graphql \
  -H "Content-Type: application/json" \
  -d '{"query": "{ block(offset: { height: 1 }) { hash height timestamp transactions { hash id protocolVersion contractActions { address } } } }"}'

This also pulls in the transactions in that block, and for each transaction the contract actions it triggered. Running it against Preview returned the genesis block with its initial bootstrapping transaction.

Genesis block

Query 3: current epoch information

curl -s -X POST https://indexer.preview.midnight.network/api/v4/graphql \
  -H "Content-Type: application/json" \
  -d '{"query": "{ currentEpochInfo { epochNo durationSeconds elapsedSeconds } }"}'

Result showed epochNo: 986757, durationSeconds: 1800 (a 30-minute epoch), and whatever elapsedSeconds had accumulated by the time of the call. Handy when you're building anything that cares about staking cycles or time-based contract logic.

Epoch info

Pro tip: schema introspection

Don't memorize the schema. Ask for it:

curl -s -X POST https://indexer.preview.midnight.network/api/v4/graphql \
  -H "Content-Type: application/json" \
  -d '{"query": "{ __schema { queryType { fields { name description } } } }"}'

That returns every available top-level query along with its description. I do this every time I'm exploring a new version of the Indexer.

Schema introspection

What's available

Top-level queries include:

  • block: get a block by hash or height (latest if no offset).
  • transactions: look up transactions by hash or identifier.
  • contractAction: fetch contract actions by contract address.
  • currentEpochInfo: current epoch number and timing.
  • spoCount: number of stake pool operators.
  • stakeDistribution: stake distribution across validators.
  • Plus dustGenerationStatus, dParameterHistory, and others.

Useful block fields: hash, height, protocolVersion, timestamp, author, ledgerParameters, parent, transactions, systemParameters.

Useful transaction fields: id, hash, protocolVersion, raw, block, contractActions, unshieldedCreatedOutputs, unshieldedSpentOutputs, zswapLedgerEvents, dustLedgerEvents.


WebSocket subscriptions for real-time updates

Polling the Indexer for new data works but burns bandwidth and adds latency. For anything live (a wallet UI that updates when funds arrive, a dashboard that streams blocks, a DApp that reacts to contract state changes) you want subscriptions.

The Indexer's GraphQL endpoint also accepts WebSocket connections, and the schema exposes a set of subscriptions you can tap into.

Available subscriptions

Discovered via schema introspection:

  • blocks: subscribe to new blocks as they arrive, with an optional starting offset.
  • contractActions: stream contract actions filtered by contract address.
  • shieldedTransactions: shielded transaction events for a given session ID.
  • unshieldedTransactions: unshielded transaction events for a given address.
  • zswapLedgerEvents: ZSwap ledger events.
  • dustLedgerEvents: DUST ledger events.

Why this matters

The difference between polling and subscriptions looks small until you're running it at scale:

  • Polling: "anything new?" → no. "anything new?" → no. "anything new?" → yes, here. Every poll is a request, whether or not there's data.
  • Subscription: you ask once, the Indexer pushes data to you whenever there's something to say. For a block explorer or a live wallet view, this is the difference between a smooth UI and one that either lags or hammers the Indexer.

Connecting with WebSocket

Here's the shape of a minimal subscription client:

import { WebSocket } from 'ws';

const ws = new WebSocket(
  'wss://indexer.preview.midnight.network/api/v4/graphql',
  'graphql-ws' // subprotocol name used by the legacy subscriptions-transport-ws protocol
);

ws.on('open', () => {
  // The legacy protocol expects a connection_init handshake before any subscription
  ws.send(JSON.stringify({ type: 'connection_init', payload: {} }));
});

ws.on('message', (data) => {
  const msg = JSON.parse(data.toString());
  if (msg.type === 'connection_ack') {
    // Handshake complete; subscribe to new blocks
    ws.send(JSON.stringify({
      type: 'start',
      id: '1',
      payload: {
        query: `subscription { blocks { hash height timestamp } }`
      }
    }));
  } else if (msg.type === 'data') {
    console.log('New block:', msg.payload?.data?.blocks);
  }
});

I haven't tested this WebSocket connection myself yet, so verify the protocol your Indexer version expects before using this in production.

A note on the protocol: some GraphQL WebSocket servers use the legacy subscriptions-transport-ws protocol (which is what the snippet above speaks), and some use the newer graphql-ws protocol, which uses slightly different message types (connection_init, subscribe, next). If the simple version doesn't work on your setup, check which protocol the endpoint expects and adjust the handshake accordingly.

In practice, if you're using indexerPublicDataProvider from the SDK (which we'll cover next), all of this is handled for you.


indexerPublicDataProvider vs. direct GraphQL

You now have two ways to read Indexer data: through the Midnight.js SDK, or by hitting the GraphQL endpoint directly. Both are valid; they're useful in different situations.

To be honest, if you're just starting out, the direct GraphQL approach is easier to understand because you can see exactly what's happening.

The SDK approach

import { indexerPublicDataProvider } from
  '@midnight-ntwrk/midnight-js-indexer-public-data-provider';

const publicDataProvider = indexerPublicDataProvider(
  'https://indexer.preview.midnight.network/api/v4/graphql',
  'wss://indexer.preview.midnight.network/api/v4/graphql'
);

What this gets you:

  • A type-safe TypeScript interface: autocomplete, compile-time checks, the works.
  • Clean integration with the rest of Midnight.js. deployContract() and findDeployedContract() both use this provider internally.
  • Automatic serialization and deserialization: on-chain byte blobs become usable objects.
  • Managed WebSocket subscription lifecycle: no manual reconnect logic.

The direct approach

curl -s -X POST https://indexer.preview.midnight.network/api/v4/graphql \
  -H "Content-Type: application/json" \
  -d '{"query": "{ block { hash height timestamp } }"}'

What this gets you:

  • Full control over the exact query shape.
  • Zero dependency on TypeScript or Node.js. You can hit the endpoint from Python, Go, Rust, a shell script, or Postman.
  • A fast debugging loop: no rebuild, no bundler, just a curl.
  • Freedom to build tools that don't fit the SDK's assumptions (custom analytics, block explorers, monitoring).

Picking between them

| Use case | Recommendation |
| --- | --- |
| Building a DApp with Midnight.js | indexerPublicDataProvider |
| Debugging contract state | Direct GraphQL |
| Building a block explorer | Direct GraphQL |
| Custom analytics or monitoring | Direct GraphQL |
| Standard contract deployment | indexerPublicDataProvider |

How they relate

The important thing to note: indexerPublicDataProvider is a wrapper around the same GraphQL API. Under the hood, the SDK is sending the same queries you'd send by hand. It just wraps them in a typed, cleaner interface that plays well with the rest of the Midnight.js ecosystem.

So everything you learn from running raw GraphQL queries still helps you when you use the SDK later. Time spent poking at the GraphQL endpoint with curl makes you a better SDK user, because you develop intuition for what the SDK is actually doing. And if you ever need to step outside the SDK to build tooling, to debug a weird state, to automate something, you already know the shape of the API.


Wrapping up

The proof server and the Indexer are the two halves of how a Midnight DApp interacts with the network:

  • proof server: privacy side. Generates ZK proofs locally so your private data never leaves your machine.
  • Indexer: data access side. Makes on-chain state queryable via GraphQL, with WebSocket subscriptions for real-time updates.

Before you go, remember: pin your Docker tags and check the support matrix religiously, use indexerPublicDataProvider for building DApps and direct GraphQL for debugging, and use schema introspection whenever you want to explore what the Indexer can do.

Resources

Share your feedback on X with #MidnightforDevs
