DEV Community

Tosh

Retrofitting Privacy: Adding Midnight to an Existing dApp Step by Step


Last year I inherited a voting dApp that had been running in production for eight months. It worked fine — users could submit votes, results were tallied on-chain, everything was auditable. The problem was that everything was auditable. Voter addresses, their choices, the exact timestamp of each vote — all of it sitting in plain ledger state for anyone to query. When the client asked me to add voter privacy before their next election cycle, I had two options: burn it down and rebuild from scratch on Midnight, or retrofit privacy into what already existed.

I chose the retrofit. Here is exactly how I did it.

This walkthrough assumes you have a working dApp — it does not have to be a voting app — and you want to add Midnight-based privacy without rewriting your entire stack. I will go step by step from auditing your existing state all the way through migrating live on-chain data. By the end you will have a concrete pattern you can apply to your own project.


Step 1 — Audit Your State

Before writing a single line of Compact or TypeScript, you need a clear taxonomy of every field in your application state. This is not optional. If you skip it, you will make the wrong things private, miss the things that actually need protecting, or both.

I use three categories:

Fully public — data that is supposed to be on the ledger in plaintext. Aggregate vote counts, election metadata, contract version numbers. These are fine exactly where they are.

Sensitive but not secret — data where you want to prove facts about it without revealing the underlying value. The classic example is proving you are over 18 without revealing your exact birthdate. In a voting context this is "prove you voted, but don't reveal your choice."

Fully private — data that should never touch the ledger at all. It lives as a witness input to a circuit, computed locally by the prover, and only a commitment to it appears on-chain.

For my voting app, the original TypeScript state looked like this:

// Before — fully transparent on-chain state
interface VoteRecord {
  voterAddress: string;    // Public key of the voter
  voteChoice: number;      // 0 = Candidate A, 1 = Candidate B
  timestamp: number;       // Unix epoch when the vote was cast
  electionId: string;      // Which election this vote belongs to
}

interface ElectionState {
  electionId: string;
  candidates: string[];
  votes: VoteRecord[];
  totalVotes: number;
  isOpen: boolean;
}

Walk through each field and assign a category:

| Field | Category | Reasoning |
| --- | --- | --- |
| voterAddress | Fully private | Reveals identity. Should never be on-chain in plaintext. |
| voteChoice | Fully private | The entire point of a secret ballot. |
| timestamp | Sensitive but not secret | Knowing when you voted can be fine; knowing how you voted combined with timing is a privacy risk. I decided to keep a coarse timestamp (block number) as public. |
| electionId | Fully public | Needs to be public for the tally to be auditable. |
| totalVotes | Fully public | Aggregate — fine to expose. |
| isOpen | Fully public | Contract control flow. |

The audit tells you which fields become commitments (fully private), which stay public, and which need selective disclosure circuits (sensitive but not secret). Do not move on until this table is complete.
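One way to make the audit's outcome enforceable is to encode it directly in your types, so the compiler stops you from writing plaintext where a commitment belongs. This is a sketch with illustrative names (the actual ledger shape comes in Step 2); nothing here is a Midnight API.

```typescript
// Fields that survive on-chain after the retrofit — commitments and
// public aggregates only, no plaintext identity or choice.
interface PrivateVoteRecord {
  voterCommitment: Uint8Array; // commitment, NOT voterAddress
  voteCommitment: Uint8Array;  // commitment, NOT voteChoice
  blockNumber: number;         // coarse public timestamp
  electionId: string;          // fully public
}

// Fields that only ever exist client-side, as witness inputs to circuits.
interface VoteSecrets {
  voterAddress: Uint8Array;
  voteChoice: number;
  r1: Uint8Array; // randomness opening voterCommitment
  r2: Uint8Array; // randomness opening voteCommitment
}

const record: PrivateVoteRecord = {
  voterCommitment: new Uint8Array(32),
  voteCommitment: new Uint8Array(32),
  blockNumber: 1042,
  electionId: "election-2024",
};
```

Keeping the two interfaces in separate modules (on-chain types vs. client-only types) makes it obvious in code review when a secret is about to cross the boundary.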


Step 2 — Replacing Public State with Commitments in Compact

A commitment is a cryptographic primitive: you compute commitment = hash(value, randomness) and publish the commitment on-chain. The value and the randomness stay private with the user. Later, you (or anyone you choose) can prove that a given commitment opens to a specific value by revealing both the value and the randomness — but until you choose to reveal, the commitment is opaque.

Midnight uses Pedersen commitments. The Compact standard library exposes createCommitment(value, randomness) for creating them and verifyCommitment(commitment, value, randomness) for checking that a commitment opens correctly.
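To make the commit-and-open mechanics concrete, here is a minimal sketch using SHA-256 as a stand-in. This is illustration only — Midnight uses Pedersen commitments via the Compact standard library — but the "publish the hash, keep the value and randomness" flow is the same.

```typescript
import { createHash, randomBytes, timingSafeEqual } from "crypto";

// commitment = hash(value, randomness); only the digest goes public.
function commit(value: Buffer, randomness: Buffer): Buffer {
  return createHash("sha256").update(value).update(randomness).digest();
}

// Opening a commitment: recompute and compare.
function open(commitment: Buffer, value: Buffer, randomness: Buffer): boolean {
  return timingSafeEqual(commitment, commit(value, randomness));
}

const r = randomBytes(32);
const c = commit(Buffer.from([1]), r); // commit to voteChoice = 1

console.log(open(c, Buffer.from([1]), r)); // true  — correct opening
console.log(open(c, Buffer.from([0]), r)); // false — wrong value
console.log(open(c, Buffer.from([1]), randomBytes(32))); // false — wrong randomness
```

Note that without the randomness, an observer cannot brute-force a two-candidate vote commitment — that is exactly why fresh randomness per commitment is non-negotiable.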

Here is the original Compact contract for the voting app, with all state fields exposed:

// Before — transparent vote ledger
ledger {
  voterAddress: Bytes<32>;     // On-chain, fully visible
  voteChoice: Uint<8>;         // On-chain, fully visible
  timestamp: Uint<64>;         // On-chain, fully visible
  electionId: Bytes<32>;       // On-chain, fine
  totalVotes: Uint<64>;        // On-chain, fine
  isOpen: Boolean;             // On-chain, fine
}

export circuit castVote(
  voterAddr: Bytes<32>,
  choice: Uint<8>,
  ts: Uint<64>,
  electionId: Bytes<32>
): [] {
  assert isOpen "Election is closed";
  // Store everything in plaintext
  ledger.voterAddress = voterAddr;
  ledger.voteChoice = choice;
  ledger.timestamp = ts;
  ledger.totalVotes = ledger.totalVotes + 1;
}

Now here is the same contract after replacing private fields with commitments:

// After — commitments replace plaintext sensitive fields
ledger {
  // Private fields are now stored as commitment hashes only
  voterCommitment: Bytes<32>;      // Hash of (voterAddress, r1)
  voteCommitment: Bytes<32>;       // Hash of (voteChoice, r2)

  // Public fields unchanged
  electionId: Bytes<32>;
  blockNumber: Uint<64>;
  totalVotes: Uint<64>;
  isOpen: Boolean;

  // Nullifier set to prevent double-voting
  usedNullifiers: Set<Bytes<32>>;
}

export circuit castVote(
  // Public inputs
  electionId: Bytes<32>,
  blockNum: Uint<64>,
  voterCommit: Bytes<32>,
  voteCommit: Bytes<32>,
  nullifier: Bytes<32>,
  // Private witness inputs — never stored on-chain
  witness voterAddress: Bytes<32>,
  witness voteChoice: Uint<8>,
  witness r1: Bytes<32>,
  witness r2: Bytes<32>,
  witness nullifierKey: Bytes<32>
): [] {
  assert isOpen "Election is closed";

  // Verify the commitments match the private witnesses
  assert verifyCommitment(voterCommit, voterAddress, r1)
    "Voter commitment is invalid";
  assert verifyCommitment(voteCommit, voteChoice, r2)
    "Vote commitment is invalid";

  // Hash the nullifier key to produce the on-chain nullifier
  // (IMPORTANT: never store the raw nullifier key)
  const hashedNullifier = hash(nullifierKey);
  assert hashedNullifier == nullifier "Nullifier mismatch";
  assert !usedNullifiers.contains(nullifier) "Vote already cast";

  // Now write to ledger — commitments only, no plaintext
  ledger.voterCommitment = voterCommit;
  ledger.voteCommitment = voteCommit;
  ledger.electionId = electionId;
  ledger.blockNumber = blockNum;
  ledger.usedNullifiers = usedNullifiers.insert(nullifier);
  ledger.totalVotes = ledger.totalVotes + 1;
}

The key insight is that witness parameters in a Compact circuit are private inputs. The proof system verifies the relationship between those private inputs and the public commitments without ever writing the private inputs to the ledger. When the circuit executes, voterAddress and voteChoice exist only inside the zero-knowledge proof — they are erased after the proof is generated.

Here is createCommitment on the TypeScript side, used to generate the commitment values before you call the circuit:

import { createCommitment } from "@midnight-ntwrk/compact-runtime";
import { randomBytes } from "crypto";

function prepareVoteCommitments(voterAddress: Uint8Array, voteChoice: number) {
  // Generate fresh randomness for each commitment
  const r1 = randomBytes(32);
  const r2 = randomBytes(32);

  const voterCommitment = createCommitment(voterAddress, r1);
  const voteCommitment = createCommitment(
    new Uint8Array([voteChoice]),
    r2
  );

  return { voterCommitment, voteCommitment, r1, r2 };
}

The randomness (r1, r2) must be stored securely on the client side. If the user loses their randomness, they lose the ability to open their commitment and prove their vote. Build your key management accordingly.
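The shape of that key management can be as simple as a keyed store with two hard rules: never silently overwrite randomness, and fail loudly when it is missing. Here is a minimal in-memory sketch — in production you would back it with encrypted storage (for example IndexedDB plus WebCrypto, or your existing wallet store); the class and method names are mine, not an SDK API.

```typescript
interface CommitmentSecrets {
  r1: Uint8Array;     // randomness for the voter commitment
  r2: Uint8Array;     // randomness for the vote commitment
  electionId: string;
}

class RandomnessStore {
  private entries = new Map<string, CommitmentSecrets>();

  save(voterId: string, secrets: CommitmentSecrets): void {
    // Refuse silent overwrites: losing old randomness means losing the
    // ability to open the old commitment forever.
    if (this.entries.has(voterId)) {
      throw new Error(`secrets already stored for ${voterId}`);
    }
    this.entries.set(voterId, secrets);
  }

  load(voterId: string): CommitmentSecrets {
    const s = this.entries.get(voterId);
    if (s === undefined) {
      throw new Error(`no secrets for ${voterId}; commitment cannot be opened`);
    }
    return s;
  }
}
```

Treat the randomness with the same backup discipline as a private key: if your app offers seed-phrase recovery, the commitment randomness belongs in the same recovery path.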


Step 3 — Selective Disclosure

Selective disclosure is how you prove a fact about private state without revealing the underlying value. "Prove you voted without revealing who you voted for" is selective disclosure. "Prove your balance is above a threshold without revealing the exact amount" is selective disclosure. The pattern is the same in both cases: you write a circuit that takes private witnesses and outputs a boolean proof of a specific predicate.

For the voting app, I needed users to be able to prove they participated in an election — useful for things like claiming a participation token — without revealing their choice.

Here is the Compact circuit for that disclosure:

// Selective disclosure: prove participation without revealing vote choice
export circuit proveParticipation(
  // Public inputs
  electionId: Bytes<32>,
  voterCommit: Bytes<32>,

  // Private witnesses
  witness voterAddress: Bytes<32>,
  witness r1: Bytes<32>
): Boolean {
  // Verify the voter commitment opens to the private witness
  assert verifyCommitment(voterCommit, voterAddress, r1)
    "Voter commitment does not match witness";

  // At this point the proof guarantees the caller knows
  // the voterAddress and r1 that open voterCommit.
  // The verifier learns nothing about voterAddress itself.

  // Optionally: assert the commitment exists in the ledger
  // (omitted here for brevity — add a Merkle inclusion proof
  //  if you need on-chain verification of participation)

  return true;
}

On the TypeScript side, generating the witness and calling the circuit looks like this:

import {
  WitnessProvider,
  deployedContractInstance,
} from "@midnight-ntwrk/midnight-js-contracts";
import { httpClientProofProvider } from "@midnight-ntwrk/midnight-js-http-client-proof-provider";

interface ParticipationWitness {
  voterAddress: Uint8Array;
  r1: Uint8Array;
}

async function proveParticipation(
  contractAddress: string,
  electionId: Uint8Array,
  voterCommitment: Uint8Array,
  witness: ParticipationWitness
): Promise<{ proof: Uint8Array; publicInputs: Uint8Array[] }> {
  const proofProvider = httpClientProofProvider("http://localhost:6300");

  // The witness provider wraps the private inputs.
  // These are passed to the proof server over a local connection
  // and never transmitted to the chain.
  const witnessProvider: WitnessProvider<"proveParticipation"> = {
    proveParticipation: async () => ({
      voterAddress: witness.voterAddress,
      r1: witness.r1,
    }),
  };

  const contract = deployedContractInstance(contractAddress, proofProvider);

  const result = await contract.callTx.proveParticipation(
    electionId,
    voterCommitment,
    witnessProvider
  );

  return result;
}

Notice that the witnessProvider is a plain object with functions that return the private data. The SDK handles feeding those values into the proof server. From the application's perspective, you pass in the private data and get back a proof — the ZK machinery is behind that interface.


Step 4 — Integrating the Proof Server

The proof server is a sidecar process. It does not replace your existing backend — it sits alongside it and handles the computationally expensive work of generating ZK proofs. Your existing Express API, your database, your existing blockchain interaction layer: all of that stays in place. You are adding a dependency, not replacing one.

The proof server runs at localhost:6300 by default. It exposes two endpoints: GET /check for health and POST /prove for proof generation. You do not call these directly — the TypeScript SDK calls them via httpClientProofProvider.
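Even though the SDK calls the endpoints for you, it is worth gating your app's startup on the health endpoint so you never accept vote submissions while the sidecar is still booting. Here is a sketch; the fetcher is injected so the function is testable without a live proof server (in the app, pass the global fetch wrapped to return `{ ok }`).

```typescript
type Fetcher = (url: string) => Promise<{ ok: boolean }>;

// Poll GET /check until it succeeds or we run out of retries.
async function waitForProofServer(
  baseUrl: string,
  fetcher: Fetcher,
  retries = 5,
  delayMs = 1000
): Promise<boolean> {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      const res = await fetcher(`${baseUrl}/check`);
      if (res.ok) return true;
    } catch {
      // server not reachable yet; fall through and retry
    }
    if (attempt < retries) await new Promise((r) => setTimeout(r, delayMs));
  }
  return false;
}
```

Call this before registering your vote routes; it is the application-level twin of the Docker healthcheck shown below.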

Here is a minimal docker-compose.yml diff showing what you add:

# docker-compose.yml — before
version: "3.9"
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: production
      DATABASE_URL: ${DATABASE_URL}

  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: voting
      POSTGRES_USER: app
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
# docker-compose.yml — after (added proof-server service)
version: "3.9"
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: production
      DATABASE_URL: ${DATABASE_URL}
      PROOF_SERVER_URL: http://proof-server:6300   # NEW
    depends_on:
      proof-server:                                # NEW
        condition: service_healthy                 # NEW

  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: voting
      POSTGRES_USER: app
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}

  # NEW — proof server sidecar
  proof-server:
    image: midnightnetwork/proof-server:latest
    ports:
      - "6300:6300"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:6300/check"]
      interval: 10s
      timeout: 5s
      retries: 5

Now wire httpClientProofProvider into your existing Express handlers. The trap here — and I will cover it more in the pitfalls section — is that proof generation is slow. A typical proof takes 3–15 seconds depending on circuit complexity. You cannot synchronously await that in a REST handler without your clients timing out.

The right pattern is a job queue. Kick off proof generation, return a job ID immediately, and let the client poll or use a webhook:

import express from "express";
import { httpClientProofProvider } from "@midnight-ntwrk/midnight-js-http-client-proof-provider";
import { randomUUID } from "crypto";

const app = express();
const proofJobs = new Map<string, { status: "pending" | "done" | "failed"; proof?: Uint8Array; error?: string }>();

const proofServerUrl = process.env.PROOF_SERVER_URL ?? "http://localhost:6300";
const proofProvider = httpClientProofProvider(proofServerUrl);

// Existing route — unchanged
app.get("/elections/:id", async (req, res) => {
  // ... your existing election fetch logic
});

// NEW — submit a vote, returns a job ID immediately
app.post("/elections/:id/vote", express.json(), async (req, res) => {
  const { voterCommitment, voteCommitment, nullifier, witnesses } = req.body;
  const jobId = randomUUID();

  // Store pending state immediately
  proofJobs.set(jobId, { status: "pending" });

  // Fire and forget — proof generation runs in background
  generateAndSubmitVote(jobId, req.params.id, {
    voterCommitment,
    voteCommitment,
    nullifier,
    witnesses,
  }).catch((err) => {
    proofJobs.set(jobId, { status: "failed", error: err.message });
  });

  // Return 202 Accepted with the job ID
  res.status(202).json({ jobId });
});

// NEW — poll for job status
app.get("/jobs/:jobId", (req, res) => {
  const job = proofJobs.get(req.params.jobId);
  if (!job) {
    res.status(404).json({ error: "Job not found" });
    return;
  }
  res.json(job);
});

// Shape of the parameters passed through from the vote handler's
// JSON body (fields arrive as encoded strings)
interface VoteParams {
  voterCommitment: string;
  voteCommitment: string;
  nullifier: string;
  witnesses: Record<string, string>;
}

async function generateAndSubmitVote(
  jobId: string,
  electionId: string,
  params: VoteParams
): Promise<void> {
  try {
    // This call blocks for several seconds while the proof server works
    const proof = await proofProvider.prove("castVote", params);

    // Submit the proof to the Midnight node
    await submitProofToChain(electionId, proof);

    proofJobs.set(jobId, { status: "done", proof });
  } catch (err) {
    proofJobs.set(jobId, {
      status: "failed",
      error: err instanceof Error ? err.message : String(err),
    });
    throw err;
  }
}

For production, replace the in-memory proofJobs map with a proper queue like BullMQ backed by Redis. The interface is the same — the durability and scalability are not.


Step 5 — Migration Pattern

You have new Compact contracts, you have integrated the proof server, and your frontend now generates commitments before submitting votes. The problem is that you have existing on-chain state in the old transparent format. You cannot just abandon it — real votes are sitting there.

The migration follows a dual-write window pattern: for a defined period, your application writes to both the old transparent ledger and the new commitment-based ledger simultaneously. Reads prefer the new ledger when data is available, and fall back to the old one otherwise.

import { randomUUID, randomBytes } from "crypto";
import { createCommitment } from "@midnight-ntwrk/compact-runtime";

// Migration-aware vote submission
async function submitVote(voteData: VoteData, migrationMode: boolean): Promise<string> {
  const jobId = randomUUID();

  if (migrationMode) {
    // Dual-write: submit to both contracts
    await Promise.all([
      submitToLegacyContract(voteData),           // Old transparent contract
      submitToPrivateContract(voteData, jobId),    // New commitment contract
    ]);
  } else {
    // Post-migration: private contract only
    await submitToPrivateContract(voteData, jobId);
  }

  return jobId;
}

// Migration job: backfill existing votes as commitments
async function migrateExistingVotes(electionId: string): Promise<void> {
  const legacyVotes = await fetchLegacyVotes(electionId);

  for (const vote of legacyVotes) {
    // Generate fresh randomness for each migrated vote
    const r1 = randomBytes(32);
    const r2 = randomBytes(32);
    const voterCommitment = createCommitment(
      hexToBytes(vote.voterAddress),
      r1
    );
    const voteCommitment = createCommitment(
      new Uint8Array([vote.voteChoice]),
      r2
    );

    // IMPORTANT: store r1 and r2 in your off-chain key store
    // associated with the original voter. Without these, the
    // voter cannot prove anything about their migrated record.
    await keyStore.save(vote.voterAddress, { r1, r2, electionId });

    // Submit commitment to the new contract
    await submitCommitmentToChain({ voterCommitment, voteCommitment, electionId });
  }
}

The dual-write window should run until:

  1. You have migrated all existing records to the new contract.
  2. Your monitoring shows zero reads falling back to the legacy contract.
  3. You have notified users whose commitments required fresh randomness to log in and retrieve their randomness from your key store.

Cut over by deploying a flag that sets migrationMode = false. Keep the legacy contract deployed but stop writing to it. After a grace period — I used 30 days — you can stop reading from it too.

One practical note: if you are migrating votes where the original voter is no longer active, you cannot generate a useful commitment for them because there is nobody to hold the randomness. Decide up front whether you want to commit to these records with server-held randomness (weaker privacy, but preserves the count) or simply carry the aggregate count forward without individual commitments. For my client, we carried aggregates forward and migrated only voters who logged back in during a 60-day migration window.


Common Pitfalls When Retrofitting

These are the mistakes I made or nearly made. Learn from them.

Forgetting to hash the nullifier

A nullifier is a value that, once spent, prevents the same action from being taken twice. In a voting system, it prevents double-voting. The mistake is storing the raw nullifier key on-chain:

// WRONG — stores the raw nullifier key
assert !usedNullifiers.contains(nullifierKey) "Already voted";
ledger.usedNullifiers = usedNullifiers.insert(nullifierKey);

If you store the raw nullifierKey, anyone who sees the on-chain data can link it back to the voter — especially if the nullifier key is derived from the voter's private key in a predictable way. Always hash it:

// CORRECT — stores only the hash of the nullifier key
const hashedNullifier = hash(nullifierKey);
assert !usedNullifiers.contains(hashedNullifier) "Already voted";
ledger.usedNullifiers = usedNullifiers.insert(hashedNullifier);

The circuit takes nullifierKey as a witness input (private), hashes it, and stores only the hash. The key never appears on-chain.
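For the client side, here is a sketch of generating the key and deriving the public nullifier from it. Assumptions: I use a domain-separated SHA-256 as the hash and a per-(voter, election) random key — the hash inside the actual circuit is the Compact stdlib hash, so treat this as an illustration of the shape, not the exact derivation.

```typescript
import { createHash, randomBytes } from "crypto";

// The private half: generated once per (voter, election), kept client-side.
function makeNullifierKey(): Buffer {
  return randomBytes(32);
}

// The public half: what the circuit recomputes and the ledger stores.
// Domain separation plus the electionId keeps nullifiers unlinkable
// across elections even when the same key material is reused.
function deriveNullifier(nullifierKey: Buffer, electionId: string): Buffer {
  return createHash("sha256")
    .update("vote-nullifier:") // domain separation tag
    .update(electionId)
    .update(nullifierKey)
    .digest();
}
```

The properties that matter: the same key and election always produce the same nullifier (so a double vote is caught by the set check), while different elections produce unrelated nullifiers (so votes across elections cannot be linked).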

Using public ledger fields for data that should be commitments

It is tempting, during a retrofit, to rationalize that some field is "not really sensitive" and leave it as a public ledger field. Resist this. Once data is on-chain in plaintext, it is permanent and public. Even if it seems harmless today, combining it with other public data later can create privacy violations through linkability.

The rule I use: if the field would ever appear on a form with a "private" or "confidential" label in any non-blockchain context, it belongs in a commitment. When in doubt, commit it.

Proof server timeout when synchronously awaiting in REST handlers

I covered this in Step 4, but it is worth repeating because it is the most common integration mistake. A proof takes 3–15 seconds. If your Express handler does this:

// WRONG — will time out for almost every client
app.post("/vote", async (req, res) => {
  const proof = await proofProvider.prove("castVote", params); // Blocks for ~10 seconds
  res.json({ success: true });
});

Clients with aggressive HTTP timeouts (many mobile browsers default to 60 seconds; many corporate proxies to 30) will cut the connection before the proof finishes. Worse, the proof server keeps working and the proof gets generated, but you never receive it — and now you have to implement retry logic anyway.

Use the job queue pattern from Step 4. It is a few extra lines and it makes your API resilient to proof server latency by design.
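The client-side companion to the 202-plus-job-ID pattern is a bounded poll loop against the jobs endpoint. A sketch, with getJob injected (for example a fetch wrapper around GET /jobs/:id) so it is testable without a server:

```typescript
interface ProofJob {
  status: "pending" | "done" | "failed";
  error?: string;
}

// Poll until the job leaves "pending", with a hard cap on attempts so a
// stuck job surfaces as an error instead of spinning forever.
async function pollJob(
  getJob: (jobId: string) => Promise<ProofJob>,
  jobId: string,
  intervalMs = 2000,
  maxAttempts = 30
): Promise<ProofJob> {
  for (let i = 0; i < maxAttempts; i++) {
    const job = await getJob(jobId);
    if (job.status !== "pending") return job; // done or failed — stop polling
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error(`proof job ${jobId} still pending after ${maxAttempts} polls`);
}
```

The interval and attempt cap should be tuned to your observed proof times — with 3–15 second proofs, a 2 second interval and 30 attempts gives a comfortable margin.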

A related issue: do not run the proof server on a shared instance with other CPU-intensive workloads. Proof generation is CPU-bound. If your proof server shares a machine with your database or your node process and something else spikes the CPU, proof times can balloon from 10 seconds to several minutes. Give the proof server its own machine or at minimum its own container with CPU limits that give it priority.


Closing

Retrofitting privacy is incremental work, but it is tractable. The state audit in Step 1 is the most important step — it determines everything else. Get the taxonomy right and the Compact and TypeScript changes fall into place. Get it wrong and you will end up either over-engineering (committing things that never needed to be private) or under-engineering (leaving sensitive fields exposed because you convinced yourself they were fine).

If you are working through this in your own project, the discussion thread for this article is open in issue #307 on the contributor-hub repo. I read every comment and I am happy to help debug specific circuit design questions or migration edge cases.
