
Arihant Agarwal

Running a Midnight Node: Setup, Sync & Monitoring: An In-depth Tutorial

Set up a Midnight full node from scratch, monitor sync, troubleshoot peers, and verify node health with Docker, RPC, and logs.

This tutorial is for developers and node operators who want to run a Midnight full node, watch it sync, keep it healthy, and diagnose the common failure mode where a node stays on block 1 while peers connect and disconnect.

You do not need to run a proof server for a full node. A proof server generates ZK proofs for smart contract workflows. A full node syncs chain state, validates blocks and transactions, exposes local RPC if you enable it, and joins the P2P network.

Since this is a hands-on tutorial, we will dive straight into it without much chatter. I'm sure you're here because you can't wait to deploy!

What you will build

You will build a single-host operator setup with these services:

  • A Cardano node
  • Cardano-db-sync
  • PostgreSQL for Cardano-db-sync
  • A Midnight full node
  • Local RPC and Prometheus endpoints bound to 127.0.0.1

The default path in this guide targets the Preview environment. Preview is best for early development and operator practice. Change the network variables for Preprod or Mainnet only after you check the current compatibility matrix.

How Midnight full nodes fit into the network

A Midnight node implements core protocol logic, manages P2P networking, and supports decentralized operation as a Cardano Partnerchain. It also runs the Midnight Ledger component, enforces protocol rules, maintains state integrity, discovers peers, establishes connections, and gossips state across the network.

A full node and an archive node differ mainly in state retention:

  • A full node syncs the blockchain, validates transactions, serves real-time state queries, and prunes older historical state. The default pruning window in the official docs is 256 blocks.
  • An archive node keeps the full block and state history. Use it for explorers, historical debugging, analytics, or services that need past state at arbitrary heights.

Use a full node unless you have a clear archive use case. Archive mode increases storage use and operational cost.
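To put the pruning window in perspective, here is a back-of-envelope sketch that combines the documented 256-block default with Midnight's documented 6-second block time. Both figures come from the official docs; check current values before relying on them.

```shell
# How much wall-clock history a full node's default pruning window covers,
# assuming the documented 6-second block time.
PRUNING_WINDOW=256
BLOCK_TIME_SECONDS=6
WINDOW_MINUTES=$((PRUNING_WINDOW * BLOCK_TIME_SECONDS / 60))
echo "pruned full node keeps roughly ${WINDOW_MINUTES} minutes of recent state"
```

Roughly 25 minutes of state: any query older than that needs an archive node.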

Prerequisites

Use a Linux host for production or serious operator testing. The commands below assume Ubuntu 22.04 LTS or Ubuntu 24.04 LTS.

You need:

  • sudo access
  • A stable internet connection
  • Inbound TCP 30333 open for P2P traffic
  • Outbound HTTPS and P2P access
  • Docker Engine and the Docker Compose plugin
  • jq, curl, openssl, psql, and nc
  • Enough CPU, RAM, disk, and IOPS for Cardano-db-sync and the Midnight node

Do not use "latest" image tags. Pin the node image for the network you run.

Choose the network and image version

The values below reflect the public Midnight compatibility information available at the time of writing. Always check the release compatibility matrix before you start or upgrade.

Environment   Use case                                  Node image
Preview       Early development and operator testing    midnightntwrk/midnight-node:0.22.5
Preprod       Final testing before Mainnet deployment   midnightntwrk/midnight-node:0.22.2
Mainnet       Production network                        midnightntwrk/midnight-node:0.22.1

This tutorial uses Preview:

cat > midnight.env <<'EOF'
MIDNIGHT_NETWORK=preview
MIDNIGHT_NODE_VERSION=0.22.5
MIDNIGHT_NODE_IMAGE=midnightntwrk/midnight-node:0.22.5
CARDANO_NETWORK=preview
MIDNIGHT_BOOTNODE_1=/dns/bootnode-1.preview.midnight.network/tcp/30333/ws/p2p/12D3KooWK66i7dtGVNSwDh9tTeqov1q6LSdWsRLJvTyzTCaywYgK
MIDNIGHT_BOOTNODE_2=/dns/bootnode-2.preview.midnight.network/tcp/30333/ws/p2p/12D3KooWHqFfXFwb7WW4jwR8pr4BEf562v5M6c8K3CXAJq4Wx6ym
EOF

For Preprod, use this file instead:

cat > midnight.env <<'EOF'
MIDNIGHT_NETWORK=preprod
MIDNIGHT_NODE_VERSION=0.22.2
MIDNIGHT_NODE_IMAGE=midnightntwrk/midnight-node:0.22.2
CARDANO_NETWORK=preprod
MIDNIGHT_BOOTNODE_1=/dns/bootnode-1.preprod.midnight.network/tcp/30333/ws/p2p/12D3KooWQxxUgq7ndPfAaCFNbAxtcKYxrAzTxDfRGNktF75SxdX5
MIDNIGHT_BOOTNODE_2=/dns/bootnode-2.preprod.midnight.network/tcp/30333/ws/p2p/12D3KooWNrUBs22FfmgjqFMa9ZqKED2jnxwsXWw5E4q2XVwN35TJ
EOF

Mainnet operators should use the Mainnet image from the compatibility matrix and the Mainnet boot node configuration published for the current release. Do not reuse Preview or Preprod boot nodes on Mainnet.

Some docs, older guides, and community posts may mention testnet-02 and the old midnightnetwork/midnight-node image. Do not mix those values with Preview, Preprod, or Mainnet. A mismatched image, boot node, or chain spec is one of the fastest ways to get a node that starts but never syncs.

Size the host

Size the machine for the slowest component, which is usually Cardano-db-sync during initial catch-up. The official Cardano-db-sync guidance for Midnight lists these lower bounds:

Component                  Preview minimum    Mainnet minimum
CPU                        4 cores            4 cores
RAM                        16 GB              32 GB
Disk for Cardano-db-sync   40 GB SSD          320 GB SSD
IOPS                       30,000 or better   60,000 or better

Use these as the floor, not the target. A combined node host also needs space for the Midnight chain database, Docker layers, PostgreSQL growth, logs, and backups.
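Before provisioning, it can help to confirm free space where Docker keeps its data. A minimal sketch, assuming the default data-root of /var/lib/docker; yours may differ if you changed Docker's configuration:

```shell
# Show free space on Docker's default data-root, falling back to the root
# filesystem if that path does not exist on this host.
DOCKER_DATA_ROOT=/var/lib/docker
df -h "$DOCKER_DATA_ROOT" 2>/dev/null || df -h /
```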

Recommended starting sizes:

Role                              CPU              RAM            Storage             Notes
Preview full node                 4 vCPU           16 GB          150 GB SSD          Good for development and testing
Preview full node with headroom   8 vCPU           32 GB          250 GB NVMe         Better for stable long-running service
Mainnet full node                 8 vCPU           32 GB or more   500 GB NVMe         Treat this as a practical floor
Archive node                      8 vCPU or more   32 GB or more   1 TB NVMe or more   Monitor growth and plan expansion

Avoid HDD storage. A slow disk causes long sync times, peer churn, PostgreSQL stalls, and failed restarts after unclean shutdowns.
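If you are unsure whether a disk is fast enough, a crude synchronous-write probe gives a quick signal. This is only a sketch; use a real benchmark such as fio before trusting the numbers, and run it in the directory that will hold the databases.

```shell
# Time 100 small synchronous 4 KiB writes. Slow results here predict trouble
# for PostgreSQL and the node databases. oflag=dsync is GNU dd on Linux.
dd if=/dev/zero of=./latency-probe bs=4k count=100 oflag=dsync 2>&1 \
  || echo "dd probe not supported on this system"
rm -f ./latency-probe
```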

Install system packages

Run the setup from a clean host:

sudo apt-get update
sudo apt-get install -y \
  ca-certificates \
  curl \
  gnupg \
  jq \
  lsb-release \
  netcat-openbsd \
  openssl \
  postgresql-client

Install Docker Engine and the Docker Compose plugin:

sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
  | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

. /etc/os-release
printf '%s\n' \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu ${VERSION_CODENAME} stable" \
  | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update
sudo apt-get install -y \
  containerd.io \
  docker-buildx-plugin \
  docker-ce \
  docker-ce-cli \
  docker-compose-plugin

Allow your user to run Docker commands:

sudo usermod -aG docker "$USER"
newgrp docker

Verify the installation:

docker --version
docker compose version
docker run --rm hello-world

Create the working directory

Keep node files in a dedicated directory. This makes backups, resets, and audits easier.

sudo mkdir -p /opt/midnight-node
sudo chown "$USER:$USER" /opt/midnight-node
cd /opt/midnight-node

Create the PostgreSQL environment file. The password is random hex, so it is safe to source in shell commands without escaping issues. The database connection string uses the same password as the PostgreSQL service.

umask 077
POSTGRES_PASSWORD="$(openssl rand -hex 24)"

cat > postgres.env <<EOF
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_USER=midnight
POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
POSTGRES_DB=cexplorer
DB_SYNC_POSTGRES_CONNECTION_STRING=psql://midnight:${POSTGRES_PASSWORD}@postgres:5432/cexplorer
EOF
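If you want to sanity-check the generated secret before wiring it into services, note that openssl rand -hex 24 emits 24 random bytes rendered as 48 hex characters:

```shell
# 24 random bytes rendered as hex -> 48 characters, [0-9a-f] only.
PW="$(openssl rand -hex 24)"
echo "password length: ${#PW}"
```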

Load both environment files:

set -a
. ./midnight.env
. ./postgres.env
set +a

Start Cardano-db-sync

Midnight nodes need a persistent connection to Cardano-db-sync. Start the Cardano side first and let it sync.

Create cardano-db-sync.compose.yml:

cat > cardano-db-sync.compose.yml <<'YAML'
networks:
  midnight-node-net:
    name: midnight-node-net

volumes:
  cardano-ipc: {}
  db-sync-data: {}
  postgres-data: {}

services:
  cardano-node:
    image: ${CARDANO_IMAGE:-ghcr.io/intersectmbo/cardano-node:10.5.3}
    platform: linux/amd64
    restart: unless-stopped
    container_name: cardano-node
    ports:
      - "3001:3001"
    environment:
      - NETWORK=${CARDANO_NETWORK}
      - CARDANO_NODE_SOCKET_PATH=/ipc/node.socket
    volumes:
      - cardano-ipc:/ipc
      - ./cardano-data:/data
    networks:
      - midnight-node-net

  postgres:
    image: postgres:15.3
    platform: linux/amd64
    restart: unless-stopped
    container_name: db-sync-postgres
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
    volumes:
      - postgres-data:/var/lib/postgresql/data
    ports:
      - "127.0.0.1:${POSTGRES_PORT}:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 5s
      timeout: 5s
      retries: 20
    networks:
      - midnight-node-net

  cardano-db-sync:
    image: ghcr.io/intersectmbo/cardano-db-sync:13.6.0.5
    platform: linux/amd64
    restart: unless-stopped
    container_name: cardano-db-sync
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      - NETWORK=${CARDANO_NETWORK}
      - POSTGRES_HOST=${POSTGRES_HOST}
      - POSTGRES_PORT=${POSTGRES_PORT}
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    volumes:
      - cardano-ipc:/node-ipc
      - db-sync-data:/var/lib
    networks:
      - midnight-node-net
YAML

Start the services:

docker compose \
  --env-file postgres.env \
  --env-file midnight.env \
  -f cardano-db-sync.compose.yml \
  up -d

Check container status:

docker compose \
  --env-file postgres.env \
  --env-file midnight.env \
  -f cardano-db-sync.compose.yml \
  ps

Follow logs while Cardano-db-sync catches up:

docker logs -f cardano-db-sync

Query Cardano-db-sync progress:

docker exec -e PGPASSWORD="${POSTGRES_PASSWORD}" db-sync-postgres \
  psql -U "${POSTGRES_USER}" -d "${POSTGRES_DB}" -tAc \
  "SELECT 100 * (
      EXTRACT(EPOCH FROM (MAX(time) AT TIME ZONE 'UTC')) -
      EXTRACT(EPOCH FROM (MIN(time) AT TIME ZONE 'UTC'))
    ) / (
      EXTRACT(EPOCH FROM (NOW() AT TIME ZONE 'UTC')) -
      EXTRACT(EPOCH FROM (MIN(time) AT TIME ZONE 'UTC'))
    ) AS sync_percent
    FROM block;"

Wait until the value is close to 100. The exact number may fluctuate near the tip. The Midnight node can start before this completes, but it may report mainchain follower errors or stay near the first blocks until Cardano-db-sync is usable.
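If you script the wait, a simple threshold check over the query result works. The 99.5 cutoff below is an arbitrary illustration, not an official readiness criterion, and the percentage is hard-coded here for the sketch:

```shell
# Decide whether db-sync is "usable" from the percentage the SQL query returns.
SYNC_PERCENT=99.87   # substitute the value from the query above
THRESHOLD=99.5
if awk -v p="$SYNC_PERCENT" -v t="$THRESHOLD" 'BEGIN { exit !(p >= t) }'; then
  echo "db-sync looks usable"
else
  echo "keep waiting"
fi
```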

Pull and inspect the Midnight node image

Pull the pinned image:

docker pull "${MIDNIGHT_NODE_IMAGE}"

Confirm that the image contains the chain resources for your environment:

docker run --rm \
  --entrypoint ls \
  "${MIDNIGHT_NODE_IMAGE}" \
  "/res/${MIDNIGHT_NETWORK}"

You should see files such as chain-spec-raw.json, chain-spec.json, and pc-chain-config.json. Stop if the directory is missing. A missing chain resource usually means the image version and network do not match.

Start the Midnight full node

Create the node volume:

docker volume create midnight-node-data

Start the full node:

docker run -d \
  --name midnight-full-node \
  --platform linux/amd64 \
  --restart unless-stopped \
  --network midnight-node-net \
  -p 30333:30333 \
  -p 127.0.0.1:9944:9944 \
  -p 127.0.0.1:9615:9615 \
  -v midnight-node-data:/node \
  -e "CFG_PRESET=${MIDNIGHT_NETWORK}" \
  -e "DB_SYNC_POSTGRES_CONNECTION_STRING=${DB_SYNC_POSTGRES_CONNECTION_STRING}" \
  "${MIDNIGHT_NODE_IMAGE}" \
  --chain="/res/${MIDNIGHT_NETWORK}/chain-spec-raw.json" \
  --base-path=/node/chain \
  --listen-addr=/ip4/0.0.0.0/tcp/30333 \
  --name="midnight-full-node" \
  --rpc-external \
  --rpc-methods=Safe \
  --prometheus-external \
  --bootnodes "${MIDNIGHT_BOOTNODE_1}" \
  --bootnodes "${MIDNIGHT_BOOTNODE_2}" \
  --no-private-ip

This binds the RPC endpoint to 127.0.0.1:9944 on the host, even though the node listens on all interfaces inside the container. Keep RPC private unless you put it behind explicit access control.

Follow the logs:

docker logs -f midnight-full-node

You want to see peer discovery, block import, and continuing progress. Do not judge the node from the first minute. Initial startup includes database creation, chain spec loading, peer discovery, and mainchain follower checks.

Run an archive node instead

Use archive mode only when you need historical state. Start from a clean Midnight data volume when switching between full and archive mode.

docker rm -f midnight-full-node || true
docker volume rm midnight-node-data || true
docker volume create midnight-node-data

docker run -d \
  --name midnight-archive-node \
  --platform linux/amd64 \
  --restart unless-stopped \
  --network midnight-node-net \
  -p 30333:30333 \
  -p 127.0.0.1:9944:9944 \
  -p 127.0.0.1:9615:9615 \
  -v midnight-node-data:/node \
  -e "CFG_PRESET=${MIDNIGHT_NETWORK}" \
  -e "DB_SYNC_POSTGRES_CONNECTION_STRING=${DB_SYNC_POSTGRES_CONNECTION_STRING}" \
  "${MIDNIGHT_NODE_IMAGE}" \
  --chain="/res/${MIDNIGHT_NETWORK}/chain-spec-raw.json" \
  --base-path=/node/chain \
  --listen-addr=/ip4/0.0.0.0/tcp/30333 \
  --name="midnight-archive-node" \
  --rpc-external \
  --rpc-methods=Safe \
  --prometheus-external \
  --bootnodes "${MIDNIGHT_BOOTNODE_1}" \
  --bootnodes "${MIDNIGHT_BOOTNODE_2}" \
  --pruning archive \
  --no-private-ip

Monitor sync and block height

The health endpoint gives a quick process check:

curl -fsS http://127.0.0.1:9944/health

Use JSON-RPC for node status:

curl -fsS \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"system_health","params":[],"id":1}' \
  http://127.0.0.1:9944 \
  | jq .

Example shape:

{
  "jsonrpc": "2.0",
  "result": {
    "peers": 8,
    "isSyncing": false,
    "shouldHavePeers": true
  },
  "id": 1
}

Check the latest local block:

curl -fsS \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"chain_getBlock","params":[],"id":1}' \
  http://127.0.0.1:9944 \
  | jq -r '.result.block.header.number'

The block number is hexadecimal. Convert it to decimal:

LOCAL_HEIGHT_HEX=$(curl -fsS \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"chain_getBlock","params":[],"id":1}' \
  http://127.0.0.1:9944 \
  | jq -r '.result.block.header.number')

printf '%d\n' "$((LOCAL_HEIGHT_HEX))"

Create a reusable RPC helper:

cat > rpc.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

RPC_URL="${1:-http://127.0.0.1:9944}"
METHOD="${2:-system_health}"
PARAMS="${3:-[]}"

curl -fsS \
  -H "Content-Type: application/json" \
  -d "{\"jsonrpc\":\"2.0\",\"method\":\"${METHOD}\",\"params\":${PARAMS},\"id\":1}" \
  "${RPC_URL}"
EOF
chmod +x rpc.sh

Use it like this:

./rpc.sh http://127.0.0.1:9944 system_health | jq .
./rpc.sh http://127.0.0.1:9944 rpc_methods | jq -r '.result.methods[]'

Create a height monitor that compares your node with the public Preview RPC endpoint:

cat > monitor-height.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

LOCAL_RPC="${LOCAL_RPC:-http://127.0.0.1:9944}"
REMOTE_RPC="${REMOTE_RPC:-https://rpc.preview.midnight.network}"
INTERVAL_SECONDS="${INTERVAL_SECONDS:-30}"

rpc_block_number() {
  local url="$1"
  local hex_number
  hex_number=$(curl -fsS \
    -H "Content-Type: application/json" \
    -d '{"jsonrpc":"2.0","method":"chain_getBlock","params":[],"id":1}' \
    "$url" | jq -r '.result.block.header.number // empty')

  if [ -z "$hex_number" ]; then
    printf '0\n'
    return
  fi

  printf '%d\n' "$((hex_number))"
}

while true; do
  local_height=$(rpc_block_number "$LOCAL_RPC")
  remote_height=$(rpc_block_number "$REMOTE_RPC")
  timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")

  if [ "$local_height" -eq 0 ] || [ "$remote_height" -eq 0 ]; then
    printf '%s local=%s remote=%s lag=unknown\n' \
      "$timestamp" "$local_height" "$remote_height"
  else
    lag=$((remote_height - local_height))
    printf '%s local=%s remote=%s lag=%s\n' \
      "$timestamp" "$local_height" "$remote_height" "$lag"
  fi

  sleep "$INTERVAL_SECONDS"
done
EOF
chmod +x monitor-height.sh

Run it:

LOCAL_RPC=http://127.0.0.1:9944 \
REMOTE_RPC=https://rpc.preview.midnight.network \
INTERVAL_SECONDS=30 \
./monitor-height.sh

A healthy node shows increasing local height and a shrinking lag. A synced node stays close to the remote height and keeps peers.

Monitor peer connectivity

Start with the RPC view:

./rpc.sh http://127.0.0.1:9944 system_health | jq '.result'

Interpret the fields like this:

Field             Healthy value                      What it means
peers             Greater than 0                     The node has live P2P connections
isSyncing         true during catch-up, then false   true while the node still imports historical blocks; false near the tip
shouldHavePeers   Usually true                       The node expects to participate in P2P
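One way to turn this table into an automated check is a tiny alert rule. The sketch below runs against a canned response so it is self-contained; in practice you would feed the live system_health output into the same test:

```shell
# Canned system_health response for illustration; replace with live RPC output.
HEALTH='{"jsonrpc":"2.0","result":{"peers":0,"isSyncing":false,"shouldHavePeers":true},"id":1}'
if printf '%s' "$HEALTH" | grep -q '"shouldHavePeers":true' \
   && printf '%s' "$HEALTH" | grep -q '"peers":0'; then
  echo "ALERT: node expects peers but has none"
else
  echo "peer count ok"
fi
```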

Check that the P2P port listens on the host:

ss -ltnp | grep ':30333' || true

Allow inbound P2P traffic if you use ufw:

sudo ufw allow 30333/tcp
sudo ufw status verbose

Check outbound reachability to a boot node:

nc -vz bootnode-1.preview.midnight.network 30333
nc -vz bootnode-2.preview.midnight.network 30333

Check recent peer-related logs:

docker logs --since 15m midnight-full-node \
  | grep -Ei 'peer|bootnode|disconnect|dial|sync|import' \
  || true

Check Prometheus metrics if enabled:

curl -fsS http://127.0.0.1:9615/metrics \
  | grep -Ei 'peer|height|sync|block' \
  | head -n 50

Troubleshoot a node stuck on block 1

A node stuck on block 1 usually has a configuration, dependency, or P2P problem. Work through the checks in order.

1. Confirm the image and network match

Print the active values:

set -a
. ./midnight.env
. ./postgres.env
set +a

printf 'network=%s\n' "$MIDNIGHT_NETWORK"
printf 'image=%s\n' "$MIDNIGHT_NODE_IMAGE"
printf 'cardano_network=%s\n' "$CARDANO_NETWORK"

For Preview, use MIDNIGHT_NETWORK=preview, CARDANO_NETWORK=preview, and a Preview-compatible image. For Preprod, all three values must point to Preprod. Do not run a Preview image against Preprod boot nodes or a Preprod Cardano-db-sync database.
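A small guardrail script can catch this class of mistake before any container starts. This is a sketch of the idea, not an official check; it only compares the names you set yourself, with defaults filled in for the standalone example:

```shell
# Abort early when the Midnight and Cardano network selections disagree.
# In practice these variables come from midnight.env.
MIDNIGHT_NETWORK="${MIDNIGHT_NETWORK:-preview}"
CARDANO_NETWORK="${CARDANO_NETWORK:-preview}"
if [ "$MIDNIGHT_NETWORK" != "$CARDANO_NETWORK" ]; then
  echo "network mismatch: MIDNIGHT_NETWORK=$MIDNIGHT_NETWORK CARDANO_NETWORK=$CARDANO_NETWORK" >&2
  exit 1
fi
echo "network selection consistent: $MIDNIGHT_NETWORK"
```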

2. Confirm the chain spec exists inside the image

docker run --rm \
  --entrypoint test \
  "${MIDNIGHT_NODE_IMAGE}" \
  -f "/res/${MIDNIGHT_NETWORK}/chain-spec-raw.json"

The command prints nothing either way. An exit status of 0 means the file exists. A nonzero exit status means the image and network do not match.
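The docker run --entrypoint test pattern simply surfaces the exit status of the test utility. The same pattern, demonstrated locally with a throwaway file:

```shell
# test -f exits 0 when the file exists and 1 when it does not.
TMP="$(mktemp)"
test -f "$TMP" && echo "exists (exit 0)"
rm -f "$TMP"
test -f "$TMP" || echo "missing (exit 1)"
```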

3. Confirm Cardano-db-sync is caught up

docker logs --tail 100 cardano-db-sync

Then rerun the sync percentage query from the setup section. If Cardano-db-sync is far behind, let it finish. The Midnight Docker repository documents a related error that appears when the Cardano node or db-sync is not ready:

Unable to author block in slot. Failure creating inherent data provider:
'No latest block on chain.' not found.
Possible causes: main chain follower configuration error, db-sync not synced fully,
or data not set on the main chain.

For a full node, the practical response is the same: check the Cardano services, database connection string, and network selection before resetting the Midnight data volume.

4. Check the PostgreSQL connection string from inside the node network

Run a temporary PostgreSQL client on the same Docker network:

docker run --rm \
  --network midnight-node-net \
  -e PGPASSWORD="${POSTGRES_PASSWORD}" \
  postgres:15.3 \
  psql \
    -h postgres \
    -p 5432 \
    -U "${POSTGRES_USER}" \
    -d "${POSTGRES_DB}" \
    -c 'SELECT COUNT(*) FROM block;'

If this fails, fix PostgreSQL before you touch the Midnight node.

5. Check peers and boot nodes

./rpc.sh http://127.0.0.1:9944 system_health | jq .

If peers is 0, check firewall rules, cloud security groups, outbound access, and boot node values. If peers appear and disappear repeatedly, check system time, network stability, and image or chain spec mismatch.

Check clock sync:

timedatectl status

Enable NTP if needed:

sudo timedatectl set-ntp true

6. Check for memory pressure or OOM kills

docker inspect midnight-full-node \
  --format 'OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}} RestartCount={{.RestartCount}}'

dmesg -T | grep -Ei 'out of memory|oom|killed process' | tail -n 20 || true

If the node or PostgreSQL is OOM-killed, add RAM before restarting. Repeated OOM events can corrupt local state or keep the node in a restart loop.

7. Reset only the data that needs resetting

Reset the Midnight node data if you changed the Midnight network, changed from full to archive mode, used the wrong chain spec, or hit a corrupted Midnight database.

docker rm -f midnight-full-node || true
docker volume rm midnight-node-data || true
docker volume create midnight-node-data

Do not delete the PostgreSQL or Cardano-db-sync volumes unless you changed the Cardano network, damaged the database, or intentionally want a full resync.

If you must reset Cardano-db-sync too, stop services and identify the exact volume names before removal:

docker compose \
  --env-file postgres.env \
  --env-file midnight.env \
  -f cardano-db-sync.compose.yml \
  down

docker volume ls --format '{{.Name}}' \
  | grep -E 'cardano|db-sync|postgres'

Remove only the volumes that belong to this node directory and this Compose project. Your Compose project name may differ by host, so do not paste a volume deletion command from another machine into production.

Verify that the node is synced and healthy

Use this checklist after initial sync and after every restart:

  • Containers are running.
docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'
  • The health endpoint responds.
curl -fsS http://127.0.0.1:9944/health
  • The node has peers.
./rpc.sh http://127.0.0.1:9944 system_health | jq '.result.peers'
  • The block height increases.
for _ in 1 2 3; do
  ./rpc.sh http://127.0.0.1:9944 chain_getBlock \
    | jq -r '.result.block.header.number'
  sleep 12
done

Midnight's documented block time is 6 seconds, so a healthy node near the tip should observe new blocks over short intervals. Network conditions can vary, so use repeated checks instead of a single sample.
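As a rough expectation for the three-sample loop above: with a 6-second block time, samples taken 12 seconds apart should show height growing by a few blocks. A quick arithmetic sketch:

```shell
# Expected block-height growth across the sampling loop, at a 6 s block time.
BLOCK_TIME=6
SAMPLES=3
SLEEP_SECONDS=12
EXPECTED=$(( (SAMPLES - 1) * SLEEP_SECONDS / BLOCK_TIME ))
echo "expect roughly ${EXPECTED} new blocks between first and last sample"
```

If the height does not move at all over such a window, treat the node as stalled and go back to the troubleshooting section.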

  • The local height is near the public RPC height for the same environment.
LOCAL_RPC=http://127.0.0.1:9944 \
REMOTE_RPC=https://rpc.preview.midnight.network \
INTERVAL_SECONDS=10 \
timeout 35 ./monitor-height.sh
  • Prometheus responds.
curl -fsS http://127.0.0.1:9615/metrics > /tmp/midnight-metrics.txt
wc -l /tmp/midnight-metrics.txt
  • Logs show normal operation, not repeated disconnects, database errors, or panics.
docker logs --since 30m midnight-full-node \
  | grep -Ei 'error|warn|panic|disconnect|database|db-sync' \
  || true

A healthy node has a running container, a responsive local RPC endpoint, nonzero peers, increasing block height, low height lag, and no repeated fatal log lines.

Maintain the node

Check the compatibility matrix before every upgrade. Stop the node, pull the new image, and restart with the same network and data volume only when the release notes allow that upgrade path.

docker rm -f midnight-full-node

docker pull "${MIDNIGHT_NODE_IMAGE}"

# Re-run the docker run command from the setup section with the new pinned image.

Back up configuration files, not just volumes:

tar -czf "midnight-node-config-$(date -u +%Y%m%dT%H%M%SZ).tar.gz" \
  midnight.env \
  postgres.env \
  cardano-db-sync.compose.yml \
  rpc.sh \
  monitor-height.sh

Keep these files out of public repositories. postgres.env contains credentials.
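If this directory ever becomes a git repository, one hedge is to ignore the credential file explicitly. The filename matches the one created earlier in this guide:

```shell
# Keep the credential file out of version control.
printf 'postgres.env\n' >> .gitignore
```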

Next steps

After the node is stable, connect local tooling to http://127.0.0.1:9944, import the Midnight Insomnia collection if you prefer a GUI client, and add external monitoring for container restarts, peer count, height lag, disk use, and PostgreSQL health.

For DApp development, pair the node with the Midnight indexer, wallet tooling, and a proof server. For production operation, keep RPC private, monitor disk growth, and rehearse upgrades on Preview or Preprod before touching Mainnet.
