Blockchain infrastructure is hard. Running it on Kubernetes is even harder. In this article I'll walk you through how I built a production-ready Hyperledger Fabric network on Kubernetes, including automated deployment scripts, network configuration, and the security decisions I made along the way.
🧠 Why Hyperledger Fabric + Kubernetes?
Hyperledger Fabric is the go-to permissioned blockchain framework for enterprise use cases — supply chain, financial services, healthcare. Unlike public chains, you control who participates.
Kubernetes brings what Fabric alone can't give you out of the box:
- **Self-healing** — pods restart automatically on failure
- **Scalability** — spin up more peers as needed
- **Declarative infrastructure** — everything is a manifest
- **Namespace isolation** — clean separation between components
The combination is powerful, but the learning curve is steep. Here's what I built and what I learned.
🏗️ Architecture Overview
The network consists of:
| Component | Role |
| --- | --- |
| Orderer | Orders transactions and creates blocks (RAFT consensus) |
| Peers | Endorse and commit transactions, host the ledger |
| CA (Certificate Authority) | Issues identities for all participants |
| Kubernetes Jobs | Handle one-time setup tasks (channel creation, chaincode install) |
All components live inside a dedicated fabric namespace in Kubernetes, with strict network policies controlling traffic between them.
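As a minimal sketch of that isolation boundary (illustrative — the label here is invented, not copied from the repo), the dedicated namespace is just another manifest:

```yaml
# Hypothetical sketch -- the repo's actual namespace manifest may differ
apiVersion: v1
kind: Namespace
metadata:
  name: fabric
  labels:
    app.kubernetes.io/part-of: fabric-network
```

Everything else in the project is applied into this namespace, which is what lets the NetworkPolicies below scope traffic cleanly.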
📁 Project Structure
```
fabric-k8s/
├── manifests/
│   ├── orderer/
│   ├── peers/
│   ├── ca/
│   └── jobs/
├── scripts/
│   ├── deploy.sh        # Main entrypoint
│   └── utils.sh         # Helpers: logging, wait functions
├── config/
│   └── configtx.yaml    # Network genesis config
└── .env.example         # Environment template (no secrets committed)
```
⚙️ Automated Deployment Scripts
One of the things I'm most proud of in this project is the deploy automation. Rather than running kubectl apply commands manually and hoping for the best, I built a script system with proper logging, error handling, and readiness checks.
Logging with Color
```bash
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color

info()  { echo -e "${GREEN}[INFO]${NC} $1"; }
warn()  { echo -e "${YELLOW}[WARN]${NC} $1"; }
step()  { echo -e "\n${CYAN}▶ $1${NC}"; }
error() { echo -e "${RED}[ERROR]${NC} $1"; exit 1; }
```
Simple but effective — every log line is color-coded by severity, so you know at a glance what's happening during a deploy.
Waiting for Deployments
One of the trickiest parts of Kubernetes automation is knowing when something is actually ready. I wrote a waitDeployment() function that uses kubectl rollout status with a timeout:
```bash
waitDeployment() {
  local NAME=$1
  info "Waiting for deployment/$NAME..."
  kubectl rollout status deployment/"$NAME" \
    -n "$NAMESPACE" --timeout=120s || error "Timeout on $NAME"
}
```
Waiting for Jobs
Channel creation and chaincode installation run as Kubernetes Jobs. These need their own wait logic:
```bash
waitJob() {
  local NAME=$1
  info "Waiting for job/$NAME..."
  kubectl wait job/"$NAME" \
    -n "$NAMESPACE" \
    --for=condition=complete \
    --timeout=300s || {
      warn "Job $NAME timed out. Checking logs..."
      kubectl logs -n "$NAMESPACE" -l app="$NAME" --tail=50
      error "Job $NAME failed"
    }
}
```
Notice that on failure, it automatically dumps the last 50 lines of logs — no need to manually kubectl logs when something breaks at 2am.
🔐 Network Configuration — The Orderer
The orderer is the most critical component: it's the one that decides the order of transactions across the entire network. I used RAFT consensus (as opposed to the deprecated Solo mode), which means multiple orderer nodes elect a leader and reach consensus on block ordering, tolerating the failure of a minority of nodes.
Key configuration decisions:
- **TLS** enabled on all orderer-to-peer communication
- **Mutual TLS (mTLS)** for admin operations
- **Resource limits** set to prevent one noisy component from starving others
- **Persistent volume** for the ledger data (not ephemeral storage)
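A sketch of what those decisions look like in the orderer's container spec. The `ORDERER_GENERAL_*` environment variables are standard Fabric orderer settings, but the image tag, paths, resource numbers, and volume names here are illustrative, not the repo's actual config:

```yaml
# Illustrative container spec fragment for the orderer (not the repo's exact manifest)
containers:
  - name: orderer
    image: hyperledger/fabric-orderer:2.5
    env:
      - name: ORDERER_GENERAL_TLS_ENABLED
        value: "true"
      # mTLS between RAFT cluster members
      - name: ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE
        value: /var/hyperledger/tls/server.crt
      - name: ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY
        value: /var/hyperledger/tls/server.key
    resources:
      requests: { cpu: 250m, memory: 512Mi }
      limits:   { cpu: "1",  memory: 1Gi }
    volumeMounts:
      - name: orderer-ledger   # backed by a PVC, not emptyDir
        mountPath: /var/hyperledger/production/orderer
```

The limits matter more than they look: an unbounded orderer under load can evict or starve the peers sharing the node.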
📦 Kubernetes Manifests
Each component has its own manifest directory. Here's the security-conscious securityContext I applied to every container (fields like capabilities and allowPrivilegeEscalation are set at the container level):
```yaml
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
```
No pod runs as root. No pod can escalate privileges. This is table stakes for anything production-adjacent.
Network Policies
Every component is locked down with NetworkPolicy — only the pods that need to talk to the orderer can reach it:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: orderer-ingress
  namespace: fabric
spec:
  podSelector:
    matchLabels:
      app: orderer
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: peer
      ports:
        - port: 7050
```
🔒 Security Decisions
A few things I was deliberate about:
No secrets in the repo — certificates, keys, and passwords are loaded from environment variables and Kubernetes Secrets, never committed to Git.
.env.example pattern — I commit a template with empty values so collaborators know what's needed without exposing real data.
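A hypothetical `.env.example` to illustrate the pattern (these variable names are invented for this sketch, not the repo's real template):

```
# .env.example -- commit this template, never the filled-in .env
NAMESPACE=fabric
CA_ADMIN_USER=
CA_ADMIN_PASSWORD=
ORDERER_TLS_CERT_PATH=
```

Collaborators copy it to `.env`, fill in the blanks locally, and the real `.env` stays in `.gitignore`.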
Private repository — the repo stays private; only collaborators with explicit access can see it.
Pre-deploy validation — the script checks for kubectl availability and cluster connectivity before touching anything.
```bash
which kubectl > /dev/null 2>&1 || error "kubectl not found"
kubectl cluster-info > /dev/null 2>&1 || error "No cluster connection"
```
Fail fast, fail loud.
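To make the "no secrets in the repo" rule concrete: crypto material is delivered to pods as Kubernetes Secrets mounted read-only, never baked into images. A hedged sketch — the Secret and path names here are invented:

```yaml
# Hypothetical: TLS material comes from a Secret created out-of-band
volumes:
  - name: orderer-tls
    secret:
      secretName: orderer-tls-certs
containers:
  - name: orderer
    volumeMounts:
      - name: orderer-tls
        mountPath: /var/hyperledger/tls
        readOnly: true
```

The Secret itself is created imperatively (or via a secrets operator) from the generated crypto material, so nothing sensitive ever touches Git.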
🧗 Challenges & What I Learned
Crypto material management is the #1 pain point in Fabric. The cryptogen tool generates a mountain of certificates and keys, and keeping track of which cert goes where (and making sure they match between components) took significant debugging time.
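For context, cryptogen is driven by a `crypto-config.yaml` along these lines (an illustrative fragment with placeholder org names, not the repo's actual file):

```yaml
# Illustrative cryptogen input -- org and domain names are placeholders
OrdererOrgs:
  - Name: Orderer
    Domain: example.com
    Specs:
      - Hostname: orderer
PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    Template:
      Count: 2   # peers to generate certs for
    Users:
      Count: 1   # non-admin user identities
```

Every hostname declared here must line up exactly with the Kubernetes Service names the components use to reach each other, or TLS handshakes fail with maddeningly vague errors.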
RAFT leader election surprised me — during initial setup, if the orderer pods don't all come up within the election timeout, the network never bootstraps. Adding proper readiness probes and the waitDeployment() timeout logic solved this.
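Fabric's operations service exposes a `/healthz` endpoint, which makes the readiness probe straightforward. A sketch, assuming the default operations port 8443 — note the operations listener must bind a non-loopback address (via `ORDERER_OPERATIONS_LISTENADDRESS`) for the kubelet to reach it:

```yaml
# Sketch: don't mark the orderer Ready until its health endpoint answers
readinessProbe:
  httpGet:
    path: /healthz
    port: 8443
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 6
```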
Using Kubernetes Jobs for one-time operations (channel creation, anchor peer updates) was a pattern I hadn't used much before. It's elegant — idempotent, tracked by Kubernetes, with built-in retry logic.
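A sketch of that Job pattern for channel creation. The names, image tag, and command here are placeholders — the exact CLI depends on your Fabric version (newer releases use `osnadmin channel join` instead of `peer channel create`), and the real job also needs MSP/TLS material mounted:

```yaml
# Hypothetical one-shot Job: create the application channel
apiVersion: batch/v1
kind: Job
metadata:
  name: create-channel
  namespace: fabric
spec:
  backoffLimit: 3          # the "built-in retry logic"
  template:
    metadata:
      labels:
        app: create-channel   # matched by waitJob's log dump on failure
    spec:
      restartPolicy: Never    # let the Job controller own retries
      containers:
        - name: create-channel
          image: hyperledger/fabric-tools:2.4
          command: ["peer", "channel", "create",
                    "-o", "orderer:7050",
                    "-c", "mychannel",
                    "-f", "/config/channel.tx"]
```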
🚀 What's Next
- Add Prometheus + Grafana dashboards for peer/orderer metrics
- Implement Sealed Secrets or Vault for crypto material management
- Write chaincode in Go and deploy it through the pipeline
- Add CI/CD with GitHub Actions to automate manifest linting and test deploys
💬 Final Thoughts
This project pushed me across infrastructure, cryptography, distributed systems, and DevOps simultaneously. If you're exploring enterprise blockchain or want to see how Fabric actually runs in a cloud-native environment, I hope this breakdown gives you a useful starting point.
The full project (minus secrets, of course) is on my GitHub. Feel free to open an issue or reach out if you have questions.