The assignment was to build a ticket booking system using Hyperledger Fabric: two entities (travel agencies and customers), a shared ledger, and a requirement that every booking be verifiable on the blockchain. We had weeks to do it.
I did not plan to spend the first few weeks fighting infrastructure before writing a single line of business logic.
What Hyperledger Fabric Actually Is
Before I get into what went wrong, it’s worth explaining what Hyperledger Fabric is, because it’s quite different from what most people imagine when they hear “blockchain.”
When people think of blockchain, they usually think of Bitcoin or Ethereum: a public network anyone can join, where transactions are anonymous, and where consensus is reached through computational work. Hyperledger Fabric is none of those things. It’s a permissioned blockchain framework — every participant must be explicitly identified and credentialed before they can interact with the network. There are no anonymous transactions. There is no mining.
Fabric’s target audience is enterprises and consortiums. Think of a group of banks that want to record inter-bank settlements on a shared ledger, or airlines and travel agencies that want a single source of truth for ticket inventory. Each of those organizations runs their own nodes, retains control of their own data, and collectively agrees on what gets written to the ledger. No one organization controls the chain.
The core building blocks of a Fabric network are:
Peers: The actual nodes that store a copy of the ledger and execute the smart contracts (called “chaincode” in Fabric). Each organization in the network runs one or more peers. If you have two organizations, you have at least two sets of peers.
Orderers: A separate cluster of nodes whose only job is to sequence transactions and package them into blocks. Peers don’t talk to each other to agree on order; they send transactions to the orderers, who handle that. The orderers use a consensus algorithm called Raft — the same one used in databases like etcd — where one node is elected leader and the others follow.
Certificate Authorities (CAs): Since every participant must be credentialed, Fabric runs a CA for each organization. These issue the cryptographic identities (X.509 certificates) that peers, orderers, and users present when making any request. No valid certificate, no access.
Channels: A Fabric network can have multiple independent sub-ledgers called channels. Each channel has its own blockchain, its own set of members, and its own chaincode. In this project there’s one channel: mychannel.
Chaincode: The smart contracts. These are programs that run on the peers and define what operations can be performed on the ledger. In Fabric, chaincode is written in a real programming language (Go, Java, TypeScript). When a client wants to record a booking, it calls a chaincode function. The chaincode executes on the peer, validates the inputs, and writes to the ledger.
World State: The current state of all data, stored in a database (CouchDB in this project). When chaincode writes data, it goes into the world state. The blockchain itself records the history of every transaction; the world state is the up-to-date snapshot.
For this project, the network has three organizations. Org0 runs the ordering service — three orderer nodes using Raft consensus. Org1 represents travel agencies. Org2 represents customers. Each org has two peers and one CA.

Diagram: Three boxes labeled Org0 (3 orderers), Org1 (2 peers + CA), Org2 (2 peers + CA), connected by a channel labeled mychannel
The Problem With “Simple”
Fabric ships with several example networks. The simplest is a Docker Compose setup that brings up a few containers on your local machine. I started there.
It didn’t connect reliably. Peers couldn’t reach each other. The REST API sample couldn’t find the ledger. I tried the JavaScript version. Same issues. The Docker Compose approach works fine if you follow the tutorial exactly on a clean machine with the right Fabric binary versions. In practice, when you’re trying to connect your own code to it rather than running the provided samples, small mismatches in TLS configuration or service discovery cause silent failures that are difficult to trace.
The Kubernetes-based test network (test-network-k8s in the fabric-samples repository) was the only variant that worked consistently. And once I committed to it, it solved a second problem I hadn't fully thought through: I needed to run a lot of things simultaneously. There was a customer backend, a travel agency backend, a unified frontend, the Fabric REST interface, and the Fabric network itself - eight-plus processes. Kubernetes gave me a way to run all of that in one KIND cluster (KIND is "Kubernetes IN Docker" - it runs a full Kubernetes cluster inside Docker containers on your laptop) without manually managing ports, docker networks, and process restarts.
So the choice wasn’t ideological. It was pragmatic: Kubernetes was what worked, and it handled the orchestration problem for free.
What the Kubernetes Deployment Actually Looks Like
Every component in this network is a Kubernetes resource. Let me show what that means concretely with the peer deployment.
Each peer has three Kubernetes resources: a Certificate (for TLS), a ConfigMap (for environment config), and a Deployment that runs the actual container. Here's the ConfigMap for org1-peer1, which shows how the peer is configured:
apiVersion: v1
kind: ConfigMap
metadata:
  name: org1-peer1-config
data:
  CORE_PEER_ID: org1-peer1.org1.example.com
  CORE_PEER_ADDRESS: org1-peer1:7051
  CORE_PEER_LOCALMSPID: Org1MSP
  CORE_PEER_MSPCONFIGPATH: /var/hyperledger/fabric/organizations/...
  CORE_PEER_GOSSIP_BOOTSTRAP: org1-peer2:7051
  CHAINCODE_AS_A_SERVICE_BUILDER_CONFIG: '{"peername":"org1peer1"}'
  CORE_LEDGER_STATE_STATEDATABASE: CouchDB
  CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS: localhost:5984
CORE_PEER_GOSSIP_BOOTSTRAP tells this peer to connect to org1-peer2 when it starts, for gossip - the protocol peers use to share ledger state with each other. CHAINCODE_AS_A_SERVICE_BUILDER_CONFIG gives the peer its own name so the chaincode deployment knows which sidecar belongs to which peer. The CouchDB settings are there because the world state for this project is stored in CouchDB rather than the default LevelDB, which gives richer query capability.
The Deployment spec itself is interesting because it runs two containers in the same pod:
containers:
  - name: main
    image: ${FABRIC_PEER_IMAGE}
    ports:
      - containerPort: 7051 # gRPC for clients
      - containerPort: 7052 # gRPC for chaincode
      - containerPort: 9443 # operations/health
  - name: couchdb
    image: couchdb:${COUCHDB_VERSION}
    env:
      - name: COUCHDB_USER
        value: admin
      - name: COUCHDB_PASSWORD
        value: adminpw
    ports:
      - containerPort: 5984
The peer and its CouchDB instance are co-located in the same pod. CouchDB is accessed at localhost:5984 from inside the peer container - they share a network namespace because they're in the same pod. This is the standard Kubernetes sidecar pattern.
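Since the CouchDB credentials are right there in the pod spec, you can peek at the world state directly. A minimal sketch, assuming you've port-forwarded the peer pod's port 5984 to your machine and are running it as an ES module on Node 18+ (for global fetch and top-level await):

const auth = Buffer.from("admin:adminpw").toString("base64"); // dev credentials from the Deployment above
const res = await fetch("http://localhost:5984/_all_dbs", {
  headers: { Authorization: `Basic ${auth}` },
});
// Fabric names its databases per channel and chaincode, e.g. mychannel_chaincode
console.log(await res.json());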
The TLS certificate for the peer is handled by cert-manager, a Kubernetes add-on that automates certificate issuance. Each peer gets a certificate with multiple DNS names:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: org1-peer1-tls-cert
spec:
  dnsNames:
    - localhost
    - org1-peer1
    - org1-peer1.test-network.svc.cluster.local
    - org1-peer1.localho.st
    - org1-peer-gateway-svc
  secretName: org1-peer1-tls-cert
  issuerRef:
    name: org1-tls-cert-issuer
The certificate needs to cover all the names by which clients might reach this peer — inside the cluster, outside the cluster via ingress, and through the gateway service. TLS validation will reject a connection if the hostname the client is connecting to doesn’t match a name in the certificate. This is relevant because a single TLS handshake failure cascades into completely opaque errors that look like connection refused.
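To make that concrete, here is a minimal sketch of the client side using @grpc/grpc-js (the library the Fabric Node SDKs use underneath). The TCP connection goes through a localhost port-forward, but TLS is asked to validate the in-cluster name - if that name weren't in the dnsNames list above, the handshake would fail with exactly the kind of opaque error described here. The CA file path is illustrative:

import * as grpc from "@grpc/grpc-js";
import { readFileSync } from "fs";

const tlsRootCert = readFileSync("org1-tls-ca.pem"); // illustrative path
const client = new grpc.Client(
  "localhost:7051",
  grpc.credentials.createSsl(tlsRootCert),
  // TLS checks this name against the certificate's dnsNames
  { "grpc.ssl_target_name_override": "org1-peer1" },
);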

Screenshot: kubectl -n test-network get pods — showing the full list of running pods
Chaincode as a Service: Why the Chaincode Runs as Its Own Pod
In traditional Fabric deployments, the peer launches chaincode directly using Docker — when you install chaincode on a peer, the peer builds a Docker image and spins up a container. This is problematic in Kubernetes because it requires the peer container to have access to a Docker daemon (Docker-in-Docker), which is complex and generally frowned upon.
The solution is Chaincode as a Service (CCaaS). Instead of the peer spawning the chaincode, the chaincode runs as its own Kubernetes Deployment and exposes a gRPC server on port 9999. The peer connects to it at a known address. The chaincode is just another pod in the cluster.
The address is specified in a connection.json file that gets bundled into the chaincode package before deployment:
{
  "address": "{{.peername}}-ccaas-chaincode:9999",
  "dial_timeout": "10s",
  "tls_required": false
}
The {{.peername}} placeholder is substituted at packaging time - so org1peer1 becomes the address org1peer1-ccaas-chaincode:9999. The peer knows exactly which Kubernetes service to connect to.
The chaincode Kubernetes deployment is generated from a template, with the chaincode name, ID, and image substituted in by the deployment script:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: org1{{PEER_NAME}}-ccaas-{{CHAINCODE_NAME}}
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: main
          image: {{CHAINCODE_IMAGE}}
          env:
            - name: CHAINCODE_SERVER_ADDRESS
              value: 0.0.0.0:9999
            - name: CHAINCODE_ID
              value: {{CHAINCODE_ID}}
---
apiVersion: v1
kind: Service
metadata:
  name: org1{{PEER_NAME}}-ccaas-{{CHAINCODE_NAME}}
spec:
  ports:
    - name: chaincode
      port: 9999
One deployment and one service per peer, per org. With two peers per org and two orgs (org1 and org2), that’s four chaincode sidecar pods in total — org1peer1-ccaas-chaincode, org1peer2-ccaas-chaincode, org2peer1-ccaas-chaincode, org2peer2-ccaas-chaincode.
The CHAINCODE_ID is the critical environment variable. It's computed as sha256(chaincode.tgz) - the hash of the packaged chaincode archive. The peer and the chaincode container must agree on this value; if they don't match, the peer refuses to talk to the chaincode container.
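You can recompute that ID offline to check what the peer expects. A sketch, assuming the packaged archive is chaincode.tgz and the package label is chaincode (the label visible in the deploy logs later):

import { createHash } from "crypto";
import { readFileSync } from "fs";

// Package ID format: <label>:<sha256 of the .tgz>
const pkg = readFileSync("chaincode.tgz");
const packageId = `chaincode:${createHash("sha256").update(pkg).digest("hex")}`;
console.log(packageId); // must equal CHAINCODE_ID in the chaincode pod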
Writing the Chaincode: What fabric-contract-api Is
The chaincode is written in TypeScript using a library called fabric-contract-api. Before getting into what this does, it helps to understand the problem it's solving.
Fabric chaincode communicates with the peer over gRPC — a low-level binary protocol. Without a framework, you’d be implementing the gRPC server yourself, handling message serialization, managing the chaincode lifecycle protocol, and making raw putState and getState calls. It's doable but tedious and error-prone.
fabric-contract-api wraps all of that. It lets you write a TypeScript class where each method is a smart contract function. You decorate the class and its properties with @Object, @Property, @Transaction, and @Info, and the framework handles the gRPC plumbing, serialization, and lifecycle.
Here is what the booking data model looks like with these decorators:
import { Object, Property } from "fabric-contract-api";

@Object()
export class Booking {
  @Property()
  public bookingID: string = "First Booking";
  @Property()
  public userID: string = "First User";
  @Property()
  public userHash: string = "";
  @Property()
  public isUserAnonymous: boolean = true;
  @Property()
  public agencyID: string = "";
  @Property()
  public travelID: number = 0;
  @Property()
  public seatNumbers: string = "1A,1B";
  @Property()
  public totalPrice: number = 100;
  @Property()
  public transactionID: string = "";
  @Property()
  public status: string = "Confirmed";
  @Property()
  public createdAt: string = "";
  @Property()
  public updatedAt: string = "";
  @Property()
  public cancelledAt: string = "";
  @Property()
  public refundAmount: number = 0;
  @Property()
  public penalty: number = 0;
  @Property()
  public availableSeats: number = 0;
  @Property()
  public hyperledgerTxId: string = "";
}
The @Object() decorator tells the framework this class represents a ledger asset. The @Property() decorators tell it which fields to include in serialization. When you call ctx.stub.putState(bookingID, Buffer.from(JSON.stringify(booking))), this object gets serialized to JSON and written to the world state under the bookingID key. Reading it back is just ctx.stub.getState(bookingID) and parsing the result.
The contract itself uses @Transaction() to mark functions that write to the ledger (submit transactions) and @Transaction(false) for read-only queries (evaluate transactions):
import { Context, Contract, Info, Transaction } from "fabric-contract-api";

@Info({
  title: "BookingContract",
  description: "Smart contract for recording travel bookings",
})
export class BookingContract extends Contract {
  @Transaction()
  public async RecordBooking(
    ctx: Context,
    bookingID: string,
    userID: string,
    isUserAnonymous: boolean,
    // ... other fields
  ): Promise<void> {
    const booking = new Booking();
    booking.bookingID = bookingID;
    // ... assign fields
    booking.hyperledgerTxId = ctx.stub.getTxID();
    await ctx.stub.putState(
      booking.bookingID,
      Buffer.from(JSON.stringify(booking)),
    );
  }

  @Transaction(false)
  public async ReadBooking(ctx: Context, bookingID: string): Promise<string> {
    const data = await ctx.stub.getState(bookingID);
    if (data.length === 0) {
      throw new Error(`Booking ${bookingID} does not exist`);
    }
    return data.toString();
  }

  @Transaction(false)
  public async BookingExists(
    ctx: Context,
    bookingID: string,
  ): Promise<boolean> {
    const data = await ctx.stub.getState(bookingID);
    return data.length > 0;
  }

  @Transaction(false)
  public async GetAllBookings(ctx: Context): Promise<string> {
    const iterator = await ctx.stub.getStateByRange("", "");
    const bookings = [];
    let result = await iterator.next();
    while (!result.done) {
      bookings.push(JSON.parse(result.value.value.toString()));
      result = await iterator.next();
    }
    await iterator.close();
    return JSON.stringify(bookings);
  }

  @Transaction()
  public async DeleteBooking(ctx: Context, bookingID: string): Promise<void> {
    const exists = await this.BookingExists(ctx, bookingID);
    if (!exists) {
      throw new Error(`The booking ${bookingID} does not exist`);
    }
    await ctx.stub.deleteState(bookingID);
  }
}
The ctx.stub.getTxID() call inside RecordBooking is important. When Fabric commits a transaction, it assigns a unique transaction ID to it. By capturing this inside the chaincode and storing it as hyperledgerTxId in the booking record, we can later look up exactly which block this booking is in. That's what the block height endpoint does.
GetAllBookings uses getStateByRange('', '') with empty strings for both bounds - that means "all keys." It returns a cursor-based iterator rather than loading everything at once, which matters if the ledger grows large.
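Because the world state is in CouchDB, richer queries than key-range scans are also available from chaincode. Here is a sketch of a read-only method that could be added to BookingContract - the method name is mine, and getQueryResult (part of the stub API) only works when CouchDB is the state database:

@Transaction(false)
public async GetBookingsByAgency(ctx: Context, agencyID: string): Promise<string> {
  // CouchDB selector query; not available with the default LevelDB
  const iterator = await ctx.stub.getQueryResult(
    JSON.stringify({ selector: { agencyID } }),
  );
  const bookings = [];
  let result = await iterator.next();
  while (!result.done) {
    bookings.push(JSON.parse(result.value.value.toString()));
    result = await iterator.next();
  }
  await iterator.close();
  return JSON.stringify(bookings);
}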
The Privacy Design: Hashing User Identity
There’s a field in the Booking object called userHash, and a boolean called isUserAnonymous. This needs explaining.
The Fabric ledger in this architecture is not private — all peers in the network can read all bookings. Org1 (travel agencies) and Org2 (customers) share the same channel and therefore the same ledger. If a booking stored userName: "Yuvraj Raghuvanshi" and userEmail: "yuvraj@example.com" directly in the ledger entry, then every peer operator - including the travel agencies - could read that personal information.
Bitcoin has the same problem and solves it with a hash: your identity on the Bitcoin network is a hash of your public key, not your name. I used the same idea here. By default, only a hash of the user’s identifier is written to the ledger. The actual name and email stay in the customer backend’s database, which the travel agencies can’t access. If the user explicitly opts out of anonymity (isUserAnonymous: false), their internal application userID is written instead - still not their name or email, just an opaque identifier.
The personal information lives in the application layer. The ledger records that a booking was made. If you need to verify who made it, you go through the application, not the ledger directly.
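A minimal sketch of what goes on-chain in each mode. Hashing the internal userID with SHA-256 is my assumption - the exact input (and any salt) lives in the customer backend, not in the excerpts shown here:

import { createHash } from "crypto";

function toUserHash(userID: string): string {
  return createHash("sha256").update(userID).digest("hex");
}

// Default: anonymous - only the hash is written to the ledger
const anonymous = { userID: "", userHash: toUserHash("user-1042"), isUserAnonymous: true };
// Opted out: the opaque internal ID is written instead - still never name or email
const optedOut = { userID: "user-1042", userHash: "", isUserAnonymous: false };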
The Chaincode Lifecycle: A Five-Step Process
This is one of the parts of Fabric that confuses people most. Deploying chaincode is not like deploying a Docker image. There is a formal five-step governance process:
1. Package: Bundle the chaincode source (or in CCaaS mode, the connection.json and metadata.json) into a .tgz archive. Compute its SHA256 hash - this becomes the CHAINCODE_ID.
2. Install: Copy the package to each peer that will run the chaincode. The peer stores it locally but doesn’t activate it yet.
3. Approve: Each organization’s admin issues a vote approving the chaincode definition: this name, this version, this sequence number. In a real multi-party network, each organization does this independently. The channel’s endorsement policy determines how many approvals are needed before the chaincode can be committed.
4. Commit: Once enough organizations have approved, one admin commits the chaincode definition to the channel. This is a channel-wide operation: after commit, all peers on the channel recognize the chaincode as active.
5. Launch (CCaaS only): In CCaaS mode, after the lifecycle is complete, the chaincode container must actually be running. The peer connects to it over gRPC at the address from connection.json.
Here is the approval step in the deployment script, showing the key arguments:
peer lifecycle chaincode approveformyorg \
  --channelID mychannel \
  --name chaincode \
  --version 1 \
  --package-id chaincode:${sha256_of_package} \
  --sequence ${next_seq_num} \
  --orderer org0-orderer1.localho.st:443 \
  --tls --cafile /path/to/org0-tls-ca.pem
The --sequence argument is where I ran into a concrete problem.
The Sequence Number Bug
When you first deploy chaincode, --sequence 1 is correct. The sequence number tracks how many times the chaincode definition has been updated on the channel. First deployment: 1. First update: 2. And so on.
The original chaincode.sh from fabric-samples had --sequence 1 hardcoded everywhere - in both the approveformyorg and commit commands. This works exactly once. The reset script tears down the KIND cluster and rebuilds everything from scratch, so each reset starts fresh - which means sequence 1 is always correct after a full reset.
But Fabric also supports incremental chaincode updates without tearing down the cluster. If you install a new version of chaincode on a running network, you need to increment the sequence. With the hardcoded 1, this would fail.
The fix was to query the current committed sequence and increment it:
function get_next_sequence() {
  local channel=$1
  local cc_name=$2
  export_peer_context org1 peer1
  # querycommitted fails (or prints nothing) if the chaincode has never
  # been committed; fall back to 0 in either case
  current_seq=$(peer lifecycle chaincode querycommitted \
    -C ${channel} \
    -n ${cc_name} \
    --output json 2>/dev/null | jq -r '.sequence' || echo 0)
  echo $((${current_seq:-0} + 1))
}
If the chaincode hasn’t been committed yet, querycommitted returns nothing, we default to 0, and the next sequence is 1. If it's already been committed at sequence 1, the next sequence is 2. This makes incremental updates possible without touching the cluster.
Deploying to Both Orgs
The original chaincode.sh installed and approved chaincode only for org1. This is a problem.
The reason goes back to Fabric’s endorsement policy. When a client submits a transaction to the ledger, it doesn’t go directly to one peer. It goes to multiple peers for endorsement first. Each endorsing peer executes the chaincode, signs the result, and sends it back. The client collects enough endorsements to satisfy the policy, then sends the endorsed transaction to the orderers for ordering and commit.
If the endorsement policy requires signatures from both org1 and org2 — which is the correct setup for a two-party booking system — then org2 peers must also have the chaincode installed and approved. Otherwise, they can’t endorse transactions, and the policy is never satisfied.
I knew this from reading the architecture documentation before writing a line of deployment code. The fix was to loop over both orgs everywhere:
function install_chaincode() {
  local cc_package=$1
  for org in org1 org2; do
    install_chaincode_for ${org} peer1 ${cc_package}
    install_chaincode_for ${org} peer2 ${cc_package}
  done
}

function approve_chaincode() {
  local cc_name=$1
  local cc_id=$2
  local next_seq_num=$3
  for org in org1 org2; do
    export_peer_context ${org} peer1
    peer lifecycle chaincode approveformyorg \
      --channelID ${CHANNEL_NAME} \
      --name ${cc_name} \
      --version 1 \
      --package-id ${cc_id} \
      --sequence ${next_seq_num} \
      ...
  done
}
And for the CCaaS launches, org2 also needs its own chaincode sidecar pods. This required a separate org2-cc-template.yaml with the same structure as org1-cc-template.yaml but with org2 substituted throughout. Four CCaaS pods in total: org1peer1-ccaas-chaincode, org1peer2-ccaas-chaincode, org2peer1-ccaas-chaincode, org2peer2-ccaas-chaincode.
The REST Interface: Bridging the Fabric SDK and HTTP
Fabric doesn’t expose an HTTP API. The Fabric Node SDK communicates with peers over gRPC directly. To let the application backends interact with the blockchain over HTTP, there’s a separate service (the network REST interface) that wraps the SDK in an Express server.
I forked fabric-samples/asset-transfer-basic/rest-api-typescript, which had the right structure already. It manages two long-lived gRPC connections to the network (one authenticated as an org1 identity, one as an org2 identity) and keeps them open for the life of the server. Creating new connections per request is expensive and the wrong pattern with Fabric's SDK.
The key design decision in the original sample that I kept was the async job queue. Submitting a transaction to Fabric is not instant. The request goes to a peer for endorsement, then to the orderers for ordering, then to peers for validation and commit. This can take a few seconds. A naive synchronous REST endpoint would time out.
The solution is to return 202 Accepted immediately with a job ID, and queue the transaction for background processing. The caller polls /api/jobs/:jobId to find out when it's done:
POST /api/bookings
→ 202 Accepted, { jobId: "42" }
GET /api/jobs/42
→ { status: "complete", transactionId: "abc123..." }
The queue is implemented with BullMQ, which uses Redis as a backend. Each submitted transaction is a job in the queue. A worker process picks jobs off the queue, submits them to Fabric, and writes the result back to Redis. The job status endpoint reads from Redis.
The booking router handles authentication via API keys mapped to org identities. An API key for org1 tells the server to use the org1 connection profile and sign transactions with the org1 admin certificate. The key is passed as an X-Api-Key header. Both the customer backend and the travel agency backend have their own API keys.
// From auth.ts
const apiKeyOrgs: { [key: string]: string } = {
  [ORG1_APIKEY]: "Org1MSP",
  [ORG2_APIKEY]: "Org2MSP",
};
The endpoints the booking router exposes:
- GET /api/bookings - Evaluate GetAllBookings chaincode function
- GET /api/bookings/:bookingID - Evaluate ReadBooking
- POST /api/bookings - Submit RecordBooking (queued)
- DELETE /api/bookings/:bookingID - Submit DeleteBooking (queued)
- GET /api/bookings/:hyperledgerTxId/blockheight - Query block position
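From a backend, the submit-and-poll flow looks like this - a minimal sketch assuming the routes above, the X-Api-Key header, and a "pending" status string while the job is queued (the exact status values are an assumption about the sample's job states):

async function submitBooking(base: string, apiKey: string, booking: unknown) {
  const headers = { "X-Api-Key": apiKey, "Content-Type": "application/json" };
  const res = await fetch(`${base}/api/bookings`, {
    method: "POST",
    headers,
    body: JSON.stringify(booking),
  });
  const { jobId } = await res.json(); // 202 Accepted
  for (;;) {
    const job = await (await fetch(`${base}/api/jobs/${jobId}`, { headers })).json();
    if (job.status !== "pending") return job; // complete or failed
    await new Promise((resolve) => setTimeout(resolve, 1000)); // poll once a second
  }
}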
Reading the Blockchain: Block Height and QSCC
The assignment required that bookings be verifiable on the blockchain. The simplest form of verification is proving not just that a booking exists in the world state, but that it was committed in a specific block that has subsequent blocks built on top of it.
Fabric has a system chaincode called QSCC — Query System Chaincode. It’s a built-in chaincode that runs on every peer and exposes ledger metadata. You can query it to find which block contains a given transaction, or to get the current height of the chain.
The block height endpoint works like this: take the hyperledgerTxId stored in the booking, call QSCC's GetBlockByTxID function on the peer, and decode the returned protobuf to find the block number. Then call GetChainInfo to find the current chain height. The difference tells you how many blocks have been added since this booking was committed.
bookingsRouter.get("/:hyperledgerTxId/blockheight", async (req, res) => {
  // mspId is set by the auth middleware, derived from the API key
  const contract = req.app.locals[mspId]?.qsccContract as Contract;
  const hyperledgerTxId = req.params.hyperledgerTxId;

  // Ask QSCC: which block contains this transaction?
  const blockBytes = await contract.evaluateTransaction(
    "GetBlockByTxID",
    "mychannel",
    hyperledgerTxId,
  );
  // QSCC returns raw protobuf bytes - decode with fabric-protos
  const block = common.Block.decode(blockBytes);
  const blockHeight = block.header.number.toString();

  // Get current chain height
  const chainInfo = common.BlockchainInfo.decode(
    await contract.evaluateTransaction("GetChainInfo", "mychannel"),
  );
  const currentHeight = chainInfo.height.toString();

  return res.status(OK).json({
    hyperledgerTxId,
    blockHeight, // Block where this booking was committed
    blockchainHeight: currentHeight, // Current chain height
  });
});
The protobuf decoding uses fabric-protos, a package that contains the compiled protobuf definitions for all Fabric message types. common.Block.decode(blockBytes) takes the raw bytes from QSCC and gives you a structured object with header.number as the block index.
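Putting the endpoints together, verification from a client is two calls: fetch the booking, then ask where its transaction sits in the chain. A sketch - the booking ID and API key are placeholders, the port matches the port-forward shown later, and it assumes an ES module on Node 18+:

const base = "http://localhost:3003";
const headers = { "X-Api-Key": process.env.ORG2_APIKEY ?? "" };

const booking = await (
  await fetch(`${base}/api/bookings/BK-1`, { headers })
).json();
const proof = await (
  await fetch(`${base}/api/bookings/${booking.hyperledgerTxId}/blockheight`, { headers })
).json();
console.log(`committed in block ${proof.blockHeight}; chain is now at ${proof.blockchainHeight}`);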

Screenshot: Webapp (Fabric REST interface) showing hyperledgerTxId, blockHeight: 5, blockchainHeight: 7 — meaning 2 blocks have been added since this booking
The Reset Script and 30 Hours of Debugging
Every configuration change, every chaincode update, every time something was broken beyond quick repair — the reset script. It tears down everything and rebuilds:
./network down # Bring down peers, orderers, chaincode
./network unkind # Delete the KIND cluster
./network kind # Create a new KIND cluster
./network cluster init # Install cert-manager, nginx, set up namespaces
./network up # Launch CAs, peers, orderers
./network channel create # Create mychannel, join peers
./network chaincode deploy chaincode chaincode/ # Full chaincode lifecycle
./network rest-easy # Build and deploy the REST interface
kubectl -n test-network port-forward svc/fabric-rest-sample 3003:3000
From scratch, this takes about fifteen minutes. I ran it a lot.
yuvraj@Windows-11:~/mytravel/hyperledger$ ./reset
Fabric image versions: Peer (2.5.15), CA (1.5.19)
Fabric binary versions: Peer (2.5.15), CA (1.5.19)
Shutting down test network "test-network":
✅ - Stopping Fabric services ...
✅ - Scrubbing Fabric volumes ...
✅ - Deleting namespace "test-network" ...
🏁 - Fabric network is down.
Fabric image versions: Peer (2.5.15), CA (1.5.19)
Fabric binary versions: Peer (2.5.15), CA (1.5.19)
Deleting KIND cluster "kind":
✅ - Deleting KIND cluster kind ...
✅ - Deleting container registry "kind-registry" at localhost:5000 ...
🏁 - KIND Cluster is gone.
Fabric image versions: Peer (2.5.15), CA (1.5.19)
Fabric binary versions: Peer (2.5.15), CA (1.5.19)
Creating KIND cluster "kind":
✅ - Creating cluster "kind" ...
✅ - Launching container registry "kind-registry" at localhost:5000 ...
🏁 - KIND cluster is ready
Fabric image versions: Peer (2.5.15), CA (1.5.19)
Fabric binary versions: Peer (2.5.15), CA (1.5.19)
Initializing K8s cluster
✅ - Launching kind ingress controller ...
✅ - Launching cert-manager ...
✅ - Waiting for cert-manager ...
✅ - Waiting for ingress controller ...
🏁 - Cluster is ready
Fabric image versions: Peer (2.5.15), CA (1.5.19)
Fabric binary versions: Peer (2.5.15), CA (1.5.19)
Launching network "test-network":
✅ - Creating namespace "test-network" ...
✅ - Provisioning volume storage ...
✅ - Creating fabric config maps ...
✅ - Initializing TLS certificate Issuers ...
✅ - Launching Fabric CAs ...
✅ - Enrolling bootstrap ECert CA users ...
✅ - Creating local node MSP ...
✅ - Launching orderers ...
✅ - Launching peers ...
🏁 - Network is ready.
Fabric image versions: Peer (2.5.15), CA (1.5.19)
Fabric binary versions: Peer (2.5.15), CA (1.5.19)
Creating channel "mychannel":
✅ - Registering org Admin users ...
✅ - Enrolling org Admin users ...
✅ - Creating channel MSP ...
✅ - Creating channel genesis block ...
✅ - Joining orderers to channel mychannel ...
✅ - Joining org1 peers to channel mychannel ...
✅ - Joining org2 peers to channel mychannel ...
🏁 - Channel is ready.
Fabric image versions: Peer (2.5.15), CA (1.5.19)
Fabric binary versions: Peer (2.5.15), CA (1.5.19)
Deploying chaincode
✅ - Building chaincode image chaincode ...
✅ - Publishing chaincode image localhost:5000/chaincode ...
✅ - Packaging ccaas chaincode chaincode ...
✅ - Launching chaincode container "localhost:5000/chaincode" ...
✅ - Launching chaincode container "localhost:5000/chaincode" ...
✅ - Launching chaincode container "localhost:5000/chaincode" ...
✅ - Launching chaincode container "localhost:5000/chaincode" ...
✅ - Installing chaincode for org org1 peer peer1 ...
✅ - Installing chaincode for org org1 peer peer2 ...
✅ - Installing chaincode for org org2 peer peer1 ...
✅ - Installing chaincode for org org2 peer peer2 ...
✅ - Approving chaincode chaincode with ID chaincode:105d1916755525d103749c9d6245f1553cd7dc6b10be036d4cd574b050f99bf1 for org1 ...
✅ - Approving chaincode chaincode with ID chaincode:105d1916755525d103749c9d6245f1553cd7dc6b10be036d4cd574b050f99bf1 for org2 ...
✅ - Committing chaincode chaincode ...
🏁 - Chaincode is ready.
Fabric image versions: Peer (2.5.15), CA (1.5.19)
Fabric binary versions: Peer (2.5.15), CA (1.5.19)
2026-05-01 14:59:49.508 UTC 0001 INFO [chaincodeCmd] chaincodeInvokeOrQuery -> Chaincode invoke successful. result: status:200 payload:"[]"
Fabric image versions: Peer (2.5.15), CA (1.5.19)
Fabric binary versions: Peer (2.5.15), CA (1.5.19)
Launching fabric-rest-sample application:
✅ - Constructing fabric-rest-sample connection profiles ...
✅ - Preparing the typescript REST interface ...
The fabric-rest-sample has started.
See https://github.com/hyperledger/fabric-samples/tree/main/asset-transfer-basic/rest-api-typescript for additional usage details.
To access the endpoint:
export SAMPLE_APIKEY=97834158-3224-4CE7-95F9-A148C886653E
curl -s --header "X-Api-Key: ${SAMPLE_APIKEY}" http://fabric-rest-sample.localho.st/api/assets
🏁 - Fabric REST sample is ready.
Forwarding from 127.0.0.1:3003 -> 3000
Forwarding from [::1]:3003 -> 3000
Somewhere in rest_sample.sh there is this function:
# This magical awk script led to 30 hours of debugging a "TLS handshake error"
# moral: do not edit / alter the number of '\' in the following transform:
function one_line_pem {
  echo "`awk 'NF {sub(/\\n/, ""); printf "%s\\\\\\n",$0;}' $1`"
}
This converts a multi-line PEM certificate file into a single-line string, which can be embedded in the JSON connection profile that the REST interface uses to connect to the peers. PEM files look like this:
-----BEGIN CERTIFICATE-----
MIICnTCCAkSgAwIBAgIUHqVnDpJd...
-----END CERTIFICATE-----
The JSON connection profile needs the certificate as a single string with literal \n characters instead of actual newlines. The awk script does that conversion. The pile of backslashes in the printf format string is not a mistake - it's exactly what's needed to survive multiple layers of interpretation (awk's string parsing, the shell's quoting and command substitution, and then the final JSON embedding).
I found it on Stack Overflow. The comment saying not to edit was already there. I ignored the comment. At some point while trying to understand what the function did, I adjusted the backslashes. The resulting connection profile looked syntactically fine (valid JSON, readable PEM string) but the embedded certificate was subtly malformed when parsed by the TLS library. The peers rejected connections with a generic TLS handshake error. Nothing in the error message pointed to the certificate content.
Thirty hours later I found the diff, restored the original function, and the network came back up immediately.
The lesson I took from this is specific: when you copy a piece of code that works and the original author has left a warning comment, take the comment more seriously than you take your own curiosity.
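For contrast, the same transformation in TypeScript has exactly one escaping layer, because a real JSON serializer does the work. A sketch - the path and the profile shape are illustrative:

import { readFileSync } from "fs";

const pem = readFileSync("org1-tls-ca.pem", "utf8");
// JSON.stringify turns real newlines into literal \n - no backslash puzzle
console.log(`{ "tlsCACerts": { "pem": ${JSON.stringify(pem)} } }`);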
What It Feels Like to Develop with Fabric
Hyperledger Fabric is not built for rapid iteration. The formal chaincode lifecycle (package, install, approve, commit) exists for legitimate reasons in a real multi-organization network, where independent organizations need to independently audit and approve changes to shared business logic before those changes take effect. In that context, the process is the point.
In a student project with one developer and a fifteen-minute reset cycle, that justification is harder to feel - the lifecycle is mostly friction. But some of the design choices still made genuine sense even at this scale.
The privacy model was the clearest one. Booking records on a distributed ledger are readable by every peer operator in the network. Storing userName: "Alice" and userEmail: "alice@example.com" directly in those records was obviously wrong. The user hash approach - borrow the idea from Bitcoin, keep personal data in the application layer, put only an opaque identifier on the chain - is the correct design regardless of whether you're building a student project or a production system.
The block height endpoint also felt worth building properly. Returning a booking record from the world state proves the booking exists now. Returning the block number and the current chain height proves when it was committed and that the chain has grown since then, making the record progressively harder to retroactively alter. That’s what blockchain verification actually means, and it’s different from just having a database record.
The rest of it (the endorsement policy, the CA infrastructure, the Raft consensus cluster) was mostly infrastructure I set up correctly and then tried not to touch. Which, given the awk script experience, is probably the right approach.
The full source is in the YuvrajRaghuvanshiS/mytravel repository on GitHub - a single monorepo for the MyTravel project.