Part 2 of a two-part case study on building an ERC-20 rewards service in Go. This one covers stdlib signing, the event loop shape that runs the async pipelines, and replay protection at the consumer end.
TL;DR
- Signing an on-chain transaction is two library calls. `crypto/ecdsa` plus `go-ethereum/types` land it in five lines.
- Event pipelines are one `for { select }` over three channels. The same shape runs the deposit monitor and the reconciler.
- Reach for Zerohash, Fireblocks, or Circle first. Write this code only when self-custody is part of the product.
- Go does not give you ABI (Application Binary Interface) encoding, reorg handling, gas estimation, or HSM (Hardware Security Module) integration. Those are domain problems, not language problems.
Recap of Part 1
Part 1 covered three consistency problems in an ERC-20 (Ethereum's fungible token standard) backend: ordering (nonce sequencing across goroutines and replicas), idempotency (retries that do not double-mint), and atomicity (the Transactional Outbox pattern that keeps Postgres and the broker in agreement). One thread ran through all three: Go's explicit error values turn each failure case into a named domain object. Part 2 picks up where atomic dispatch hands off to the live chain: signing, event loops, and replay protection.
Read Part 1: what the language solves in a crypto backend
When to reach for an SDK
Custody is hard, and compliance is harder. Before any of this code earns its place, the commercial alternatives deserve the opening paragraph.
| Provider | What it abstracts |
|---|---|
| Zerohash | Custody, signing, settlement, and compliance for fintechs |
| Fireblocks | Institutional custody with MPC (Multi-Party Computation) and a policy engine |
| Circle | USDC issuance, payouts, treasury, and a wallets API |
| Coinbase Prime | Institutional custody and trading with an API |
| BitGo | Multi-sig custody, staking, and a wallets API |
An SDK (Software Development Kit) is the correct choice when crypto sits next to the core business, per-transaction fees are acceptable at the target volume, on-chain programmability requirements are shallow, and the team does not want to own signing key material. The self-custody path is correct when custody is part of the product, BaaS (Blockchain as a Service) fees stop being economic at scale, programmability exceeds what the SDK exposes, or regulation forces in-house signing.

A Go service on the self-custody path ships as a single static binary. `CGO_ENABLED=0 go build` produces an executable that runs on a `gcr.io/distroless/static` base image. No shell, no package manager, and the container layer stays under 20MB. A Node or JVM (Java Virtual Machine) service at the same stage carries its runtime, its dependency tree, and warm-up time before the first signing call. The patterns below assume the second path, and most of them still apply to any team that writes wrappers around a BaaS response.
Problem 4: signing with the stdlib
A conventional payment API looks like stripe.Charge.Create(params). The SDK handles authentication, idempotency keys, retries, and webhooks. For an on-chain transaction with self-custody, no equivalent exists. Part 1 closed with an event dispatched from the Transactional Outbox. The worker downstream of the broker signs that event and broadcasts it to the chain. The backend computes transaction bytes, signs them with the wallet's private key, and submits the result to an RPC (Remote Procedure Call) endpoint. In Go, that is five lines:
```go
signer := types.NewLondonSigner(chainID)
signed, err := types.SignTx(unsigned, signer, privateKey)
if err != nil {
    return common.Hash{}, fmt.Errorf("sign tx for %s: %w", from.Hex(), err)
}
return signed.Hash(), client.SendTransaction(ctx, signed)
```
NewLondonSigner encodes the chain identity and selects EIP-1559 rules. SignTx computes the ECDSA (Elliptic Curve Digital Signature Algorithm) signature over the RLP (Recursive Length Prefix) encoded transaction. These are native Go implementations, not bindings to a C library or a JNI (Java Native Interface) wrapper. The source is in go-ethereum/core/types and readable in the same language the service ships. The returned transaction carries its signature, and its hash is the broadcast identifier the database persists. SendTransaction is the RPC, and ctx carries the trace ID and the deadline.
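The article leaves the construction of `unsigned` implicit. For orientation, here is a sketch of what an EIP-1559 transaction build can look like; the fee numbers and the `nonce`, `tokenAddr`, and `calldata` variables are illustrative assumptions, not values from the repository:

```go
// Illustrative sketch only. nonce, tokenAddr, and calldata are assumed to
// come from the nonce sequencer, config, and ABI encoding respectively.
unsigned := types.NewTx(&types.DynamicFeeTx{
    ChainID:   chainID,
    Nonce:     nonce,                      // reserved by the Part 1 sequencer
    GasTipCap: big.NewInt(2_000_000_000),  // 2 gwei priority fee, illustrative
    GasFeeCap: big.NewInt(40_000_000_000), // 40 gwei fee ceiling, illustrative
    Gas:       65_000,                     // typical ERC-20 transfer budget
    To:        &tokenAddr,                 // the token contract, not the recipient
    Value:     big.NewInt(0),              // ERC-20 amounts travel in calldata
    Data:      calldata,                   // ABI-encoded transfer(to, amount)
})
```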
Where the key lives is the real question. An environment variable is fine for development and for nothing else. Cloud KMS (Key Management Service, available as AWS KMS or Google Cloud KMS) is the right default for staging and low-volume production. crypto/ecdsa does not sign against a KMS directly. The integration is a thin adapter that calls the KMS Sign API and returns the r, s, v tuple that the go-ethereum signer wraps. HSM or MPC is the answer at high volume, where signing latency climbs into the hundreds of milliseconds per call and key caching becomes a correctness question rather than a performance one.
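A minimal sketch of that adapter against AWS KMS, assuming the aws-sdk-go-v2 client. The DER-to-(r, s, v) conversion is described in comments rather than implemented, and `toEthereumSignature` is a hypothetical helper name:

```go
package signer

import (
    "context"
    "fmt"

    "github.com/aws/aws-sdk-go-v2/service/kms"
    kmstypes "github.com/aws/aws-sdk-go-v2/service/kms/types"
)

// KMSSigner signs transaction digests without the key ever leaving KMS;
// only the 32-byte hash crosses the wire.
type KMSSigner struct {
    client *kms.Client
    keyID  string
}

func (s *KMSSigner) SignDigest(ctx context.Context, digest []byte) ([]byte, error) {
    out, err := s.client.Sign(ctx, &kms.SignInput{
        KeyId:            &s.keyID,
        Message:          digest, // the transaction hash, already computed
        MessageType:      kmstypes.MessageTypeDigest,
        SigningAlgorithm: kmstypes.SigningAlgorithmSpecEcdsaSha256,
    })
    if err != nil {
        return nil, fmt.Errorf("kms sign: %w", err)
    }
    // out.Signature is DER-encoded (r, s). To produce the 65-byte r||s||v
    // form go-ethereum expects: decode the DER, normalize s to the lower
    // half of the curve order (EIP-2), then derive v by trial public-key
    // recovery against the known KMS public key.
    return toEthereumSignature(out.Signature) // hypothetical helper
}
```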
ECDSA runs at P99 (99th-percentile latency) under 10ms on modern hardware. Go's garbage collector does not pause the goroutine running ECDSA for the duration of a full heap scan. JVM signers without careful GC tuning can spike at the wrong moment. For a service with a broadcast SLA (Service Level Agreement), predictable latency is more valuable than raw throughput, and Go's concurrent GC delivers it without configuration.
Signing itself is CPU-local and fast. The temptation is to run a goroutine per transaction. Part 1 is the reason not to. Two goroutines signing from the same wallet race on the nonce. The correct parallelism axis is across wallets, not across transactions from one wallet. That is why Part 1 and Part 2 are one series rather than two.
Problem 5: event pipelines
Every crypto backend runs at least two long-running loops: a deposit monitor that ingests inbound transfers, and a reconciler that heals state when the database and the chain disagree. Each one must survive rolling deploys, propagate context into every downstream call, and shut down within the Kubernetes grace period. The pattern is the same in each:
```go
func (r Relay) Run(ctx context.Context) {
    ticker := time.NewTicker(r.interval)
    defer ticker.Stop()
    for {
        select {
        case <-ctx.Done():
            return
        case <-r.done:
            return
        case <-ticker.C:
            r.tick(ctx)
        }
    }
}
```
Three channels and one select. ctx.Done() catches external cancellation. SIGTERM (the Unix process termination signal) from Kubernetes lands here when the runtime is wired with signal.NotifyContext. r.done catches internal shutdown, for example a health-check failure that asks the component to exit before the pod does. ticker.C drives the work cadence, and r.tick(ctx) passes the context forward so downstream RPC calls inherit the deadline and the trace ID.
Why not cron? Three reasons. The loop carries in-memory state across ticks, a cached block height for example, which cron cannot. The loop propagates context, which cron cannot. The loop shuts down deterministically on SIGTERM, which cron does not. A cron job that misses a tick because the previous invocation is still running is a distributed systems bug waiting to happen.
Two failure modes live inside this pattern. Omitting `defer ticker.Stop()` leaks the ticker's underlying runtime timer across hot reloads. Calling `r.tick(context.Background())` instead of forwarding `ctx` severs trace propagation and deadline cascading, which turns a single slow RPC into a stuck loop.
Shutdown is handled with sync.Once to make Stop safe to call from multiple goroutines without closing a channel twice:
```go
func (m *DepositMonitor) Stop() {
    m.once.Do(func() {
        close(m.done)
    })
}
```
`close(m.done)` unblocks the `case <-m.done` branch in the `for { select }` loop, the same internal-shutdown channel the Relay loop reads as `r.done`. `sync.Once` guarantees the close happens exactly once regardless of how many callers race on shutdown. The pattern composes cleanly with `os/signal.NotifyContext`: the signal handler cancels the context, the loop exits, the caller calls `Stop` as cleanup, and `once.Do` is a no-op.
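Wired together, the composition looks roughly like this; `NewDepositMonitor` and `cfg` are assumed names for illustration, not from the repository:

```go
package main

import (
    "context"
    "os"
    "os/signal"
    "syscall"
)

func main() {
    // SIGTERM from Kubernetes cancels ctx, which fires the loop's
    // ctx.Done() branch.
    ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
    defer stop()

    m := NewDepositMonitor(cfg) // hypothetical constructor and config
    finished := make(chan struct{})
    go func() {
        m.Run(ctx)
        close(finished)
    }()

    <-ctx.Done() // external shutdown request
    m.Stop()     // idempotent cleanup: once.Do makes a second call a no-op
    <-finished   // let the loop drain before the process exits
}
```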
Bounded workers when the tick fans out
The `for { select }` loop fires every interval. A naive tick walks the work list serially, which is fine at low volume. A service scanning hundreds of wallets per tick needs bounded parallelism: dispatch each wallet to its own goroutine, cap the concurrency so the RPC pool does not starve, and wire shutdown to the same context that kills the outer loop. `errgroup.WithContext` plus `semaphore.NewWeighted` from `golang.org/x/sync` gives that in 20 lines:
```go
func (m DepositMonitor) tick(ctx context.Context) {
    wallets := m.repo.ActiveWallets(ctx)
    sem := semaphore.NewWeighted(8) // cap concurrent scans so the RPC pool does not starve
    g, gctx := errgroup.WithContext(ctx)
    for _, w := range wallets {
        if err := sem.Acquire(gctx, 1); err != nil {
            break // context cancelled: stop dispatching, still wait for in-flight workers below
        }
        g.Go(func() error {
            defer sem.Release(1)
            return m.scanWallet(gctx, w) // w is per-iteration in Go 1.22+; add w := w on older toolchains
        })
    }
    if err := g.Wait(); err != nil {
        m.log.ErrorContext(ctx, "tick failed", "err", err)
    }
}
```
Eight workers, one errgroup, one shared context. If the outer loop receives SIGTERM, gctx cancels, sem.Acquire returns immediately, in-flight workers finish the RPC call they already started and release, and g.Wait returns within the Kubernetes grace period. The first worker that fails cancels the rest through the errgroup, which matches the semantics the operator wants: a systemic RPC outage stops the tick fast instead of burning budget on 200 timeouts.
When does this lose to Kafka Connect, Debezium, or Flink? When the event rate dwarfs what a single process can reasonably handle, when the business already operates a streaming platform, or when the downstream consumers are polyglot. Until one of those holds, a 20-line Go worker pool beats a pipeline you do not own on ownership, operability, and on-call surface.
Problem 6: replay protection and idempotent consumption
Confirmed events from the chain are not the end of the pipeline. A deposit that triggers a credit in the database must not be credited twice when the deposit monitor restarts mid-batch, when the RPC provider replays a block, or when a reorg heals and the same log surfaces again. The contract the backend needs from the loop is at-least-once delivery with idempotent consumption. The chain gives the first half. The database enforces the second.
Two primitives carry the work. A unique constraint on (tx_hash, log_index) in the table that records processed events, and a persisted offset that records how far the monitor has scanned. The offset lets the loop resume after a crash without rescanning from genesis. The unique constraint lets a rescan be safe when it happens anyway.
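A sketch of what those two primitives can look like in the schema; table and column names are illustrative assumptions, not lifted from the repository:

```go
// Illustrative DDL, embedded the way a migration tool would ship it.
const migration = `
CREATE TABLE processed_events (
    tx_hash   BYTEA NOT NULL,
    log_index INT   NOT NULL,
    PRIMARY KEY (tx_hash, log_index)  -- rejects any replayed (tx, log) pair
);

CREATE TABLE scan_offsets (
    monitor    TEXT   PRIMARY KEY,  -- e.g. 'deposit-monitor'
    last_block BIGINT NOT NULL      -- resume point after a crash
);`
```

With that in place, the consumer wraps the dedup insert and the credit in one transaction: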
```go
func (m DepositMonitor) consume(ctx context.Context, ev Event) error {
    return m.tx.WithTransaction(ctx, func(ctx context.Context) error {
        if err := m.repo.MarkProcessed(ctx, ev.TxHash, ev.LogIndex); err != nil {
            if errors.Is(err, domain.ErrAlreadyProcessed) {
                return nil
            }
            return fmt.Errorf("marking event: %w", err)
        }
        return m.repo.CreditUser(ctx, ev.To, ev.Amount)
    })
}
```
Three properties fall out of this shape. MarkProcessed returns ErrAlreadyProcessed when the unique constraint rejects the insert, and the transaction commits a no-op instead of a double credit. The credit and the dedup row commit in the same transaction, so a partial apply is impossible. And the unique index is the source of truth, not the application cache, which means a new replica coming up cold cannot double-spend a replayed event.
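One plausible shape for MarkProcessed, assuming database/sql and an executor that WithTransaction has scoped to the surrounding transaction; the helper names are illustrative, not the article's actual repository code:

```go
// txFromContext is a hypothetical helper that returns the *sql.Tx
// WithTransaction stored in ctx, so this insert joins the surrounding
// transaction.
func (r *Repo) MarkProcessed(ctx context.Context, txHash string, logIndex uint) error {
    res, err := txFromContext(ctx).ExecContext(ctx,
        `INSERT INTO processed_events (tx_hash, log_index)
         VALUES ($1, $2)
         ON CONFLICT (tx_hash, log_index) DO NOTHING`,
        txHash, logIndex)
    if err != nil {
        return fmt.Errorf("insert processed event: %w", err)
    }
    n, err := res.RowsAffected()
    if err != nil {
        return fmt.Errorf("rows affected: %w", err)
    }
    if n == 0 {
        // The unique constraint rejected the insert: the event was already
        // consumed, so surface the named domain error the consumer checks.
        return domain.ErrAlreadyProcessed
    }
    return nil
}
```

ON CONFLICT DO NOTHING detects the replay through the affected-row count rather than a constraint violation, which keeps Postgres from aborting the transaction and lets the no-op commit stay cheap.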
Why not rely on the chain to tell you what is confirmed? Because the chain does not know what your backend has already done with its confirmed events. The database is the only actor with that memory. Every other pattern in this series leans on the same principle: the chain is the source of events, the database is the source of effects.
What Go does not give you
The language does not know what a reorg is. It does not know that an ABI change on a contract you do not own can break decoding in production six months after deploy. It does not know that a gas price estimate from a public RPC can lag the mempool by a full block. It does not know that a signing key must never leave an HSM or a cloud KMS, and that every path that touches the private key must be audited on every pull request.
It also does not hide its costs. Spring and NestJS carry request scope through ThreadLocal and AsyncLocalStorage, and the developer rarely sees it. Go forces ctx context.Context as the first parameter of every function that touches I/O. A misplaced context.Background() severs trace propagation and deadline cascading in one line. That explicitness is a virtue under load and a tax under review. A payments codebase that takes the tax seriously catches the bug before it ships.
The Transactional Outbox pattern from Part 1 covers the case where the write and the publish share a database. When the operation crosses service boundaries with independent stores, Outbox is not enough, and saga becomes the next pattern to reach for. I wrote the distributed version in a separate piece on concurrent transactions, saga, queues, and DDD (Domain-Driven Design) aggregates, which pairs with the in-process patterns covered here.
The stack this series leans on is Go plus Postgres plus an RPC provider. What sits above that is a design problem, not a language problem. Go makes the solutions short. It does not write them for you.
The honest take
Two articles, six patterns, zero frameworks. These Go properties directly earned their place in this domain.
- Goroutines cheap enough to run one per monitored wallet without a thread pool
- `for { select }` as the full concurrency model for a background worker
- `CGO_ENABLED=0` for a static binary under 20MB, no runtime required
- `crypto/ecdsa` with no C bindings or JNI overhead
- `context.Context` as the single pipe for cancellation, deadlines, and trace IDs
None of that is a reason to rewrite a working Java or Python service. It is a reason to start a new fintech service here.
If your team is evaluating stacks for a service that signs transactions, monitors a chain, or enforces idempotency against a public RPC endpoint, the patterns above are the argument. They fit the language without adapters or base classes.
There is one honest caveat. Go does not shrink the domain. Reorg handling, gas estimation under congestion, HSM integration, and contract ABI upgrades are still hard engineering problems. The language makes the implementation shorter. The domain knowledge is still yours to carry.
Building something similar, or hitting a problem not covered here? Drop questions and corrections in the comments.
References
- go-ethereum: official Go implementation of the Ethereum protocol. Source of `TokenContract`, `types.SignTx`, and the signer types used in code snippets.
- golang.org/x/sync: `errgroup` and `semaphore` packages referenced in the bounded-workers pattern.
- Martin Kleppmann, How to do distributed locking: context for the replay protection and fencing approach covered in Part 1.
- Fireblocks MPC Wallet API: managed signing alternative when self-custody is not viable.
- Zerohash Developer Docs: regulated crypto settlement infrastructure with API, webhooks, and SDK for fintechs.
- Circle Developer Platform: programmable USDC wallets and payments APIs.
- Coinbase Prime API: institutional custody, trading, and custody APIs for financial institutions.
- BitGo: multi-sig custody, staking, and wallet services API.
- fascari/cashback-platform: Go blockchain adapter service used as the running example throughout this series.
This is part 2 of a two-part series on Fintech on Go. Part 1 covered nonce sequencing, idempotent ERC-20 minting, and the Transactional Outbox pattern, the three consistency problems that surface before a transaction is signed and broadcast.