In modern Fintech systems, microservices are distributed and autonomous — but often need access to shared state in real time. A common challenge arises when one service, like a Tax Service, needs to act on data owned by another, such as a Payments Service.
This article shows how to implement Event-Carried State Transfer (ECST) to solve this problem cleanly and reliably, using Kafka for messaging, Go for services, and PostgreSQL for durability. We'll explore how to calculate taxes in real time — without REST calls, shared databases, or fragile coordination.
Architecture Overview
We’re working with two services:
Payments Service
- Owns the Order and Transaction domains.
- Emits orders.created with full order details.
- Emits transactions.created as financial events happen.
Tax Service
- Listens to both orders.created and transactions.created.
- Builds a durable local cache of orders using PostgreSQL.
- On each transaction, it retrieves the related order and calculates tax.
- Emits a tax.calculated event with the enriched data.
How Kafka + Go + PostgreSQL Work Together
Kafka streams events with full entity state (orders.created, transactions.created).
Go handles the concurrency and performance needs of both producers and consumers.
PostgreSQL provides a durable local cache, so the Tax Service can store what it sees — and make decisions offline, quickly, and deterministically.
This trio is battle-tested and scales well in high-volume, audit-heavy environments.
What is Event-Carried State Transfer?
Event-Carried State Transfer (ECST) is a messaging pattern where a service emits not just event metadata or IDs, but the entire state of an entity. That way, any downstream service can cache and act on it independently.
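To make the contrast concrete, here is a minimal Go sketch (the type names are illustrative, not part of the article's services) comparing an ID-only event with a state-carrying ECST event:

import "time"

// Thin event: the consumer only learns that something happened and must
// call back into the Payments Service for the details.
type OrderCreatedThin struct {
	OrderID string `json:"order_id"`
}

// ECST event: the full order state travels with the event, so any consumer
// can cache it locally and act on it without further lookups.
type OrderCreated struct {
	OrderID     string    `json:"order_id"`
	CustomerID  string    `json:"customer_id"`
	Country     string    `json:"country"`
	Currency    string    `json:"currency"`
	Amount      int32     `json:"amount"`
	ProductType string    `json:"product_type"`
	CreatedAt   time.Time `json:"created_at"`
	UpdatedAt   time.Time `json:"updated_at"`
}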
Protobuf Models (via Buf)
syntax = "proto3";

import "google/protobuf/timestamp.proto";

message Order {
  string order_id = 1;
  string customer_id = 2;
  string country = 3;
  string currency = 4;
  int32 amount = 5;
  string product_type = 6;
  google.protobuf.Timestamp created_at = 7;
  google.protobuf.Timestamp updated_at = 8;
}
message Transaction {
  string transaction_id = 1;
  string order_id = 2;
  string customer_id = 3;
  int32 amount = 4;
  string status = 5;
  google.protobuf.Timestamp created_at = 6;
  google.protobuf.Timestamp updated_at = 7;
}
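The sections below store these events as JSON in PostgreSQL. One way to produce that JSON from the generated Go types is protojson from google.golang.org/protobuf; this is only a sketch, and the toJSON helper name is illustrative:

import (
	"google.golang.org/protobuf/encoding/protojson"
	"google.golang.org/protobuf/proto"
)

// toJSON converts any generated Protobuf message (for example the Order
// message above, once compiled to Go) into the JSON document that the
// cache stores in its payload column.
func toJSON(msg proto.Message) ([]byte, error) {
	return protojson.Marshal(msg)
}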
PostgreSQL Schema for ECST Order Cache
Instead of querying Payments for every transaction, the Tax Service builds its own ECST snapshot of the Orders it needs — one row per order, stored as JSON.
Why JSONB?
- You can query it for debugging or analytics (see the sketch after the schema below).
- You don’t need to tightly couple DB schema to Protobuf definitions.
- You can store the exact payload you saw on the Kafka topic.
CREATE TABLE order_events (
  id               VARCHAR PRIMARY KEY,
  payload          JSONB NOT NULL,
  order_created_at TIMESTAMP(6) WITHOUT TIME ZONE NOT NULL,
  created_at       TIMESTAMP(6) WITHOUT TIME ZONE NOT NULL DEFAULT now(),
  updated_at       TIMESTAMP(6) WITHOUT TIME ZONE NOT NULL DEFAULT now()
);
- id: order ID
- payload: full order data as JSON (Protobuf → JSON)
- order_created_at: original event timestamp
- created_at: when the row was inserted
- updated_at: when it was last updated
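As a sketch of the "query it for debugging or analytics" point, the JSONB payload can be inspected with plain SQL. The helper below is illustrative and assumes the order_events table and the country field from the Order message above:

import (
	"database/sql"
	"fmt"
)

// countOrdersByCountry queries the JSONB payload directly: the ->> operator
// extracts a field as text, so the cache can be inspected without
// unmarshalling every row.
func countOrdersByCountry(db *sql.DB) (map[string]int, error) {
	rows, err := db.Query(`
		SELECT payload->>'country', COUNT(*)
		FROM order_events
		GROUP BY 1
	`)
	if err != nil {
		return nil, fmt.Errorf("query: %w", err)
	}
	defer rows.Close()

	counts := make(map[string]int)
	for rows.Next() {
		var country string
		var n int
		if err := rows.Scan(&country, &n); err != nil {
			return nil, fmt.Errorf("scan: %w", err)
		}
		counts[country] = n
	}
	return counts, rows.Err()
}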
Writing Orders to the Cache
When orders.created events arrive, the Tax Service writes them to Postgres:
func upsertOrderJSON(db *sql.DB, order Order) error {
	jsonData, err := json.Marshal(order)
	if err != nil {
		return fmt.Errorf("marshal: %w", err)
	}
	// Upsert: replayed or updated orders.created events simply refresh the row.
	_, err = db.Exec(`
		INSERT INTO order_events (id, payload, order_created_at)
		VALUES ($1, $2, to_timestamp($3))
		ON CONFLICT (id) DO UPDATE
		SET payload = EXCLUDED.payload,
		    updated_at = now()
	`, order.OrderID, jsonData, order.CreatedAt)
	return err
}
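For context, a consumer loop feeding this upsert might look like the sketch below. It assumes the github.com/segmentio/kafka-go client and a hypothetical decodeOrder helper that turns the event payload into the Order struct used above; neither choice is prescribed by the pattern:

import (
	"context"
	"database/sql"
	"log"

	"github.com/segmentio/kafka-go"
)

func consumeOrders(ctx context.Context, db *sql.DB) error {
	// One consumer group member reading the full-state orders.created stream.
	reader := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"},
		GroupID: "tax-service",
		Topic:   "orders.created",
	})
	defer reader.Close()

	for {
		msg, err := reader.ReadMessage(ctx)
		if err != nil {
			return err // context cancelled or unrecoverable reader error
		}
		order, err := decodeOrder(msg.Value) // hypothetical helper: event bytes -> Order
		if err != nil {
			log.Printf("skipping malformed orders.created event: %v", err)
			continue
		}
		if err := upsertOrderJSON(db, order); err != nil {
			return err
		}
	}
}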
Reading Orders and Calculating Tax
When a transactions.created event arrives, the service fetches the order and calculates tax:
func getOrderByID(db *sql.DB, id string) (Order, error) {
	row := db.QueryRow(`SELECT payload FROM order_events WHERE id = $1`, id)
	var raw []byte
	err := row.Scan(&raw)
	if err != nil {
		return Order{}, fmt.Errorf("row scan: %w", err)
	}
	var order Order
	err = json.Unmarshal(raw, &order)
	if err != nil {
		return Order{}, fmt.Errorf("unmarshal: %w", err)
	}
	return order, nil
}
func calculateTax(order Order, tx Transaction) TaxedTransaction {
	// Country-specific VAT rates; unknown countries fall back to 0.
	vat := map[string]float64{"DE": 0.19, "FR": 0.20}[order.Country]
	if order.ProductType == "crypto" {
		vat = 0.0
	}
	tax := float64(order.Amount) * vat
	return TaxedTransaction{
		TransactionID: tx.TransactionID,
		OrderID:       order.OrderID,
		CustomerID:    order.CustomerID,
		BaseAmount:    order.Amount,
		TaxApplied:    tax,
		TotalAmount:   float64(order.Amount) + tax,
		Country:       order.Country,
		ProductType:   order.ProductType,
	}
}
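The enriched result can then be published as the tax.calculated event. A minimal producer sketch, again assuming github.com/segmentio/kafka-go and JSON encoding of the TaxedTransaction value:

import (
	"context"
	"encoding/json"

	"github.com/segmentio/kafka-go"
)

func publishTaxCalculated(ctx context.Context, writer *kafka.Writer, taxed TaxedTransaction) error {
	value, err := json.Marshal(taxed)
	if err != nil {
		return err
	}
	// Key by order ID so all events for the same order land on one partition.
	return writer.WriteMessages(ctx, kafka.Message{
		Key:   []byte(taxed.OrderID),
		Value: value,
	})
}

A writer for the topic could be created once at startup, for example &kafka.Writer{Addr: kafka.TCP("localhost:9092"), Topic: "tax.calculated"}, and shared across handlers.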
Handling Out-of-Order Events with Retries
Kafka doesn't guarantee that orders.created will arrive before transactions.created, so the Tax Service must retry:
func getOrderWithRetries(db *sql.DB, orderID string, maxRetries int, delay time.Duration) (Order, error) {
	for i := 0; i < maxRetries; i++ {
		order, err := getOrderByID(db, orderID)
		if err != nil {
			if errors.Is(err, sql.ErrNoRows) {
				// The orders.created event hasn't landed yet: back off exponentially and retry.
				time.Sleep(delay)
				delay *= 2
				continue
			}
			return Order{}, fmt.Errorf("get order: %w", err)
		}
		return order, nil
	}
	return Order{}, fmt.Errorf("order not found after retries: %s", orderID)
}
If the order is still missing, the service can emit the failed transaction to a tax.failed topic for later processing.
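Putting the pieces together, a transaction handler might look roughly like this sketch. The retry count, the delay, and the dead-letter writer are illustrative assumptions, and the imports match the earlier sketches:

import (
	"context"
	"database/sql"
	"encoding/json"
	"time"

	"github.com/segmentio/kafka-go"
)

func handleTransaction(ctx context.Context, db *sql.DB, taxWriter, failedWriter *kafka.Writer, tx Transaction) error {
	order, err := getOrderWithRetries(db, tx.OrderID, 5, 100*time.Millisecond)
	if err != nil {
		// The order never showed up: park the transaction on tax.failed for later replay.
		raw, mErr := json.Marshal(tx)
		if mErr != nil {
			return mErr
		}
		return failedWriter.WriteMessages(ctx, kafka.Message{
			Key:   []byte(tx.OrderID),
			Value: raw,
		})
	}
	taxed := calculateTax(order, tx)
	return publishTaxCalculated(ctx, taxWriter, taxed)
}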
Benefits of This Design
- No RPCs: The Tax Service does not depend on Payments being up or reachable.
- Durable State: PostgreSQL acts as a reliable local cache.
- Fast & Reactive: All tax logic happens locally in response to events.
- Failure-tolerant: Retry logic handles timing issues; the tax.failed dead-letter topic covers the gaps.
- Auditable: All order data and tax calculations are traceable and replayable.
Common Misconceptions About ECST
Isn't this denormalized?
Yes — and that's the point. You denormalize state so consumers don’t have to coordinate or join at runtime.
What if the schema changes?
Use Protobuf + Buf Schema Registry with breaking-change enforcement. You evolve your schemas safely.
Won’t Postgres get too big?
Only if you never prune or partition. But for most transaction-based services, retaining ~3 years of cache is trivial. Use TTLs or archive pipelines if needed (a simple pruning sketch follows).
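As an illustration of the pruning option, a periodic job could delete rows past the retention window; the three-year cutoff below is only an example figure:

import (
	"context"
	"database/sql"
)

// pruneOrderCache drops cached orders that have aged out of the retention window.
func pruneOrderCache(ctx context.Context, db *sql.DB) error {
	_, err := db.ExecContext(ctx, `
		DELETE FROM order_events
		WHERE order_created_at < now() - interval '3 years'
	`)
	return err
}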
Conclusion
Event-Carried State Transfer is a powerful pattern that gives microservices autonomy without sacrificing correctness. In Fintech, where correctness, auditability, and latency all matter, ECST with Kafka and Postgres offers a clean solution.
With this architecture, your services are resilient, reactive, and easy to reason about — no shared database, no RPC chains, and no surprises.