What happens between a trader clicking "Buy" and the asset actually changing hands? The answer is six microservices, four Kafka topics, a compliance rules engine, and two business days of settlement scheduling.
When most developers build a "finance project," they build a stock price tracker or a portfolio dashboard. Something that fetches data from an API and renders it on a chart.
That is not what happens inside a bank.
Inside a bank, a trade is not a number on a screen. It is a legally binding commitment that passes through compliance review, market execution, bilateral confirmation, T+2 settlement scheduling, and regulatory reporting, in that order, every single time, across systems built by different teams that talk to each other exclusively through message queues.
I wanted to build something that actually modeled this. Not a simplified version. The real pipeline, with real failure modes, real latency concerns, and real regulatory structure. The result is Valoris Systems, a distributed trade lifecycle simulator built with Java 21, Spring Boot 3, Apache Kafka, PostgreSQL, Redis, and React.
This post explains every architectural decision and why it mirrors what production trading systems actually do.
What a Trade Lifecycle Actually Is
Before writing a single line of code, I spent time understanding the business domain. This matters because the architecture follows the business, not the other way around.
When a trader at a bank or fund submits an order, it does not execute immediately. It passes through five mandatory stages:
1. Pre-Trade Compliance. Before anything happens, the system checks: Is this counterparty on the approved list? Does this trade exceed the trader's notional risk limit? Is this instrument in the allowed universe? All three must pass. One failure kills the trade before it reaches the market.
2. Execution. The trade hits the market. An execution price and timestamp get locked in. The venue is recorded (DFM, NASDAQ Dubai, DIFC dark pool in Valoris's case).
3. Confirmation. Both sides of the trade, buyer and seller, must independently confirm they agreed to the same price and quantity. Mismatches happen. In Valoris, 5% of confirmations introduce a random price mismatch of plus or minus 2%; the affected trade then sits in a MISMATCHED state until resolved. This is realistic: in real markets, confirmation breaks happen and require manual intervention.
4. Settlement. The actual exchange of asset and cash. This does not happen immediately after execution. It happens T+2, meaning two business days later, skipping weekends. A scheduler runs every 60 seconds, checks which trades have reached their settlement date, updates net positions per counterparty/instrument pair, and marks them settled.
5. Regulatory Reporting. Every settled trade must be reported to the regulator (DFSA in Dubai, FCA in the UK, SEC in the US) within a defined window after execution. The report includes LEI identifiers, ISIN, notional value, execution price, venue, and counterparty details. In MiFID II terms this is called a transaction report. In Valoris, the reporting service generates a structured report in a format that mirrors real EMIR/MiFID II fields.
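Taken together, these five stages imply a state machine. Here is a minimal sketch of the lifecycle states as a Java enum; the names are assumptions that mirror the stages above, and the actual enum in Valoris may differ:

```java
// Illustrative lifecycle states for a trade moving through the pipeline.
// These names are assumptions based on the stages described in the post.
enum TradeStatus {
    SUBMITTED,          // received, awaiting pre-trade compliance
    VALIDATED,          // passed compliance, routed to execution
    REJECTED,           // failed compliance; still recorded for the regulator
    EXECUTED,           // price, timestamp, and venue locked in
    CONFIRMED,          // both sides agreed on price and quantity
    MISMATCHED,         // confirmation break, needs manual resolution
    PENDING_SETTLEMENT, // waiting for the T+2 settlement date
    SETTLED,            // asset and cash exchanged
    REPORTED            // regulatory report generated
}
```

Note that REJECTED and MISMATCHED are terminal or held states, not transient ones: the pipeline is a directed graph with failure exits, not a straight line.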
Each of these stages is a separate microservice in Valoris. They do not call each other over HTTP. They communicate exclusively through Apache Kafka event streams.
Why Event-Driven Over REST-to-REST
This is the most important architectural decision in the entire project and it needs a direct explanation.
The naive way to build this system is as a chain of REST calls:
FIX Gateway -> HTTP POST -> Compliance Service -> HTTP POST -> Execution Service -> ...
This works in development. It fails in production for several reasons.
Temporal coupling. If the execution service is down when compliance publishes a result, the trade is lost. With Kafka, compliance publishes to trades.validated, and the execution service consumes that topic whenever it is ready. The topic persists messages. Nothing gets lost.
Backpressure handling. In a real trading system, trade volume is not uniform. There are bursts at market open, major announcements, and high-volatility periods where thousands of trades hit the system simultaneously. Kafka acts as a buffer. Each downstream service processes at its own rate. The compliance service does not care whether execution is running fast or slow.
Audit trail by default. Every Kafka topic in Valoris is a permanent, ordered log of events. If you want to know exactly what happened to trade f47ac10b and when, you replay the events. This is not a nice-to-have in financial systems. It is a regulatory requirement.
Independent deployability. Each service can be updated, restarted, or scaled independently without any other service needing to know. The compliance service does not import any code from the execution service. They share nothing except the event schema.
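The audit-trail point is worth making concrete. Below is a minimal sketch of replaying an ordered event log to reconstruct one trade's history; the Event record is a hypothetical stand-in for the real Kafka event DTOs, and in practice the log would be read back from the topic itself rather than held in memory:

```java
import java.util.List;

// Hypothetical stand-in for a Kafka event: trade ID, pipeline stage, payload.
record Event(String tradeId, String stage, String payload) {}

class AuditReplay {
    // Reconstructs the stages one trade passed through, in log (topic) order.
    // Because the topic is an ordered, persistent log, this is always possible.
    static List<String> timelineFor(String tradeId, List<Event> log) {
        return log.stream()
                .filter(e -> e.tradeId().equals(tradeId))
                .map(Event::stage)
                .toList();
    }
}
```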
The Kafka topics in Valoris map directly to the business stages:
| Topic | What it means |
|---|---|
| trades.incoming | A trade has been submitted and needs compliance review |
| trades.validated | Compliance passed, route to execution |
| trades.rejected | Compliance failed, route to reporting for rejection record |
| trades.executed | Execution complete, needs counterparty confirmation |
| trades.confirmed | Both sides confirmed, ready for settlement scheduling |
| trades.settled | Settlement complete, generate regulatory report |
Notice that trades.rejected goes directly to the reporting service. Rejected trades still need to be recorded. Regulators care about failed trades too.
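That routing decision reduces to a single branch after the compliance result is known. A sketch, with an illustrative method name:

```java
// After compliance, a trade goes to exactly one of two topics:
// execution if it passed, reporting (via trades.rejected) if it failed.
class ComplianceRouter {
    static String nextTopic(boolean compliancePassed) {
        return compliancePassed ? "trades.validated" : "trades.rejected";
    }
}
```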
The Compliance Service: Why Redis Matters Here
The compliance service is the most latency-sensitive component in the pipeline. Every trade must pass through it before execution. In real markets, pre-trade compliance checks need to complete in sub-millisecond time, not because the user is waiting (they are not, this is async), but because compliance rule evaluation is on the critical path for market access.
In Valoris, compliance rules are seeded into PostgreSQL on startup, then loaded into Redis at service initialization. The Redis cache holds:
- The approved counterparty list (Redis Set under compliance:counterparty:approved)
- Notional risk limits per trader (Redis strings under compliance:risk_limit:{submitterId})
- The allowed instruments universe (Redis Set under compliance:instrument:allowed)
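To make the key layout concrete, here is an in-memory stand-in that mirrors those Redis structures. In the real service these would be SISMEMBER, SADD, GET, and SET calls against Redis (e.g. via Spring Data Redis); the class and method names here are illustrative:

```java
import java.math.BigDecimal;
import java.util.*;

// In-memory stand-in for the Redis key layout described above, to show the
// access pattern. Not the actual Valoris implementation.
class RulesCache {
    private final Map<String, Set<String>> sets = new HashMap<>();
    private final Map<String, String> strings = new HashMap<>();

    void approveCounterparty(String id) {  // SADD in Redis terms
        sets.computeIfAbsent("compliance:counterparty:approved", k -> new HashSet<>()).add(id);
    }

    boolean isCounterpartyApproved(String id) {  // SISMEMBER in Redis terms
        return sets.getOrDefault("compliance:counterparty:approved", Set.of()).contains(id);
    }

    void setRiskLimit(String submitterId, BigDecimal limit) {  // SET in Redis terms
        strings.put("compliance:risk_limit:" + submitterId, limit.toPlainString());
    }

    Optional<BigDecimal> getRiskLimit(String submitterId) {  // GET in Redis terms
        return Optional.ofNullable(strings.get("compliance:risk_limit:" + submitterId))
                .map(BigDecimal::new);
    }
}
```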
When a TradeIncomingEvent arrives on the Kafka consumer, the three compliance checks hit Redis exclusively. PostgreSQL never gets queried during the hot path. This matters because Redis operations complete in microseconds, while a PostgreSQL query involves a network round trip, query planning, and potential disk I/O that you cannot afford to put on every single trade.
The three checks run in sequence with short-circuit logic:
public ComplianceCheckResponse check(TradeCheckRequest request) {
log.info("Running compliance check for trade {}", request.getTradeId());
LocalDateTime now = LocalDateTime.now();
// Check 1: counterparty approval
if (!rulesCacheService.isCounterpartyApproved(request.getCounterpartyId())) {
return reject(request.getTradeId(), "COUNTERPARTY",
"Counterparty " + request.getCounterpartyId() + " is not on the approved list", now);
}
// Check 2: notional risk limit
Optional<BigDecimal> limit = rulesCacheService.getRiskLimit(request.getSubmittedBy());
if (limit.isEmpty()) {
return reject(request.getTradeId(), "RISK_LIMIT",
"No risk limit configured for submitter " + request.getSubmittedBy(), now);
}
if (request.getNotionalValue().compareTo(limit.get()) > 0) {
return reject(request.getTradeId(), "RISK_LIMIT",
"Notional " + request.getNotionalValue() + " exceeds limit of " + limit.get() +
" for submitter " + request.getSubmittedBy(), now);
}
// Check 3: instrument eligibility
if (!rulesCacheService.isInstrumentAllowed(request.getInstrument())) {
return reject(request.getTradeId(), "INSTRUMENT",
"Instrument " + request.getInstrument() + " is not in the allowed instruments universe", now);
}
persistResult(request.getTradeId(), "VALIDATED", "PASS", null);
log.info("Trade {} passed all compliance checks", request.getTradeId());
return new ComplianceCheckResponse(request.getTradeId(), true, null, null, now);
}
First failure short-circuits. No point running instrument eligibility if the counterparty is already blocked.
The compliance service also exposes REST endpoints for rule management: adding/removing counterparties, updating notional limits, adding instruments. These write to PostgreSQL and update the relevant Redis keys. The dashboard's compliance panel calls these endpoints directly.
The Execution Service: Simulating Real Market Pricing
The execution service receives TradeValidatedEvent messages and is responsible for pricing the trade.
Valoris supports real ISINs with seeded base prices. For each instrument, the service applies a plus or minus 0.5% random spread to simulate market movement:
private static final Map<String, BigDecimal> BASE_PRICES = Map.of(
"US0378331005", new BigDecimal("189.50"), // Apple
"US5949181045", new BigDecimal("415.20"), // Microsoft
"US02079K3059", new BigDecimal("175.80"), // Alphabet
"US4592001014", new BigDecimal("188.90"), // IBM
"US912828ZL9", new BigDecimal("99.85"), // US Treasury 2Y
"US9128284Y00", new BigDecimal("98.60"), // US Treasury 5Y
"XS2314659447", new BigDecimal("100.25"), // Emirates NBD bond
"AEA007601011", new BigDecimal("8.42"), // Emaar Properties (AED)
"AEA000301011", new BigDecimal("14.76") // First Abu Dhabi Bank (AED)
);
public BigDecimal getPrice(String isin) {
BigDecimal base = BASE_PRICES.getOrDefault(isin, new BigDecimal("100.00"));
    // uniform factor in [0.995, 1.005), i.e. a ±0.5% spread around the base price
    double spreadFactor = 1.0 + (ThreadLocalRandom.current().nextDouble() - 0.5) * 0.01;
return base.multiply(BigDecimal.valueOf(spreadFactor))
.setScale(6, RoundingMode.HALF_UP);
}
The execution service assigns a venue to each trade drawn from a realistic set for Gulf markets: DIFC-DARK-POOL, DFM (Dubai Financial Market), and NASDAQ-DUBAI. The venue field is mandatory in MiFID II transaction reports, so this domain detail matters.
The resulting TradeExecutedEvent includes executionPrice, executionTimestamp, and venue. These three fields are immutable from this point forward. Downstream services reference them but cannot modify them. This is the principle of event immutability: past events are facts, not suggestions.
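Java 21 records make that immutability cheap to enforce: a record has no setters, so once the event is constructed, executionPrice, executionTimestamp, and venue cannot change. A sketch with field names taken from the post (the actual class in Valoris may differ):

```java
import java.math.BigDecimal;
import java.time.LocalDateTime;

// Immutable executed-trade event. Downstream services can read these fields
// but have no way to mutate them, which is exactly the guarantee we want.
record TradeExecutedEvent(
        String tradeId,
        String instrument,
        BigDecimal executionPrice,
        LocalDateTime executionTimestamp,
        String venue) {}
```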
The Confirmation Service: Modeling the Failure Path
Most toy projects ignore failure modes. The confirmation service exists specifically to model one.
In real markets, bilateral confirmation is a process where both sides of a trade independently report what they believe they agreed to. Mismatches happen when one side reports a slightly different price than the other due to rounding, latency, or fat-finger errors. Valoris simulates the counterparty-side confirmation response with a probabilistic mismatch.
In Valoris, the confirmation service applies a 5% random mismatch rate on the execution price, introducing a plus or minus 2% variance:
private static final double MISMATCH_PROBABILITY = 0.05;
private static final double MISMATCH_DEVIATION = 0.02;
public TradeConfirmedEvent confirm(TradeExecutedEvent event) {
boolean isMismatch = ThreadLocalRandom.current().nextDouble() < MISMATCH_PROBABILITY;
BigDecimal confirmedPrice;
String status;
String mismatchReason;
if (isMismatch) {
double deviation = 1.0 + (ThreadLocalRandom.current().nextBoolean() ? 1 : -1) * MISMATCH_DEVIATION;
confirmedPrice = event.getExecutionPrice()
.multiply(BigDecimal.valueOf(deviation))
.setScale(6, RoundingMode.HALF_UP);
status = "MISMATCHED";
mismatchReason = String.format(
"Counterparty confirmed price %s differs from execution price %s",
confirmedPrice, event.getExecutionPrice()
);
} else {
confirmedPrice = event.getExecutionPrice();
status = "CONFIRMED";
mismatchReason = null;
}
// ... persist and publish
}
A mismatched trade enters MISMATCHED status and is exposed via a REST endpoint:
GET :8084/api/confirmations/mismatched
The dashboard displays these trades with a warning state. A trader or operations staff can view the mismatch detail and resolve it manually. Only confirmed trades (status CONFIRMED) advance to settlement. Mismatched trades do not.
This failure path handling is what separates an engineering project from a demo. Real systems break in predictable ways. The architecture must handle it.
The Settlement Service: T+2 and Position Netting
Settlement is where the asset and cash actually change hands. T+2 means two business days after execution, not two calendar days. Weekends are skipped.
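The business-day arithmetic can be sketched as follows. Market holidays are ignored here for simplicity; a production system would consult a per-venue holiday calendar. The class and method names are illustrative, not necessarily what Valoris uses:

```java
import java.time.DayOfWeek;
import java.time.LocalDate;

class SettlementDates {
    // T+2: step forward one calendar day at a time, counting only weekdays.
    static LocalDate tPlusTwo(LocalDate executionDate) {
        LocalDate date = executionDate;
        int businessDays = 0;
        while (businessDays < 2) {
            date = date.plusDays(1);
            DayOfWeek dow = date.getDayOfWeek();
            if (dow != DayOfWeek.SATURDAY && dow != DayOfWeek.SUNDAY) {
                businessDays++;
            }
        }
        return date;
    }
}
```

A trade executed on a Friday settles the following Tuesday, not Sunday.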
The settlement service implements this with a Spring @Scheduled task that runs every 60 seconds:
@Scheduled(fixedDelay = 60000)
@Transactional
public void settleMaturedTrades() {
LocalDate today = LocalDate.now();
List<Settlement> due = settlementRepository
.findBySettlementStatusAndSettlementDateLessThanEqual("PENDING", today);
if (due.isEmpty()) return;
log.info("Settlement scheduler: {} trade(s) due for settlement on or before {}", due.size(), today);
for (Settlement s : due) {
s.setSettlementStatus("SETTLED");
settlementRepository.save(s);
producer.publish(new TradeSettledEvent(
s.getTradeId(), s.getInstrument(), s.getSide(),
s.getQuantity(), s.getCounterpartyId(), s.getCurrency(),
s.getNotionalValue(), s.getSubmittedBy(),
s.getExecutionPrice(), s.getExecutionVenue(),
s.getSettlementDate(), "SETTLED", LocalDateTime.now()
));
log.info("Trade {} settled on {}", s.getTradeId(), today);
}
}
Position queries are exposed via REST:
GET :8085/api/positions/{counterpartyId}
This endpoint returns the current net position across all instruments for a given counterparty, matching the data structure a real prime broker would show a fund client.
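The netting step behind that endpoint is simple signed arithmetic: buys increase and sells decrease the net quantity per (counterparty, instrument) pair. A sketch, with illustrative names:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal position book keyed by counterparty + instrument. In Valoris the
// settlement service would persist these rows; this in-memory version just
// shows the netting logic itself.
class PositionBook {
    private final Map<String, Long> net = new HashMap<>();

    void applySettledTrade(String counterpartyId, String instrument, String side, long quantity) {
        String key = counterpartyId + ":" + instrument;
        long signed = "BUY".equals(side) ? quantity : -quantity;
        net.merge(key, signed, Long::sum);
    }

    long netPosition(String counterpartyId, String instrument) {
        return net.getOrDefault(counterpartyId + ":" + instrument, 0L);
    }
}
```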
The Reporting Service: Mirroring Regulatory Requirements
The reporting service is the final stage of the pipeline. It consumes both trades.settled and trades.rejected, since both settled and rejected trades require records in real regulatory frameworks.
The TradeReport model mirrors the fields required by EMIR and MiFID II Transaction Reporting. Here is what GET /api/reports/{tradeId} returns for a settled trade:
{
"tradeId": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
"instrument": "US0378331005",
"side": "BUY",
"quantity": 500,
"counterpartyId": "CP-001",
"currency": "USD",
"notionalValue": 94875.00,
"submittedBy": "trader.dubai",
"executionPrice": 189.750000,
"executionVenue": "NASDAQ-DUBAI",
"settlementDate": "2026-04-02",
"settlementStatus": "SETTLED",
"settledAt": "2026-04-02T00:01:03.112Z",
"reportGeneratedAt": "2026-04-02T00:01:03.215Z"
}
The LEI (Legal Entity Identifier) is the ISO standard identifier for firms participating in financial markets. Every regulated entity has one. Valoris's data model is structured to accommodate this field in a production extension without structural changes.
The reporting service also powers analytics. Endpoints expose volume aggregated by instrument, by counterparty, and by venue. The dashboard's analytics section visualizes these using Recharts: bar charts for volume by ISIN, pie charts for distribution by counterparty, area charts for settlement activity over time.
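The aggregation behind the volume-by-instrument chart can be sketched with streams. In Valoris this is more likely a repository-level GROUP BY query; ReportRow here is a hypothetical stand-in for the report entity:

```java
import java.math.BigDecimal;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical slice of a trade report: just the fields the aggregation needs.
record ReportRow(String instrument, BigDecimal notionalValue) {}

class Analytics {
    // Sums notional value per ISIN, the shape the bar chart consumes.
    static Map<String, BigDecimal> volumeByInstrument(List<ReportRow> rows) {
        return rows.stream().collect(Collectors.groupingBy(
                ReportRow::instrument,
                Collectors.reducing(BigDecimal.ZERO, ReportRow::notionalValue, BigDecimal::add)));
    }
}
```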
Database Isolation: One Schema Per Service
Each service owns its own PostgreSQL database. The compliance service has no access to the execution service's database. The settlement service has no access to the confirmation service's database. This is strict.
This is not just a microservices best practice. It is a business requirement in regulated financial infrastructure. When a regulator audits the compliance team's systems, they should be auditing only the compliance service's database. Cross-service database joins are a regulatory and operational risk.
In practice this means:
- No foreign keys across service boundaries
- No shared ORM models
- No cross-database queries in the application layer
- Event schemas (the Kafka event DTOs) are the only shared contract
If the execution service needs compliance data, it gets it from the TradeValidatedEvent payload that was published when compliance passed. It does not query the compliance database.
The React Dashboard: Live Pipeline Visibility
The dashboard is built with React 18 and Vite, served in production by nginx. It has three sections.
Trade Pipeline View. A live table of all trades with columns for TradeID, Instrument, Side, Notional, Current Stage, and Status. Color coding: green for progressing trades, red for rejected or failed, yellow for pending states like MISMATCHED or PENDING_SETTLEMENT. The table auto-refreshes by polling the FIX gateway's trade list endpoint.
Trade Detail View. Click any trade to open a side panel showing the full event timeline from submission through reporting. Each stage shows its timestamp and the full JSON payload, expandable inline. This is how an operations team would investigate a problem trade in a real system.
Analytics. Recharts visualizations driven by the reporting service's analytics endpoints. Volume by instrument, volume by counterparty, settlement summary.
Running the Entire System
The entire stack (PostgreSQL, Redis, Zookeeper, Kafka, six Spring Boot services, and the React frontend) starts with one command:
docker compose up --build
All 11 containers start in the correct order with health checks. The dashboard is available at http://localhost:5173. Submit a trade via the dashboard or directly:
curl -X POST http://localhost:8081/api/trades \
-H "Content-Type: application/json" \
-d '{
"instrument": "US0378331005",
"side": "BUY",
"quantity": 500,
"counterpartyId": "CP-001",
"currency": "USD",
"notionalValue": 94750.00,
"submittedBy": "trader.dubai"
}'
Watch it progress through the pipeline in real time on the dashboard.
What This System Demonstrates
The firms that care most about this kind of project (ION Group, Murex, Finastra, Broadridge, FIS, and the in-house technology teams at major banks) are all hiring engineers who understand the business domain, not just the technology.
Anyone can build REST microservices. Not many graduate-level developers can explain why T+2 settlement exists (it is a historical artifact from paper certificate delivery that modern systems are slowly moving to T+1), what an LEI is and why it is mandatory in regulatory reports, why compliance rules need to live in Redis rather than being fetched from PostgreSQL on every check, or what happens operationally when a bilateral confirmation mismatch occurs.
Valoris is not a demo. It is a working model of infrastructure that processes trillions of dollars of trades every day across global financial markets.
The code is at github.com/Ra9huvansh/Valoris-Systems.
Raghuvansh is a pre-final year Computer Science student at JIIT Noida targeting capital markets infrastructure and blockchain engineering roles in Dubai, Hong Kong, and Shanghai.