Most fintech teams build their audit trails in the application layer.
An ORM plugin here. A middleware hook there. A background job that "eventually" writes to an audit table. Then — six months into production — an auditor asks a question that breaks everything.
"Can you prove this record was never modified between 2:14am and 6:30am on March 12th?"
And suddenly the team is debugging race conditions in their message queue at 11pm on a Friday.
In this post, I'll show you an alternative that moves the audit responsibility down to where it belongs: the database engine itself.
## The Problem with Application-Layer Audit
Application-layer audit trails look like this in typical Node.js/Java/Python stacks:
```javascript
async function updateLoanStatus(loanId, newStatus, userId) {
  const oldLoan = await db.loans.findById(loanId);
  await db.loans.update(loanId, { status: newStatus });
  await db.auditLog.create({
    table: 'loans',
    recordId: loanId,
    oldValues: oldLoan,
    newValues: { ...oldLoan, status: newStatus },
    changedBy: userId,
    changedAt: new Date()
  });
}
```
This pattern has four failure modes that regulatory audits will find:
1. **A developer bypasses the service layer.** They write a direct `UPDATE loans SET status = 'APPROVED' WHERE id = 123` via pgAdmin or a migration script. The audit table has no record of it.
2. **The audit write fails silently.** The main update succeeds, the audit write times out, and nobody notices until an auditor pulls a sample.
3. **Transaction boundaries get sloppy.** The audit write happens outside the transaction, so a rollback leaves orphaned or missing audit records.
4. **Eventually-consistent audit queues introduce race conditions.** Audit entries arrive out of order, making it impossible to reconstruct the state at a specific point in time.
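Failure mode 3 is worth seeing concretely. The sketch below uses an in-memory stand-in for a database (hypothetical, not a real driver) to show what happens when the audit write is issued outside the transaction boundary: the main update rolls back, but the audit record survives.

```javascript
// Minimal in-memory stand-in for a database (hypothetical, not a real driver).
const db = {
  loans: new Map([[123, { id: 123, status: 'PENDING' }]]),
  auditLog: [],
};

// Simulates: audit write outside the transaction, then
// BEGIN; UPDATE loans ...; ROLLBACK;
function updateWithSloppyAudit(loanId, newStatus) {
  const oldLoan = { ...db.loans.get(loanId) };

  // Audit write happens first, outside any transaction.
  db.auditLog.push({ recordId: loanId, old: oldLoan.status, new: newStatus });

  // "Transaction": the main update fails and rolls back.
  const snapshot = new Map(db.loans);
  try {
    db.loans.set(loanId, { ...oldLoan, status: newStatus });
    throw new Error('constraint violation'); // simulated failure
  } catch (e) {
    db.loans = snapshot; // rollback
  }
}

updateWithSloppyAudit(123, 'APPROVED');

// The loan never changed, but the audit log claims it did.
console.log(db.loans.get(123).status); // 'PENDING'
console.log(db.auditLog.length);       // 1 — orphaned audit record
```

An auditor sampling this log would find a status change that never happened, which is exactly the kind of discrepancy that turns a routine review into a finding.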
None of these are theoretical. Every one of them has shown up in real banking audits.
## The Fix: Database Triggers
PostgreSQL triggers fire at the database engine level. If a row changes — regardless of how it changed — the trigger fires. There is no way to bypass it short of dropping the trigger itself.
Here's a generic audit trigger function that works on any table:
```sql
CREATE SCHEMA IF NOT EXISTS audit;

CREATE TABLE audit.change_log (
    log_id     BIGSERIAL PRIMARY KEY,
    table_name VARCHAR(100) NOT NULL,
    record_id  TEXT NOT NULL,
    operation  CHAR(1) NOT NULL CHECK (operation IN ('I','U','D')),
    old_values JSONB,
    new_values JSONB,
    changed_by TEXT NOT NULL DEFAULT current_user,
    changed_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
```
```sql
CREATE OR REPLACE FUNCTION audit.fn_audit_trigger()
RETURNS TRIGGER LANGUAGE plpgsql SECURITY DEFINER AS $$
DECLARE
    v_old JSONB;
    v_new JSONB;
    v_op  CHAR(1);
BEGIN
    IF TG_OP = 'INSERT' THEN
        v_op  := 'I';
        v_new := to_jsonb(NEW);
    ELSIF TG_OP = 'UPDATE' THEN
        v_op  := 'U';
        v_old := to_jsonb(OLD);
        v_new := to_jsonb(NEW);
    ELSE
        v_op  := 'D';
        v_old := to_jsonb(OLD);
    END IF;

    INSERT INTO audit.change_log (table_name, record_id, operation, old_values, new_values)
    VALUES (TG_TABLE_SCHEMA || '.' || TG_TABLE_NAME,
            COALESCE(v_new->>'id', v_old->>'id', 'unknown'),
            v_op, v_old, v_new);

    RETURN COALESCE(NEW, OLD);
END;
$$;
```
Attach it to any table you want audited:
```sql
CREATE TRIGGER trg_loan_audit
AFTER INSERT OR UPDATE OR DELETE ON loan_account
FOR EACH ROW EXECUTE FUNCTION audit.fn_audit_trigger();
```
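In a real schema you rarely audit just one table. One way to attach the same trigger to a whole set of tables is a `DO` block with dynamic SQL; the table names below are hypothetical stand-ins for whatever your schema actually contains:

```sql
-- Sketch: attach the audit trigger to several tables at once.
-- Table names here are placeholders for your own schema.
DO $$
DECLARE
    t TEXT;
BEGIN
    FOREACH t IN ARRAY ARRAY['loan_account', 'loan_payment', 'customer_profile'] LOOP
        EXECUTE format(
            'CREATE TRIGGER trg_%s_audit
                 AFTER INSERT OR UPDATE OR DELETE ON %I
                 FOR EACH ROW EXECUTE FUNCTION audit.fn_audit_trigger()',
            t, t);
    END LOOP;
END;
$$;
```

Using `%I` lets `format()` quote the table name as an identifier, which keeps the dynamic SQL safe even if a table name needs quoting.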
Now every insert, update, and delete on `loan_account` captures a full before/after JSONB snapshot, along with the timestamp, operation type, and the database user who made the change.
## Querying the Audit Log
Once data starts flowing, you can answer regulatory questions in a single SQL query:
```sql
-- All changes to a specific loan in the last 30 days
SELECT operation, changed_by, changed_at,
       old_values->>'status' AS old_status,
       new_values->>'status' AS new_status
FROM audit.change_log
WHERE table_name = 'public.loan_account'
  AND record_id = 'loan-uuid-here'
  AND changed_at >= NOW() - INTERVAL '30 days'
ORDER BY changed_at;

-- Compliance report: who changed what and when
SELECT changed_by, table_name, COUNT(*) AS change_count,
       MIN(changed_at) AS first_change,
       MAX(changed_at) AS last_change
FROM audit.change_log
WHERE changed_at >= NOW() - INTERVAL '7 days'
GROUP BY changed_by, table_name
ORDER BY last_change DESC;
```
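The auditor's question from the opening also reduces to one query. A sketch, with placeholder timestamps and record ID:

```sql
-- "Can you prove this record was never modified between
--  2:14am and 6:30am on March 12th?" (timestamps are placeholders)
SELECT COUNT(*) AS changes_in_window
FROM audit.change_log
WHERE table_name = 'public.loan_account'
  AND record_id  = 'loan-uuid-here'
  AND changed_at >= TIMESTAMPTZ '2024-03-12 02:14:00'
  AND changed_at <  TIMESTAMPTZ '2024-03-12 06:30:00';
```

A count of zero, backed by a trigger that cannot be bypassed, is the kind of evidence an auditor will actually accept.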
## Why JSONB Matters
Storing old and new values as JSONB gives you three advantages:
1. **Schema evolution is automatic.** Add a column to `loan_account` tomorrow, and the trigger captures it without any code changes.
2. **Full snapshots.** You know exactly what the row looked like at every point in time, not just which columns changed.
3. **Queryable diffs.** Use `old_values->>'field_name'` and `new_values->>'field_name'` to extract specific changes without parsing.
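On the application side, a report often needs only the fields that actually changed between the two snapshots. A small helper (a sketch, not part of the schema above) can diff the `old_values`/`new_values` objects after they come back from a query:

```javascript
// Diff two JSONB snapshots (as returned in old_values / new_values)
// into { field: { from, to } } for just the fields that changed.
function diffSnapshots(oldValues, newValues) {
  const changed = {};
  const keys = new Set([
    ...Object.keys(oldValues ?? {}),
    ...Object.keys(newValues ?? {}),
  ]);
  for (const key of keys) {
    const from = oldValues?.[key];
    const to = newValues?.[key];
    // JSON comparison handles nested values well enough for a report.
    if (JSON.stringify(from) !== JSON.stringify(to)) {
      changed[key] = { from, to };
    }
  }
  return changed;
}

// Example: only `status` differs between the two snapshots.
const diff = diffSnapshots(
  { id: 123, status: 'PENDING', amount: 5000 },
  { id: 123, status: 'APPROVED', amount: 5000 }
);
console.log(diff); // { status: { from: 'PENDING', to: 'APPROVED' } }
```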
## Performance Notes
Three things to watch:
1. **Audit table growth.** A high-traffic `transaction` table can generate millions of audit rows per day. Consider monthly partitioning on `changed_at`.
2. **JSONB storage cost.** Full row snapshots can roughly double your storage. For very large tables, consider storing only the changed columns (a more complex trigger, but it saves space).
3. **Trigger overhead.** Expect roughly 5-15% write overhead. Worth it for compliance-critical tables; not worth it for high-frequency logging tables.
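For the growth point, the `change_log` table can be declared range-partitioned from the start. A sketch (note that with declarative partitioning the primary key must include the partition key, so `log_id` alone no longer suffices):

```sql
-- Partitioned variant of audit.change_log (sketch).
CREATE TABLE audit.change_log (
    log_id     BIGSERIAL,
    table_name VARCHAR(100) NOT NULL,
    record_id  TEXT NOT NULL,
    operation  CHAR(1) NOT NULL CHECK (operation IN ('I','U','D')),
    old_values JSONB,
    new_values JSONB,
    changed_by TEXT NOT NULL DEFAULT current_user,
    changed_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    PRIMARY KEY (log_id, changed_at)
) PARTITION BY RANGE (changed_at);

-- One partition per month; create these ahead of time
-- (or automate with an extension such as pg_partman).
CREATE TABLE audit.change_log_2025_03 PARTITION OF audit.change_log
    FOR VALUES FROM ('2025-03-01') TO ('2025-04-01');
```

Old months can then be detached and archived to cheap storage without touching the hot partition, which keeps the retention story clean for regulators too.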
## When NOT to Use This
Database triggers are wrong for:
- Business logic that should be visible in code reviews
- Notifications or integrations (use CDC/Debezium)
- Validation (constraints and checks belong in the schema)
They're right for one job: capturing mutations reliably, where "reliably" means "the database guarantees it."
## The Bigger Picture
This pattern is one piece of a larger architecture for regulated systems. In banking specifically, audit triggers combine with Row-Level Security (for multi-country data isolation) and BIAN-aligned schemas (for standardised naming) to produce a database that can pass MAS, BNM, or NBC technology audits without six months of remediation work.
I build reference schemas for banking engineers at Kenzura Technologies — the Loan Origination System schema includes these audit triggers, RLS policies, and a full BIAN service domain mapping.
What audit patterns have worked or failed for your team? I'd especially love to hear from anyone dealing with MAS TRM or BNM RMiT requirements.