I had 12 tenants sending the same data in 12 different field names. Instead of writing 12 transforms, I built one config-driven mapper. It worked great — until tenant 8's schema drifted and 3,400 records silently filled with nulls.
TL;DR
- Config-driven mapping lets you add tenants without code changes — one JSON config per client
- The core is 5 lines of DataWeave using dynamic key expressions
- The trap: if a config field references a source column that doesn't exist, you get null — silently
- The fix: validate config against actual record keys before mapping
- Went from 2-day onboarding to 30 minutes per new client
The Problem: 12 Tenants, 12 Schemas
All 12 clients sent the same logical data: customer ID, customer name, order amount. But the field names were different:
| Client | Customer ID | Customer Name | Order Amount |
|---|---|---|---|
| Acme Corp | cust_id | cust_name | order_amt |
| Globex Inc | customer_number | customer_name | amount |
| Initech | id | name | total |
Writing 12 separate DataWeave transforms meant every output schema change required updating 12 files. Coordinating deployments across 12 transforms was a disaster.
The Solution: 5 Lines of Config-Driven Mapping
```dataweave
%dw 2.0
output application/json
var config = payload.mappingConfig
---
payload.sourceData map (record) -> ({
    (config map (field) -> ({
        (field.target): record[field.source]
    }))
})
```
The mapping config is a JSON array:
```json
[
  {"source": "cust_id", "target": "customerId"},
  {"source": "cust_name", "target": "customerName"},
  {"source": "order_amt", "target": "orderAmount"}
]
```
New tenant? Create a config file with their field names. Zero code deployment.
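The same mapping logic, ported to a short Python sketch for illustration (the function name and sample data are mine, not from the flow):

```python
def apply_mapping(records, config):
    """Map each source record to canonical field names per the config.

    config is a list of {"source": ..., "target": ...} entries,
    mirroring the JSON mapping config above.
    """
    return [
        {entry["target"]: record.get(entry["source"]) for entry in config}
        for record in records
    ]

config = [
    {"source": "cust_id", "target": "customerId"},
    {"source": "cust_name", "target": "customerName"},
    {"source": "order_amt", "target": "orderAmount"},
]
records = [{"cust_id": "C-100", "cust_name": "Ada", "order_amt": 42.5}]
print(apply_mapping(records, config))
# → [{'customerId': 'C-100', 'customerName': 'Ada', 'orderAmount': 42.5}]
```

Note that `record.get` here has the same silent-null behavior as DataWeave's `record[field.source]`, which is exactly the trap described next.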
100 production-ready DataWeave patterns with tests: mulesoft-cookbook on GitHub
The Trap: Silent Nulls From Missing Fields
Tenant 8's source system renamed cust_id to customer_id in a schema update. Our config still said cust_id.
record["cust_id"] on a record that has customer_id returns null. Not an error. Not a warning. Just null.
3,400 records went through with customerId: null. The downstream CRM accepted them — null is valid JSON. I caught it 4 days later when the CRM team asked why customer IDs were missing.
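The failure mode is easy to reproduce in any language. A minimal Python analogue (the field names mirror the incident, the values are made up):

```python
config = [{"source": "cust_id", "target": "customerId"}]  # stale config
record = {"customer_id": "C-100"}  # source system renamed the field

# dict.get, like DataWeave's record[field.source], returns a null-ish
# value for a missing key instead of raising an error
mapped = {entry["target"]: record.get(entry["source"]) for entry in config}
print(mapped)  # → {'customerId': None} — no exception, no warning
```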
The Fix: Config Validation Before Mapping
Add a validation step in the header, before the mapping. Two details matter: keysOf returns Keys rather than Strings, so coerce before comparing, and the function that aborts a transform is fail from dw::Runtime:

```dataweave
%dw 2.0
output application/json
import fail from dw::Runtime
var config = payload.mappingConfig
// keysOf returns Keys — coerce to String so `--` compares like with like
var sourceKeys = keysOf(payload.sourceData[0]) map ($ as String)
var configSources = config map (field) -> field.source
var missing = configSources -- sourceKeys
---
if (!isEmpty(missing))
    fail("Config references fields not in source: " ++ (missing joinBy ", "))
else
    payload.sourceData map (record) -> ({
        (config map (field) -> ({
            (field.target): record[field.source]
        }))
    })
```
Now the flow fails fast with: "Config references fields not in source: cust_id". Clear. Actionable. It catches the mismatch before a single null-filled record is produced.
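The fix is language-agnostic: a set difference plus a hard failure. A Python sketch of the same check (function name and error type are mine):

```python
def validate_config(records, config):
    """Fail fast if the config references fields absent from the source."""
    if not records:
        return
    source_keys = set(records[0])
    missing = [e["source"] for e in config if e["source"] not in source_keys]
    if missing:
        raise ValueError(
            "Config references fields not in source: " + ", ".join(missing)
        )

try:
    validate_config(
        [{"customer_id": "C-100"}],                      # source after the rename
        [{"source": "cust_id", "target": "customerId"}], # stale config
    )
except ValueError as exc:
    print(exc)  # → Config references fields not in source: cust_id
```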
The Dynamic Key Expression Trap
The parentheses in (field.target): value are critical:
```dataweave
// CORRECT — dynamic key from variable
(field.target): record[field.source]

// WRONG — literal string "field.target" as key
field.target: record[field.source]
```
Without parentheses, DataWeave treats field.target as a literal key name. Your output has {"field.target": "C-100"} instead of {"customerId": "C-100"}.
This is the single most common mistake when building dynamic mappings. The parentheses syntax (expression): value evaluates the expression and uses the result as the key.
What I Do Now
- Every config-driven transform gets a validation step that runs before the mapping
- Config files are versioned alongside source system API versions
- I log the config-to-schema diff on every run — catches drift before it causes nulls
- New tenant onboarding takes 30 minutes instead of 2 days
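The config-to-schema diff I log on every run is a two-way set difference: fields the config expects but the source stopped sending (these become nulls), and fields the source sends that no config entry maps (these get silently dropped). A Python sketch, with illustrative names:

```python
def config_schema_diff(record_keys, config):
    """Diff between what the config expects and what the source sends."""
    expected = {entry["source"] for entry in config}
    actual = set(record_keys)
    return {
        # config drift: these will map to null
        "missing_in_source": sorted(expected - actual),
        # new source fields no config entry picks up
        "unmapped_in_source": sorted(actual - expected),
    }

diff = config_schema_diff(
    {"customer_id", "cust_name", "order_amt"},
    [{"source": "cust_id"}, {"source": "cust_name"}, {"source": "order_amt"}],
)
print(diff)
# → {'missing_in_source': ['cust_id'], 'unmapped_in_source': ['customer_id']}
```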
100 patterns with MUnit tests: github.com/shakarbisetty/mulesoft-cookbook
60-second video walkthroughs: youtube.com/@SanThaParv