Posts 1 and 2 in this series explained the problem and showed what the connector code looks like before SYNAPSE. This post skips straight to showing it working.
What you are looking at
The demo runs a three-model legal document pipeline:
- A named entity recognition model extracts parties, jurisdictions, and dates from a contract clause
- An obligation classifier assigns each entity a role — licensor, licensee, jurisdiction
- A compliance scorer checks the obligations against a GDPR policy
Each model was built by a different team. Each one expects a completely different input format and produces a completely different output format. In a standard pipeline, you would write custom connector code between every model pair — and maintain it every time any model updates its schema.
In the demo, there is no connector code. There are adapter functions.
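In spirit, the orchestration loop this enables reduces to a handful of lines. The sketch below is illustrative, not the SYNAPSE SDK: the IR class, the ingress/egress signatures, and the toy model are all assumptions made for the example.

```python
from dataclasses import dataclass, field

# Hypothetical minimal IR for illustration; the real canonical IR
# carries a richer payload, task header, and provenance chain.
@dataclass
class IR:
    payload: dict
    task_header: dict = field(default_factory=dict)
    provenance: list = field(default_factory=list)

def run_pipeline(ir, models):
    """Chain models through the canonical IR; no pairwise connectors."""
    for model in models:
        native_in = model.ingress(ir)          # IR -> model's native input
        native_out = model.predict(native_in)  # model runs in its own format
        ir = model.egress(native_out, ir)      # native output -> updated IR
    return ir

# Toy stand-in for one pipeline stage.
class UppercaseModel:
    name = "uppercase"
    def ingress(self, ir):
        return ir.payload["text"]
    def predict(self, text):
        return text.upper()
    def egress(self, out, ir):
        ir.payload["text"] = out
        ir.provenance.append({"model": self.name})
        return ir

result = run_pipeline(IR(payload={"text": "hello"}), [UppercaseModel()])
```

The point of the loop is that it never mentions any model pair: adding a fourth model means writing its two adapters, not three new connectors.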
The one thing to watch for
When Hop 2 appears, look at the left panel — the native input the classifier receives from its ingress adapter.
The NER model produced a field called label. The classifier expects a field called entity_type. Same concept. Completely different names.
The ingress adapter translates between them in four lines, written once. It lives in the adapter, not in a connector file, not in shared pipeline utilities, not in a bridge module that only one person understands. It is part of the model's own interface definition.
When the obligation classifier is updated — when the team that maintains it changes its schema — that translation is updated in the adapter. The scorer downstream never knows anything changed. The NER model upstream never knows anything changed. The canonical IR absorbed it.
What the adapter actually looks like
def ingress(self, ir):
    return [{
        "text": e["text"],
        "entity_type": e["label"],  # label → entity_type
        "context_window": ir.payload.content[:80],
        "threshold": ir.task_header.quality_floor or 0.7,
    } for e in (ir.payload.entities or [])]
That is the complete ingress function for the obligation classifier. It reads from the canonical IR and produces the classifier's native input format. The field name translation happens here and nowhere else.
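The egress direction mirrors it: read the model's native output, write canonical IR fields. Here is a sketch with hypothetical names — obligations, predicted_role, and score are assumptions for the example, not the real schema:

```python
from types import SimpleNamespace

def egress(native_output, ir):
    # Sketch only: field names are illustrative, not the real SDK schema.
    # Write the classifier's native output back into the canonical IR.
    ir.payload.obligations = [
        {
            "text": o["text"],
            "role": o["predicted_role"],  # native name -> IR name
            "confidence": o["score"],
        }
        for o in native_output
    ]
    return ir

# Minimal stand-in IR so the sketch runs on its own.
ir = SimpleNamespace(payload=SimpleNamespace(obligations=None))
out = egress(
    [{"text": "shall notify", "predicted_role": "licensee", "score": 0.91}],
    ir,
)
```

As with ingress, any renaming the model's native format requires lives in this one function, owned by the model's own team.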
The provenance chain
After the pipeline completes, the demo shows the full provenance chain — one immutable entry per model, appended in order. Each entry records which model ran, what confidence score it reported, how long it took, and what it cost.
No model can modify a prior entry. The chain is append-only by design. In a production pipeline running HIPAA or GDPR-sensitive data, this chain is your audit trail — automatically maintained by the adapters, without any application code.
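An append-only chain can be sketched as frozen entries behind a read-only accessor. The entry fields mirror what the post says each entry records, but the class names and API below are assumptions, not the SDK:

```python
from dataclasses import dataclass

# Hypothetical provenance entry: one immutable record per model run.
@dataclass(frozen=True)
class ProvenanceEntry:
    model: str
    confidence: float
    latency_ms: float
    cost_usd: float

class ProvenanceChain:
    """Append-only: entries can be added and read, never rewritten."""
    def __init__(self):
        self._entries = []

    def append(self, entry: ProvenanceEntry):
        self._entries.append(entry)

    def entries(self):
        return tuple(self._entries)  # immutable read-only view

chain = ProvenanceChain()
chain.append(ProvenanceEntry("ner", 0.97, 41.0, 0.0002))
chain.append(ProvenanceEntry("obligation-classifier", 0.91, 63.0, 0.0004))
```

Frozen dataclasses make individual entries immutable, and exposing only a tuple view keeps callers from reordering or deleting prior entries.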
Try it yourself
The demo uses pre-computed outputs, but the contract clause is editable. Change the party names or the jurisdiction and re-run — the pipeline logic stays the same, and the displayed entities update to reflect what you typed.
If you want to go further, the SDK is on PyPI:
pip install synapse-adapter-sdk
The validator will tell you if your adapter is conformant before you register it with any registry:
synapse-validate --adapter my_module.MyAdapter --all-fixtures
Links
This is post 3 in the Building SYNAPSE series. Post 1 covered what MCP solves and what sits above it. Post 2 showed what connector code actually looks like before and after.