Decoupling Orchestration from Reasoning
In this post, I’ll show how to design a clean, maintainable architecture for AI systems using Convo-Lang.
As a concrete example, I’ll use a hallucination-resistant AI agent that analyzes a job description, evaluates candidate fit against detailed professional experience, and generates a tailored resume only when the role is actually relevant.
In this setup, all reasoning and decision logic lives in Convo-Lang, while Python is used strictly for orchestration — loading inputs, executing agents, and wiring the pipeline together.
The goal of the example is not the resume itself. The goal is to demonstrate how to decouple orchestration from reasoning and build an AI system that is easy to understand, extend, and maintain over time.
The full working example is available in the Convo-Lang repository.
You can explore the complete code here: https://github.com/convo-lang/convo-lang/tree/main/packages/convo-lang-py/examples/02_patterns/resume_generator
You can clone it, run it locally, and experiment with it by simply replacing the job description and writing your own experience profile — the sample inputs live in the data/ folder.
What Convo-Lang actually is
Convo-Lang is not:
- a prompt template engine
- a thin wrapper around chat completions
- a “nicer way to write prompts”
Convo-Lang is a domain-specific language for LLM reasoning and agent workflows.
It allows you to define:
- explicit agent roles
- typed input and output contracts
- deterministic logic
- schema-enforced outputs
- multi-agent pipelines
All of this lives in .convo files — outside of application code.
Why resumes are a good stress test
Resume generation is a hostile domain for hallucinations:
- inventing skills is unacceptable
- inventing companies or roles is unacceptable
- inventing dates is unacceptable
- decisions must be explainable
A single “smart prompt” is the worst possible approach here.
So instead of asking how to prompt, I started by asking:
how should this system be modeled?
Modeling the system as Convo-Lang agents
The solution is built as five Convo-Lang agents, each responsible for exactly one thing:
- JobDescriptionAnalyzer: turns raw job text into structured requirements.
- CandidateProfileAnalyzer: converts free-form experience text into factual, structured data.
- ProfileJobMatcher: matches experience to requirements and explicitly lists gaps.
- ResumeWriter: generates a resume strictly from verified data.
- FitEvaluator: decides whether applying makes sense.
Each agent:
- lives in its own .convo file
- has a single responsibility
- communicates only through typed contracts
This separation is not cosmetic.
It is the foundation of reliability.
Typed contracts instead of “return JSON please”
In most LLM systems, structured output is a suggestion.
In Convo-Lang, it is a contract.
Here is a real schema used by the CandidateProfileAnalyzer agent:
```
>define
ProfileData = struct(
    workExperience: array(
        struct(
            title: string
            companyName: string
            firstDate: string
            lastDate?: string
            summary: string
            experience: array(string)
        )
    )
    projects?: array(
        struct(
            title: string
            firstDate: string
            lastDate?: string
            experience: array(string)
        )
    )
)
```
This immediately changes system behavior:
- required fields must exist
- optional fields are explicit
- invented fields are invalid
- downstream agents can trust the data shape
Hallucinations don’t silently propagate.
They violate the contract.
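For readers who think in Python, the contract idea can be sketched with plain dataclasses. This is an illustration of the principle only, not the Convo-Lang SDK API, and the helper function below is hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WorkExperience:
    title: str
    companyName: str
    firstDate: str
    summary: str
    experience: list[str]
    lastDate: Optional[str] = None  # optional fields are explicit, like lastDate? above

def parse_work_experience(raw: dict) -> WorkExperience:
    # Invented fields are invalid: reject any key outside the contract.
    allowed = {"title", "companyName", "firstDate", "lastDate", "summary", "experience"}
    unknown = set(raw) - allowed
    if unknown:
        raise ValueError(f"invented fields are invalid: {unknown}")
    # Missing required fields raise TypeError from the dataclass constructor.
    return WorkExperience(**raw)
```

The key property is the same as in the Convo-Lang schema: a hallucinated field does not silently flow downstream; it fails at the boundary.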
Validating inputs before any reasoning happens
Hallucinations often start before generation.
They start when invalid or ambiguous input quietly enters the system.
Convo-Lang allows agents to validate inputs explicitly, before any reasoning takes place.
```
>define
JobData = struct(
    title: string
    mustRequirements: array(string)
    niceToHaveRequirements: array(string)
    keywords: array(string)
)

>do
jobData = new(JobData job_data)
```
That single line enforces a lot:
- checks that job_data exists
- validates required fields
- enforces correct types
- rejects malformed input early
If the input does not match JobData, the agent does not proceed.
The model never reasons over invalid data.
Here, input validation is part of the agent’s contract, not an afterthought.
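The same fail-fast gate can be sketched in Python. The validator and the analyze_fit wrapper below are hypothetical stand-ins meant to show the ordering (validate first, reason second), not the SDK's actual mechanism:

```python
def validate_job_data(raw: dict) -> dict:
    # Reject malformed input before any LLM call happens.
    required = {"title": str, "mustRequirements": list,
                "niceToHaveRequirements": list, "keywords": list}
    for key, expected in required.items():
        if key not in raw:
            raise ValueError(f"missing required field: {key}")
        if not isinstance(raw[key], expected):
            raise TypeError(f"{key} must be {expected.__name__}")
    return raw

def analyze_fit(raw_job: dict, call_llm) -> str:
    job = validate_job_data(raw_job)  # fails fast: no tokens spent on bad input
    return call_llm(job)              # reasoning only ever sees validated data
```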
Explainable matching instead of opaque scoring
The ProfileJobMatcher agent does not produce a mysterious score.
It produces:
- only relevant roles and projects
- explicit matchReasons for each item
- two concrete gap lists: must-have and nice-to-have
```
MatchData = struct(
    coverageProfileData: struct(
        workExperience: array(
            struct(
                title: string
                companyName: string
                firstDate: string
                lastDate?: string
                summary: string
                experience: array(string)
                matchReasons: array(string)
            )
        )
        projects?: array(...)
    )
    gaps: struct(
        mustRequirements: array(string)
        niceToHaveRequirements: array(string)
    )
)
```
Nothing is hidden.
Every match and every gap is inspectable.
This output becomes the single source of truth for all downstream steps.
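Because every gap is explicit, downstream code can act on it directly. A minimal sketch, treating a MatchData-shaped result as a plain dict (the function name is hypothetical):

```python
def summarize_gaps(match_data: dict) -> str:
    # match_data follows the MatchData shape above, received as parsed JSON.
    gaps = match_data["gaps"]
    lines = []
    for req in gaps["mustRequirements"]:
        lines.append(f"MISSING (must-have): {req}")
    for req in gaps["niceToHaveRequirements"]:
        lines.append(f"missing (nice-to-have): {req}")
    return "\n".join(lines) if lines else "No gaps found."
```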
Deterministic logic inside the agent (not in prose)
A key feature of Convo-Lang is that deterministic logic lives next to reasoning.
In the FitEvaluator, the final decision is not guessed.
It is calculated.
```
>do
jobData = new(JobData job_data)
matchData = new(MatchData match_data)

totalConfidence = 100
jobRequirementsAmount = jobData.mustRequirements.length
requirementPoints = div(totalConfidence jobRequirementsAmount)
requirementGapAmount = matchData.gaps.mustRequirements.length
mainConfidence = mul(
    sub(jobRequirementsAmount requirementGapAmount)
    requirementPoints
)

decision = "apply"
if (lt(mainConfidence 70)) then (
    decision = "skip"
) elif (lt(mainConfidence 90)) then (
    decision = "maybe apply"
)
```
This is business logic:
- readable
- reviewable
- testable
The LLM explains the decision — but it does not invent the rules.
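Because the rules are ordinary arithmetic, they translate directly to Python. This is an illustrative port, not the SDK, and it assumes div performs ordinary float division:

```python
def evaluate_fit(must_requirements: list[str], must_gaps: list[str]) -> tuple[float, str]:
    # Each must-have requirement contributes an equal share of 100 points.
    total_confidence = 100
    requirement_points = total_confidence / len(must_requirements)
    # Confidence is the points earned by requirements that are NOT gaps.
    confidence = (len(must_requirements) - len(must_gaps)) * requirement_points
    if confidence < 70:
        decision = "skip"
    elif confidence < 90:
        decision = "maybe apply"
    else:
        decision = "apply"
    return confidence, decision
```

Being plain arithmetic, this is trivially unit-testable: four requirements with one gap yields 75 points and "maybe apply".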
Schema-enforced output with @json
Convo-Lang does not rely on “please return JSON”.
It enforces it.
```
@json RecommendationData

>user
Help the candidate decide whether applying for this job makes sense.
```
If the output does not match RecommendationData, it is invalid.
Structured output is no longer a best-effort promise.
It is a guarantee.
Python as an orchestrator, not a reasoning layer
So where does Python fit into this architecture?
Python is intentionally boring.
It does not:
- contain prompts
- contain business rules
- interpret free-form model output
It only:
- loads input data
- executes agents
- passes validated JSON between them
- handles I/O
```python
job_data = convo_job_description_analyzer.complete(...)
profile_data = convo_candidate_profile_analyzer.complete(...)
match_data = convo_profile_job_matcher.complete(...)
resume_data = convo_resume_writer.complete(...)
decision = convo_fit_evaluator.complete(...)
```
All intelligence lives in .convo.
Python is just the runtime.
This separation is deliberate.
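To make the plumbing concrete, here is a minimal sketch of such an orchestrator with the agents injected as plain callables. In the real example the agents come from .convo files via the SDK; the function and parameter names here are hypothetical:

```python
from typing import Callable

def run_pipeline(
    job_text: str,
    profile_text: str,
    analyze_job: Callable[[str], dict],
    analyze_profile: Callable[[str], dict],
    match: Callable[[dict, dict], dict],
    write_resume: Callable[[dict], dict],
    evaluate: Callable[[dict, dict], dict],
) -> dict:
    # Pure plumbing: each step receives validated JSON and returns validated JSON.
    # No prompts, no business rules, no interpretation of free-form output.
    job_data = analyze_job(job_text)
    profile_data = analyze_profile(profile_text)
    match_data = match(job_data, profile_data)
    resume_data = write_resume(match_data)
    decision = evaluate(job_data, match_data)
    return {"resume": resume_data, "decision": decision}
```

Injecting the agents as callables also makes the pipeline testable with fakes, without any LLM in the loop.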
Why this separation matters
By keeping reasoning in Convo-Lang and orchestration in Python:
- AI logic becomes portable
- behavior is consistent across CLI, editor, and SDK
- prompt changes don’t require backend redeploys
- agent logic can be reviewed like code
The agents folder becomes the product.
The SDK becomes an implementation detail.
What this example actually demonstrates
This post is not really about resumes.
It demonstrates that Convo-Lang lets you:
- treat LLM logic as first-class code
- build multi-agent systems without prompt chaos
- validate inputs and outputs explicitly
- make hallucinations visible instead of hidden
- scale reasoning without rewriting everything
That is why Convo-Lang is worth using.
Final takeaway
Hallucinations are rarely a model problem.
They are almost always an architecture problem.
Convo-Lang gives you the tools to fix that at the right level.
Resources
- Convo-Lang core: https://github.com/convo-lang/convo-lang
- Convo-Lang Python SDK: https://github.com/convo-lang/convo-lang/tree/main/packages/convo-lang-py
- Resume agent example: https://github.com/convo-lang/convo-lang/tree/main/packages/convo-lang-py/examples/02_patterns/resume_generator
- Documentation: https://learn.convo-lang.ai/