Sergey Inozemtsev

Prompts are logic, not strings: Why I contributed to Convo-Lang

If you’ve built anything non-trivial with LLMs, you’ve probably written code like this:

prompt = f"""
Analyze this job description: {job_description}
Analyze this candidate profile: {candidate_profile}
Decide whether the candidate is a good fit.
Return JSON.
"""

It works.
Until your project grows.


The problem: prompt spaghetti and technical debt

Hardcoding prompts directly into application code feels convenient at first.
But very quickly it turns into long-term technical debt:

  • Prompts become unreadable f-string monsters
  • Prompt changes require code changes and redeploys
  • Prompt versions drift across files and branches
  • Prompt engineers and copywriters are afraid to touch .py
  • Prompt logic, business logic, and orchestration logic get mixed together

At that point, prompts are:

  • hard to test
  • hard to reuse
  • hard to reason about

We already solved this problem for SQL, HTML, configs, and templates.
LLM prompts deserve the same treatment.


Prompts are not strings — they are logic

A modern LLM “prompt” is not just text.

It contains:

  • structure
  • contracts
  • conditions
  • branching
  • deterministic steps

Treating it as a Python string literal is the fastest way to lose control over your AI system.

That’s where Convo-Lang comes in.


Convo-Lang as an AI-native DSL

Convo-Lang is an open-source, AI-native DSL for building conversations and agent workflows.

Instead of embedding prompts in code, you define them in .convo files, which give you:

  • explicit schemas
  • role-based messages
  • deterministic logic blocks
  • structured outputs
  • multi-agent workflows

Your prompt becomes a first-class artifact, not a string buried in code.


How it works: Python as a thin runtime

Here’s all the Python code required to run a single agent:

print("Running FitEvaluator agent...")

convo_fit_evaluator = Conversation(agent_configs)
convo_fit_evaluator.add_convo_file("agents/fitEvaluator.convo")

job_apply_decision = convo_fit_evaluator.complete(
    variables={"job_data": job_data, "match_data": match_data}
)

Notice what’s missing:

  • no prompt text
  • no formatting logic
  • no hidden reasoning

Python is just the runtime.
All intelligence lives in .convo.


Typed contracts instead of free-form prompts

The core idea that changed how I think about prompts:

Agents should communicate through typed contracts, not vague instructions.

Example schema definitions used by an agent:

>define
JobData = struct(
  title:string
  mustRequirements:array(string)
  niceToHaveRequirements:array(string)
)

RecommendationData = struct(
  recommendation: struct(
    decision: enum("apply","maybe apply","skip")
    confidence: number
    summary: string
  )
)

This gives you:

  • explicit input shapes
  • explicit output contracts
  • predictable agent behavior

The agent no longer “guesses” what to return.
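
To make that concrete, here is a sketch of what a validated response could look like on the Python side. The values are hypothetical; only the shape comes from the RecommendationData struct above.

# Hypothetical response satisfying RecommendationData (values made up).
recommendation_data = {
    "recommendation": {
        "decision": "maybe apply",  # constrained to "apply" | "maybe apply" | "skip"
        "confidence": 78,           # number
        "summary": "Covers all must-haves; two gaps in nice-to-haves.",
    }
}

# Downstream code can rely on the keys existing and the enum being valid.
decision = recommendation_data["recommendation"]["decision"]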


Deterministic logic lives next to the prompt

Convo-Lang is not just a prompt format.
It allows you to define explicit, deterministic logic inside the agent.

>do
jobData = new(JobData job_data)
matchData = new(MatchData match_data)

total = 100
reqCount = jobData.mustRequirements.length
niceCount = jobData.niceToHaveRequirements.length

reqPoints = div(total reqCount)
nicePoints = div(reqPoints 4)

reqGaps = matchData.gaps.mustRequirements.length
niceGaps = matchData.gaps.niceToHaveRequirements.length

confidence = add(
  mul(sub(reqCount reqGaps) reqPoints)
  mul(sub(niceCount niceGaps) nicePoints)
)

decision = "apply"
if (lt(confidence 70)) then (decision = "skip")
elif (lt(confidence 90)) then (decision = "maybe apply")

This is business logic, not prompt prose.
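
If it's easier to read in Python, here is the same scoring logic transliterated. It assumes job_data and match_data are plain dicts shaped like the JobData and MatchData structs; this is an illustration, not SDK code.

# Each must-have requirement is worth an equal share of 100 points;
# each nice-to-have is worth a quarter of that share.
req_count = len(job_data["mustRequirements"])
nice_count = len(job_data["niceToHaveRequirements"])

req_points = 100 / req_count
nice_points = req_points / 4

req_gaps = len(match_data["gaps"]["mustRequirements"])
nice_gaps = len(match_data["gaps"]["niceToHaveRequirements"])

# Confidence = points earned for requirements that are NOT gaps.
confidence = (req_count - req_gaps) * req_points + (nice_count - nice_gaps) * nice_points

decision = "apply"
if confidence < 70:
    decision = "skip"
elif confidence < 90:
    decision = "maybe apply"

The thresholds are ordinary deterministic code; the model never gets a chance to improvise them.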


Schema-enforced output with @json

@json RecommendationData
>user
Return recommendation for this candidate and job.

This is not a suggestion.
It is schema-enforced output validation.

Malformed or invalid responses don’t silently pass through.
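
For contrast, here's the kind of defensive parsing you end up hand-writing when the contract lives only in prose. This is a generic sketch of the pattern @json replaces, not part of the SDK:

import json

def parse_recommendation(raw: str) -> dict:
    """Manually validate a free-form LLM response against the expected shape."""
    data = json.loads(raw)        # raises on malformed JSON
    rec = data["recommendation"]  # raises KeyError if the key is missing
    if rec["decision"] not in ("apply", "maybe apply", "skip"):
        raise ValueError(f"unexpected decision: {rec['decision']!r}")
    if not isinstance(rec["confidence"], (int, float)):
        raise ValueError("confidence must be a number")
    return rec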


Cross-SDK portability by design

Because every environment runs the same TypeScript core, Convo-Lang behaves identically across:

  • VS Code preview
  • CLI
  • Python runtime

If your prompt passes validation in the editor, it will behave the same way in your Python backend.

Write once. Run anywhere.


Architecture: a smart bridge, not a rewrite

The Python SDK does not reimplement the Convo-Lang engine.

Instead, it acts as a high-performance bridge to the Node.js core, which handles parsing, validation, and async I/O.

This preserves full syntax and behavior parity across SDKs.
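
Setting the SDK's internals aside, the general pattern is easy to picture: Python hands a request to a Node.js process and reads back a JSON reply. A toy illustration of that bridge pattern, where engine.js is a hypothetical stand-in for the real core:

import json
import subprocess

def call_node_engine(request: dict) -> dict:
    # Spawn the Node.js core, send the request on stdin, parse the JSON reply.
    proc = subprocess.run(
        ["node", "engine.js"],
        input=json.dumps(request),
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(proc.stdout)

One shared engine is what keeps the Python SDK from drifting out of sync with the TypeScript implementation.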


Separation of concerns — for real

With this approach:

  • .convo files own AI reasoning and decision logic
  • Python only orchestrates execution
  • Prompt engineers don’t touch backend code
  • Developers don’t rewrite prompts

The agents/ folder is the product.
Python is just the runtime.


Why I contributed to the Python SDK

I believe AI workflows need standards.

Prompts should be portable, testable, and explicit.

That’s why I helped bring Convo-Lang to Python.

