
João André Gomes Marques


Add governance to DSPy pipelines

DSPy pipelines are interesting because they optimize themselves. You write modules, the optimizer rewrites your prompts, and the final program runs in loops making LLM calls and tool invocations without you watching each step. That's the whole point.

But it also means you can lose track of what happened. A pipeline that ran overnight - which LLM calls did it make? What tools did it invoke? If it produced a weird output at 4am, good luck reconstructing the chain of events from logs alone.

Two lines after your imports

pip install "asqav[dspy]"
import asqav
import dspy
from asqav.extras.dspy import AsqavDSPyCallback

asqav.init(api_key="...")
dspy.configure(callbacks=[AsqavDSPyCallback(agent_name="my-agent")])

That's the entire setup. Once that callback is registered, every operation in your DSPy pipeline gets signed automatically.

What gets signed

The callback hooks into DSPy's event system and signs six event types:

  • dspy.module.start / dspy.module.end - when a module begins and finishes execution
  • dspy.lm.start / dspy.lm.end - every LLM call, with the prompt going in and completion coming out
  • dspy.tool.start / dspy.tool.end - any tool invocations the pipeline makes

Each event gets an ML-DSA-65 signature (FIPS 204). The signing happens server-side, so the SDK stays a thin API client. You end up with a cryptographic audit trail of every operation your pipeline performed.
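To make the shape of this concrete, here's a rough sketch of what a callback like this does under the hood. This is not asqav's actual implementation: the hook names mirror DSPy's callback interface, but `sign` is a fake local stand-in (a hash) for the server-side ML-DSA-65 signing, used only so the sketch runs without the service.

```python
import hashlib
import json


def sign(payload: dict) -> str:
    # Fake stand-in: the real SDK sends the event to asqav's API, which
    # returns an ML-DSA-65 (FIPS 204) signature. A hash is used here only
    # so this sketch runs offline.
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()


class SigningCallbackSketch:
    """Records one signed audit entry per DSPy event (illustrative only)."""

    def __init__(self, agent_name: str):
        self.agent_name = agent_name
        self.trail: list[dict] = []

    def _record(self, event: str, payload: dict) -> None:
        entry = {"agent": self.agent_name, "event": event, "payload": payload}
        entry["signature"] = sign(entry)
        self.trail.append(entry)

    # The six hooks corresponding to the six signed event types
    def on_module_start(self, call_id, instance, inputs):
        self._record("dspy.module.start", {"call_id": call_id, "inputs": inputs})

    def on_module_end(self, call_id, outputs, exception):
        self._record("dspy.module.end", {"call_id": call_id, "outputs": outputs})

    def on_lm_start(self, call_id, instance, inputs):
        self._record("dspy.lm.start", {"call_id": call_id, "inputs": inputs})

    def on_lm_end(self, call_id, outputs, exception):
        self._record("dspy.lm.end", {"call_id": call_id, "outputs": outputs})

    def on_tool_start(self, call_id, instance, inputs):
        self._record("dspy.tool.start", {"call_id": call_id, "inputs": inputs})

    def on_tool_end(self, call_id, outputs, exception):
        self._record("dspy.tool.end", {"call_id": call_id, "outputs": outputs})


cb = SigningCallbackSketch("my-agent")
cb.on_module_start("c1", None, {"question": "2+2?"})
cb.on_lm_start("c2", None, {"prompt": "2+2?"})
cb.on_lm_end("c2", {"completion": "4"}, None)
cb.on_module_end("c1", {"answer": "4"}, None)
print(len(cb.trail))  # → 4 signed entries for one module run with one LM call
```

The key property is that every start and end hook produces its own signed entry, so even a run that crashes mid-module leaves a verifiable partial trail.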

Why this fits DSPy well

DSPy programs are compiled. After optimization, the prompts and routing can look nothing like what you originally wrote. That's great for performance but terrible for observability. You need to know what the optimized program actually did, not what you think it should have done.

The callback sits at the execution layer, so it captures what really happens at runtime. Not the source code, not the original prompts - the actual calls the compiled program makes.

Also, DSPy pipelines tend to run in loops during optimization. That means lots of LLM calls, lots of tool invocations. Having a signed record of each one lets you audit the optimization process itself, not just the final output.
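Once every event is in the trail, auditing the loop is just a query over structured records. A minimal sketch (the entry shape here is assumed for illustration, not the SDK's actual schema):

```python
from collections import Counter

# Hypothetical audit trail from one optimization run: each signed event
# would also carry its payload and signature; only the event names matter here.
trail = [
    {"event": "dspy.lm.start"}, {"event": "dspy.lm.end"},
    {"event": "dspy.tool.start"}, {"event": "dspy.tool.end"},
    {"event": "dspy.lm.start"}, {"event": "dspy.lm.end"},
]

counts = Counter(entry["event"] for entry in trail)
print(counts["dspy.lm.start"])   # → 2 LLM calls in this run
print(counts["dspy.tool.start"]) # → 1 tool invocation
```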

Fail-open design

Same as the other asqav integrations - if the signing service has an issue, the callback logs a warning and your pipeline keeps running. Governance doesn't become a bottleneck or a failure point. Your DSPy program doesn't care whether the signature succeeded or not.
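The pattern itself is simple. A sketch of the fail-open idea (not asqav's actual code; `sign_event` and `record_event` are hypothetical names, and the raise simulates a signing-service outage):

```python
import logging

logger = logging.getLogger("signing-sketch")


def sign_event(event: dict) -> None:
    # Hypothetical stand-in for the SDK's API call; always raises here
    # to simulate the signing service being unreachable.
    raise ConnectionError("signing service unreachable")


def record_event(event: dict) -> None:
    try:
        sign_event(event)
    except Exception as exc:
        # Fail open: log a warning and let the pipeline continue.
        logger.warning("signing failed, continuing without signature: %s", exc)


record_event({"event": "dspy.lm.start"})
print("pipeline still running")  # reached even though signing failed
```

The trade-off is deliberate: you may lose a signature during an outage, but governance never becomes the reason a production pipeline goes down.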

Getting started

The SDK is MIT-licensed: github.com/jagmarques/asqav-sdk

Grab an API key at asqav.com/dashboard, add those two lines after your imports, and your next pipeline run will have a complete signed audit trail. No changes to your modules, no changes to your tools.
