If you’re working with Qiskit, you’ve probably noticed something pretty quickly:
Running circuits is straightforward.
Understanding what changed between runs… is not.
As experiments grow, context gets scattered—across notebooks, logs, and memory. That’s where QObserva fits in.
This post walks through a focused, Qiskit-only setup so you can start capturing structured telemetry from your runs with minimal changes.
## Why add observability on top of Qiskit?
Qiskit already gives you counts and raw results. But when you start iterating, a few gaps show up:
- It’s hard to compare runs over time
- Metadata (backend, parameters, noise) isn’t consistently tracked
- Context lives outside the code
QObserva adds a lightweight layer to address this:
- A standard schema across runs
- Derived metrics (entropy, dominance, efficiency)
- Dashboards for comparing experiments
- A consistent way to tag and search runs
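To make "derived metrics" concrete, here is a small sketch of how entropy and dominance could be computed from a Qiskit-style counts dict. The function name and return shape are mine for illustration, not QObserva's actual API:

```python
import math

def derived_metrics(counts):
    """Compute simple run-level metrics from a measurement counts dict."""
    shots = sum(counts.values())
    probs = [c / shots for c in counts.values()]
    # Shannon entropy of the outcome distribution, in bits
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)
    # Dominance: probability mass of the single most frequent outcome
    dominance = max(probs)
    return {"entropy": entropy, "dominance": dominance}
```

For an ideal Bell state, `{"00": 512, "11": 512}` gives an entropy of 1.0 bit and a dominance of 0.5; drift in either number across runs is exactly the kind of signal a dashboard makes easy to spot.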
## 1. Install QObserva with Qiskit support
If you’re starting with Qiskit only:
```shell
pip install qobserva
pip install qobserva-agent[qiskit]
```
If you plan to explore other SDKs later:
```shell
pip install qobserva-agent[all-sdks]
```
## 2. Start QObserva locally
Once installed, bring up the local stack:
```shell
qobserva up
```
Keep this running while executing your Qiskit scripts.
## 3. Instrument your Qiskit run
You don’t need to rewrite your workflow. Just wrap your existing run with `@observe_run`:
```python
from qobserva import observe_run

@observe_run(
    project="qiskit_lab",
    tags={
        "sdk": "qiskit",
        "algorithm": "bell_state",
        "backend": "ibmq_qasm_simulator",
    },
)
def run_experiment():
    result = execute_quantum_circuit()
    return result

if __name__ == "__main__":
    run_experiment()
```
This captures run-level metadata automatically.
Explore a sample Qiskit simulation here: https://github.com/BuildersArk/qobserva/blob/main/examples/qiskit_example.py
## 4. Tagging Qiskit runs (what actually matters)
Good observability starts with good tagging.
A simple pattern:
- `sdk` → always `"qiskit"`
- `algorithm` → `"vqe"`, `"qaoa"`, `"bell_state"`
- `backend` → simulator or device name

Optional but useful: `noise_model`, `optimizer`, `ansatz`.
Start simple—expand only when needed.
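Put together, the pattern above might look like the tag dict below for a hypothetical VQE run. The optional keys and their values are illustrative choices, not a schema QObserva requires:

```python
# Core tags plus optional ones for a hypothetical VQE experiment
vqe_tags = {
    "sdk": "qiskit",             # always "qiskit" in this setup
    "algorithm": "vqe",          # e.g. "vqe", "qaoa", "bell_state"
    "backend": "aer_simulator",  # simulator or device name
    # Optional: add only when the experiment varies along these axes
    "noise_model": "depolarizing",
    "optimizer": "COBYLA",
    "ansatz": "TwoLocal",
}
```

Keeping the core three keys identical across every run is what makes filtering and comparison in the dashboard reliable.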
## 5. Exploring runs in the dashboard
After running a few experiments:
- Filter by `project="qiskit_lab"`
- Compare runs by `algorithm` or `backend`
- Observe how metrics evolve across iterations
This is where things become clearer:
- Which changes actually mattered
- How different backends behave
- Where performance shifts occur
Below are a few sample screenshots from the dashboard:
- Execution overview
- Run Details
- Compare Runs
## 6. What this unlocks
Even with a few runs instrumented, you’ll start to see patterns:
- Small parameter changes → measurable differences
- Backend choices → visible tradeoffs
- Experiments → easier to compare and reason about
Instead of reconstructing context, you have it captured upfront.
## Explore QObserva
If you’d like to go deeper or try this out in your own workflows:
- Website: https://qobserva.dev
- GitHub: https://github.com/BuildersArk/qobserva
The repository includes setup instructions, examples, and the latest updates as we continue to iterate on QObserva.
## What’s next
In upcoming posts, we’ll show how this pattern is applied to:
- Amazon Braket
- Cirq
- PennyLane
- pyQuil
- D-Wave
The goal is to keep a single observability layer across all SDKs, so comparisons become straightforward.

