QObserva Labs
Configuring observability for your Qiskit runs with QObserva

If you’re working with Qiskit, you’ve probably noticed something pretty quickly:

Running circuits is straightforward.
Understanding what changed between runs… is not.

As experiments grow, context gets scattered—across notebooks, logs, and memory. That’s where QObserva fits in.

This post walks through a focused, Qiskit-only setup so you can start capturing structured telemetry from your runs with minimal changes.


Why add observability on top of Qiskit?

Qiskit already gives you counts and raw results. But when you start iterating, a few gaps show up:

  • It’s hard to compare runs over time
  • Metadata (backend, parameters, noise) isn’t consistently tracked
  • Context lives outside the code

QObserva adds a lightweight layer to address this:

  • A standard schema across runs
  • Derived metrics (entropy, dominance, efficiency)
  • Dashboards for comparing experiments
  • A consistent way to tag and search runs
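To make the "derived metrics" idea concrete, here is a minimal sketch of how entropy and dominance can be computed from a Qiskit-style counts dictionary. The exact formulas QObserva uses may differ; this is only an illustration using the standard definitions (Shannon entropy of the outcome distribution, and the probability mass of the most frequent outcome):

```python
import math

def derived_metrics(counts):
    """Illustrative run metrics from a Qiskit-style counts dict."""
    shots = sum(counts.values())
    probs = [c / shots for c in counts.values()]
    # Shannon entropy of the measured outcome distribution, in bits
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)
    # Dominance: how much probability mass the top outcome holds
    dominance = max(probs)
    return {"entropy": entropy, "dominance": dominance}

# A noisy Bell-state run concentrates mass on "00" and "11"
metrics = derived_metrics({"00": 510, "11": 490, "01": 16, "10": 8})
```

Tracking metrics like these per run is what makes run-to-run comparison meaningful: a jump in entropy between two otherwise identical runs is an immediate signal that something (noise model, backend, parameters) changed.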

1. Install QObserva with Qiskit support

If you’re starting with Qiskit only:

pip install qobserva
pip install qobserva-agent[qiskit]

If you plan to explore other SDKs later:

pip install qobserva-agent[all-sdks]

2. Start QObserva locally

Once installed, bring up the local stack:

qobserva up

Keep this running while executing your Qiskit scripts.


3. Instrument your Qiskit run

You don’t need to rewrite your workflow. Just wrap your existing run with @observe_run:

from qobserva import observe_run

@observe_run(
    project="qiskit_lab",
    tags={
        "sdk": "qiskit",
        "algorithm": "bell_state",
        "backend": "ibmq_qasm_simulator",
    },
)
def run_experiment():
    result = execute_quantum_circuit()  # your existing Qiskit logic goes here
    return result

if __name__ == "__main__":
    run_experiment()

This captures run-level metadata automatically.
Explore a sample Qiskit simulation here: https://github.com/BuildersArk/qobserva/blob/main/examples/qiskit_example.py
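If you're curious what a decorator like this does conceptually, the sketch below shows the general pattern: record metadata before the call, run the wrapped function, and emit a structured record afterward. This is not QObserva's implementation, just a hypothetical stand-in that prints the record as JSON instead of shipping it to the local stack:

```python
import functools
import json
import time

def observe_run_sketch(project, tags):
    """Hypothetical stand-in for qobserva.observe_run, for illustration only."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            started = time.time()
            result = fn(*args, **kwargs)
            record = {
                "project": project,
                "tags": tags,
                "duration_s": time.time() - started,
                "result_summary": repr(result)[:200],
            }
            # QObserva would send this to its local stack instead of printing
            print(json.dumps(record))
            return result
        return wrapper
    return decorator

@observe_run_sketch(project="qiskit_lab", tags={"sdk": "qiskit"})
def run_experiment():
    return {"00": 512, "11": 512}
```

The key point is that the wrapped function's return value is untouched; instrumentation is purely additive, which is why no workflow rewrite is needed.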


4. Tagging Qiskit runs (what actually matters)

Good observability starts with good tagging.

A simple pattern:

  • sdk → always "qiskit"
  • algorithm → "vqe", "qaoa", "bell_state"
  • backend → simulator or device name

Optional but useful:

  • noise_model
  • optimizer
  • ansatz

Start simple—expand only when needed.
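Putting the pattern together, a VQE run might carry a tags dict like the one below. The specific values are illustrative, not required by QObserva; the optional keys are just additional dict entries passed to the decorator:

```python
# Core tags every run carries, plus optional keys once they start
# mattering for comparisons. All values here are illustrative.
tags = {
    "sdk": "qiskit",
    "algorithm": "vqe",
    "backend": "aer_simulator",
    # optional:
    "noise_model": "none",
    "optimizer": "COBYLA",
    "ansatz": "EfficientSU2",
}
```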


5. Exploring runs in the dashboard

After running a few experiments:

  • Filter by project="qiskit_lab"
  • Compare runs by algorithm or backend
  • Observe how metrics evolve across iterations

This is where things become clearer:

  • Which changes actually mattered
  • How different backends behave
  • Where performance shifts occur
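Programmatically, these comparisons boil down to filtering and grouping run records by project and tag. The sketch below assumes a simple record shape (a dict per run) purely for illustration; the dashboard does the equivalent for you:

```python
from collections import defaultdict

# Hypothetical run records, shaped like the metadata captured per run
runs = [
    {"project": "qiskit_lab", "tags": {"algorithm": "vqe", "backend": "aer_simulator"}, "entropy": 1.12},
    {"project": "qiskit_lab", "tags": {"algorithm": "vqe", "backend": "ibmq_qasm_simulator"}, "entropy": 1.31},
    {"project": "qiskit_lab", "tags": {"algorithm": "qaoa", "backend": "aer_simulator"}, "entropy": 0.97},
]

def compare_by(runs, tag_key, project=None):
    """Group runs by one tag's value, optionally filtering on project first."""
    groups = defaultdict(list)
    for run in runs:
        if project is not None and run["project"] != project:
            continue
        groups[run["tags"][tag_key]].append(run)
    return dict(groups)

by_backend = compare_by(runs, "backend", project="qiskit_lab")
```

Grouping by "backend" here surfaces the tradeoff directly: the same algorithm shows different entropy on different backends, which is exactly the kind of shift consistent tagging makes visible.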

Below are a few sample screenshots from the dashboard:

  • Execution overview
  • Run details
  • Compare runs

6. What this unlocks

Even with a few runs instrumented, you’ll start to see patterns:

  • Small parameter changes → measurable differences
  • Backend choices → visible tradeoffs
  • Experiments → easier to compare and reason about

Instead of reconstructing context, you have it captured upfront.


Explore QObserva

If you’d like to go deeper or try this out in your own workflows, head to the repository: https://github.com/BuildersArk/qobserva

It includes setup instructions, examples, and the latest updates as we continue to iterate on QObserva.


What’s next

In upcoming posts, we’ll show how this pattern is applied to:

  • Amazon Braket
  • Cirq
  • PennyLane
  • pyQuil
  • D-Wave

The goal is to keep a single observability layer across all SDKs, so comparisons become straightforward.
