👋 Hello World!
If you are a developer exploring quantum machine learning, or a physicist tired of rewriting code to make it run on GPUs, you have likely faced the "Framework Dilemma."
- Do you write in PyTorch because you need the dataloaders?
- Do you switch to JAX for that sweet JIT compilation speed?
- Do you stick to TensorFlow because of legacy production pipelines?
What if your quantum simulator didn't care?
Meet TensorCircuit-NG (Next Generation): the open-source, tensor-native platform that unifies quantum physics, AI, and high-performance computing.
🚀 What is TensorCircuit-NG?
TensorCircuit-NG is not just another circuit simulator. It is a backend-agnostic computational infrastructure.
It is designed to let you define your physics logic once and execute it anywhere. It wraps industry-standard ML frameworks (JAX, TensorFlow, PyTorch) into a unified engine, making quantum simulation end-to-end differentiable and hardware-accelerated.
🛠️ The "Write Once, Run Anywhere" Philosophy
The killer feature of TensorCircuit-NG is Infrastructure Unification.
You don't need to learn a new dialect for every backend. You simply switch the engine with one line of code:
```python
import tensorcircuit as tc

# Want JAX for JIT speed?
tc.set_backend("jax")

# Want PyTorch for easy integration with your existing DL models?
tc.set_backend("pytorch")

# Legacy TensorFlow project?
tc.set_backend("tensorflow")
```
This flexibility enables radical interoperability. You can train a hybrid model where the data pipeline lives in PyTorch while the heavy-duty quantum circuit simulation is JIT-compiled via JAX/XLA for massive speedups, with zero-copy tensor transfers (DLPack) handled under the hood.
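To see what a zero-copy hand-off means in practice, here is a minimal sketch of the DLPack protocol itself, using only NumPy (which implements both the producer and consumer ends); this is an illustration of the underlying mechanism, not TensorCircuit-NG's own transfer code:

```python
import numpy as np

# DLPack is a framework-neutral, in-memory tensor exchange format.
# Frameworks like JAX and PyTorch use it to hand tensors to each other
# without copying; NumPy can play both roles, so we can demo it alone.
producer = np.arange(4, dtype=np.float32)

# "Import" the tensor through DLPack, as a consumer framework would.
consumer = np.from_dlpack(producer)

# No copy was made: both names view the same underlying buffer.
print(np.shares_memory(producer, consumer))  # True
```

Because the buffer is shared rather than serialized and copied, the hand-off cost is constant regardless of tensor size.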
⚡ Why You Should Try It
1. Native Machine Learning Integration
We treat quantum circuits as first-class citizens in the computational graph.
- Plug-and-Play Layers: Use `tc.TorchLayer` or `tc.KerasLayer` to insert parameterized quantum circuits directly into classical ResNets or Transformers.
- Automatic Differentiation (AD): Forget parameter-shift rules. We compute gradients via backpropagation through the tensor network, avoiding the two extra circuit evaluations per parameter that parameter-shift requires and making VQE and QML training far faster.
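For context on what backpropagation replaces: the parameter-shift rule computes an exact gradient but needs two full circuit evaluations per parameter. A one-qubit sketch in plain NumPy (an illustration of the rule, not TensorCircuit-NG code) makes the cost concrete:

```python
import numpy as np

def expect_z(theta):
    """<Z> after applying RX(theta) to |0>, by direct statevector simulation."""
    rx = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                   [-1j * np.sin(theta / 2), np.cos(theta / 2)]])
    psi = rx @ np.array([1.0, 0.0])
    z = np.diag([1.0, -1.0])
    return np.real(psi.conj() @ z @ psi)  # analytically, cos(theta)

theta = 0.7
# Parameter-shift rule: TWO evaluations for ONE parameter's gradient.
shift_grad = (expect_z(theta + np.pi / 2) - expect_z(theta - np.pi / 2)) / 2
print(shift_grad)  # matches the analytic derivative -sin(theta)
```

With a million-parameter ansatz that is two million circuit runs per gradient step; backpropagation through the simulated tensor network gets all gradients from a single forward/backward pass.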
2. HPC-Ready Scalability
Stop simulating on your CPU. TensorCircuit-NG supports:
- GPU/TPU Acceleration: Move simulations to NVIDIA GPUs or Google TPUs without changing your physics code.
- Distributed Computing: We support automated data parallelism (scaling to multiple devices) and model parallelism (tensor network slicing across GPU clusters).
- Benchmark: We've demonstrated near-linear speedups on 8x NVIDIA H200 GPU clusters, simulating end-to-end variational quantum algorithms with 40+ qubits.
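The slicing idea behind model parallelism can be illustrated with a toy contraction in plain NumPy (a conceptual sketch, not the actual TCNG scheduler): a contraction over a shared index decomposes into independent partial contractions whose results are summed.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8))
b = rng.standard_normal((8, 4))

# Full contraction over the shared index k (size 8).
full = a @ b

# "Slice" the shared index into chunks, contract each chunk independently
# (each slice could live on its own GPU), then sum the partial results.
partials = [a[:, k:k + 2] @ b[k:k + 2, :] for k in range(0, 8, 2)]
sliced = sum(partials)

print(np.allclose(full, sliced))  # True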
3. Advanced Physics Engines
It's not just for qubits. TCNG comes with batteries included for:
- Fermionic Gaussian States: Efficiently simulate thousands of fermions.
- Qudits: Native support for high-dimensional systems (d ≥ 3).
- Noise Modeling: Customizable noise channels for realistic hardware simulation.
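For intuition on noise channels: the simplest ones are probabilistic mixtures of unitaries acting on a density matrix. A toy bit-flip channel in plain NumPy (illustrative only, not the TCNG channel API):

```python
import numpy as np

# Bit-flip channel: rho -> (1 - p) * rho + p * X rho X
x = np.array([[0.0, 1.0], [1.0, 0.0]])
rho = np.array([[1.0, 0.0], [0.0, 0.0]])  # pure |0><0|
p = 0.1

rho_out = (1 - p) * rho + p * (x @ rho @ x)

print(np.trace(rho_out))  # 1.0 -- the channel preserves trace
print(rho_out[1, 1])      # 0.1 -- probability the qubit flipped to |1>
```

Realistic hardware models chain several such channels (dephasing, amplitude damping, depolarizing) after each gate, which is exactly what customizable channels enable.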
💻 Show Me The Code
Here is how simple it is to build a differentiable variational circuit (VQE) that runs on any backend:
```python
import tensorcircuit as tc

# 1. Select your fighter (backend)
tc.set_backend("jax")  # or "pytorch", "tensorflow"

def vqe_loss(params, n=6):
    c = tc.Circuit(n)
    # 2. Build circuit (hardware-efficient ansatz)
    for i in range(n):
        c.rx(i, theta=params[i])
    for i in range(n - 1):
        c.cnot(i, i + 1)
    # 3. Calculate expectation -- this entire process is differentiable!
    e = c.expectation_ps(z=[0, 1])
    return tc.backend.real(e)

# 4. Get gradients (backend-agnostic API)
# This works regardless of whether you chose JAX, TF, or Torch
val_and_grad = tc.backend.jit(tc.backend.value_and_grad(vqe_loss))

# Run it!
print(val_and_grad(tc.backend.ones(6)))
```
🤝 Join the Community
TensorCircuit-NG is Open Source (Apache 2.0) and ready for you to hack on.
- GitHub: Check out the Repository
- Install: `pip install tensorcircuit-ng`
- Docs: Read the Documentation
Whether you are building the next QML image classifier or simulating many-body physics, we'd love to see what you build.
Happy Coding! ⚛️