I bought four NVIDIA CMP 100-210 cards off the secondhand market for about $130 each. They are ex-mining cards based on the
Volta GV100 die — same silicon as the V100 — with 16 GB of HBM2 each. On paper, four of them give me 64 GB of HBM2 for
the price of a single used 3090.
In practice, NVIDIA had crippled them in hardware.
The throttle
The CMP 100-210 has its tensor cores throttled 64×. HMMA latency is stretched from 8 cycles to 512. cuBLAS WMMA caps out
at about 5 TFLOPS per card. PCIe is locked to Gen1 x1, no P2P, no NVLink. CUPTI is blocked, so you can't even use NVIDIA's
own profiler.
The throttle is enforced by an e-fuse + PMU bootrom double-lock on the die. This isn't a firmware switch — it's blown into
the silicon. There is no software unlock. (Yes, I tried.)
The result: anything that goes through cuBLAS tensor cores runs at 1/64 speed or fails outright. That's vLLM, llama.cpp's
default cuBLAS path, FlashAttention, bitsandbytes, PyTorch's default matmul. The standard LLM inference stack is unusable
on this hardware.
So I wrote my own.
The workaround
It turns out NVIDIA only throttled tensor cores. Two other paths on the same chip are full speed:
- DP4A (4-way packed int8 dot product): ~17 TOPS, no throttle
- HFMA2 (2-way packed fp16 fused multiply-add): ~24 TFLOPS, no throttle
Neither is as fast as a healthy V100's tensor cores, but both are far above the 5 TFLOPS cuBLAS WMMA ceiling. Routing all
of inference through these two paths gets you back to roughly half of what an unthrottled V100 would do, which is still
vastly better than nothing.
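For concreteness, the DP4A path reduces to one instruction: __dp4a, a 4-way packed int8 multiply-accumulate into a 32-bit integer. Below is a minimal sketch of a dot-product kernel built on it — not qengine's GEMM tile, just the primitive the Q8_0 prefill path is built around; the block size and names are illustrative.

```cuda
#include <cuda_runtime.h>
#include <cstdint>

// Minimal DP4A sketch: one block reduces an int8 dot product.
// Launch with 256 threads per block; a and b must be 4-byte aligned.
// This is the primitive, not qengine's tiled Q8_0 GEMM.
__global__ void dp4a_dot(const int8_t* __restrict__ a,
                         const int8_t* __restrict__ b,
                         int n_packed,              // number of 4-int8 groups
                         int* __restrict__ out) {
    const int* a4 = reinterpret_cast<const int*>(a);  // 4 x int8 per 32-bit word
    const int* b4 = reinterpret_cast<const int*>(b);

    int acc = 0;
    for (int i = threadIdx.x; i < n_packed; i += blockDim.x) {
        // 4 int8 multiplies + adds per instruction, unthrottled on the CMP 100-210.
        acc = __dp4a(a4[i], b4[i], acc);
    }

    // Block-wide tree reduction of the per-thread accumulators.
    __shared__ int smem[256];
    smem[threadIdx.x] = acc;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s) smem[threadIdx.x] += smem[threadIdx.x + s];
        __syncthreads();
    }
    if (threadIdx.x == 0) *out = smem[0];
}
```

In a Q8_0 GEMM, roughly the same accumulate runs per 32-element block, and each int32 partial is scaled by the product of the two blocks' fp16 scales before being summed in fp32.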
Building on that, qengine is a from-scratch CUDA inference engine for Qwen3.5 / Qwen3.6 hybrid models. (Worth noting:
Qwen3.5 / 3.6 are a different architecture from Qwen3 — they are dense GDN (Gated DeltaNet) + Attention hybrids, not pure
transformers. The kernels look quite different.)
The engine has:
- A hand-written Q8_0 GEMM tile path for prefill, all DP4A
- A fused FlashAttention kernel (QK scores, online softmax, and value accumulation in one pass)
- Split-K FlashAttention for long context (more on this below)
- 3-bit Walsh-Hadamard + Lloyd-Max KV cache so 27B fits 256K context on three 16 GB cards (sketched below)
- An OpenAI-compatible HTTP API with streaming, tool calls, vision, continuous batching, and per-slot prefix caching
It's not a fork. Every kernel is written for sm_70 + CMP constraints.
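To unpack the KV-cache bullet above: each small group of K/V values is rotated with a Walsh-Hadamard transform (which spreads outliers across the group), then the rotated values are quantised to 3 bits against a Lloyd-Max-style codebook plus a per-group scale. A minimal sketch of that idea follows — the group size of 8, the max-abs scale rule, and the codebook values here are illustrative placeholders, not the shipped kernel's choices.

```cuda
#include <cstdint>

// Sketch of the KV-cache quantisation idea: Hadamard-rotate a group of 8
// values, then map each to one of 8 codebook levels (3 bits) + a group scale.
// Group size, scale rule, and codebook are illustrative, not qengine's.
__device__ void quantize_group_3bit(const float* in,     // 8 input values
                                    uint8_t* codes,      // 8 codes, one per value
                                    float* scale_out) {
    // Fast Walsh-Hadamard transform over 8 elements (unnormalised butterfly).
    float h[8];
    #pragma unroll
    for (int i = 0; i < 8; ++i) h[i] = in[i];
    for (int len = 1; len < 8; len <<= 1)
        for (int i = 0; i < 8; i += 2 * len)
            for (int j = i; j < i + len; ++j) {
                float x = h[j], y = h[j + len];
                h[j] = x + y;
                h[j + len] = x - y;
            }

    // Per-group scale from the largest rotated magnitude.
    float amax = 0.f;
    #pragma unroll
    for (int i = 0; i < 8; ++i) amax = fmaxf(amax, fabsf(h[i]));
    const float scale = (amax > 0.f) ? amax : 1.f;
    *scale_out = scale;

    // Placeholder 8-level codebook on [-1, 1]; a real Lloyd-Max codebook is
    // fitted to minimise MSE against the observed value distribution.
    const float codebook[8] = {-1.00f, -0.66f, -0.38f, -0.12f,
                                0.12f,  0.38f,  0.66f,  1.00f};

    // Nearest-level search per rotated value.
    #pragma unroll
    for (int i = 0; i < 8; ++i) {
        const float x = h[i] / scale;
        int best = 0;
        float bestd = fabsf(x - codebook[0]);
        for (int c = 1; c < 8; ++c) {
            const float d = fabsf(x - codebook[c]);
            if (d < bestd) { bestd = d; best = c; }
        }
        codes[i] = static_cast<uint8_t>(best);
    }
}
```

Dequantisation is the mirror image: look up each level, multiply by the group scale, and apply the inverse Hadamard (the same butterfly divided by the group size).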
Honest benchmarks
I'm comparing against llama.cpp build 8462 with -fa 1, the same Q8_0 GGUFs, on the same hardware. Bigger numbers are
better.
Qwen3.5-9B, single-GPU prefill throughput by prompt length (qengine vs llama.cpp, tokens/sec):
- 297 tokens: 594 vs 199 (2.99×)
- 1.16K tokens: 683 vs 316 (2.16×)
- 4.62K tokens: 584 vs 361 (1.62×)
- 18K tokens: 393 vs 324 (1.22×)
qengine leads at all four lengths; the margin narrows from ~3× at short prompts to 1.22× at 18K.
Generation: qengine wins by +48–51% on both sizes (9B: ~70 t/s vs 46.6; 27B: 26.3 vs 17.7).
The honest weak point: 9B dual-GPU at 18K still trails llama.cpp (~0.48×). Their layer pipeline overlaps activation
transfer with compute; mine does the transfers sequentially through pinned host memory, because no P2P. Single-GPU 9B is
faster than either dual-GPU run anyway, so it's mostly a theoretical gap, but it's there.
What was hard
A few things that took real time to get right:
Multi-GPU without P2P. With CMP cards there's no peer-to-peer, no NVLink. Hidden state has to bounce through pinned host
memory between GPUs. I keep a pinned-host buffer per cross-GPU edge and a worker thread per GPU. It works, it's just
sequential.
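Concretely, one cross-GPU edge is just two staged copies through its pinned buffer — device to host on the source GPU's stream, then host to device on the destination's. A simplified sketch (worker-thread plumbing and error checking stripped out; names are illustrative):

```cuda
#include <cuda_runtime.h>

// One cross-GPU edge without P2P: device A -> pinned host -> device B.
// Simplified; qengine keeps one of these buffers per edge plus a worker
// thread per GPU, but the data path is this.
struct CrossGpuEdge {
    int src_dev, dst_dev;
    void* host_staging;   // pinned host buffer
    size_t bytes;
};

void edge_init(CrossGpuEdge* e, int src, int dst, size_t bytes) {
    e->src_dev = src; e->dst_dev = dst; e->bytes = bytes;
    cudaMallocHost(&e->host_staging, bytes);   // pinned, so async copies are real
}

void edge_transfer(CrossGpuEdge* e,
                   const void* src_devptr, cudaStream_t src_stream,
                   void* dst_devptr, cudaStream_t dst_stream) {
    // Hop 1: device -> pinned host on the source GPU's stream.
    cudaSetDevice(e->src_dev);
    cudaMemcpyAsync(e->host_staging, src_devptr, e->bytes,
                    cudaMemcpyDeviceToHost, src_stream);
    cudaStreamSynchronize(src_stream);   // sequential by construction: no P2P, no overlap

    // Hop 2: pinned host -> device on the destination GPU's stream.
    cudaSetDevice(e->dst_dev);
    cudaMemcpyAsync(dst_devptr, e->host_staging, e->bytes,
                    cudaMemcpyHostToDevice, dst_stream);
    cudaStreamSynchronize(dst_stream);
}
```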
Numerical drift killing Korean output. Qwopus3.5-9B distill has weak Korean circuits to begin with — small fp16 reorder
noise shifts argmax decisions and the model starts producing garbled Korean. I learned this the hard way after a
chunked-prefill kernel optimisation that "passed" my English greedy-argmax tests broke Korean entirely. Now every kernel
that touches the attention reduction order gets a Korean argmax-stability check before it ships.
Split-K FA without breaking determinism. The 64-block FA grid was under-utilising the SMs at long context (only 64 blocks across 3 GPUs × 68 SMs = 204 SMs), so each block was running a 575-iteration K/V tile loop in isolation. I added a split-K variant
that maps each (kv_head, t_idx) to N independent blocks, each handling a contiguous tile range, and merged the partials
with the standard log-sum-exp identity:
m_global = max_s m_s
l_global = Σ_s exp(m_s − m_global) · l_s
o_global = Σ_s exp(m_s − m_global) · acc_o_s
with the final output o_global / l_global, the same normalisation the single-block path applies.
First version stored partial o accumulators as half. That truncation caused a small drift after about 31 generated tokens
at 4.6K prefill — not bit-exact with the base FA path. Korean argmax flipped. Storing partials as fp32 brings drift down
to fp32-reordering noise (~1e-7 per add), and greedy argmax is stable across 32+ generated tokens. That's the version I
shipped. 18K prefill went from 270 → 393 t/s on 9B and 104 → 139 t/s on 27B.
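In code, that merge is a small kernel over the per-split partials; the version that shipped keeps m, l, and the o accumulators in fp32. A simplified sketch, one block per (head, query) with the head/batch indexing stripped out:

```cuda
// Merge num_splits split-K partials for one (head, query) using the
// log-sum-exp identity above. Partials stay fp32 -- storing acc_o as half is
// exactly the truncation that flipped Korean argmax.
__global__ void splitk_merge(const float* __restrict__ m_part,   // [num_splits]
                             const float* __restrict__ l_part,   // [num_splits]
                             const float* __restrict__ o_part,   // [num_splits][head_dim]
                             int num_splits, int head_dim,
                             float* __restrict__ o_out) {         // [head_dim]
    // m_global = max_s m_s  (each thread redundantly computes the two scalars)
    float m_global = m_part[0];
    for (int s = 1; s < num_splits; ++s) m_global = fmaxf(m_global, m_part[s]);

    // l_global = sum_s exp(m_s - m_global) * l_s
    float l_global = 0.f;
    for (int s = 0; s < num_splits; ++s)
        l_global += __expf(m_part[s] - m_global) * l_part[s];

    // o = (sum_s exp(m_s - m_global) * acc_o_s) / l_global, per channel
    for (int d = threadIdx.x; d < head_dim; d += blockDim.x) {
        float acc = 0.f;
        for (int s = 0; s < num_splits; ++s)
            acc += __expf(m_part[s] - m_global) * o_part[s * head_dim + d];
        o_out[d] = acc / l_global;
    }
}
```

Because the split count and summation order are fixed, the merge is deterministic run-to-run; what remains versus the single-block path is just fp32 reassociation noise.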
Speculative decoding I never got working. I have DFlash + DDTree code in the repo for the eventual fine-tuned drafter.
Right now the pretrained drafter (lucebox-hub/dflash) is trained on stock Qwen3.5, and the Qwopus distill output
distribution doesn't match — accept rate is roughly 0% and the chains degenerate. Listed in the README as broken on
purpose. MTP K=1 single-token spec works fine.
What this is and isn't for
If you have an RTX 30/40-series, A100, or H100, you should be using vLLM or SGLang. They are far more optimised for those
targets and have actual test coverage. qengine would be slower and weirder.
If you have:
- Ex-mining cards (CMP 100-210, ex-mining V100, P104-100, etc.)
- Older Volta workstations (V100 16/32 GB, Titan V, Quadro GV100)
- A T4 or RTX 20-series card on which the standard stacks have been disappointing
— then qengine might be useful. It targets sm_70 specifically. sm_75 should work but isn't tuned. sm_60 won't work (no
DP4A). AMD and Apple Silicon definitely won't work.
Repo
https://github.com/Haru-neo/qengine — Apache 2.0.
The benchmarks in this post are reproducible with the bench_curl.sh script in the repo. The 27B 3-GPU numbers were
measured 2026-05-03 on my machine. If you have the hardware and try it, I'd love to know what you see.
Solo project. Heavy AI assist on the CUDA — I drove the architecture, profiling, and debugging across many sessions;
Claude did most of the kernel implementation. I'm a Korean high school student. Slow PR turnaround.