The strange path from ultrasound physics to rethinking how solvers move through space
I didn’t expect this to start with kidney stones, but that’s honestly where it began.
I was reading about ultrasound lithotripsy, how they break stones using focused waves, and I got stuck on the geometry of it. Ellipses, focal points, energy landing exactly where it needs to.
It is one of those cases where physics feels less like equations and more like choreography.
That idea just sat there for a while.
Then, separately, I was dealing with solver code. Big systems, messy residuals, the usual “why is this not converging” loop. At some point I stopped thinking in terms of matrices. The system started to feel like a place.
Some parts resisted everything, like trying to push something heavy across rough ground. Other parts moved too easily and felt unstable. Residuals stopped feeling abstract and started feeling like forces pushing things out of balance.
That is roughly where PICD came from.
PICD does not try to replace anything. It wraps what already works.
GMRES, CG, Newton–Krylov, BDF. They still do the actual solving. PICD just watches what is happening and keeps some memory: residual history, how the system is partitioned, how different parts relate to each other.
Then it adjusts the setup for the next solve. Preconditioners, damping, small corrections. Carefully.
There is a hard boundary it does not cross. If a step does not reduce the residual, it does not count. The usual acceptance rules still apply.
The “conic” part is just how the system gets split up.
Instead of one big vector, you break it into regions. Each one tracks its own behavior. Its residual pattern, its neighbors, what worked last time.
It sounds heavier than it feels. In practice it just gives the solver a bit of context it did not have before.
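To make the partitioning idea concrete, here is a minimal sketch of what "each region tracks its own behavior" could look like. This is illustrative only: `RegionState`, `partition`, and the contiguous block split are my assumptions, not PICD's actual data structures.

```python
import numpy as np

class RegionState:
    """Hypothetical per-region bookkeeping: which unknowns it owns,
    and the norm of its local residual after each solve."""

    def __init__(self, indices):
        self.indices = indices          # unknowns belonging to this region
        self.residual_history = []      # local residual norm per solve

    def record(self, residual):
        self.residual_history.append(np.linalg.norm(residual[self.indices]))

def partition(n, n_regions):
    """Naive contiguous split of n unknowns into n_regions blocks."""
    return [RegionState(idx) for idx in np.array_split(np.arange(n), n_regions)]

regions = partition(10, 3)
r = np.random.default_rng(0).normal(size=10)
for reg in regions:
    reg.record(r)
print([len(reg.indices) for reg in regions])  # block sizes: [4, 3, 3]
```

A real partition would follow the problem's structure (mesh blocks, physics fields) rather than a flat index split, but the bookkeeping is the same shape.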
The unusual part is treating those regions like they have physical properties.
Underneath all that is a graph.
Connections between regions depend on how similar their residuals are, how often they activate together, and the actual structure of the problem. From that you get a Laplacian:
L = D − W
It does not replace the solver. It just helps decide what should be grouped together and what should be prioritized.
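The construction above fits in a few lines. The similarity weighting below is an assumption on my part (PICD combines residual similarity, co-activation, and problem structure; here I use only residual similarity), but the Laplacian itself is the standard L = D − W.

```python
import numpy as np

def laplacian(W):
    """Graph Laplacian L = D - W, where D is the diagonal degree matrix."""
    D = np.diag(W.sum(axis=1))
    return D - W

# Toy example: 3 regions, edge weights from similarity of residual norms.
# Regions with similar residual behavior get strong connections.
res = np.array([1.0, 1.1, 5.0])
W = np.exp(-np.abs(res[:, None] - res[None, :]))
np.fill_diagonal(W, 0.0)   # no self-edges

L = laplacian(W)
print(np.allclose(L.sum(axis=1), 0.0))  # Laplacian rows sum to zero: True
```

The eigenvectors of L are what you would use for grouping: small eigenvalues pick out clusters of regions that behave alike, which is the "what should be grouped together" part.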
The solve loop itself is pretty normal:
Pick a solver, partition, build state, adjust preconditioner, run, accept or reject, update.
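That loop, including the hard acceptance rule from earlier, can be sketched like this. Everything here is a simplification under my own assumptions: the inner solver is SciPy's GMRES, the "adjustment" is a single damping parameter that doubles on rejection, and `picd_step` is a made-up name, not the real interface.

```python
import numpy as np
from scipy.sparse.linalg import gmres

def picd_step(A, b, x, damping):
    """One outer iteration: inner solve, damped update, accept or reject.
    A step counts only if it actually reduces the residual."""
    x_new, _ = gmres(A, b, x0=x, atol=1e-12)
    candidate = x + (x_new - x) / (1.0 + damping)   # damped correction

    old_res = np.linalg.norm(b - A @ x)
    new_res = np.linalg.norm(b - A @ candidate)
    if new_res < old_res:
        return candidate, damping        # accept
    return x, damping * 2.0              # reject; adjust setup for next solve

# Tiny SPD system: one accepted step should cut the residual.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, d = picd_step(A, b, np.zeros(2), damping=0.0)
print(np.linalg.norm(b - A @ x) < np.linalg.norm(b))
```

The point of the sketch is the control flow: the inner solver is untouched, and the wrapper only decides whether the step counts and what to tweak before the next one.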
The results are interesting.
Everything in the current validation set runs: 98 tests, 22 examples.
On direct comparisons, the same solver with and without PICD, the PICD version is faster across the published benchmark set and uses less memory there as well.
Linear problems stand out the most. Most cases improve, sometimes by a lot. There is a Helmholtz example that runs hundreds of times faster.
Nonlinear and time-dependent cases are less clean. Some improve. Some do not. There is a turbulence example that clearly gets worse, with more rejected steps and slower runtime.
That part I trust more than the wins.
If there is one thing I would keep in mind, it is that PICD is deliberately limited in what it claims.
It works well in same-method comparisons. Beyond that, it depends. It does not assume every physics-inspired term helps, and the controller can reduce or disable them when they start hurting convergence.
I still come back to that original picture of energy being guided instead of forced.
That is really what this is. Instead of brute-forcing convergence, you reshape the space a little so the solver has an easier path.
But it changes how you think about the problem. And for me, that shift was the interesting part.
Read more on my research here and cite it if you find it useful: https://doi.org/10.13140/RG.2.2.10721.06243

