# Building a Python Grid Search Optimizer Faster than scipy.optimize.brute
Grid search is one of the simplest ways to optimize a function: evaluate the objective function at every point on a grid, then keep the best result.
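The whole idea fits in a few lines of plain Python (a toy sketch of the technique, not any particular library's implementation):

```python
import itertools

def grid_search(f, axes):
    """Exhaustively evaluate f at every grid point; return the best (value, point)."""
    best_value, best_point = float("inf"), None
    for point in itertools.product(*axes):
        value = f(point)
        if value < best_value:
            best_value, best_point = value, point
    return best_value, best_point

# Minimize (x - 1)^2 + (y + 2)^2 on a coarse 2D grid from -5 to 5 in steps of 0.5.
axis = [i * 0.5 for i in range(-10, 11)]
value, point = grid_search(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2, [axis, axis])
print(value, point)  # 0.0 (1.0, -2.0)
```

Because both 1.0 and -2.0 lie exactly on the grid, the search recovers the true minimum here; in general a grid search only gets as close as the step size allows.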
SciPy already provides a tool for this in `scipy.optimize.brute`. It works well, but performance becomes a bottleneck on large grids: the number of evaluations is the product of the points along every dimension, so the search cost grows exponentially with the number of dimensions.
I built gridoptim to make that workflow faster while keeping the brute-force, deterministic behavior that makes grid search useful in the first place.
## The Problem with scipy.optimize.brute
Consider this 4-variable objective:
```
f(x, y, z, w) = x^2 + y^2 + z^2 + w^2
              + 0.10xy - 0.20zw + 0.05xz + 0.03yw
              + 3x - 2y + z - 0.5w
```
If we evaluate it on a grid with 64 points per dimension, that gives 64^4 = 16,777,216 evaluations.
Here is what that looks like with SciPy:
```python
from scipy.optimize import brute

def objective(v):
    x, y, z, w = v
    return (
        x * x + y * y + z * z + w * w
        + 0.10 * x * y - 0.20 * z * w
        + 0.05 * x * z + 0.03 * y * w
        + 3.0 * x - 2.0 * y + 1.0 * z - 0.5 * w
    )

step = 20.0 / 64.0
ranges = (
    slice(-10.0, 10.0, step),
    slice(-10.0, 10.0, step),
    slice(-10.0, 10.0, step),
    slice(-10.0, 10.0, step),
)

# finish=None keeps the best grid point as-is instead of polishing it
# with a local optimizer afterwards.
best = brute(objective, ranges, finish=None)
```
This is perfectly valid, but it becomes slow once the search space is large enough.
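One way to see the blow-up on your own machine is to time `brute` at increasing resolutions before committing to the full 64-point grid. The resolutions below are arbitrary illustration values:

```python
import time
from scipy.optimize import brute

def objective(v):
    x, y, z, w = v
    return (
        x * x + y * y + z * z + w * w
        + 0.10 * x * y - 0.20 * z * w
        + 0.05 * x * z + 0.03 * y * w
        + 3.0 * x - 2.0 * y + 1.0 * z - 0.5 * w
    )

# Time the same search at increasing resolutions to watch the cost grow.
for points_per_dim in (4, 8, 16):
    step = 20.0 / points_per_dim
    ranges = (slice(-10.0, 10.0, step),) * 4
    t0 = time.perf_counter()
    best = brute(objective, ranges, finish=None)
    elapsed = time.perf_counter() - t0
    print(f"{points_per_dim}^4 = {points_per_dim ** 4:>6} evaluations: {elapsed:.3f} s")
```

Each doubling of the per-dimension resolution multiplies the work by 16 in four dimensions, which is why the full 64-point grid takes minutes.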
## A Faster Approach
Modern CPUs have multiple cores and wide vector units, so a brute-force search does not need to happen one point at a time in the Python interpreter.
gridoptim pushes the heavy work into a compiled backend and keeps the API small:
```python
from gridoptim import GridSearchOptimiser

opt = GridSearchOptimiser().function(
    "x*x + y*y + z*z + w*w"
    " + 0.10*x*y - 0.20*z*w"
    " + 0.05*x*z + 0.03*y*w"
    " + 3.0*x - 2.0*y + 1.0*z - 0.5*w"
)

step = 20.0 / 64.0
opt.set_range("x", -10.0, 10.0, step)
opt.set_range("y", -10.0, 10.0, step)
opt.set_range("z", -10.0, 10.0, step)
opt.set_range("w", -10.0, 10.0, step)

best_value, best_point = opt.optimise("min")
```
The goal is not to change what grid search is. The goal is to make exhaustive search practical for larger parameter sweeps.
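To build intuition for why bulk evaluation helps, here is a NumPy sketch of the same search that evaluates the whole grid with broadcasting instead of a Python-level loop. This is only an illustration of the idea, not how gridoptim is implemented:

```python
import numpy as np

# 64 evenly spaced points per dimension on [-10, 10), matching slice(-10, 10, 0.3125).
pts = np.arange(-10.0, 10.0, 20.0 / 64.0)

# Broadcast each variable along its own axis so no Python loop touches the 64^4 grid.
x = pts.reshape(-1, 1, 1, 1)
y = pts.reshape(1, -1, 1, 1)
z = pts.reshape(1, 1, -1, 1)
w = pts.reshape(1, 1, 1, -1)

# Group terms by the axes they involve so intermediates stay small until the final sums.
f = (
    (x * x + 3.0 * x + 0.10 * x * y + y * y - 2.0 * y)
    + (z * z + 1.0 * z - 0.20 * z * w + w * w - 0.5 * w)
    + 0.05 * x * z
    + 0.03 * y * w
)

idx = np.unravel_index(np.argmin(f), f.shape)
best_point = [float(pts[i]) for i in idx]
best_value = float(f[idx])
print(best_value, best_point)  # ≈ -3.609375 at [-1.5625, 0.9375, -0.3125, 0.3125]
```

Even this pure-NumPy version is far faster than calling a Python function 16.7 million times, because the arithmetic runs in compiled loops; a dedicated compiled backend can go further by avoiding the large temporary arrays.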
## Benchmark
Using the benchmark checked into this repository for the same 4D, 16.7 million-point search:
| Method | Time |
|---|---|
| gridoptim | 1.05 s |
| `scipy.optimize.brute` | 205.54 s |

That run is about **195× faster**.
Both implementations produced the same optimum:

```
Best point: [-1.5625, 0.9375, -0.3125, 0.3125]
Best value: -3.609375
```
Exact speedups will vary with hardware, Python environment, and the cost of the objective function itself, but the result shows that a compiled brute-force engine can dramatically reduce runtime.
## When Grid Search Is Still Useful
Even though more advanced optimization methods exist, grid search is still a good fit when you want:
- deterministic results
- full coverage of a bounded search space
- parameter sweeps for simulations
- reproducible experiments
- a simple baseline before moving to more complex optimizers
## Installation
Install from PyPI:
```shell
pip install gridoptim
```
## Closing Thoughts
scipy.optimize.brute is a solid reference implementation, but it is not ideal for very large grids.
If you want the simplicity and determinism of brute-force search without paying the full Python-level performance cost, gridoptim is a much better fit.