Kiploks Robustness Engine
We Built an Optimization Engine - and Realized Optimization Was the Wrong Problem

When we started building Kiploks, the original idea came from Radiks Alijevs, the project's creator and lead developer.

The vision was straightforward (at least on paper):

Build a powerful system for optimizing algorithmic trading strategies.

Distributed computing.
Large parameter spaces.
Automation at scale.

Like many engineers, we assumed the main bottleneck was compute and tooling.

That assumption turned out to be wrong.


The trap of “better optimization”

If you’ve worked with trading strategies (or any data-driven system), you’ve probably seen this pattern:

  • A strategy performs well in backtests
  • Optimization finds “optimal” parameters
  • Metrics look solid: Sharpe ratio, profit factor, win rate
  • Confidence increases

And then:

  • Walk-forward performance drops
  • Out-of-sample results degrade
  • Small parameter changes break everything
  • Live results diverge from research
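This pattern is easy to reproduce. Below is a minimal, self-contained sketch (not Kiploks code): a toy moving-average crossover is "optimized" on pure noise, where no edge exists by construction. The strategy, parameter grid, and numbers are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Pure noise: there is no edge to find in these "returns".
returns = rng.normal(0.0, 0.01, 2000)
train, test = returns[:1000], returns[1000:]

def sma_crossover_pnl(r, fast, slow):
    """Toy moving-average crossover PnL on a synthetic price path."""
    price = np.cumsum(r)
    f = np.convolve(price, np.ones(fast) / fast, mode="valid")
    s = np.convolve(price, np.ones(slow) / slow, mode="valid")
    n = min(len(f), len(s))
    signal = np.sign(f[-n:] - s[-n:])          # long / short / flat
    # Trade the signal on the *next* bar's return.
    return float(np.sum(signal[:-1] * r[-n + 1:]))

# Exhaustively "optimize" fast/slow windows on the training half.
grid = [(f, s) for f in range(2, 20) for s in range(21, 121, 5)]
best = max(grid, key=lambda p: sma_crossover_pnl(train, *p))

print("best params   :", best)
print("in-sample PnL :", sma_crossover_pnl(train, *best))
print("out-of-sample :", sma_crossover_pnl(test, *best))
```

The grid search will almost always surface a parameter pair with a flattering in-sample PnL, even though the data is random by design; the out-of-sample half has no reason to cooperate.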

At first, the reaction is predictable:

“We just need better optimization.”

More data.
More parameters.
More compute.

But optimization doesn’t ask why something works.

It only asks how to maximize it.


What optimization is actually good at

After months of building and testing the engine, one thing became clear:

Optimization excels at:

  • exploiting noise
  • locking onto regime-specific behavior
  • hiding fragility behind averages

The more powerful the optimizer became, the easier it was to generate strategies that looked robust - but weren’t.

As Radiks Alijevs noted during development:

A fast optimizer without deep analysis just produces convincing failures faster.

That insight forced a hard rethink.


The real question we weren’t asking

We were focused on:

  • “What are the best parameters?”
  • “What setup makes the most money?”

But those aren’t the questions professionals actually care about.

The real questions are:

  • Where does this strategy fail?
  • How sensitive is it to parameter changes?
  • Does the edge survive regime shifts?
  • What breaks first when conditions change?
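The sensitivity question can be made concrete. Here is a minimal sketch of the idea (not the Kiploks implementation): instead of reporting only the peak score, evaluate the metric on a small grid around the "optimal" parameters and report the spread. `metric_fn` and the two-parameter shape are hypothetical placeholders for your own strategy code.

```python
import numpy as np

def sensitivity_report(metric_fn, best_params, step=1, radius=2):
    """
    Evaluate metric_fn on a grid around best_params and report the
    spread. A sharp, isolated peak suggests a fragile optimum;
    a broad plateau suggests the edge survives small perturbations.
    """
    f0, s0 = best_params
    scores = []
    for df in range(-radius, radius + 1):
        for ds in range(-radius, radius + 1):
            f, s = f0 + df * step, s0 + ds * step
            if 0 < f < s:                      # keep parameters valid
                scores.append(metric_fn(f, s))
    scores = np.array(scores)
    peak = metric_fn(f0, s0)
    return {
        "peak": peak,
        "neighborhood_mean": float(scores.mean()),
        "neighborhood_std": float(scores.std()),
        # Fraction of the peak retained, on average, nearby:
        "retention": float(scores.mean() / peak) if peak else float("nan"),
    }
```

A retention near 1.0 means neighbors perform almost as well as the optimum; a low retention means the reported result lives on a knife's edge.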

Those questions aren’t answered by optimization.

They’re answered by analysis.


Shifting the vision: from optimization-first to analysis-first

At some point, development paused and the vision was revisited.

Instead of building a bigger optimizer, the focus shifted toward exposing fragility:

  • Walk-forward efficiency, not just pass/fail
  • Parameter sensitivity instead of single optimal values
  • Failure modes across regimes
  • Out-of-sample degradation, not in-sample excellence
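The first item, walk-forward efficiency, can be sketched in a few lines. This is the generic idea, not Kiploks internals: re-optimize on each training window, evaluate on the following window, and report the ratio of out-of-sample to in-sample performance. `optimize` and `evaluate` are placeholders for your own code.

```python
import numpy as np

def walk_forward_efficiency(returns, optimize, evaluate, n_splits=5):
    """
    Split returns into consecutive (train, test) windows and compute
    WFE = out-of-sample score / in-sample score, averaged over splits.
    optimize(train) -> params; evaluate(data, params) -> score.
    """
    chunks = np.array_split(np.asarray(returns), n_splits + 1)
    ratios = []
    for i in range(n_splits):
        train, test = chunks[i], chunks[i + 1]
        params = optimize(train)
        is_score = evaluate(train, params)
        oos_score = evaluate(test, params)
        ratios.append(oos_score / is_score if is_score else np.nan)
    # A robust strategy keeps much of its in-sample edge out of sample.
    return float(np.nanmean(ratios))
```

The point is the framing: a single number near 1.0 says the edge carries forward; a number near zero (or negative) says the backtest was describing the past, not the strategy.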

The goal changed from:

“Find the best strategy”

to:

“Understand whether this strategy should be traded at all.”

This shift fundamentally changed how Kiploks is being built.


What we focus on now

Kiploks is no longer about generating “winning configs”.

Under the direction of Radiks Alijevs, development is centered on answering harder questions early:

  • Which parameters are dangerous?
  • How stable is the edge across time?
  • How does performance degrade under stress?
  • What assumptions does the strategy silently rely on?
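One way to probe degradation under stress is a cost-bump curve: re-run the backtest with increasing per-trade costs and watch how quickly the result decays. A minimal sketch, with `pnl_fn(cost)` standing in for your own cost-parameterized backtest:

```python
def stress_degradation(pnl_fn, cost_levels=(0.0, 0.0005, 0.001, 0.002)):
    """
    Re-run a backtest under increasing per-trade costs.
    A real edge usually degrades gracefully; a noise-driven one
    often flips sign at the first bump.
    Returns (cost, pnl, fraction of base pnl retained) per level.
    """
    base = pnl_fn(cost_levels[0])
    curve = []
    for c in cost_levels:
        pnl = pnl_fn(c)
        retention = pnl / base if base else float("nan")
        curve.append((c, pnl, retention))
    return curve
```

The same pattern generalizes to other stressors: wider spreads, delayed fills, or dropped signals.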

In practice, this means fewer features - but much deeper ones.


Why this matters beyond trading

This lesson isn’t unique to trading.

It applies to:

  • ML models that overfit benchmarks
  • Recommendation systems tuned to historical data
  • Any system optimized without understanding failure modes

Optimization without analysis creates false confidence.

And false confidence is expensive.


Building in public, with better questions

Kiploks continues to be built in public - but with a clearer goal.

Not to showcase impressive metrics,
but to expose where systems break.

That change has reshaped the engine, the research workflow, and the definition of progress itself.


Final thought

Optimization is easy to sell.
Analysis is harder - and more honest.

Revisiting the vision wasn’t a setback.
It was a correction.

Kiploks is now about understanding robustness before trusting performance - a direction defined and developed by Radiks Alijevs.

And that turned out to be the real problem worth solving.
