DEV Community

Jasper Blank

One day with acados: 8 errors I hit and what they meant

Yesterday I spent a day learning acados.

I was not trying to become an expert in nonlinear MPC in one sitting. I wanted to see where the friction actually was when going from "the examples run" to "I can change things and still understand what is happening."

I used the Python examples, changed them, broke them on purpose, and kept notes. I also used Claude and Codex the whole time, but mostly as pair engineers: terminal output in, next step out, then back to the code.

The main surprise was that the hardest part was usually not the control part. It was setup, code generation, environment state, and figuring out which changes were runtime-safe versus which changes forced regeneration.

These were the eight most useful failures from that day.

1. EOFError after "Tera template render executable not found"

The first Python example did not just fail. It failed in a misleading way:

Tera template render executable not found ...
Do you wish to set up Tera renderer automatically?
...
EOFError: EOF when reading a line

The last line points you toward Python input handling. The real problem was simpler:

  • t_renderer was missing
  • the script tried to prompt for setup
  • the shell was non-interactive

Fix:

  • install t_renderer

This one set the tone for the rest of the day. The raw error was not fake, but it was also not the useful diagnosis.
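
For reference, the non-interactive version of the fix looks roughly like this. `ACADOS_SOURCE_DIR` and the `bin/` location are assumptions based on my setup, and the release URL depends on your platform, so I leave it as a comment rather than guess:

```shell
# Sketch of the non-interactive fix. ACADOS_SOURCE_DIR is an assumption
# about where your acados checkout lives; adjust as needed.
ACADOS_SOURCE_DIR="${ACADOS_SOURCE_DIR:-$HOME/acados}"
mkdir -p "$ACADOS_SOURCE_DIR/bin"
# Download the t_renderer release binary for your platform into bin/ and
# make it executable (see the acados docs for the current release URL):
# curl -L <t_renderer release URL> -o "$ACADOS_SOURCE_DIR/bin/t_renderer"
# chmod +x "$ACADOS_SOURCE_DIR/bin/t_renderer"
```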

2. Missing .so really meant LD_LIBRARY_PATH was wrong

Later I unset LD_LIBRARY_PATH and reran a working example. The failure was:

OSError: libqpOASES_e.so: cannot open shared object file: No such file or directory

That points at one specific shared library. In practice the right action was:

  • add <acados_root>/lib back to LD_LIBRARY_PATH

This is a pattern I see a lot in technical tooling. The message is factually correct and still not the fastest route to a fix.
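
As a sketch, restoring the path and sanity-checking it looks like this. `ACADOS_ROOT` is an assumed name for wherever acados is installed on your machine:

```shell
# Sketch: put <acados_root>/lib back on the loader path.
ACADOS_ROOT="${ACADOS_ROOT:-$HOME/acados}"
export LD_LIBRARY_PATH="$ACADOS_ROOT/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
# Sanity check that the library the error named is actually there:
ls "$ACADOS_ROOT/lib" 2>/dev/null | grep -i qpoases \
  || echo "no qpOASES lib under $ACADOS_ROOT/lib"
```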

3. Copying a getting-started example was not enough

I copied minimal_example_closed_loop.py into my own folder so I could experiment on it without touching the original. It failed immediately:

ModuleNotFoundError: No module named 'pendulum_model'

The script looked standalone. It was not.

Fix:

  • copy pendulum_model.py and utils.py too
  • or keep running from the original example layout

This is small, but it matters. It is exactly the kind of thing that makes a tool feel brittle when you are new to it.
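
The working copy step, as a sketch. `EXAMPLES` and `WORKDIR` are illustrative paths from my checkout; adjust to yours:

```shell
# The script imports pendulum_model and utils, so they have to come along.
EXAMPLES="$HOME/acados/examples/acados_python/getting_started"
WORKDIR="$HOME/mpc_sandbox"
mkdir -p "$WORKDIR"
cp "$EXAMPLES/minimal_example_closed_loop.py" \
   "$EXAMPLES/pendulum_model.py" \
   "$EXAMPLES/utils.py" \
   "$WORKDIR/" 2>/dev/null || true
```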

4. WSL clock skew looked worse than it was

After changing N_horizon, I ran the copied example from /mnt/c and got:

make: Warning: File 'Makefile' has modification time in the future
make: warning: Clock skew detected. Your build may be incomplete.

At first glance that looks like generated code might be corrupted. In my case it was mostly a filesystem issue:

  • active build artifacts were on a Windows-mounted path
  • build tools inside WSL did not like the timestamps

Fix:

  • keep active generated code on the Linux filesystem

This is one of those cases where a good tool should say "probably environment noise" before the user goes hunting inside the OCP.
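
The fix amounts to relocating the working copy. A sketch, with illustrative paths rather than my actual ones:

```shell
# Work from the Linux filesystem instead of /mnt/c so build timestamps
# behave. SRC is a hypothetical Windows-mounted copy.
SRC="/mnt/c/Users/me/mpc_sandbox"
DST="$HOME/mpc_sandbox"
mkdir -p "$DST"
# rsync -a "$SRC/" "$DST/"   # copy once, then build and run from $DST
cd "$DST"
```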

5. yref shape errors were understandable, but late

I injected a bad yref shape and got:

AcadosOcpSolver.set(): mismatching dimension for field "yref" with dimension 5 (you have 6)

This message was actually decent. The annoying part was somewhere else:

  • the example still did startup codegen/build first
  • only after that did the local shape problem show up

In this case the fix was simple:

  • use the right path-stage yref length

So this is less "decoder magic" and more "tell me this earlier."
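
The shape rule itself is easy to state. A minimal sketch, assuming the usual linear least-squares cost where path stages track states and inputs and the terminal stage only states; the dimensions are the pendulum example's, and `check_yref` is my own pre-flight helper, not an acados API:

```python
import numpy as np

# Path-stage reference length ny = nx + nu; terminal reference ny_e = nx.
nx, nu = 4, 1          # pendulum example: 4 states, 1 input
ny = nx + nu           # path-stage reference length (here: 5)
ny_e = nx              # terminal reference length (here: 4)

yref = np.zeros(ny)    # what solver.set(stage, "yref", ...) expects
yref_e = np.zeros(ny_e)

def check_yref(arr, expected):
    """Fail fast with a readable message, before any codegen/build runs."""
    if arr.shape != (expected,):
        raise ValueError(f"yref has shape {arr.shape}, expected ({expected},)")

check_yref(yref, ny)       # passes
check_yref(yref_e, ny_e)   # passes
```

Running a check like this at the top of the script surfaces the mistake before the slow startup codegen, which was the actual annoyance.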

6. W shape errors leaked internal details

Then I broke the running cost matrix:

AcadosOcpSolver.cost_set(): mismatching dimension for field "W" at stage c_int(0) with dimension (np.int32(5), np.int32(5)) (you have (6, 6))

This is the kind of output experienced users can parse and newer users still hate:

  • it tells you the shape is wrong
  • it also dumps internal type details into the message

The real takeaway was just:

  • expected 5x5
  • got 6x6

That is a good example of where normalization helps even if the solver is technically being precise.
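
That normalization can be mechanical. A sketch that strips the `c_int(...)` and `np.int32(...)` wrappers and pulls out just the expected/actual shapes; the parsing is tuned to the two messages I hit, so treat the format as an assumption, not a stable acados interface:

```python
import re

W_MSG = ('AcadosOcpSolver.cost_set(): mismatching dimension for field "W" '
         'at stage c_int(0) with dimension (np.int32(5), np.int32(5)) '
         '(you have (6, 6))')

def parse_mismatch(msg):
    # Drop the internal type wrappers that leak into the message.
    clean = re.sub(r'(?:np\.int\d+|c_int)\((\d+)\)', r'\1', msg)
    head, _, tail = clean.partition('you have')
    expected = tuple(int(n) for n in re.findall(r'\d+', head.rpartition('dimension')[2]))
    got = tuple(int(n) for n in re.findall(r'\d+', tail))
    return expected, got

print(parse_mismatch(W_MSG))  # → ((5, 5), (6, 6))
```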

7. status 4 was not one thing

One of the strongest cases came from contradictory bounds. The output looked like this:

QP solver returned error status 3 (ACADOS_MINSTEP)
...
acados returned status 4

This is the point where the stack stops being beginner-friendly:

  • one layer is talking about the QP
  • another layer is talking about the OCP solve
  • the mapping between them is not obvious if you are still learning the tool

In this case the real issue was:

  • contradictory bounds
  • specifically lbu > ubu

So the useful diagnosis was not just "status 4." It was:

  • the QP failed
  • start by checking feasibility of the bounds
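
That feasibility check is cheap to run before ever reading a status code. A sketch; `check_bounds` is my own helper, not part of acados:

```python
import numpy as np

def check_bounds(lb, ub, name="u"):
    """Confirm every lower bound is <= its upper bound, elementwise."""
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    bad = np.flatnonzero(lb > ub)
    if bad.size:
        raise ValueError(
            f"contradictory bounds on {name} at indices {bad.tolist()}: "
            f"lb={lb[bad]} > ub={ub[bad]}")

check_bounds([-80.0], [80.0])      # fine
try:
    check_bounds([10.0], [-10.0])  # the lbu > ubu mistake behind my status 4
except ValueError as err:
    print(err)
```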

8. ACADOS_MAXITER meant different things in different contexts

At first I had a clean example:

  • set nlp_solver_max_iter very low
  • get status 2

That made ACADOS_MAXITER look simple.

Then I moved to a planar quadrotor MPC example and it stopped being simple.

I found at least three versions of "status 2":

  1. the iteration cap was genuinely too low
  2. I tightened runtime bounds and the maneuver became too hard
  3. a constrained maneuver failed on a short horizon, then succeeded when I lengthened the horizon

The third case was the useful one.

I had a tight active angle bound. A maneuver failed. The first guess was obvious:

  • the constraint is too tight

I softened the constraint.
Still failed.
Slack stayed zero.

Then I increased the horizon.
The same maneuver succeeded.

That changed how I think about debugging these failures. A useful tool should not flatten status 2 into one canned explanation. Sometimes the right next step is:

  • try a longer horizon before relaxing the constraint

That is much closer to what an experienced user would actually do.

What I actually found useful about Claude and Codex

The part that worked was not "AI solves control engineering for you."

What worked was much narrower:

  • paste exact terminal output
  • ask for the next concrete step
  • go run it
  • write down what happened

Every time something useful happened, I turned it into one of four things:

  • an incident note
  • an error/fix pair
  • a recipe
  • a regression test case

That kept the whole day grounded in real failures instead of vague "can AI help with MPC?" discussion.

What I had by the end of the day

By the end of the day I had:

  • a small local acados decoder CLI
  • a recipe layer for setup and runtime-vs-rebuild questions
  • a semi-realistic trial pack of pasted logs
  • a regression evaluator

More importantly, I had a better picture of the actual scope.

The useful thing here is probably small:

  • decode confusing acados errors
  • suggest the next check
  • explain a few common runtime-vs-rebuild boundaries

That is enough to be useful. It does not need to be a universal control assistant.
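
To make that scope concrete: the core of the decoder can be as small as a pattern table. This is a sketch built from my own notes above, not official acados guidance, and real messages would need fuzzier matching than substring checks:

```python
# Map known error patterns to the next check I would actually run.
RULES = [
    ("Tera template render executable not found",
     "install t_renderer; the EOFError below it is just a failed interactive prompt"),
    ("cannot open shared object file",
     "add <acados_root>/lib to LD_LIBRARY_PATH"),
    ("Clock skew detected",
     "move the generated code off /mnt/c onto the Linux filesystem"),
    ("mismatching dimension",
     "compare expected vs. actual shape; check ny = nx + nu for yref/W"),
    ("acados returned status 4",
     "QP failure: check bound feasibility (e.g. lbu <= ubu) first"),
    ("acados returned status 2",
     "max iterations: raise the cap, then try a longer horizon before relaxing constraints"),
]

def decode(log: str) -> list[str]:
    """Return the suggested next checks for every known pattern in the log."""
    return [advice for pattern, advice in RULES if pattern in log]
```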

One external signal that mattered

After I posted about this on the acados forum, one person replied to say:

  • yes, the segfault was real
  • no, they never solved it
  • they stopped using acados because of it

They did not have the old log anymore, so it was not a good eval case for the tool.

But it was still useful. It showed that a confusing first failure can be enough to make someone leave the tool entirely.

If you use acados

I would like to know whether the same pattern shows up for you:

  • setup errors that surface as misleading Python failures
  • status code combinations that are hard to interpret
  • runtime-vs-rebuild confusion
  • failures that look like infeasibility but turn out to be horizon or authority issues

If you have an acados log that confused you, send it to me. Those are the cases I care about most right now.
