There's a moment in every ML researcher's life when they look at a training script and realize something has gone wrong — not with the model, but with the code around it.
For me, it was staring at a PyTorch training script — nothing fancy, just a model, an optimizer, and a training loop — where, dataset loading aside, the argparse block was longer than everything else combined. I had written more code to describe my experiment than to run it.
The irony was that those argparse lines weren't really doing anything new. They were just re-expressing values I'd already written as hardcoded variables — values that were perfectly readable, perfectly reasonable, right there at the top of the file. argparse made me say everything twice.
That was the day I wrote fargv — not to replace hardcoded variables, but to make them overridable from the CLI without touching the code.
The ML Prototyping Trap
If you've written research code, you know the arc. It starts clean:
learning_rate = 0.001
dropout = 0.3
epochs = 50
Hard-coded variables at the top. Simple. Readable. This is your config section.
Then you want to run a sweep. So you reach for argparse. You add a parser = argparse.ArgumentParser(). You add add_argument calls. You add args = parser.parse_args(). You replace your clean variables with args.learning_rate, args.dropout, args.epochs. You've now written 15 lines to replace 3, and your script has two separate "config sections" — one that actually runs, and one that describes what could run.
Now add a new parameter. You touch three places: the default, the add_argument call, and wherever you use it. Every time you want to make something parametric — try a different optimizer, expose a new regularization option — you break your flow to write boilerplate. So you don't. You hardcode it. You tell yourself you'll formalize it later.
Later never comes, or it comes too late, after the experiment is long done.
What I Actually Wanted
I wanted argument parsing that looked like a config file. I wanted to write this:
p, _ = fargv.parse({
    "lr": 0.001,
    "dropout": 0.3,
    "epochs": 50,
})
And have it just work — with CLI flags, help text, and sensible defaults — without changing anything else in my script. I wanted making a variable parametric to be a one-line change, not a context switch.
I also had a specific fear: dependency legibility. I'd spent time reading through someone else's ML codebase that used Click, and being unfamiliar with Click's decorator patterns made the code harder to follow than it needed to be. If fargv never became widely adopted — and I was realistic that it might not — I wanted code using it to remain readable to anyone who had never heard of it. The argument dict sitting at the top of a script is self-explanatory in a way that a chain of decorators simply isn't.
A Different Mental Model
Most CLI libraries ask you to think of parameters as things that must come from the command line, with defaults as a fallback. fargv inverts this. The hardcoded value is the truth — sensible, immediate, the thing that makes your script run right now. The CLI is just an escape hatch that lets you override it without touching the code.
This is how researchers actually think. You don't start a script imagining 47 configurable parameters. You start with the values that make sense today, and you expose them to the CLI only when you need to vary them. fargv makes that the default experience rather than a refactor.
In the legacy API, every parameter type was inferred purely from its default value. As fargv matured it gained support for mandatory parameters too — but the philosophy remains: every parameter should have a reasonable default. A script using fargv should always run as-is, with the CLI there when you need it, invisible when you don't.
The Design Decisions That Followed
Once the core idea was clear, everything else flowed from it:
Type inference from defaults. If your default is 0.001, it's a float. If it's False, it's a boolean switch. No type=float annotations needed.
The dict is the config. You can read a fargv dict cold and understand exactly what the script accepts and what the defaults are. It doubles as documentation.
String interpolation. {"output_dir": "{data_dir}/results"} — because in research you're constantly building paths relative to other paths, and you shouldn't need a separate config system for that.
Zero new concepts to learn. Pass a dict, get a namespace back. That's the whole API you need to know.
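Type inference and interpolation together fit in a few lines. Here is a toy sketch of the idea — the resolve function and its casting rules are invented for illustration and are not fargv's actual implementation:

```python
# Toy illustration: infer each parameter's type from its default and
# resolve "{other_param}" references between string values.
def resolve(params, overrides=None):
    resolved = dict(params)
    for name, value in (overrides or {}).items():
        default = resolved[name]
        if isinstance(default, bool):      # booleans act as on/off switches
            resolved[name] = str(value).lower() in ("1", "true", "yes")
        else:                              # otherwise cast to the default's type
            resolved[name] = type(default)(value)
    # Substitute {name} references until values stop changing.
    changed = True
    while changed:
        changed = False
        for name, value in resolved.items():
            if isinstance(value, str) and "{" in value:
                new = value.format(**resolved)
                if new != value:
                    resolved[name] = new
                    changed = True
    return resolved

params = resolve(
    {"lr": 0.001, "data_dir": "/tmp/data", "output_dir": "{data_dir}/results"},
    overrides={"lr": "0.01"},
)
# "0.01" arrives as a string but becomes a float, because the default was one;
# output_dir resolves to "/tmp/data/results".
```

The point of the sketch is the direction of inference: the default value drives the type and the documentation, rather than a separate declaration doing so.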
The original "legacy" version was written before 2020. Honestly, had I known about python-fire at the time, fargv might never have existed — Fire scratches some of the same itches. But fargv grew in a different direction: toward the idea that your argument definition should be a readable, self-contained block that serves as documentation, default config, and CLI interface all at once.
The Script That Freed Me
After fargv, my training scripts had a natural shape. The top of the file was a dict — the "what" of the experiment. Everything below was the "how". Making a new hyperparameter accessible from the CLI meant adding one line to the dict. Nothing else changed.
No more "I'll formalize this later." No more boilerplate interrupting the thought. No more argparse blocks that outweigh the model.
If you're writing research code and you recognize this pain, the source is on GitHub and it's also available on PyPI:
pip install fargv
It's small, it has no runtime dependencies, and if you never use it again, your code will still make sense to whoever reads it next — and that matters more than people think.
Recently I used LLMs to finally implement a backlog of features I'd always wanted but never had time for — Tk/Qt/Jupyter form interfaces, custom tuple parameters, dataclass and function-based parser definitions, and better type handling. These mostly add more options at runtime without changing the core workflow.
A note on stability: the legacy 0.x API remains available through fargv.fargv and still works fine, while the 1.x API lives at fargv.parse. I'm committed to keeping the parser definition API backwards compatible and to keeping behavior changes minimal across versions.