Every time I debug a container I run the same loop:
docker ps -a # find the thing
# squint, copy a name with the mouse
docker logs casely-postgres-1
docker inspect casely-postgres-1
docker exec -it casely-postgres-1 sh
The container name doesn't change between those commands. The shell doesn't help me; I copy and paste — sometimes typoing a hash prefix.
So I built dux — a single Rust binary with a TUI (ratatui) and a browser UI (axum), 105 curated Docker commands, and a trick: after every run it parses the output and prefills the next command's args.
brew install nickciolpan/tap/dux
dux # terminal UI
dux serve # http://127.0.0.1:7878/dux
Source: github.com/nickciolpan/dux. MIT.
The rest of this post walks through the two ideas that make it work.
Idea 1 — typed placeholders
Each command in the catalog is a const struct:
pub struct Cmd {
pub id: &'static str,
pub name: &'static str,
pub desc: &'static str,
pub long_desc: &'static str,
pub template: &'static str, // "docker logs {container}"
pub category: &'static str,
pub produces: Produces, // what the stdout lists
pub follow_ups: &'static [&'static str], // ids
}
The template uses {name} placeholders, and each placeholder name maps to an ArgKind:
pub enum ArgKind { Free, Container, Image, Network, Volume, Service }
pub fn arg_kind(name: &str) -> ArgKind {
match name {
"container" => ArgKind::Container,
"image" | "source" => ArgKind::Image,
"network" => ArgKind::Network,
"volume" => ArgKind::Volume,
"service" => ArgKind::Service,
_ => ArgKind::Free,
}
}
That's the entire kind system. Five real kinds plus Free for raw text (file paths, env values, port numbers, etc.).
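Resolving a template then comes down to scanning for braces and asking arg_kind about each name. A runnable sketch — the placeholders helper is my own invention, not dux's actual code, and ArgKind is repeated from above so the snippet compiles:

```rust
#[derive(Debug, PartialEq)]
enum ArgKind { Free, Container, Image, Network, Volume, Service }

fn arg_kind(name: &str) -> ArgKind {
    match name {
        "container" => ArgKind::Container,
        "image" | "source" => ArgKind::Image,
        "network" => ArgKind::Network,
        "volume" => ArgKind::Volume,
        "service" => ArgKind::Service,
        _ => ArgKind::Free,
    }
}

// Hypothetical helper: collect the names between braces in a template.
fn placeholders(template: &str) -> Vec<&str> {
    let mut out = Vec::new();
    let mut rest = template;
    while let Some(open) = rest.find('{') {
        match rest[open..].find('}') {
            Some(close) => {
                out.push(&rest[open + 1..open + close]);
                rest = &rest[open + close + 1..];
            }
            None => break,
        }
    }
    out
}

fn main() {
    let ph = placeholders("docker cp {container}:{src} {dest}");
    assert_eq!(ph, vec!["container", "src", "dest"]);
    assert_eq!(arg_kind("container"), ArgKind::Container);
    assert_eq!(arg_kind("dest"), ArgKind::Free); // free text, no completion
}
```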
produces says what the command's stdout contains:
pub enum Produces { None, Containers, Images, Networks, Volumes, Services }
So docker ps is Produces::Containers, docker images is Produces::Images. Most commands are Produces::None.
Two annotations. Everything else falls out.
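To make the two annotations concrete, here's what a catalog entry could look like. The field values below are illustrative, not copied from dux's source:

```rust
pub enum Produces { None, Containers, Images, Networks, Volumes, Services }

pub struct Cmd {
    pub id: &'static str,
    pub name: &'static str,
    pub desc: &'static str,
    pub long_desc: &'static str,
    pub template: &'static str,
    pub category: &'static str,
    pub produces: Produces,
    pub follow_ups: &'static [&'static str],
}

// Illustrative entry: docker ps lists containers, so its stdout feeds
// the {container} placeholders of its follow-ups.
pub const PS: Cmd = Cmd {
    id: "ps",
    name: "List containers",
    desc: "Show running containers",
    long_desc: "Lists running containers; add -a to include stopped ones.",
    template: "docker ps",
    category: "containers",
    produces: Produces::Containers,
    follow_ups: &["logs", "inspect", "exec"],
};

fn main() {
    assert!(matches!(PS.produces, Produces::Containers));
    assert!(PS.follow_ups.contains(&"logs"));
}
```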
Idea 2 — parse stdout into typed candidates
After every run, extract::extract(produces, stdout) returns a typed bag:
pub struct Candidates {
pub containers: Vec<String>,
pub images: Vec<String>,
pub networks: Vec<String>,
pub volumes: Vec<String>,
pub services: Vec<String>,
}
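The post doesn't show extract::extract itself, but given Produces it's presumably a straight dispatch to per-kind parsers. A sketch under that assumption — the parser body here is a stub, not dux's real one:

```rust
#[derive(Default)]
pub struct Candidates {
    pub containers: Vec<String>,
    pub images: Vec<String>,
    pub networks: Vec<String>,
    pub volumes: Vec<String>,
    pub services: Vec<String>,
}

pub enum Produces { None, Containers, Images, Networks, Volumes, Services }

// Stub parser: just the first token of each non-empty data line.
fn parse_containers(s: &str, c: &mut Candidates) {
    for line in s.lines().skip(1).filter(|l| !l.trim().is_empty()) {
        if let Some(id) = line.split_whitespace().next() {
            c.containers.push(id.to_string());
        }
    }
}

// Sketch: route stdout to the parser for whatever the command produces.
pub fn extract(produces: Produces, stdout: &str) -> Candidates {
    let mut c = Candidates::default();
    match produces {
        Produces::Containers => parse_containers(stdout, &mut c),
        Produces::Images => { /* repo:tag plus bare IDs */ }
        Produces::Networks | Produces::Volumes | Produces::Services => { /* name column */ }
        Produces::None => {}
    }
    c
}

fn main() {
    let ps = "CONTAINER ID  IMAGE     NAMES\nabc123        postgres  casely-postgres-1\n";
    let c = extract(Produces::Containers, ps);
    assert_eq!(c.containers, vec!["abc123"]);
}
```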
For docker ps, the parser is small and fault-tolerant — skip the header row, then for each line take the first whitespace token (the ID) and the last (the NAME):
fn parse_containers(s: &str, c: &mut Candidates) {
for line in data_lines(s) {
if line.contains('\t') {
// ps with --format 'table {{.ID}}\t{{.Names}}\t...'
let parts: Vec<&str> = line.split('\t').map(str::trim).collect();
if let Some(id) = parts.first() { dedup_push(&mut c.containers, id.to_string()); }
if let Some(name) = parts.get(1) { dedup_push(&mut c.containers, name.to_string()); }
continue;
}
let toks: Vec<&str> = line.split_whitespace().collect();
if toks.is_empty() { continue; }
dedup_push(&mut c.containers, toks[0].to_string());
let last = toks[toks.len() - 1];
for n in last.split(',') { dedup_push(&mut c.containers, n.trim().to_string()); }
}
}
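The snippet leans on two helpers it doesn't show. Plausible implementations, assuming data_lines skips the header row and blank lines while dedup_push keeps insertion order and drops repeats — dux's actual versions may differ:

```rust
// Assumed helper: skip the header row, trim, drop blank lines.
fn data_lines(s: &str) -> impl Iterator<Item = &str> {
    s.lines().skip(1).map(str::trim).filter(|l| !l.is_empty())
}

// Assumed helper: push only if non-empty and not already present.
fn dedup_push(v: &mut Vec<String>, s: String) {
    if !s.is_empty() && !v.contains(&s) {
        v.push(s);
    }
}

fn main() {
    let mut v = Vec::new();
    dedup_push(&mut v, "web".into());
    dedup_push(&mut v, "web".into()); // duplicate, ignored
    assert_eq!(v, vec!["web"]);

    let lines: Vec<&str> = data_lines("NAMES\nweb\n\ndb\n").collect();
    assert_eq!(lines, vec!["web", "db"]);
}
```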
The docker images parser reads the REPOSITORY, TAG, and IMAGE ID columns into repo:tag and the bare ID; docker network ls and docker volume ls are even simpler. The whole module is ~150 lines and unit-tested.
The TUI keeps the most recent Candidates in app state. When a follow-up command needs {container}, it looks up arg_kind for that placeholder and pulls candidates from the matching bucket. First candidate auto-fills; ↑ / ↓ cycle.
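In code, that lookup is just a match from ArgKind to the right Candidates bucket. A sketch with names of my own choosing — the real TUI code may be shaped differently:

```rust
#[derive(Debug)]
enum ArgKind { Free, Container, Image, Network, Volume, Service }

#[derive(Default)]
struct Candidates {
    containers: Vec<String>,
    images: Vec<String>,
    networks: Vec<String>,
    volumes: Vec<String>,
    services: Vec<String>,
}

// Map a placeholder's kind to the bucket parsed from the last run.
fn bucket<'a>(kind: &ArgKind, c: &'a Candidates) -> &'a [String] {
    match kind {
        ArgKind::Container => &c.containers,
        ArgKind::Image => &c.images,
        ArgKind::Network => &c.networks,
        ArgKind::Volume => &c.volumes,
        ArgKind::Service => &c.services,
        ArgKind::Free => &[], // free text: nothing to suggest
    }
}

fn main() {
    let mut c = Candidates::default();
    c.containers.push("casely-postgres-1".into());
    c.containers.push("casely-web-1".into());
    // First candidate auto-fills; the arrow keys would cycle the rest.
    let prefill = bucket(&ArgKind::Container, &c).first().cloned();
    assert_eq!(prefill.as_deref(), Some("casely-postgres-1"));
}
```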
The web UI does the same thing client-side: each POST /api/run/:id returns candidates alongside stdout, and the form renders a <datalist> per typed arg.
Search across the explainers
Every command also has a long_desc — one or two sentences describing what it does and the non-obvious flags. The catalog search filters on id, name, desc, long_desc, template, and category, all live as you type.
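A search like that can be a case-insensitive substring match over the six fields. A sketch — the matching logic and the example entry are my assumptions, and the Cmd here is trimmed to just the searchable fields:

```rust
// Trimmed Cmd with only the fields the search looks at.
struct Cmd {
    id: &'static str,
    name: &'static str,
    desc: &'static str,
    long_desc: &'static str,
    template: &'static str,
    category: &'static str,
}

// Assumed: lowercase both sides, match if any field contains the query.
fn matches(cmd: &Cmd, query: &str) -> bool {
    let q = query.to_lowercase();
    [cmd.id, cmd.name, cmd.desc, cmd.long_desc, cmd.template, cmd.category]
        .iter()
        .any(|f| f.to_lowercase().contains(&q))
}

fn main() {
    let kill = Cmd {
        id: "kill-signal",
        name: "Send signal",
        desc: "Send a signal to a container",
        long_desc: "Sends SIGNAL to the main process; SIGUSR1 is how some \
                    daemons trigger log rotation.",
        template: "docker kill -s {signal} {container}",
        category: "containers",
    };
    assert!(matches(&kill, "Rotation")); // hit via long_desc only
    assert!(!matches(&kill, "volume"));
}
```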
A search for rotation, for example, only matches because the kill -s SIGNAL command's explainer mentions log rotation. The explainers aren't decoration; they're a searchable index.
Two surfaces from one catalog
main.rs is just clap subcommands:
match cli.command.unwrap_or(Cmd::Tui) {
Cmd::Tui => tui::run()?,
Cmd::Serve { addr, route } => {
let rt = tokio::runtime::Runtime::new()?;
rt.block_on(web::serve(addr.parse()?, route))?;
}
Cmd::Catalog => println!("{}", serde_json::to_string_pretty(&CATALOG)?),
}
Both surfaces import the same catalog::CATALOG and call the same runner::run and extract::extract. There's no separate "model layer." The TUI is ratatui::Frame widgets; the web UI is axum::Router returning Json<CmdView> (where CmdView: From<&'static Cmd>). The HTML page is a single file embedded with include_str!("../assets/index.html") — no build step, no bundler.
This means adding a command is one place: append to CATALOG. Both UIs pick it up. Adding a new arg kind (e.g. service for compose) was 4 lines in catalog.rs and 3 lines in extract.rs.
What it isn't
- Not a Docker rewrite. Every command is shelled out as sh -c "docker …".
- Not a dashboard. There's no live state polling; the model is run-on-demand, like the CLI itself.
- Not a daemon. dux serve is a thin HTTP-to-shell layer for your local Docker socket, intended for localhost use (or behind your VPN). Don't expose it raw to the internet.
That's also why it stays small — the binary is ~2.4 MB release-stripped and starts in ~50 ms.
Try it
brew install nickciolpan/tap/dux
dux # TUI
dux serve # http://127.0.0.1:7878/dux
dux catalog | jq . # full command catalog as JSON
Or build from source if you'd rather:
git clone https://github.com/nickciolpan/dux.git
cd dux
cargo install --path .
If you find it useful, the catalog is data: open a PR with a new Cmd { … } entry and your follow-up suggestions, and you've extended both UIs at once.
Repo: github.com/nickciolpan/dux
Site: nickciolpan.github.io/dux