Original post: https://hankchiu.tw/articles/a-simple-rust-cli-git-subcommand-for-ai-generated-commit-messages
Most of my commits these days come from an AI agent. It edits files, runs tests, writes the message. But maybe one in five times I still open the editor myself — a typo fix, a config tweak, a line I'd rather write than describe. For those, asking the agent "now please commit" felt like more friction than just typing the message.
So I built a tiny Git subcommand: git-ca. It reads the staged diff, asks an AI provider for a Conventional Commits draft, and drops me into the normal commit editor. If I like the draft, I save. If I don't, I edit. Like this:
git add src/auth.rs
git ca
# Copilot drafts a message, $EDITOR opens with it,
# I save, git commits.
It's a small Rust binary that calls one HTTP endpoint and shells out to git commit -e -F. That's the whole product. This post is about what I learned building it as someone whose previous Rust experience was zero.
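In sketch form, that last hop looks something like this. A minimal illustration, not git-ca's exact code: write the draft to a temp file, then hand off to git, which opens the editor.

// Sketch only: seed the commit message from a file, but still open $EDITOR.
use std::{fs, io, process::Command};

fn commit_with_draft(draft: &str) -> io::Result<()> {
    let path = std::env::temp_dir().join("git-ca-draft.txt");
    fs::write(&path, draft)?;
    // -F seeds the message from the file; -e still opens the editor so the
    // user can accept, tweak, or rewrite before the commit lands.
    let status = Command::new("git")
        .args(["commit", "-e", "-F"])
        .arg(&path)
        .status()?;
    if status.success() {
        Ok(())
    } else {
        Err(io::Error::new(io::ErrorKind::Other, "git commit aborted"))
    }
}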
What this tool is not
git-ca doesn't replace an AI coding agent. It writes one commit message, optionally opens your editor, and gets out of the way. If you're comfortable committing through an agent, you don't need it.
Why a real tool, not a script
Previously, I wrote Auto-generate Commit Messages with LLMs in Your Terminal. That approach works, but it has some friction. A few considerations:
- A generic LLM client tool like LLM might help, but it's overkill for this use case.
- Agent tools have non-interactive modes that would work, but they carry too much overhead under the hood.
- I want a solution that's portable.
- I don't use LLMs through API keys.
I chose the GitHub Copilot device authentication flow (or OpenAI Codex) so that I could leverage the subscription quota I already pay for.
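For reference, GitHub's device flow is two documented endpoints: request a user code, then poll for the token while the user authorizes in a browser. A minimal Rust sketch, assuming the reqwest (blocking + json) and serde_json crates; CLIENT_ID is a placeholder, not git-ca's real ID.

use std::{thread, time::Duration};

const CLIENT_ID: &str = "your-oauth-app-client-id"; // hypothetical placeholder

fn device_flow_login() -> Result<String, Box<dyn std::error::Error>> {
    let http = reqwest::blocking::Client::new();

    // Step 1: ask GitHub for a device code and a short user code.
    let start: serde_json::Value = http
        .post("https://github.com/login/device/code")
        .header("Accept", "application/json")
        .form(&[("client_id", CLIENT_ID)])
        .send()?
        .json()?;
    println!(
        "Open {} and enter code {}",
        start["verification_uri"].as_str().unwrap_or(""),
        start["user_code"].as_str().unwrap_or("")
    );

    // Step 2: poll for the token while the user authorizes in a browser.
    let interval = start["interval"].as_u64().unwrap_or(5);
    loop {
        thread::sleep(Duration::from_secs(interval));
        let poll: serde_json::Value = http
            .post("https://github.com/login/oauth/access_token")
            .header("Accept", "application/json")
            .form(&[
                ("client_id", CLIENT_ID),
                ("device_code", start["device_code"].as_str().unwrap_or("")),
                ("grant_type", "urn:ietf:params:oauth:grant-type:device_code"),
            ])
            .send()?
            .json()?;
        if let Some(token) = poll["access_token"].as_str() {
            return Ok(token.to_string());
        }
        // "authorization_pending" means keep waiting; anything else is fatal.
        if poll["error"] != "authorization_pending" {
            return Err(format!("device flow failed: {}", poll["error"]).into());
        }
    }
}

The browser round-trip in step 1 is also why device-flow login turns awkward on headless machines, which is what later pushed me toward manual token storage.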
A small static binary I could install once and forget about felt right. The choice of language was secondary — I picked Rust because I wanted to learn it and because "single binary, no runtime dependencies" matched what I was after.
How I worked with AI through the build
I leaned on AI agents through the whole build. How I used them mattered more than which model was on the other end.
Prime the agent with domain context before writing code. Before any implementation, I loaded Rust-specific guidance into the agent's context — idioms, ownership patterns, the conventional crates for error handling and HTTP. The first scaffold was shaped by how Rust people write Rust, not a generic OO design in Rust syntax.
Scaffold first, fill in later. Empty modules came before any feature: auth, Copilot HTTP client, git wrapper, command dispatch. Each had a clear job. Only after the skeleton compiled did I ask the agent to flesh one out at a time — much easier than refactoring a wall of code generated all at once.
Modularize by feature, not by layer. Each module owns one feature end-to-end: auth handles tokens and storage, Copilot handles HTTP and retry, git handles diff reading and commit invocation. New capabilities land in one place. AI doesn't default to this — left alone, it spreads a feature across utility files and shared helpers.
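As a sketch of what that shape looks like, with illustrative stubs rather than git-ca's real internals: each module owns one feature end-to-end, and main stays a thin dispatcher.

mod git {
    use std::process::Command;
    // Owns everything git: reading the staged diff, invoking the commit.
    pub fn staged_diff() -> std::io::Result<String> {
        let out = Command::new("git").args(["diff", "--cached"]).output()?;
        Ok(String::from_utf8_lossy(&out.stdout).into_owned())
    }
}

mod auth {
    // Stub: the real module owns device-flow login and token storage.
    pub fn token() -> Option<String> {
        std::env::var("GIT_CA_TOKEN").ok()
    }
}

mod copilot {
    // Stub: the real module owns the HTTP call and retry logic.
    pub fn draft_message(_token: &str, diff: &str) -> String {
        format!("chore: touch {} staged lines", diff.lines().count())
    }
}

fn main() {
    let diff = git::staged_diff().expect("git diff failed");
    let token = auth::token().expect("no token; log in first");
    println!("{}", copilot::draft_message(&token, &diff));
}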
Run it locally every step, add features in small increments. As soon as the scaffold compiled, I ran it against a real repo. Then I added one capability, ran it again, fixed what broke. The first version handled a single hardcoded account; multiple named accounts came later; manual token storage came later still, when device-flow login turned out to be awkward in headless environments. When something broke, the cause was almost always the last thing I'd changed.
That last point matters more than it sounds. AI will happily generate a complete tool with every flag on the first try — it compiles, looks reasonable, and is nearly impossible to debug because you have no working baseline. Shipping each feature in isolation kept the surface small enough to read.
Where I had to leave the chat window
Release tooling was the place I most clearly had to step outside AI and read real projects.
AI knows the individual pieces: bumping the version, updating the changelog, tagging the commit, running pre-release checks, building cross-platform binaries, publishing to a registry, opening a GitHub release, handing off to Homebrew and npm. Ask about any one and you'll get a reasonable answer.
The trouble is the combination. A small CLI that ships to several package managers needs those steps wired into one workflow, in the right order, with the failure modes covered. That recipe isn't in any one place. AI answers either glued the steps together with brittle shell, or reached for a heavyweight setup meant for a much larger project.
What worked was reading the tools' official docs. From those I learned which piece handles version and tag orchestration, which builds cross-platform binaries, which bumps version strings inside auxiliary files like the man page, and how they hand off to each other. None of that was a single search result away.
The dull takeaway: AI compresses well-trodden paths. For anything that's a specific composition of those paths — especially something that only exists as configuration scattered across files — learn from first principles.
What I'd tell someone starting the same project
Three things, none of them about Rust:
- Pick a project small enough to finish. A tool I'd use every day, with one well-defined external API, was about the ceiling for a first project in a language I didn't know.
- Ask comparison questions, not design questions. "How do production tools handle Y?" beats "design Y for me."
- Learn from first principles. You need that foundation to judge what AI gives you.
The tool is on GitHub if you want to poke at it: hankcraft/git-ca. Issues and disagreements welcome.