Update (2026): The project has been renamed from dotclaude to dotbabel to reflect its model-agnostic positioning. v1.x setups continue to work via a one-release-window read-fallback compat layer (~/.config/dotclaude/, DOTCLAUDE_* env vars, etc.); compat shims are removed in 3.0.0. Migration guide: https://github.com/kaiohenricunha/dotbabel/blob/main/docs/upgrade-guide.md
One skill from the dotbabel project, and how it solved cross-CLI session transfer.
It happened on a Tuesday.
I was four hours into a careful refactor of a GraphQL gateway in Claude Code. The kind of session where you've walked the model through three layers of internal context, agreed on a strategy, and started touching files. The plan was tight, the momentum was real.
Then I hit the context limit.
Claude told me to pick it up later with `claude --resume <some-uuid>`. Codex was already open in the next tmux pane, idle. I had three options:
- Paste a 14k-character transcript into Codex and cross my fingers.
- Ask Claude to "summarize this for the next agent," then watch it omit the load-bearing details.
- Start over from scratch in Codex and waste the morning.
None of them were good. So I built /handoff. It's why I keep dotbabel installed on every machine I work on.
## A word on dotbabel
If you read my earlier piece on dotbabel, you already know the project: an MIT-licensed governance layer for Claude Code, with a portable skills library on one side and a CI-friendly validation CLI on the other. That post covered the architecture and motivation. This one zooms into a single skill from the library.
The premise: skills travel with you across machines. That's the whole point of dotbabel path 1. Conversations didn't travel with them. You'd open a fresh CLI on a new machine and lose every working assumption from the last session. /handoff closes that gap with three verbs and a private git repo as transport.
## Three verbs in sixty seconds
```
dotbabel handoff pull <id>          # render a local session as markdown
dotbabel handoff push --from <cli>  # ship a session to a private git repo
dotbabel handoff fetch <id>         # grab a session from any other machine
```
Those three verbs are the entire surface. pull is local-only: it reads a session transcript from disk and emits a <handoff> block you can paste anywhere. push and fetch use a private git repo as transport, so you can move context across machines without standing up new infrastructure.
One note on push arguments. --from <cli> is required only when no <query> is given, since the tool needs to know whose latest to ship. With an explicit <query> (UUID, short UUID, alias, or latest), --from is optional and acts as a filter that narrows the resolver to one CLI's sessions.
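To make that contract concrete, here's a tiny sketch in Python (a hypothetical reimplementation for illustration, not the tool's source):

```python
from typing import Optional

def validate_push_args(query: Optional[str], from_cli: Optional[str]) -> None:
    # --from is mandatory only when no <query> narrows the session down.
    if query is None and from_cli is None:
        raise ValueError("push: --from <cli> is required when no <query> is given")
    # With an explicit query, --from is just a filter; nothing more to enforce here.

validate_push_args(query="a1b2c3d4", from_cli=None)    # ok: query resolves the session
validate_push_args(query="latest", from_cli="claude")  # ok: --from narrows "latest"
try:
    validate_push_args(query=None, from_cli=None)
except ValueError as e:
    print(e)
```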
I'll call the rendered output the digest for the rest of this article. It's the thing all three verbs operate on.
## A small note on invocation
Claude Code is the primary host for dotbabel skills. It autoloads ~/.claude/skills/, so /handoff is available as a native slash command from the moment the binary is installed. Inside Claude Code I just type /handoff push --from claude.
Other CLIs aren't there yet. Codex, Copilot, and Gemini don't autoload the skill manifest, so you call the underlying binary directly via the CLI's bash escape:
```
!dotbabel handoff push --from gemini
```
Same code path, same behavior, slightly more typing. Native slash-command support for Codex, Copilot, and Gemini is on the roadmap; for now, the ! prefix is the contract. For brevity, the rest of this article uses the bare dotbabel handoff … form. Prepend ! if you're calling from inside a non-Claude CLI.
## Prerequisites
```
npm install -g @dotbabel/dotbabel
```
That covers the local-only path. pull works the moment the binary is installed: no network, no auth, no config.
For cross-machine work, you need a private git repo and one environment variable:
```
export DOTBABEL_HANDOFF_REPO=git@github.com:you/handoff-store.git
```
Or skip the manual setup. The first time you run push, dotbabel detects an unset DOTBABEL_HANDOFF_REPO, checks whether gh is authenticated, offers to create a private repo for you, and persists the URL to ~/.config/dotbabel/handoff.env. The whole bootstrap is one yes-or-no prompt.
Verify your setup any time:
```
dotbabel handoff doctor
```
You're looking for `ok` and a non-empty `DOTBABEL_HANDOFF_REPO`. Anything else, and the doctor prints a structured remediation block telling you exactly what's wrong and how to fix it.
## Walkthrough: local handoff
Say I want to move my current Claude Code session into Codex. No transport repo needed for that case: same machine, same filesystem.
```
dotbabel handoff pull latest --from claude
```
This finds my most recent Claude session, extracts the user prompts and the last few assistant turns, and prints a digest to stdout:
```
<handoff origin="claude" session="a1b2c3d4" cwd="/home/dev/projects/gateway" target="claude">
**Summary.** Session opened with: "/refactor the resolver layer to use dataloaders".
Last assistant output (truncated): "Approved. Applying the changes to resolvers/user.ts".
Full prompt log and assistant tail follow for context.

**User prompts (last 10, in order).**
1. /refactor the resolver layer to use dataloaders
2. Show me the existing resolver shape first
3. Why are we batching by tenant_id and not user_id?
…

**Last assistant turns (tail).**
> The current resolver hits the DB once per request. Batching by tenant…
> Plan: introduce a DataLoader keyed on (tenant_id, user_id) and migrate…
> Approved. Applying the changes to resolvers/user.ts

**Next step.** Continue from the last assistant turn using the same file scope and goals summarized above.
</handoff>
```
The <handoff> tag is deliberate: it's a machine-readable marker that lets a receiving agent detect the digest and treat it as a task specification with explicit scope.
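A receiving agent (or a wrapper script) can detect and unpack the digest with a few lines of parsing. This is my own sketch of what consumption could look like, not part of dotbabel:

```python
import re

HANDOFF_RE = re.compile(r'<handoff\s+(?P<attrs>[^>]*)>(?P<body>.*?)</handoff>', re.DOTALL)
ATTR_RE = re.compile(r'(\w+)="([^"]*)"')

def parse_handoff(text):
    """Find a <handoff> digest in pasted text; return (attrs dict, body) or None."""
    m = HANDOFF_RE.search(text)
    if not m:
        return None
    attrs = dict(ATTR_RE.findall(m.group('attrs')))
    return attrs, m.group('body').strip()

sample = ('<handoff origin="claude" session="a1b2c3d4" cwd="/home/dev/p" target="claude">\n'
          '**Summary.** ...\n</handoff>')
attrs, body = parse_handoff(sample)
print(attrs['origin'], attrs['session'])  # → claude a1b2c3d4
```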
Three variants worth knowing:
```
# Same digest, but written to a markdown file under docs/handoffs/
dotbabel handoff pull <id> -o auto

# A terser prose summary, useful when you just want to remember what a session was about
dotbabel handoff pull <id> --summary

# Specific output path
dotbabel handoff pull <id> -o /tmp/handoff.md
```
## Resolving an ID
`<id>` accepts more than UUIDs. The resolver tries, in order: full UUID → short UUID (the first 8 hex chars) → the literal `latest` → a user-assigned alias. Aliases are case-insensitive and come from whatever the source CLI calls them:
- Claude's `customTitle` or `aiTitle` (set with `claude --resume "my-feature"`).
- Codex's `thread_name` (set with `codex resume <name>`).
- Copilot's `workspace.yaml:name`.
- Gemini's `checkpoint-<tag>.json` (set with `/chat save <tag>` inside the session).
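The resolution chain is simple enough to sketch. Assuming an illustrative in-memory session list (not the tool's real storage format), it might look like:

```python
import re

def resolve_session(query, sessions):
    """Resolution order from the article: full UUID → 8-hex short UUID →
    the literal "latest" → case-insensitive alias."""
    if re.fullmatch(r'[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}', query):
        return next((s for s in sessions if s["id"] == query), None)
    if re.fullmatch(r'[0-9a-f]{8}', query):
        return next((s for s in sessions if s["id"].startswith(query)), None)
    if query == "latest":
        return max(sessions, key=lambda s: s["mtime"], default=None)
    return next((s for s in sessions
                 if (s.get("alias") or "").lower() == query.lower()), None)

sessions = [
    {"id": "a1b2c3d4-0000-4000-8000-000000000001", "alias": "gateway-refactor", "mtime": 2},
    {"id": "deadbeef-0000-4000-8000-000000000002", "alias": None, "mtime": 1},
]
print(resolve_session("a1b2c3d4", sessions)["alias"])          # → gateway-refactor
print(resolve_session("GATEWAY-REFACTOR", sessions)["id"][:8]) # → a1b2c3d4
```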
Aliases are why the workflow stays out of my way. I rename my Claude session gateway-refactor, walk to my desktop the next morning, run dotbabel handoff fetch gateway-refactor, and it works. No UUID copy-paste, no scrolling through directory listings.
## Walkthrough: across machines
Now the scenario that motivated the whole skill: moving a session from my laptop to my desktop.
On the laptop, before I close my coffee shop tab:
```
dotbabel handoff push --from claude --tag end-of-day
```
The output is short and useful:
```
handoff/gateway/claude/2026-05/a1b2c3d4
git@github.com:you/handoff-store.git
handoff:v2:gateway:claude:2026-05:a1b2c3d4:laptop-mbp:end-of-day
[scrubbed 0 secrets]
```
The first line is the canonical branch name. The shape is intentional:
```
handoff/<project>/<cli>/<YYYY-MM>/<short-id>
```
It's namespaced by project (derived from the session's git root), then origin CLI, then year-month, then the 8-hex short id. The structure does double duty as a collision domain. Two sessions in the same project, same CLI, same month, with the same short-id prefix would clash, and the binary's collision probe catches that before any push lands.
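For concreteness, here's how such a branch name could be assembled; the sanitization of the project segment is my assumption, not documented behavior:

```python
import re

def branch_name(project: str, cli: str, yyyymm: str, session_id: str) -> str:
    """Build the canonical branch shape described above (illustrative sketch)."""
    short = session_id.replace("-", "")[:8]          # 8-hex short id
    safe_project = re.sub(r'[^A-Za-z0-9._-]', '-', project)  # assumed sanitization
    return f"handoff/{safe_project}/{cli}/{yyyymm}/{short}"

print(branch_name("gateway", "claude", "2026-05",
                  "a1b2c3d4-0000-4000-8000-000000000001"))
# → handoff/gateway/claude/2026-05/a1b2c3d4
```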
On the desktop, an hour later:
```
dotbabel handoff fetch a1b2c3d4
```
fetch clones just the one branch, reads handoff.md from its tip, and prints it to stdout. Same digest I produced on the laptop. I paste it into a fresh Claude, Codex, or Gemini session, and the new agent picks up where the old one left off, with the file scope and plan intact.
You can list and search before fetching:
```
dotbabel handoff list --remote --limit 10
dotbabel handoff search "dataloader"
```
The list view shows location, CLI, short id, and timestamp. Search runs a substring/regex match against the digest content. Lucene this is not, but it's good enough to find a specific session by a phrase you remember from the prompt log.
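Conceptually, `search` is just this (a sketch; `digests` is an illustrative in-memory stand-in for the fetched branch contents):

```python
import re

def search_digests(pattern, digests):
    """Substring/regex match over digest content, keyed by branch name."""
    rx = re.compile(pattern)
    return [branch for branch, text in digests.items() if rx.search(text)]

digests = {
    "handoff/gateway/claude/2026-05/a1b2c3d4":
        "Plan: introduce a DataLoader keyed on (tenant_id, user_id)",
    "handoff/gateway/codex/2026-05/deadbeef":
        "Fix flaky integration test",
}
print(search_digests(r"[Dd]ata[Ll]oader", digests))
# → ['handoff/gateway/claude/2026-05/a1b2c3d4']
```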
## The redaction pass
Here's the part I wasn't willing to hand-wave. The digest is plaintext markdown going to a remote git repo. If I accidentally pasted an API key into a session three days ago, that key is in the transcript. If push doesn't strip it, my secrets manager just got bypassed by a developer-experience tool.
So push runs the digest through a redaction script before it ever leaves the machine. The script operates on stdin, applies eight regex passes, and emits the redacted text plus a `scrubbed:<N>` count on stderr. The eight passes cover:
- GitHub tokens (`ghp_…`, `gho_…`, `ghs_…`)
- OpenAI / Anthropic-style keys (`sk-…`)
- AWS access keys (`AKIA…`)
- Google API keys (`AIza…`)
- Slack tokens (`xox[baprs]-…`)
- HTTP `Authorization: Bearer …` headers
- Environment variable assignments matching `*TOKEN`, `*KEY`, `*SECRET`, `*PASSWORD`
- PEM private-key block headers
The design constraint is "fail closed." If the script can't run for any reason (missing perl, I/O error, malformed input), the push aborts with an error and nothing reaches the remote. There's no --skip-scrub flag. There never will be.
The skill itself reinforces this. Look at skills/handoff/SKILL.md and you'll see an explicit instruction to the LLM: "if the binary cannot be executed, do not fabricate a <handoff> block from raw session JSONL." The reasoning is concrete: without the binary's scrub pass, a hand-rolled digest would silently bypass redaction.
Scrubbing is best-effort. It does not catch:
- Custom enterprise secret formats.
- Secrets broken across lines (IDE copy-paste sometimes wraps).
- Anything you wrote in prose ("my password is correct horse battery staple").
For sensitive sessions, my workflow is: pull <id> first, eyeball the digest locally, then push if it's clean. The local render and the remote push produce identical content modulo the scrub markers, so what you see is what gets uploaded.
## The interesting edges
A few details that turned out to matter once I started using this daily.
The first is the short-id collision probe. Eight hex chars of a UUIDv4 give you ~4 billion combinations per project-CLI-month bucket, so collisions are rare without being impossible. Before any push, the binary runs a git ls-remote for the target branch. If it exists and the remote metadata.json's session_id matches yours, it's the same session, and the push proceeds as an update. If they don't match, the push refuses with a clear error and points at --force-collision for the override. No silent clobbers.
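The probe's decision table is small; here's a sketch with an in-memory stand-in for the `git ls-remote` call and the `metadata.json` read:

```python
def probe_collision(branch, remote_branches, local_session_id):
    """Decide what a push should do, per the probe described above.
    `remote_branches` maps branch name → remote metadata.json session_id
    (illustrative stand-in for the real network calls)."""
    remote_id = remote_branches.get(branch)
    if remote_id is None:
        return "create"           # branch doesn't exist yet
    if remote_id == local_session_id:
        return "update"           # same session, push proceeds
    return "refuse"               # different session: no silent clobber

remote = {"handoff/gateway/claude/2026-05/a1b2c3d4": "a1b2c3d4-0000-4000-8000-000000000001"}
print(probe_collision("handoff/gateway/claude/2026-05/a1b2c3d4", remote,
                      "a1b2c3d4-0000-4000-8000-000000000001"))  # → update
print(probe_collision("handoff/gateway/claude/2026-05/a1b2c3d4", remote,
                      "a1b2c3d4-ffff-4000-8000-00000000beef"))  # → refuse
```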
The second is connectivity caching. Both push and fetch run a connectivity check before each operation, then cache the result for five minutes so you don't pay the round-trip cost on a sequence of related commands. Pass --verify to force a fresh probe.
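The caching shape is a classic TTL memo. A minimal sketch (the five-minute TTL matches the article; the mechanism is my assumption):

```python
import time

class ConnectivityCache:
    """Cache a connectivity probe result for `ttl` seconds."""
    def __init__(self, probe, ttl: float = 300.0):
        self._probe = probe      # callable that does the real round-trip
        self._ttl = ttl
        self._result = None
        self._at = 0.0

    def check(self, verify: bool = False) -> bool:
        now = time.monotonic()
        if verify or self._result is None or now - self._at > self._ttl:
            self._result = self._probe()   # fresh probe (or --verify forced one)
            self._at = now
        return self._result

calls = []
cache = ConnectivityCache(lambda: calls.append(1) or True)
cache.check(); cache.check()    # second call is served from the cache
print(len(calls))  # → 1
cache.check(verify=True)        # --verify forces a fresh probe
print(len(calls))  # → 2
```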
The third is the choice of git as transport. It's a substrate I already trust, with cheap branches, well-understood ACLs, and prune semantics that map naturally onto branch deletion. There's no new service to operate, no new credentials to rotate, and any private git provider works: GitHub, GitLab, Gitea, self-hosted, or a file:// URL pointing at a USB stick for air-gapped transfer.
## What it isn't
A short list of capabilities I deliberately did not build, and why:
- Not end-to-end encrypted. Transport is access-controlled by your private repo's ACL; content is plaintext on the remote. If your threat model demands encryption at rest in the transport repo, that's a feature for a future version.
- Not fuzzy or semantic search. `search` is substring/regex only. The corpus is small enough that a smart `grep` is faster and more predictable than a vector index.
- Doesn't invoke the target CLI for you. The skill prints; you paste. That's deliberate. Keeping the human in the transfer loop preserves auditability and avoids automating a step where wrong context is worse than no context.
I should also be honest about the rough edges. While writing this article I exercised the full round-trip end-to-end and surfaced three small bugs in the process: a flag silently dropped on the wrong verb, a misleading prune failure report, and a default-branch trap baked into the auto-bootstrap path. They're tracked publicly as #178, #179, and #180. None of them block daily use; all three are cosmetic or recoverable. The transport itself is solid.
## Try it
Install is one npm command. The first push walks you through repo setup:
```
npm install -g @dotbabel/dotbabel
dotbabel handoff push --from claude
```
Source, issue tracker, and contribution guide live at github.com/kaiohenricunha/dotbabel. PRs, bug reports, and "this didn't work on my $obscure-shell" notes all welcome. The broader project tour is in the dotbabel governance article.