DEV Community

Callum Ward

I built a CLI to sync Cursor chat history between machines

The Problem

If you use Cursor across multiple machines — a laptop, a desktop, a VM over SSH — you've hit this: you switch devices and your entire conversation history is gone.

All the context the AI had about your codebase, the decisions you made together, the thread of reasoning across dozens of messages — stuck on whichever machine you happened to be using.

This matters because Cursor's agent mode builds up a deep understanding of your project over a conversation. Losing that context means starting from scratch, re-explaining your architecture, and burning credits on information the AI already had.

Why This Happens

Cursor stores all chat data in local SQLite databases, not in the cloud. Even when you're connected to a remote server via SSH, the chats live on the machine running Cursor's UI.

There's no built-in sync. No export. No way to move conversations between machines.

The Fix: cursaves

cursaves is a CLI tool that exports your Cursor conversations to a private git repo and imports them on another machine.

# Install (once per machine)
uv tool install git+https://github.com/Callum-Ward/cursaves.git

# Set up sync repo (once per machine)
cursaves init --remote git@github.com:you/my-cursaves.git

Then the workflow is two commands:

# Machine A: save your chats
cursaves push

# Machine B: restore them
cursaves pull
# Restart Cursor to see the imported chats

That's it.

What It Actually Does

Push reads Cursor's SQLite databases, exports each conversation as a self-contained JSON snapshot (compressed with gzip), commits to a git repo, and pushes.

Pull fetches from git, writes the conversation data back into Cursor's databases, rewrites file paths to match the target machine, and registers the chats in the correct workspace.
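
The push side can be sketched in a few lines: copy the database aside, read out the conversation's entries, and write a gzipped JSON snapshot. This is an illustration, not cursaves' actual code — the `cursorDiskKV` table name and the `export_snapshot` helper are my assumptions:

```python
import gzip
import json
import shutil
import sqlite3
import tempfile
from pathlib import Path

def export_snapshot(db_path: Path, composer_id: str, out_dir: Path) -> Path:
    """Export one conversation as a self-contained gzipped JSON snapshot."""
    # Read from a temp copy so we never touch Cursor's live database.
    with tempfile.TemporaryDirectory() as tmp:
        tmp_db = Path(tmp) / "state.vscdb"
        shutil.copy2(db_path, tmp_db)
        conn = sqlite3.connect(tmp_db)
        try:
            rows = conn.execute(
                "SELECT key, value FROM cursorDiskKV WHERE key LIKE ?",
                (f"bubbleId:{composer_id}:%",),
            ).fetchall()
        finally:
            conn.close()

    snapshot = {
        "composerId": composer_id,
        "messages": [json.loads(value) for _key, value in rows],
    }
    out_path = out_dir / f"{composer_id}.json.gz"
    with gzip.open(out_path, "wt", encoding="utf-8") as f:
        json.dump(snapshot, f)
    return out_path
```

The temp-copy read is the same safety measure the bullet list below describes: the live database is never opened for the export.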

It handles the tricky parts:

  • Project matching — uses git remote URLs to identify projects, so the same repo at /Users/alice/myapp and /home/bob/myapp syncs correctly
  • Path rewriting — file references in chat metadata are automatically rewritten for the target machine
  • SSH workspaces — Cursor stores SSH chats locally, so cursaves lets you target specific workspaces with -w or interactive selection with -s
  • Safety — reads use a temp copy of the database, writes create backups first, and imports are blocked while Cursor is running
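
The project-matching idea in the first bullet boils down to reducing every git remote URL to one canonical key, so SSH and HTTPS remotes for the same repo compare equal. A hedged sketch of that normalization (not cursaves' actual implementation; `normalize_remote` is a made-up name):

```python
import re

def normalize_remote(url: str) -> str:
    """Reduce a git remote URL to a host/owner/repo key so the same
    project is recognized regardless of protocol or local checkout path."""
    url = url.strip()
    # SSH form: git@github.com:alice/myapp.git -> github.com/alice/myapp
    m = re.match(r"^[\w.-]+@([\w.-]+):(.+?)(?:\.git)?$", url)
    if m:
        return f"{m.group(1)}/{m.group(2)}".lower()
    # URL form: https://github.com/alice/myapp.git -> github.com/alice/myapp
    m = re.match(r"^\w+://(?:[\w.-]+@)?([\w.-]+)/(.+?)(?:\.git)?/?$", url)
    if m:
        return f"{m.group(1)}/{m.group(2)}".lower()
    return url.lower()
```

With this, `/Users/alice/myapp` and `/home/bob/myapp` map to the same key as long as their `origin` remotes point at the same repository.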

Interactive Selection

You don't have to push/pull everything. The -s flag lets you pick:

$ cursaves push -s

  Conversations in my-project  (5 total)

  #    Name                                    Msgs  Last Updated
  -------------------------------------------------------------------
  1    Refactor auth middleware                1004  2026-03-03 17:39 UTC
  2    Fix deployment pipeline                  993  2026-03-02 17:19 UTC
  3    Add rate limiting to API                  55  2026-02-13 14:41 UTC

  Select chats to push (e.g. 1,3 or 1-3 or 'all') [all]:
  > 1,2

For pull, it shows each snapshot with its date and message count so you know exactly what you're importing.

SSH Remote Workflow

This was the use case that motivated the whole project. When you SSH into a VM through Cursor, your chats are on your laptop — not the VM. If you switch to a different laptop, those chats are gone.

# List all workspaces (local + SSH)
cursaves workspaces

#    Type   Path                                     Host         Chats
# --------------------------------------------------------------------------
# 1  ssh    /home/user/repos/myapp                   prod-vm          3
# 2  ssh    /home/user/repos/myapp                   dev-vm           5
# 3  local  /Users/me/Projects/webapp                                 2

# Push from a specific SSH workspace
cursaves push -w 1

# On another machine, pull it back
cursaves pull -s

Important: run cursaves in a local terminal, not Cursor's integrated terminal (which runs on the remote).

Same-Machine Workspace Copying

It's not just for syncing between machines. Cursor isolates chats per workspace — if you clone the same repo to a new directory, or open it from a different path, your previous conversations won't be there.

cursaves handles this too:

# Export from the old workspace
cd /path/to/old/checkout
cursaves push

# Import into the new workspace
cd /path/to/new/checkout
cursaves pull

No remote repo needed — cursaves init without --remote works for local-only use.

Performance

The tool needs to be fast since you'll run it frequently. Some numbers:

  • Snapshot listing: reads tiny metadata sidecar files instead of decompressing full snapshots — 3ms vs 15+ seconds
  • Conversation listing: single shared DB connection — 800ms for 5 conversations vs 4.7 seconds
  • Batch DB writes: imports 50K+ message entries in a single SQLite transaction instead of individual connections
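
The last bullet is the classic SQLite optimization: wrap all inserts in one transaction instead of paying commit overhead per row. A sketch of the pattern, assuming a `cursorDiskKV`-style key/value table (the table name and `import_messages` helper are my assumptions, not cursaves' actual code):

```python
import json
import sqlite3

def import_messages(db_path: str, entries: dict) -> None:
    """Insert (or replace) many message entries in a single transaction.

    One commit for tens of thousands of rows is dramatically cheaper
    than opening a connection and committing per row.
    """
    conn = sqlite3.connect(db_path)
    try:
        with conn:  # one transaction: commits on success, rolls back on error
            conn.executemany(
                "INSERT OR REPLACE INTO cursorDiskKV (key, value) VALUES (?, ?)",
                ((key, json.dumps(value)) for key, value in entries.items()),
            )
    finally:
        conn.close()
```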

Setup

You need Python 3.10+, uv, and git. No external Python dependencies. Tested with Cursor 2.6.11.

# 1. Install
uv tool install git+https://github.com/Callum-Ward/cursaves.git

# 2. Create a PRIVATE repo on GitHub for your chat data
#    (snapshots contain your full conversations — keep it private)

# 3. Initialize on each machine
cursaves init --remote git@github.com:you/my-cursaves.git

# 4. Start syncing
cursaves push   # save
cursaves pull   # restore (then restart Cursor)

There's also a watch command that auto-syncs in the background:

cursaves watch -p /path/to/project

How Cursor Stores Chats (for the curious)

Cursor uses two SQLite databases:

  • Workspace DB (workspaceStorage/{id}/state.vscdb) — sidebar metadata mapping conversation IDs to workspaces
  • Global DB (globalStorage/state.vscdb) — actual message content stored as individual JSON entries keyed by bubbleId:{composerId}:{messageId}

The workspace DB entry needs a type: "head" field and several metadata flags for Cursor to render it in the sidebar. The global DB can be 3+ GB and uses WAL mode. Cursor caches everything in memory at startup and never watches for external changes — which is why a full restart is required after import.
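
Given that key scheme, grouping raw global-DB keys back into conversations is a straightforward parse. A minimal sketch, assuming keys exactly of the `bubbleId:{composerId}:{messageId}` form described above (`group_bubbles` is an illustrative name):

```python
def group_bubbles(keys: list) -> dict:
    """Group global-DB keys of the form bubbleId:{composerId}:{messageId}
    into a mapping of conversation ID -> list of message IDs."""
    convs = {}
    for key in keys:
        parts = key.split(":", 2)
        if len(parts) != 3 or parts[0] != "bubbleId":
            continue  # skip unrelated keys in the store
        _, composer_id, message_id = parts
        convs.setdefault(composer_id, []).append(message_id)
    return convs
```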

I wrote up the full details in the docs if you want to dig deeper.

Links

If you find it useful, a star on the repo or a coffee goes a long way. Issues and PRs welcome.
