DEV Community

Mario Costa
Stop Losing AI Context: A Cross-AI Workflow for Copilot, Claude, and Codex

The Problem: AI Amnesia

You know the drill.

Monday morning, you ask Copilot to help refactor an authentication module. You spend 45 minutes in chat, get a solid plan, implement half of it. Then you get interrupted by a meeting.

Tuesday afternoon, you come back. Copilot has no memory of Monday. Claude doesn't know what Copilot suggested. You're starting from scratch.

Your context died in chat history.

This isn't a tool problem. It's a workflow problem.

The Solution: Artifact-Driven Development

What if your AI assistants shared a single source of truth? What if state lived in your repository—not in ephemeral chat windows?

Enter IA Boilerplate – a cross-AI workflow bootstrap that standardizes how Copilot, Claude, and Codex interact with your repo through shared artifacts, planning contracts, and execution rules.

One workflow, any AI. State lives in Git, not chat history.

How It Works (The 30-Second Version)

```
Define objective  →  Plan atomic slice   →  Execute focused diff
STATE.md          →  plans/PLAN.md       →  code changes
      ↓                    ↓                       ↓
ROADMAP.md        →  verification steps  →  TSDoc/PHPDoc
(phased work)     →  done criteria       →  SOLID boundaries
```

Every AI follows the same rules from the same source of truth. Planning artifacts are version-controlled. Summaries are capped at 180 words (token efficiency matters).
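A cap like that is easy to enforce mechanically. As a sketch (a hypothetical helper, not the repo's actual `validate-workflow.sh` logic), a word-count check might look like:

```typescript
/**
 * Returns true if `summary` fits within the word cap.
 * Hypothetical helper illustrating the 180-word summary rule;
 * the function name and signature are assumptions, not from the repo.
 */
function withinSummaryCap(summary: string, cap: number = 180): boolean {
  // Split on any whitespace run; filter(Boolean) drops empty strings
  // produced by leading/trailing whitespace.
  const words = summary.trim().split(/\s+/).filter(Boolean);
  return words.length <= cap;
}
```

The point is that the contract is checkable: a conformance script can fail the build when a handoff summary blows past the budget, instead of relying on reviewer discipline.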

Trivial fix? Edit directly.

Bug fix? Short plan + validate.

New feature or risky refactor? Full workflow: map → plan → execute → verify → capture.
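To make the full workflow concrete, a minimal `.planning/STATE.md` entry might look like the sketch below (hypothetical contents and field names; the actual schema lives in `docs/ai/ARTIFACTS.md`):

```markdown
# STATE

## Current objective
Add rate limiting to the public API (see plans/PLAN.md).

## Last summary (≤ 180 words)
Chose a token-bucket limiter for the first slice. Plan approved;
execution starts with the middleware, behind a config flag.
```

Because the file is committed, any AI (or teammate) picking up the work starts from this snapshot instead of a dead chat thread.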

Why This Matters

Stop context drift – Your state lives in .planning/STATE.md, not in a chat that disappears overnight.

Built-in quality bar – Complete in-code documentation (TSDoc/PHPDoc) and SOLID architecture are non-negotiable defaults. Not suggestions. Requirements.

Token-efficient by design – Atomic loops, compact handoffs, and explicit scope boundaries reduce waste across every phase.

Auditable – Every planning decision, every verification step, every summary is committed. You can see why something was built six months later.

What's Inside v1.0.0

  • Canonical workflow and artifact contracts (docs/ai/WORKFLOW.md, docs/ai/ARTIFACTS.md)
  • Runtime adapters for Copilot, Claude, and Codex
  • Bootstrap script to personalize the template for your project
  • Conformance validation script (scripts/validate-workflow.sh)
  • Integration tests for the bootstrap process
  • End-to-end usage example
  • Multi-stack Todo API examples (Node, Python, Go, Rust, PHP)

Quick Start

```bash
# Clone and bootstrap
git clone https://github.com/mlucascosta/ia_boilerplate.git my-project
cd my-project
./scripts/bootstrap-template.sh --project-name "My Project"

# Validate everything is wired correctly
bash scripts/validate-workflow.sh

# Open the workflow and start
# docs/ai/WORKFLOW.md   — execution contract
# .planning/STATE.md    — set your current objective here
```

Prerequisites: git, node, npx (comes with Node.js).

Real-World Example

Let's say you need to add a rate-limiting feature to your API.

Without this workflow: You explain the requirement to Copilot. It suggests a plan. You implement. Two days later, Claude has no context, so you re-explain everything. Someone asks why you chose Redis over an in-memory store. You don't remember.

With this workflow:

  1. You write the objective in .planning/STATE.md
  2. AI generates plans/PLAN.md with verification steps
  3. You review the plan (it's committed – everyone sees it)
  4. AI executes, following SOLID + documentation rules
  5. Results logged to verification/
  6. The next objective is captured back in STATE.md
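The plan artifact from step 2 might look something like this (a hypothetical sketch; the real contract is defined in `docs/ai/ARTIFACTS.md`):

```markdown
# PLAN: API rate limiting

## Scope
- Token-bucket middleware on /api/* routes
- Out of scope: per-tenant quotas

## Verification
- Unit test: request over the window limit returns 429
- Decision note: Redis vs in-memory, with rationale

## Done criteria
- Middleware documented (TSDoc) and behind a config flag
```

When the "why Redis?" question comes up two days later, the answer is a `git log` away.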

Every AI sees the same artifacts. The rationale is documented. Your future self (and your team) will thank you.
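And the code step 4 produces is held to the same bar. As an illustration only (the names and interface are assumptions, not taken from the repo's examples), a SOLID-style, TSDoc-documented slice might look like:

```typescript
/**
 * Contract for a rate limiter. Callers depend on this interface,
 * not on a concrete store (dependency inversion).
 */
interface RateLimiter {
  /** Returns true if the request identified by `key` may proceed. */
  allow(key: string): boolean;
}

/**
 * In-memory token-bucket limiter: up to `capacity` requests per
 * `windowMs` per key, with continuous refill.
 */
class TokenBucketLimiter implements RateLimiter {
  private buckets = new Map<string, { tokens: number; last: number }>();

  constructor(private capacity: number, private windowMs: number) {}

  allow(key: string): boolean {
    const now = Date.now();
    const bucket =
      this.buckets.get(key) ?? { tokens: this.capacity, last: now };
    // Refill tokens in proportion to elapsed time, capped at capacity.
    const refill = ((now - bucket.last) / this.windowMs) * this.capacity;
    bucket.tokens = Math.min(this.capacity, bucket.tokens + refill);
    bucket.last = now;
    if (bucket.tokens < 1) {
      this.buckets.set(key, bucket);
      return false; // over budget for this window
    }
    bucket.tokens -= 1;
    this.buckets.set(key, bucket);
    return true;
  }
}
```

Swapping the in-memory store for Redis later means adding another `RateLimiter` implementation, not rewriting callers, which is exactly the kind of boundary the workflow's SOLID rule is meant to enforce.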

The Philosophy

This isn't about replacing developers. It's about making AI-assisted delivery predictable, portable, and auditable.

Treat documentation as operational memory. Make planning and execution artifact-driven instead of chat-history-driven. Keep workflow expectations explicit, reviewable, and durable.

Influences: This setup combines ideas from J-Pster/Psters_AI_Workflow and gsd-build/get-shit-done – credited in the README. Evolution, not plagiarism.

Current Status

v1.0.0 – The workflow contract, tooling, and examples are in place. Tests exist. Multi-stack examples are done.

What's coming:

  • Cross-platform install docs (Linux, WSL, Windows)
  • Migration tooling for future v1.x → v2.x upgrades

Who Is This For?

  • Solo developers who use multiple AI tools (Copilot at work, Claude/Codex for side projects)
  • Teams that want consistent, auditable AI-assisted development
  • Anyone tired of repeating themselves across AI sessions

Try It Today

GitHub: github.com/mlucascosta/ia_boilerplate ⭐ Star if you find it useful – it helps others discover the project.

MIT Licensed. Contributions welcome.


Stop losing context. Start shipping.


Appendix: Quick Commands Reference

```bash
# Bootstrap a new project
./scripts/bootstrap-template.sh --project-name "Your Project"

# Validate workflow conformance
bash scripts/validate-workflow.sh

# Run integration tests
bash tests/test-bootstrap.sh
```

Did this resonate? Drop a comment below or open an issue on GitHub. I'd love to hear how you're solving the AI context problem in your workflow.
