Most developers I know are loyal to their agentic coding platform. You're a Copilot person, a Claude Code person, etc. That made sense when each required its own special way of managing context.
But AGENTS.md is quietly changing that equation. It's a universal context standard that works across GitHub Copilot, Claude Code, Gemini CLI, OpenAI Codex, and others. Write your project spec once, use any platform that fits the moment.
I tested this by building the same application three different ways. Here's what I learned about the current state of agentic coding and why your workflow might benefit from a multi-platform approach.
What is AGENTS.md?
AGENTS.md is a standardized markdown file that provides context to AI coding assistants. Think of it as a project brief that lives in your repository: requirements, technical specifications, coding preferences, architectural decisions, and context an AI needs to work effectively.
What makes it useful:
- Universal standard: Works across GitHub Copilot, Claude Code, Gemini CLI, OpenAI Codex, and other AI coding tools
- Plain markdown: No special syntax required
- Persistent context: The AI reads it each time, so you're not re-explaining your project
What goes in it:
Project overview, technical requirements, file structure, coding standards, dependencies, and setup instructions.
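To make that concrete, here's a condensed sketch of what one might look like, based on the project used later in this post. The section names are common conventions, not requirements of the format:

```markdown
# AGENTS.md

## Project Overview
A browser-based Conway's Game of Life with real-time pattern recognition
and a retro arcade aesthetic (CRT scanlines, glow effects).

## Technical Requirements
- Vanilla JavaScript and HTML5 Canvas; no build step
- Detect and color-code gliders, oscillators, and still lifes

## Coding Standards
- Keep simulation logic in pure functions, separate from rendering

## Setup
Open index.html in a browser. No dependencies to install.
```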
Where it lives:
Place it in your project root directory. Some platforms support AGENTS.md files at multiple levels for more granular context.
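For example, a larger repository might scope context per package (layout illustrative):

```text
repo/
├── AGENTS.md          # project-wide context
├── frontend/
│   └── AGENTS.md      # frontend-specific conventions
└── api/
    └── AGENTS.md      # API-specific conventions
```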
Platform-specific notes:
GitHub Copilot also supports its own instructions files (such as .github/copilot-instructions.md) at various levels, but AGENTS.md works universally. Claude Code and Gemini CLI both read AGENTS.md for context.
The key advantage: write your context once, and multiple AI coding tools can use it.
The Experiment
For this experiment, I needed a project complex enough to stress-test these tools: Conway's Game of Life with real-time pattern recognition (to make the coding challenge a bit harder) and a retro arcade aesthetic. The AGENTS.md specification was 2,000 words covering the cellular automaton logic, visual effects (CRT scanlines, glow), and automatic detection and color-coding of emergent patterns like gliders, oscillators, and still lifes.
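For reference, the heart of that spec is the classic B3/S23 rule: a live cell survives with two or three live neighbors, and a dead cell comes alive with exactly three. Here's a minimal sketch in TypeScript (each tool chose its own stack for the real implementations):

```typescript
type Grid = boolean[][];

// One generation of Conway's Game of Life (B3/S23).
function step(grid: Grid): Grid {
  const rows = grid.length;
  const cols = grid[0].length;
  return grid.map((row, r) =>
    row.map((alive, c) => {
      // Count the eight surrounding cells, treating out-of-bounds as dead.
      let neighbors = 0;
      for (let dr = -1; dr <= 1; dr++) {
        for (let dc = -1; dc <= 1; dc++) {
          if (dr === 0 && dc === 0) continue;
          const nr = r + dr;
          const nc = c + dc;
          if (nr >= 0 && nr < rows && nc >= 0 && nc < cols && grid[nr][nc]) {
            neighbors++;
          }
        }
      }
      return alive ? neighbors === 2 || neighbors === 3 : neighbors === 3;
    })
  );
}
```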
Same spec. Three platforms:
- GitHub Copilot with GPT-5 (Copilot is my daily driver, though typically with Claude Sonnet 4.5)
- Claude Code (Anthropic's command-line coding agent)
- Gemini CLI (Google's terminal-based coding tool)
I ran each tool from a clean slate, pointing it at the same AGENTS.md file. No hand-holding, no iterative fixes, just one shot to see what each would build.
What Actually Happened
All three tools produced working implementations. But the approaches, results, and developer experience differed in revealing ways.
Claude Code: The Planner
Claude Code paused before writing code. It read the specification, presented a detailed roadmap of what it intended to build (file structure, implementation approach, feature priorities), then required my approval before proceeding.
This felt collaborative. Less "AI does the thing" and more "AI proposes a plan, human signs off."
The result? The most polished one-shot implementation. Pattern recognition worked correctly, visual effects were solid, code was well-structured. It felt production-ready.
Gemini CLI: The Honest Craftsman
Gemini got close. The implementation was visually true to the requested aesthetic. But it was upfront about not being finished: "Next, I will focus on enhancing the pattern detection to recognize more complex patterns like gliders and other oscillators, as specified in the project requirements."
I appreciated the honesty. It delivered something genuinely good while acknowledging where it fell short of the spec. The transparency felt valuable.
GitHub Copilot + GPT-5: The Capable Generalist
Copilot produced a solid implementation quickly. The game worked, the retro aesthetic was there, the code was clean. But pattern recognition (specifically the color-coding of oscillators) didn't quite work as specified.
Not broken, just incomplete on one of the core features. Still impressive, though not as polished as Claude Code's output.
The Objective Analysis
I didn't want this to just be my opinion. So I had Grok Code Fast 1 conduct a blind code review of all three implementations.
I gave Grok the AGENTS.md specification and all three complete codebases. No context about which tool built which. Just: evaluate these against the spec.
Claude Code: 9/10
• Pattern Recognition: Excellent (gliders in all 4 orientations, multiple still lifes, oscillators)
• Advanced Features: Afterglow trails, extinction alerts, stable pattern detection ✓
• Visual Polish: Full retro arcade UI with CRT scanlines and legend ✓
• Weaknesses: Missing LWSS spaceship detection; potential performance lag in dense grids
GitHub Copilot + GPT-5: 9/10
• Pattern Recognition: Strong (gliders in all 4 orientations, LWSS spaceship, oscillators, still lifes)
• Advanced Features: Scanlines, vignette, vector-style glow, stability alerts ✓
• Visual Polish: Balanced retro aesthetic with optional FPS display ✓
• Weaknesses: Oscillator detection relies on state comparison, potentially missing edge cases
(I'd give it an 8; that 9 is a bit generous to my mind.)
Gemini CLI: 6/10
• Pattern Recognition: Limited (only block still life and horizontal blinker)
• Advanced Features: Basic trail effects for dead cells
• Visual Polish: Clean, functional UI with retro styling ✓
• Weaknesses: Severely limited pattern detection (misses gliders, spaceships, most oscillators); no stability/extinction detection
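Two of those findings are worth unpacking. "Gliders in all 4 orientations" implies template matching: scan the grid for the glider shape and its three rotations. A sketch of the idea (my illustration, not code from any of the three implementations, and it checks only one of the glider's four animation phases):

```typescript
type Pattern = number[][];

// One phase of the glider; the other orientations come from rotation.
const GLIDER: Pattern = [
  [0, 1, 0],
  [0, 0, 1],
  [1, 1, 1],
];

// Rotate a square pattern 90 degrees clockwise.
function rotate90(p: Pattern): Pattern {
  return p[0].map((_, c) => p.map(row => row[c]).reverse());
}

// Exact match of the 3x3 window, live and dead cells alike.
// (A real detector would also require an empty border around the match.)
function matchesAt(grid: boolean[][], p: Pattern, top: number, left: number): boolean {
  for (let r = 0; r < 3; r++) {
    for (let c = 0; c < 3; c++) {
      if ((grid[top + r]?.[left + c] ?? false) !== (p[r][c] === 1)) return false;
    }
  }
  return true;
}

function findGliders(grid: boolean[][]): [number, number][] {
  const orientations = [GLIDER];
  for (let i = 0; i < 3; i++) orientations.push(rotate90(orientations[i]));
  const hits: [number, number][] = [];
  for (let r = 0; r <= grid.length - 3; r++) {
    for (let c = 0; c <= grid[0].length - 3; c++) {
      if (orientations.some(o => matchesAt(grid, o, r, c))) hits.push([r, c]);
    }
  }
  return hits;
}
```

And the "state comparison" weakness flagged in Copilot's oscillator detection refers to comparing grid snapshots across generations: if the current state equals the state from p generations ago, something is oscillating with period p (period 1 is a still life). A sketch, again illustrative rather than taken from any of the codebases:

```typescript
function serialize(grid: boolean[][]): string {
  return grid.map(row => row.map(c => (c ? "1" : "0")).join("")).join("|");
}

// Returns the detected period, or 0 if none within maxPeriod generations.
// history holds serialized past states, oldest first.
function detectPeriod(history: string[], current: string, maxPeriod = 15): number {
  for (let p = 1; p <= Math.min(maxPeriod, history.length); p++) {
    if (history[history.length - p] === current) return p;
  }
  return 0;
}
```

The edge cases such an approach can miss follow from its global view: it flags the grid as a whole, so an oscillator coexisting with a moving glider never produces a repeated snapshot.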
The Workflow Insight
Beyond the scores, this experiment revealed something practical: using multiple AI coding tools on the same project is now genuinely viable and maybe even optimal.
Both Claude Code and Gemini CLI install via Homebrew on Mac (`brew install claude-code` / `brew install gemini-cli`), which makes experimentation trivially easy. Both also make your terminal look fantastic, which shouldn't matter but somehow does.
The real insight: if you're already using Copilot in VS Code, it costs almost nothing to open a Terminal pane and occasionally run Claude Code or Gemini CLI for a second opinion. Both tools will read your AGENTS.md file for context. You're not starting over. You're getting a different perspective on the same project.
The AGENTS.md file makes this seamless. One specification, multiple tools that can execute against it, for those times when one agent gets stuck on a hard problem.
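In practice, that second opinion can be a one-liner (invocation flags as I understand them; check each CLI's --help):

```bash
# From a repo that already has an AGENTS.md, ask another tool for a review.
claude -p "Read AGENTS.md, then review the implementation against the spec and list gaps."
gemini -p "Read AGENTS.md, then review the implementation against the spec and list gaps."
```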
What This Means
We're at an interesting moment with AI-assisted development. These tools aren't experimental anymore. They're genuinely capable. Claude Code delivered something close to production-ready code in one shot. Copilot's implementation was solid and reliable. Even Gemini, despite its pattern recognition gaps, built something functional and visually appealing, and I suspect it would nail the pattern recognition given a second shot.
The AGENTS.md standard makes it practical to use multiple tools without rewriting context each time. This isn't about abandoning your preferred assistant. It's about recognizing that different tools have different strengths. Claude Code's planning phase caught edge cases. Copilot's spaceship detection was more complete. Gemini's aesthetic choices were compelling even where its pattern detection fell short.
You don't need to pick one. The infrastructure for multi-tool workflows already exists.
Try It Yourself
All three implementations are available to explore, and the AGENTS.md file that powered them is here.
If you're already using one AI coding assistant, consider experimenting with another. The barrier to entry is lower than you think, and the insights from seeing different approaches to the same problem are worth the fifteen minutes it takes to try.