If you’re like me, you’ve toggled between AI coding assistants trying to find the "best" one. Gemini generates features fast, while OpenCode’s models are good for catching edge cases. But why choose?
I built a custom workflow using Gemini CLI to orchestrate three specialized agents that bridge these two worlds. Here’s how I get the best of both: Gemini's speed for implementation and OpenCode's rigor for review.
## The Three-Agent Setup
My `.agents` directory contains three distinct roles. The magic of Gemini CLI is its ability to not only write code but also manage other CLIs and agents:
- code-writer (Gemini-powered): The primary builder. It handles the heavy lifting of implementation and iterates on feedback.
- opencode-code-reviewer (Gemini-powered): The "Bridge Agent." This Gemini agent knows how to run the `opencode` CLI, capture its feedback, and hand it back to the writer.
- code-reviewer (OpenCode-powered): The "Expert Reviewer." This is the native agent inside OpenCode that provides the actual technical critique.
## The Workflow (Step by Step)
This setup allows me to move from an issue to a verified PR with just two main commands:
### Step 1: Implementation
I start by asking Gemini to implement the feature:
```text
Use the code-writer to implement ISSUE-1
```
The code-writer generates the initial code, runs local tests, and ensures everything is idiomatic.
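If you want to script this step rather than type it interactively, the same prompt can be sent through the CLI from Python. This is a minimal sketch, not part of my actual setup: it assumes the `gemini` binary is on your PATH and supports a non-interactive `-p` prompt flag, so adjust for your install.

```python
import subprocess

def build_command(prompt: str) -> list[str]:
    # Assumes the `gemini` binary accepts a non-interactive `-p` flag.
    return ["gemini", "-p", prompt]

def run_agent(prompt: str, timeout: int = 600) -> str:
    """Invoke the Gemini CLI non-interactively and return its stdout."""
    result = subprocess.run(build_command(prompt), capture_output=True,
                            text=True, timeout=timeout)
    return result.stdout

# Step 1: hand the issue to the writer agent (uncomment once the CLI is installed):
# print(run_agent("Use the code-writer to implement ISSUE-1"))
```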
### Step 2: The Cross-Model Bridge
Next, I trigger the review. This is where it gets interesting:
```text
Use the opencode-code-reviewer to review the changes and ask code-writer to address them.
```
Under the hood, the Bridge Agent does the following:
- Executes `opencode run --agent code-reviewer` to get a deep-dive analysis.
- Captures the feedback (Status, Summary, Action Items).
- Invokes the `code-writer` again, passing the OpenCode feedback as the new instructions.
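Because the reviewer's output is structured (Status, Summary, Action Items), the capture step mostly amounts to parsing labeled text. Here is a hedged sketch of that parsing; the exact section labels depend on your reviewer prompt, and `parse_review` is a name I made up for illustration:

```python
import re

def parse_review(text: str) -> dict:
    """Split a structured review into status, summary, and action items.

    Assumes the reviewer emits labeled sections like:
        Status: CHANGES_REQUESTED
        Summary: ...
        Action Items:
        - ...
    """
    status = re.search(r"Status:\s*(\w+)", text)
    summary = re.search(r"Summary:\s*(.+)", text)
    # Action items are assumed to be "- " bullets, one per line.
    items = re.findall(r"^- (.+)$", text, flags=re.MULTILINE)
    return {
        "status": status.group(1) if status else "UNKNOWN",
        "summary": summary.group(1).strip() if summary else "",
        "action_items": items,
    }

review = """Status: CHANGES_REQUESTED
Summary: Solid overall, but error handling is missing.
Action Items:
- Handle the empty-input case
- Add a test for the timeout path
"""
print(parse_review(review)["status"])  # CHANGES_REQUESTED
```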
### Step 3: Iterate until Approved
The loop repeats automatically or manually until the OpenCode reviewer returns an "APPROVED" status.
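Stripped to its control flow, the loop looks like the sketch below. The two agent calls are stubbed; in the real setup each would be a CLI invocation, and the function names and iteration cap are my own choices, not part of the agent files:

```python
MAX_ROUNDS = 5  # safety cap so a disagreement between agents can't loop forever

def review_changes(round_no: int) -> str:
    """Stub for `opencode run --agent code-reviewer ...`.

    Approves on the second round to simulate one round of feedback.
    """
    return "APPROVED" if round_no >= 2 else "CHANGES_REQUESTED: add tests"

def address_feedback(feedback: str) -> None:
    """Stub for handing the reviewer's feedback back to code-writer."""
    print(f"code-writer addressing: {feedback}")

def review_loop() -> str:
    for round_no in range(1, MAX_ROUNDS + 1):
        verdict = review_changes(round_no)
        if verdict.startswith("APPROVED"):
            return f"approved after {round_no} round(s)"
        address_feedback(verdict)
    return "gave up: manual review needed"

print(review_loop())  # approved after 2 round(s)
```

The cap matters in practice: without it, two models that disagree about style can bounce a change back and forth indefinitely.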
## Why This Works
- Model Diversity: Different models have different blind spots. Having a Gemini agent write code and an OpenCode agent review it catches bugs that a single model might miss during self-review.
- Automated Orchestration: Gemini CLI handles the tool-calling and context-passing. You don't have to copy-paste code into different web UIs.
- Specialization: You use the best tool for each job.
## The Source Code
Here is the core of the setup. You can drop these into your .agents/ folder and customize them for your own models.
1. `.agents/code-writer.md`

```markdown
---
name: code-writer
tools: ["write", "edit", "bash"]
---

# Code Writer Agent

You are an expert Google engineer. Implement features, write tests, and address feedback from the reviewer agents.
```
2. `.agents/opencode-code-reviewer.md`

```markdown
---
name: opencode-code-reviewer
tools: ["run_shell_command", "invoke_agent"]
---

# Bridge Agent

1. Run: `opencode run --agent code-reviewer "Review changes..."`
2. Capture output.
3. Call `code-writer` with that output to fix any issues.
```
3. `.agents/code-reviewer.md`

```markdown
---
name: code-reviewer
---

# Expert Reviewer

You are an expert Google engineer. Provide a structured review with Status (APPROVED/CHANGES_REQUESTED), Summary, and Action Items.
```
Leveraging multiple AI tools via a single CLI changed how I build. It’s not about finding the "one" assistant; it's about building the right team.