I've shared how I use sub-agents and parallel workflows with my team of subagent "minions." Another real breakthrough came when I stopped treating them like generalists and started giving them specialized tools.
This shift changed everything. I went from treating AI like a single, all-knowing oracle, always chasing the perfect prompt, to seeing it as a team of specialized junior engineers that I lead. My job became less about asking questions and more about orchestrating a workflow: giving the right task to the right specialist with the right tool.
This is where the Model Context Protocol (MCP) comes in. It's how you give your agents superpowers. Effective AI engineering, I'm learning, is less about a magic prompt and more about building a robust process, and MCPs are a huge part of that.
What's an MCP, Anyway?
In short, MCPs are secure connections that give your AI agents access to specialized, real-time information and tools. Think of them as the unique gadget you hand to each agent for their specific mission.
You can configure them globally in a ~/.claude.json file or on a per-project basis. For example, I have one project with production and development Supabase instances. I can set up two different Supabase MCPs, making sure my "dev minion" only ever talks to the dev database.
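A concrete sketch of that dev/prod split: a project-scoped MCP config with two Supabase servers, the prod one locked to read-only. The package name and flags reflect my understanding of the official Supabase MCP server; the project refs and access tokens are placeholders, so verify the exact invocation against the current README before copying.

```json
{
  "mcpServers": {
    "supabase-dev": {
      "command": "npx",
      "args": [
        "-y",
        "@supabase/mcp-server-supabase@latest",
        "--project-ref=<dev-project-ref>"
      ],
      "env": { "SUPABASE_ACCESS_TOKEN": "<dev-access-token>" }
    },
    "supabase-prod": {
      "command": "npx",
      "args": [
        "-y",
        "@supabase/mcp-server-supabase@latest",
        "--read-only",
        "--project-ref=<prod-project-ref>"
      ],
      "env": { "SUPABASE_ACCESS_TOKEN": "<prod-access-token>" }
    }
  }
}
```

Because the two servers have distinct names, a sub-agent's allowed tools can reference `supabase-dev` without ever being handed the prod connection.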
Here are some of my go-to MCPs, the ones at the core of my workflow:
1. The Zen MCP: The Utility Belt
The Zen MCP is like a high-tech utility belt full of gadgets. With it, my agents can do more than just write code: they can analyze it, debate design approaches, and audit its security.
How it works: My sub-agents are now designed to use these tools by default. For instance, the system prompt for Stuart the Reviewer instructs him to use the secaudit tool, not just "review the code."
Think of it like this:
- It's a high-tech utility belt. Kevin the Architect now has access to a "phone": he can "call" other models like Gemini and debate the best approach.
- Stuart the Reviewer gets a security toolkit with blueprints and a circuit tester (the secaudit tool) to run a thorough inspection of the security and CCTV systems of the building we're constructing (our application).
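To make that concrete, here's a sketch of what a sub-agent definition like Stuart's can look like. Claude Code reads these from `.claude/agents/`; the frontmatter fields follow its subagent format, and the tool name `mcp__zen__secaudit` assumes the Zen server is registered as `zen` (MCP tools are exposed as `mcp__<server>__<tool>`). The prompt body itself is illustrative:

```markdown
---
name: stuart-the-reviewer
description: Security-focused code reviewer. Use proactively after feature work.
tools: Read, Grep, Glob, mcp__zen__secaudit
---
You are Stuart, a meticulous security reviewer.

For every review:
1. Read the changed files and their surrounding modules.
2. Run the `secaudit` tool against the touched code paths.
3. Report findings ranked by severity, each with a concrete fix.

Never sign off while a high-severity finding is unresolved.
```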
2. The Supabase MCP: The Library Card
This has been a huge help for keeping my agents synced with my database schema, RLS policies, and edge functions. But safety comes first.
How it works: I'm sure we've all heard stories of an agent accidentally deleting a production database. That's why, for most tasks, I set the Supabase MCP to read-only. No matter how advanced these agents get, fundamental practices like the principle of least privilege haven't changed. Humans accidentally DROP TABLE, and so can agents; the same safety guardrails should apply to both. The only exception is a brand-new, non-production project, where I might grant write access so an agent can help build out the initial schema.
Think of it like this:
It's a library card for the database that powers my app. Bob the Debugger gets a read-only card. He can enter the library, browse the shelves (tables), check the card catalog (schema), and read the librarian's logs, but he can't rip out any pages.
3. The Context7 MCP: The Updated Documentation Blueprint
This MCP is crucial for both planning and debugging. When Kevin the Architect is about to build something with a new library, it ensures he has the latest specs. And when Bob the Debugger needs to check why an old function is misbehaving, it gives him access to the right version of the documentation.
How it works: An LLM's knowledge is frozen in time, based on when it was trained. A library could have released a new version yesterday with breaking changes, and the agent wouldn't know. Context7 connects the agent to a live feed of information from external knowledge bases, giving it real-time, accurate context.
Think of it like this:
- Kevin can't rely on an old paper manual he found in the back of the truck. Instead, he gets a tablet that's live-linked directly to the manufacturer's official documentation server. When a new version of a library is released, his manual updates instantly. He's never working from an old blueprint, which prevents him from using deprecated functions or outdated security practices.
- Similarly, when Bob is hunting a strange bug, he uses it to pull up the exact documentation for the specific library version we're using, helping him spot issues caused by subtle API changes.
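Wiring Context7 in is a one-liner in the same MCP config. The package name below is the one the Context7 project publishes at the time of writing; double-check it against their README, since these things change:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```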
4. The GitHub MCP: The Foreman's Clipboard
In my last post, I talked about parallel development. The GitHub MCP helps my agents manage that complexity.
How it works: It helps agents track GitHub issues, create branches, and manage worktrees. It can even analyze tasks to identify potential merge conflicts before they happen, and it writes better commit messages than I do.
Think of it like this:
It's the construction foreman's clipboard. It gives my lead agent a real-time overview of the entire construction site: all work zones (worktrees) and progress logs (commits), so two teams never end up building in the same place.
Here's a simplified example of the kind of analysis an agent powered by the GitHub MCP can provide. Ask it to review all open issues and suggest a parallel development plan, and it might return something like this:
Analysis of Open Project Issues (12 total):
Phase 1 - Core Auth & Security (3 remaining):
Issue #34: Implement JWT refresh token rotation
Issue #35: Refactor user permission checks to use middleware
Issue #37: Add Content Security Policy (CSP) headers
Phase 2 - UI & User Experience (5 remaining):
Issue #41: Implement dark mode toggle and theme provider
Issue #42: Improve accessibility by adding ARIA labels
Issue #44: Upgrade component library to latest version
Issue #45: Refactor component state from useState to Zustand
Issue #46: Create a global notification/toast component
Phase 3 - Backend & API (4 remaining):
Issue #50: Add database index to the 'projects' table for faster lookups
Issue #52: Optimize API response for the main dashboard query
Issue #53: Standardize API error response format
Issue #55: Increase test coverage for the API services layer
Recommended Parallel Workstreams:
Based on the issues, here are three potential streams with their estimated conflict risk:
Stream A - Low Conflict (Backend Performance):
Issue #50: Add database index
Issue #52: Optimize API dashboard query
Issue #53: Standardize API errors
This stream primarily involves backend and database changes with minimal impact on the frontend codebase.
Stream B - Medium Conflict (UI Foundation):
Issue #41: Implement dark mode
Issue #44: Upgrade component library
Issue #46: Create global notification component
This stream involves broad, application-wide UI changes. The component library upgrade carries a medium risk of creating conflicts with feature-specific UI work.
Stream C - Low Conflict (Security Hardening):
Issue #34: Implement JWT refresh token rotation
Issue #35: Refactor permission middleware
These tasks are critical and isolated to the authentication and server layers, posing a low risk of conflict with other streams.
Recommendation:
You can safely begin Stream A (Backend) and Stream C (Security) in parallel. They have minimal file overlap and address critical performance and security priorities. Hold Stream B (UI) until the component library upgrade is assessed for breaking changes.
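The conflict-risk estimates above boil down to one question: do two issues touch the same part of the codebase? Here's a minimal sketch of that idea in TypeScript. The issue numbers come from the example plan, but the `areas` labels are hypothetical stand-ins for the file paths a real agent would pull via the GitHub MCP:

```typescript
// A toy version of the overlap analysis: two issues are "risky" together
// when they touch a shared area of the codebase.
type Issue = { id: number; title: string; areas: string[] };

const issues: Issue[] = [
  { id: 50, title: "Add database index", areas: ["db"] },
  { id: 34, title: "JWT refresh token rotation", areas: ["auth"] },
  { id: 41, title: "Implement dark mode", areas: ["ui/theme", "ui/components"] },
  { id: 44, title: "Upgrade component library", areas: ["ui/components"] },
];

function sharesArea(a: Issue, b: Issue): boolean {
  return a.areas.some((area) => b.areas.includes(area));
}

// Every pair of issues that should not be scheduled in parallel streams.
function riskyPairs(list: Issue[]): [number, number][] {
  const pairs: [number, number][] = [];
  for (let i = 0; i < list.length; i++) {
    for (let j = i + 1; j < list.length; j++) {
      if (sharesArea(list[i], list[j])) pairs.push([list[i].id, list[j].id]);
    }
  }
  return pairs;
}

console.log(riskyPairs(issues)); // only #41 and #44 collide (both touch ui/components)
```

A real agent replaces the hand-labeled areas with `git diff --name-only` output or file globs inferred from each issue, but the scheduling logic is the same: pairs with no overlap go to parallel streams, overlapping pairs get serialized.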
From Planning to Merging: The Agent as a Git Partner
Planning helps minimize chaos, but with parallel worktrees, merge conflicts are inevitable. This is where the agent surprised me. It's not just a planner; it's a partner in resolving the messy parts.
It can read the git log to understand what's changed in each branch. When divergent branches need to be merged, I can instruct it on how to handle the merge and even how to resolve conflicts. For example, I can tell it: "Prioritize all changes from the feat/major-refactor branch over the feat/ui-tweaks branch."
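That "prioritize one branch" instruction maps onto git's own merge machinery, which the agent drives under the hood. Here's a self-contained sketch you can run in a scratch directory (it assumes git >= 2.28 for `init -b`; the branch names mirror the hypothetical example above). Note that `-X ours` only resolves the hunks that actually conflict, in favor of the checked-out branch:

```shell
# Build a throwaway repo with two branches that edit the same line.
set -eu
repo="$(mktemp -d)"
git -C "$repo" init -q -b main
git -C "$repo" config user.email agent@example.com
git -C "$repo" config user.name "Agent"
echo "base" > "$repo/file.txt"
git -C "$repo" add file.txt
git -C "$repo" commit -qm "base"

git -C "$repo" checkout -qb feat/major-refactor
echo "refactor" > "$repo/file.txt"
git -C "$repo" commit -qam "refactor"

git -C "$repo" checkout -qb feat/ui-tweaks main
echo "tweaks" > "$repo/file.txt"
git -C "$repo" commit -qam "tweaks"

# Merge the UI branch, letting the refactor branch win conflicting hunks.
git -C "$repo" checkout -q feat/major-refactor
git -C "$repo" merge -q -X ours feat/ui-tweaks
cat "$repo/file.txt"   # the refactor branch's version survives
```

For a file like package-lock.json you can also resolve after the fact with `git checkout --ours package-lock.json && git add package-lock.json`, which is essentially what the agent does in the transcript below.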
Here's a simple example of the agent walking me through a merge (I've had real cases where it handled far more complex conflict scenarios). It found a conflict, analyzed it, resolved it, and then summarized the outcome.
⏺ Bash(git status)
> On branch feature/user-dashboard.
> Your branch is up to date with 'origin/feature/user-dashboard'.
⏺ Great! Now let's initiate the merge with the bug fix branch:
⏺ Bash(git merge fix/login-bug)
> Auto-merging package-lock.json
> CONFLICT (content): Merge conflict in package-lock.json
> Automatic merge failed; fix conflicts and then commit the result.
⏺ Perfect! We have a merge conflict, but it's actually quite manageable: only in package-lock.json. Let me check the conflict details.
⏺ Excellent! This is actually much better than expected. The merge conflict is ONLY in package-lock.json; all the critical files have been automatically merged successfully! Let me resolve the package-lock.json conflict by keeping our current version.
⏺ Update(package-lock.json)
> Updated package-lock.json with 4 removals
⏺ Merge Successfully Completed!
✅ **Merge Status: SUCCESS** (much smoother than expected!)
✅ **Files Successfully Merged:**
- components/dashboard/edit-profile-form.tsx - Enhanced form handling
- components/dashboard/user-data-table.tsx - Improved UI interactions
- lib/api/queries.ts - Integrated new login logic
- package-lock.json - Resolved naming conflict
🚫 **No Conflicts in Critical Files:**
- Authentication system: completely preserved from feature/user-dashboard
- Database functions: user creation system untouched
- API routes: all routing logic remains consistent
✅ **Integration Success:**
The merge was exceptionally clean because the workstreams had different focus areas and Git's auto-merge handled the code integration perfectly.
---
**Next Step: Comprehensive Testing**
And here's the part that really impressed me. After the successful merge, the agent didn't just stop. It proactively generated a follow-up prompt for my testing agent:
"Here's a verification prompt for test-writer-dave: Please conduct a comprehensive regression test to verify that both the new user login system AND the profile editing functionality work correctly together on the newly merged branch."
So much fun. It connected the dots from planning, to merging, to the final step of verification, truly acting like a senior member of the team.
5. The Playwright MCP: The Simulation Chamber
This is the ultimate tool for Dave the Test-Writer, my QA expert. He uses it for end-to-end testing and debugging.
How it works:
- Dave's system prompt instructs him to use Playwright to write and execute comprehensive test scripts that simulate user interactions. He runs the tests, reports back with pass/fail status, and even saves screenshots of the test runs in a dedicated folder.
- During debugging, Bob the Debugger also pulls up the Playwright tool sometimes. He'll write a small, targeted script to simulate the exact user actions that triggered a bug, confirm the failure, and then run it again after a fix to generate a "test and fix" report.
Think of it like this: It's like giving my agents a powerful simulation game that perfectly mirrors our application.
- Dave the Test-Writer is the "mission designer." He writes detailed scripts that make a character (the simulated user) run through the entire game: logging in, completing quests, buying items from the shop. If the character gets stuck or something doesn't work as expected, the game flags it as a bug. It even records the entire playthrough with screenshots for review.
- Bob the Debugger, on the other hand, is the "glitch hunter." He doesn't play the whole campaign. He loads a specific level, inputs a very precise sequence of actions to replicate a reported bug, and watches to see if the game's physics break. It's his way of confirming the bug and then verifying his fix worked.
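Here's the shape of a targeted script Bob might write for that "glitch hunter" pass, using Playwright's test runner. It requires `@playwright/test` and a running app, and the route, labels, and credentials here are all hypothetical placeholders for your own:

```typescript
// Sketch: reproduce a "session lost after refresh" bug report end to end.
import { test, expect } from "@playwright/test";

test("login survives a page refresh", async ({ page }) => {
  await page.goto("http://localhost:3000/login");
  await page.getByLabel("Email").fill("bob@example.com");
  await page.getByLabel("Password").fill("placeholder-password");
  await page.getByRole("button", { name: "Sign in" }).click();
  await expect(page).toHaveURL(/\/dashboard/);

  // The reported bug: reloading kicks the user back to /login.
  await page.reload();
  await expect(page).toHaveURL(/\/dashboard/); // fails while the bug exists
  await page.screenshot({ path: "test-results/login-refresh.png" });
});
```

Run once to confirm the failure, apply the fix, run again; the pass/fail pair plus the screenshots becomes Bob's "test and fix" report.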
The Symphony of Agents: How It All Fits Together
Now that we've seen the specific tools, here's how all the pieces work together in harmony:
- Me (The Conductor): I set the high-level goal for each task (e.g., "Fix the XYZ issue"). My role is strategic direction.
- Main Claude Code Session (The Orchestra): It maintains the overall context, understands the goal, and delegates the task to the right specialist agent.
- Custom Sub-Agents (The Section Leaders): These are my specialized agents like Kevin-the-Architect or Dave-the-Test-Writer. Each has a specific role and expertise.
- MCP Tools (The Instruments): These are the powerful tools, like secaudit from Zen or access to Supabase, that my specialist agents use to perform their jobs with precision.
A Final Thought: It's like a "Minion Inception" / "Agentiception"
My latest experiment? Asking one agent to analyze a task and write a more effective mission briefing for another specialized agent to execute. It feels a bit like inception, but itās another step toward building a smarter, more automated workflow.
After the successful Git merge in the last section, the main agent knew the next logical step was regression testing. I asked it to generate the prompt, and this is what it created and passed to test-writer-zen:
> next, can you please help me to prompt test-writer-zen with your generated prompt? thanks.
⏺ I'll help you test the system using the test-writer-zen agent with a comprehensive testing prompt. Let me launch the test-writer-zen agent to validate both the user signup flow and order management functionality work correctly together.
⏺ test-writer-zen(Task: Test user signup and order system)
Prompt: Comprehensive Regression Testing: User Signup & Order System Integration
You need to validate that both the recently implemented secure user signup flow
and the order editing functionality work correctly together after the recent
branch merge. This is critical testing to ensure no regressions were introduced.
Test Scope & Context
--------------------
Recently Merged Features:
1. Secure User Signup Flow - Auth provider integration with database functions
2. Order Editing Fix - Enhanced order modification functionality
3. Authentication Flow - Custom JWT token handling and user mapping
Critical Integration Points to Test:
- Auth provider authentication → Supabase user creation/mapping
- User creation API (/api/users, /api/auth/register)
- Database functions (create_user_profile, link_auth_provider_id)
- Order CRUD operations with proper user context
- Row Level Security (RLS) policy enforcement
Test Categories Required
------------------------
1. Authentication & User Creation Flow
- Auth provider login process and token generation
- New user signup with business profile creation
- Existing user ID mapping (for legacy users)
- JWT token refresh and validation
- Session persistence across page refreshes
This is my current workflow, but I'm constantly looking for better ways to equip my team of agents. The MCPs I've shared have made the biggest difference for me so far, but I know the ecosystem is huge. What other superpowers are you giving your agents? I'd love to hear about the MCPs you can't live without. :)