GitHub Copilot's evolution into an agent mode (VS Code 1.100+) marks a significant leap forward in AI-assisted development. No longer just a reactive chatbot, it can now execute tools autonomously, self-correct, and orchestrate sub-agents. This shift promises substantial productivity gains, yet a critical challenge remains: every session starts from scratch. Without memory of project conventions, architectural decisions, or past bug fixes, complex tasks demand constant manual context injection. This limitation slows developers down and makes it harder for teams to understand, measure, and improve their output.
A recent GitHub Community discussion by ptnghia offers an ingenious solution: explicitly building persistent context and memory for Copilot, much like tools such as Claude Code do by design. The proposed framework transforms Copilot from a reactive chatbot into a proactive, knowledge-aware development partner, offering a blueprint for a more intelligent and integrated AI workflow.
The Challenge: Copilot's Ephemeral Memory
The core issue identified by ptnghia is simple yet profound: Copilot's agent mode, despite its advanced capabilities, operates without a persistent memory of the project or previous interactions. Imagine a new team member joining your project every morning, needing to be briefed on:
- Project conventions (naming, error handling, commit style)
- Architectural decisions made last week, and their rationale
- Specific bugs that were fixed, and the methods used
This constant re-injection of context is not only tedious but also inefficient, especially for complex tasks. It creates friction, slows down progress, and makes it difficult to leverage the AI's full potential. For engineering and delivery managers, this translates to unpredictable timelines and a struggle to maintain consistent code quality, making it harder to establish a clear baseline for measuring developer performance in an AI-assisted environment.
The Vision: A Persistent, Intelligent AI Workflow
ptnghia's solution is elegant: if Copilot doesn't have persistent memory by default, we can build it. The core of this innovative approach lies in two dedicated repositories designed to work seamlessly with both the VS Code extension's agent mode and the GitHub Copilot CLI:
- `copilot-workspace-setup`: a workspace template providing context injection, persistent memory, an agent pipeline, and lifecycle hooks.
- `mcp-error-learning`: an MCP (Model Context Protocol) server that accumulates knowledge from bug history, ensuring the Debugger agent doesn't "forget" between sessions.
Building the Brain: The Context System
At the heart of this framework is a 2-tier instruction system that loads automatically, eliminating the need for manual prompting. This ensures Copilot is always aware of the project's foundational rules:
- Global conventions: a `.github/copilot-instructions.md` file defines overarching rules for naming, commit messages, security practices, and general coding standards.
- Stack-specific rules: within `.github/instructions/`, files like `laravel.instructions.md` or `nextjs.instructions.md` provide guidance specific to certain technologies. These rules are applied only when the agent interacts with matching file types, preventing context pollution with irrelevant information.
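As a sketch, a stack-specific file might look like the following. The `applyTo` frontmatter follows VS Code's `.instructions.md` convention for scoping rules to matching files; the Laravel rules themselves are illustrative, not taken from the repo:

```markdown
---
applyTo: "**/*.php"
---
# Laravel Conventions
- Validate input in form request classes, never inline in controllers.
- Route all database access through Eloquent models or dedicated query classes.
- Follow PSR-12 formatting; write commit messages in Conventional Commits style.
```

Because the rules only load for matching files, a Next.js edit never pays the token cost of Laravel guidance, and vice versa.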
The Project's Long-Term Memory: The .context/ Directory
Beyond immediate instructions, projects need long-term memory. The .context/ directory serves as the project's enduring knowledge base, auto-injected every session via a SessionStart hook:
- `HISTORY.md`: a change log; only the last 15 entries are injected to keep context lean.
- `DECISIONS.md`: an index of architectural decisions, linking to detailed ADR (Architecture Decision Record) files in `decisions/`.
- `ERRORS.md`: an index of known bugs, linking to detailed bug reports in `errors/`.
- `plans/`: stores system designs and phase-specific plans.
- `sessions/`: automatically logs per-session activity via a `PostToolUse` hook.
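The "last 15 entries" trimming can be sketched in a few lines of Python. This is an illustrative stand-in for whatever the SessionStart hook actually does, and it assumes `HISTORY.md` uses one `## ` heading per entry, with the newest entries appended last:

```python
def trim_history(history: str, max_entries: int = 15) -> str:
    """Keep only the newest entries of a HISTORY.md change log.

    Hypothetical sketch of the SessionStart hook's trimming step:
    assumes one entry per '## ' heading, newest appended last.
    """
    # The chunk before the first heading is the file's preamble (title, intro).
    preamble, *entries = history.split("\n## ")
    recent = entries[-max_entries:]  # newest entries are at the end of the file
    return preamble + "".join("\n## " + entry for entry in recent)
```

The point of the cap is token economy: context injected every session must stay small, so the log is summarized at read time rather than pruned on disk.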
This structured approach ensures that as a project grows, Copilot can access relevant historical data without being overwhelmed, providing a valuable resource for maintaining code quality and understanding project evolution. For teams that want to preserve historical context without adopting specialized analytics tools such as Gitential, this persistent memory is a lightweight, free alternative.
Orchestrating Intelligence: The Agent Pipeline and Lifecycle Hooks
To move beyond simple task execution, ptnghia designed a sophisticated agent pipeline:
Pipeline Agents (user-invocable: false)
These specialized agents work in sequence, coordinated by a central agent:
- `planner`: analyzes tasks and creates breakdowns.
- `implementer`: writes code per conventions.
- `tc-writer`: develops test cases.
- `qa-tester`: runs tests, analyzes failures, fixes, and re-runs.
Coordinator (user-invocable: true)
The oryn-dev agent orchestrates the full pipeline: Plan → Implement → Test → Commit → Log.
On-Demand Agents (user-invocable: true)
For specific, non-pipeline tasks:
- `architect`: pre-project system design.
- `debugger`: bug fixing; integrates with the MCP server.
- `code-reviewer`: PR reviews with inline comments, giving developers consistent, constructive feedback.
- `security-auditor`: OWASP Top 10 scans.
- `quick`: simple, one-off tasks.
Lifecycle Hooks further automate the workflow, responding to four key events: SessionStart (injects context), UserPromptSubmit (checks task status), PostToolUse (logs file edits), and Stop (ensures HISTORY.md update).
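A hook registration might be wired up roughly like this. The file name, schema, and script paths below are hypothetical placeholders, since the actual format is defined by the `copilot-workspace-setup` template:

```json
{
  "hooks": {
    "SessionStart": "scripts/inject_context.py",
    "UserPromptSubmit": "scripts/check_task_status.py",
    "PostToolUse": "scripts/log_session_activity.py",
    "Stop": "scripts/update_history.py"
  }
}
```

The key design idea is that each event maps to a small script, so context injection and logging happen automatically instead of relying on the developer to remember a prompt.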
Bootstrapping Success: Starting New Projects with AI Architects
One of the most impactful features is the pre-project design phase. Before running oryn-dev, the architect agent is invoked to produce a comprehensive blueprint:
```
#architect "I need to build system X with requirements Y"
```
The architect agent analyzes the project through four sequential lenses: Architecture, Data Model, API Surface, and Risk & Phase. Crucially, each lens reads the output of the previous one, allowing natural conflict detection. For example, the API Surface lens can flag inconsistencies if the database schema (from the Data Model lens) doesn't align with proposed API endpoints. This proactive approach significantly reduces design flaws and accelerates the initial development phase.
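The "each lens reads the previous output" idea can be sketched as a small pipeline. This is a hypothetical illustration of the pattern, not the repo's actual code; the lens names and conflict check are invented for the example:

```python
from typing import Callable

# A lens returns its blueprint section plus any conflicts it detects
# against sections produced by earlier lenses.
LensResult = tuple[str, list[str]]

def run_lenses(task: str,
               lenses: dict[str, Callable[[str, dict], LensResult]]):
    """Run design lenses in order, feeding each the sections so far."""
    blueprint: dict[str, str] = {}
    conflicts: list[str] = []
    for name, lens in lenses.items():
        section, found = lens(task, dict(blueprint))  # copy: lenses only read
        blueprint[name] = section
        conflicts.extend(found)
    return blueprint, conflicts

def data_model_lens(task: str, prior: dict) -> LensResult:
    return "tables: users, orders", []

def api_surface_lens(task: str, prior: dict) -> LensResult:
    conflicts = []
    # Flag an endpoint whose backing table is missing from the data model.
    if "orders" not in prior.get("data_model", ""):
        conflicts.append("/orders endpoint has no backing table in the data model")
    return "endpoints: GET /users, GET /orders", conflicts
```

Because each lens sees the accumulated blueprint, a mismatch between schema and API surfaces as an explicit conflict rather than a silent design flaw.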
Once the blueprint is reviewed, oryn-dev takes over, implementing phase by phase, with the entire pipeline automating code generation, testing, and documentation updates.
Learning from Mistakes: The Error Learning MCP
A common frustration with AI is its lack of long-term memory regarding bug fixes. The mcp-error-learning server addresses this by accumulating knowledge from bug history. When a new bug arises, the Debugger agent can search for similar past issues, suggest known fixes, or, if no match is found, perform a Root Cause Analysis (RCA), fix the bug, and record the solution for future reference. This system transforms temporary fixes into persistent learning, continually improving the AI's diagnostic and problem-solving capabilities.
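A deliberately simple sketch of the lookup step: rank past bug records by keyword overlap with a new error message. The real server's matching strategy may well differ (embeddings, stack-trace hashing, etc.); this only illustrates the "search before you debug" flow:

```python
def tokenize(text: str) -> set[str]:
    """Crude keyword extraction: lowercase words longer than two characters."""
    return {word for word in text.lower().split() if len(word) > 2}

def find_similar_errors(new_error: str, known_errors: list[dict],
                        threshold: float = 0.3) -> list[dict]:
    """Return past bug records ranked by keyword overlap (Jaccard similarity)."""
    query = tokenize(new_error)
    matches = []
    for record in known_errors:
        candidate = tokenize(record["error"])
        overlap = len(query & candidate) / max(len(query | candidate), 1)
        if overlap >= threshold:
            matches.append((overlap, record))
    return [record for _, record in sorted(matches, key=lambda m: -m[0])]
```

If a match comes back, the Debugger agent can propose the recorded fix immediately; if not, it falls through to RCA and records the new solution, which is exactly the accumulation loop the article describes.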
Tailoring AI to Your Team: Project-Specific Agents
While the provided agents are general-purpose, the framework allows for easy creation of custom agents. Teams can add project-specific agents to .github/agents/ to automate unique workflows. The distinction between using the quick agent and creating a new one is clear:
- Use `quick` for one-off tasks without special toolsets or fixed workflows.
- Create a new agent for repeated tasks requiring specific tools, MCP integration, or a defined checklist/review process.
The example of a migration-reviewer agent for a Django project perfectly illustrates this. It automates checks for missing indexes, breaking changes, and data loss risks before merges, ensuring consistent quality and adherence to best practices.
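To make the idea concrete, such an agent definition might look like the following. The frontmatter field names and checklist are a hypothetical sketch, since the actual agent-file schema comes from the template repo:

```markdown
---
name: migration-reviewer
description: Reviews Django migration files before merge
user-invocable: true
---
For every file matching `*/migrations/*.py`:
1. Flag new ForeignKey or frequently filtered columns added without an index.
2. Flag RemoveField and AlterField operations as potential breaking changes.
3. Flag RunSQL or data migrations that could lose rows; require a rollback note.
```

The fixed checklist is the point: unlike `quick`, the same three checks run on every migration PR, which is what makes the review consistent.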
Getting Started
Implementing this robust workflow is straightforward:
```shell
git clone https://github.com/orynvn/copilot-workspace-setup.git temp-setup
cp -r temp-setup/.github/ your-project/
cp -r temp-setup/.context/ your-project/
cp -r temp-setup/.vscode/ your-project/

# Pick your stack template
cp temp-setup/templates/nextjs/.github/copilot-instructions.md \
   your-project/.github/copilot-instructions.md

# Optional: Error Learning MCP
cd your-project
git clone https://github.com/orynvn/mcp-error-learning.git
pip install -e mcp-error-learning/
```
This setup requires VS Code 1.100+ with GitHub Copilot, and Python 3.12+ for the MCP server.
Beyond the Horizon: Open Questions and Future Potential
As with any innovative solution, questions remain. ptnghia himself poses:
- Is the `architect` agent's design quality sufficient without significant manual editing?
- Does the Error Learning MCP accumulate knowledge fast enough for medium-length projects?
- Which hook might cause enough friction to be disabled?
These questions highlight areas for further exploration, but the potential impact on developer productivity, delivery consistency, and software quality is undeniable.
Conclusion
ptnghia's workflow design for GitHub Copilot Agent Mode offers a compelling vision for the future of AI-assisted development. By explicitly building persistent context, memory, and an intelligent agent pipeline, teams can transform Copilot from a smart assistant into a truly knowledge-aware development partner. This framework not only streamlines complex tasks and reduces manual overhead but also provides a more consistent and reliable development environment. For dev team members, product/project managers, delivery managers, and CTOs, this means enhanced productivity, improved delivery predictability, and a clearer path to measuring developer performance in an increasingly AI-integrated landscape. It's a powerful step towards truly agentic AI that learns, adapts, and grows with your project.