Layer 2 of the Agentic OS: On-Demand Capabilities for GitHub Copilot
In our previous article, we discussed how Layer 1 (Always-On Context) forms the bedrock of your Agentic OS by silently injecting global coding standards into every prompt. It is incredibly powerful, but it introduces a new scaling problem: context exhaustion.
If you stuff every architectural diagram, deployment runbook, and security protocol into your .github/copilot-instructions.md, you will inevitably overwhelm the Large Language Model (LLM). This dilutes the AI's focus, increases latency, and triggers the dreaded "hallucinations."
To keep the AI sharp, we need intentionality. Enter Layer 2 — On-Demand Capabilities.
The Philosophy of Layer 2: Progressive Loading
Layer 2 is all about providing context only when it is hyper-relevant to the task at hand. Instead of a single massive set of rules, you break down complex operations into discrete, manually invoked tools.
The Agentic OS defines three distinct primitives residing in Layer 2:
1. Prompt Files (.github/prompts/*.prompt.md)
Prompt files are essentially saved macros. They bundle up complex instructions and context into simple "slash commands" that a developer can invoke in the chat interface.
Why ask the AI: "Please review this code for security vulnerabilities, focusing specifically on SQL injection and XSS according to our internal security policy" when you can just type /security-review?
Common Use Cases:
- /changelog: Automatically generate release notes from git history using a specific template.
- /refactor-tests: Apply a specific mocking structure to legacy unit tests.
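To make this concrete, here is a minimal sketch of what the /security-review command from above might look like as a prompt file. The exact frontmatter fields supported by your Copilot version may differ, and the policy path is a placeholder for illustration:

```markdown
---
description: Review the selected code for security vulnerabilities
---
Review the selected code for security vulnerabilities.

Focus specifically on:
- SQL injection (unsanitized input reaching queries)
- Cross-site scripting (XSS) in rendered output

Evaluate findings against the internal security policy and report
each issue with severity, location, and a suggested fix.
```

Saved as .github/prompts/security-review.prompt.md, this bundles the full instruction set behind a single slash command, so every developer on the team runs the same review checklist.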
2. Custom Agents (.github/agents/*.agent.md)
While standard Copilot acts as a generalist, Custom Agents act as specialist personas.
You can define agents equipped with their own specific tools and Model Context Protocol (MCP) servers. The true power of Custom Agents lies in hand-offs (agent chaining).
Instead of asking one monolithic AI to build an entire feature, you can chain highly focused agents together:
- Planning Agent: Analyzes the GitHub issue and generates an architecture document.
- Implementation Agent: Reads the architecture document and writes the exact code.
- Review Agent: Critiques the generated code against specific enterprise security standards.
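As a rough sketch, the Review Agent in that chain might be defined like this. The frontmatter fields (name, description, tools) are illustrative and may vary with your Copilot setup, and the tool names shown are assumptions:

```markdown
---
name: review-agent
description: Critiques generated code against enterprise security standards
tools: ['codebase', 'search']
---
You are a senior security reviewer. You do not write features;
you only critique code produced by the Implementation Agent.

For every change you review:
1. Check authentication and authorization boundaries.
2. Flag any secrets, credentials, or tokens in source.
3. Verify input validation at every external entry point.

Reject the change with specific line references if any check fails.
```

Because each agent's persona is narrow, the hand-off between them stays clean: the Review Agent never drifts into implementation work, and vice versa.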
3. Skills (.github/skills/<name>/SKILL.md)
Skills are self-contained folders that encapsulate instructions, scripts, and Markdown references for repeatable technical operations.
They rely on progressive loading. Copilot only reads the short description of the Skill at first. If it decides the specific Skill is needed to answer a developer's prompt (e.g., navigating a server outage), it will proactively load the full set of instructions inside SKILL.md into its active context window.
Common Use Cases:
- Incident Triage: A skill that knows how to parse production logs and cross-reference them with recent deployments.
- Infrastructure as Code (IaC) Risk Analysis: A skill explicitly invoked to check Terraform Pull Requests against cloud compliance rules.
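A skeleton for the incident-triage skill above might look like the following. This is a sketch, not a definitive layout; the frontmatter keys and the referenced script are hypothetical examples of how a skill folder can bundle instructions with supporting assets:

```markdown
---
name: incident-triage
description: Parse production logs and correlate errors with recent deployments
---
# Incident Triage

When a developer reports a production incident:

1. Run scripts/fetch-logs.sh (included in this folder) to pull the
   last hour of error-level logs.
2. Cross-reference error timestamps against the deployment history
   in references/deployments.md.
3. Summarize the most likely culprit deployment and the affected
   services before suggesting a rollback.
```

Note how progressive loading works here: only the one-line description above the fold is always in context. The numbered procedure, the script, and the reference files are pulled in only when Copilot decides the skill is relevant.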
Conclusion
Layer 2 transforms Copilot from a single chat window into an arsenal of specialized tools. By segmenting your team's knowledge into distinct Prompts, Agents, and Skills, you maintain a lean, highly articulate AI assistant capable of scaling up for complex tasks without losing its mind.
But what happens when things go wrong? What if you want to prevent an AI from executing a tool unless a human reviews it? Or what if you want a workflow to run autonomously in the background without a developer even opening their IDE?
In the next article, we will explore Layer 3 — Enforcement & Automation, where probabilistic AI finally meets enterprise-grade deterministic rules.
Note: This article is part of the "GitHub Copilot Agentic OS" series.