Greetings! Today, I’m sharing a short guide on how to set up a project to work with GitHub Copilot.
Reliable AI workflow with GitHub Copilot: complete guide with examples
This guide shows how to build predictable and repeatable AI processes (workflows) in your repository and IDE/CLI using agentic primitives and context engineering. Here you will find the file structure, ready-made templates, security rules, and commands.
⚠️ Note: the functionality of prompt files and agent mode in IDE/CLI may change - adapt the guide to the specific versions of Copilot and VS Code you use.
1) Overview: what the workflow consists of
The main goal is to break the agent's work into transparent steps and make them controllable. For this there are the following tools:
- Custom Instructions (`.github/copilot-instructions.md`) - global project rules (how to build, how to test, code style, PR policies).
- Path-specific Instructions (`.github/instructions/*.instructions.md`) - domain rules targeted via `applyTo` (glob patterns).
- Chat Modes (`.github/chatmodes/*.chatmode.md`) - specialized chat modes (for example, Plan/Frontend/DBA) with fixed tools and model.
- Prompt Files (`.github/prompts/*.prompt.md`) - reusable scenarios/"programs" for typical tasks (reviews, refactoring, generation).
- Context helpers (`docs/*.spec.md`, `docs/*.context.md`, `docs/*.memory.md`) - specifications, references, and project memory for precise context.
- MCP servers (`.vscode/mcp.json` or via UI) - tools and external resources the agent can use.
2) Project file structure
The following structure corresponds to the tools described above and helps to compose a full workflow for agents.
.github/
  copilot-instructions.md
  instructions/
    backend.instructions.md
    frontend.instructions.md
    actions.instructions.md
  prompts/
    implement-from-spec.prompt.md
    security-review.prompt.md
    refactor-slice.prompt.md
    test-gen.prompt.md
  chatmodes/
    plan.chatmode.md
    frontend.chatmode.md
.vscode/
  mcp.json
docs/
  feature.spec.md
  project.context.md
  project.memory.md
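If you want to bootstrap this layout, one possible scaffold (plain shell; the file names are taken from the tree above - adjust them to your project) is:

```shell
# Create the directories and empty placeholder files for the layout above.
mkdir -p .github/instructions .github/prompts .github/chatmodes .vscode docs
touch .github/copilot-instructions.md \
      .github/chatmodes/plan.chatmode.md \
      .github/prompts/implement-from-spec.prompt.md \
      .vscode/mcp.json \
      docs/feature.spec.md docs/project.context.md docs/project.memory.md
```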
3) Files and their purpose - technical explanation
Below is how it’s arranged under the hood: what these files are, why they exist, how they affect the agent's understanding of the task, and in what order they are merged/overridden. The code examples below match the specification.
| File/folder | What it is | Why | Where it applies |
|---|---|---|---|
| `.github/copilot-instructions.md` | Global project rules | Consistent standards for all responses | Entire repository |
| `.github/instructions/*.instructions.md` | Targeted instructions for specific paths | Different rules for frontend/backend/CI | Only for files matching the `applyTo` globs |
| `.github/chatmodes/*.chatmode.md` | A set of rules + allowed tools for a chat mode | Separate work phases (plan/refactor/DBA) | When that chat mode is selected |
| `.github/prompts/*.prompt.md` | Task "scenarios" (workflows) | Re-run typical processes | When invoked via `/name` or CLI |
| `docs/*.spec.md` | Specifications | Precise problem statements | When you @-mention them in dialogue |
| `docs/*.context.md` | Stable references | Reduce "noise" in chats | By link/@-mention |
| `docs/*.memory.md` | Project memory | Record decisions to avoid repeats | By link/@-mention |
| `.vscode/mcp.json` | MCP servers configuration | Access to GitHub/other tools | For this workspace |
Merge order of rules and settings: Prompt frontmatter → Chat mode → Repo/Path instructions → Defaults.
And now let's review each tool separately.
3.1. Global rules - .github/copilot-instructions.md
What it is: A Markdown file with short, verifiable rules: how to build, how to test, code style, and PR policies.
Why: So that all responses rely on a single set of standards (no duplication in each prompt).
How it works: The file automatically becomes part of the system context for all questions within the repository. It has no `applyTo` (more on that later) - it applies everywhere.
Minimal example:
# Repository coding standards
- Build: `npm ci && npm run build`
- Tests: `npm run test` (coverage ≥ 80%)
- Lint/Typecheck: `npm run lint && npm run typecheck`
- Commits: Conventional Commits; keep PRs small and focused
- Docs: update `CHANGELOG.md` in every release PR
Tips.
- Keep points short.
- Avoid generic phrases.
- Include only what can affect the outcome (build/test/lint/type/PR policy).
3.2. Path-specific instructions - .github/instructions/*.instructions.md
What it is: Modular rules with YAML frontmatter applyTo - glob patterns of files for which they are included.
Why: To differentiate standards for different areas (frontend/backend/CI). Allows controlling context based on the type of task.
How it works: When processing a task, Copilot finds all *.instructions.md whose applyTo matches the current context (files you are discussing/editing). Matching rules are added to the global ones.
Example:
---
applyTo: "apps/web/**/*.{ts,tsx},packages/ui/**/*.{ts,tsx}"
---
- React: function components and hooks
- State: Zustand; data fetching with TanStack Query
- Styling: Tailwind CSS; avoid inline styles except dynamic cases
- Testing: Vitest + Testing Library; avoid unstable snapshots
Note.
- Avoid duplicating existing global rules.
- Ensure the glob actually targets the intended paths.
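A quick way to check what paths a pattern family covers is to mirror the expected layout in a scratch folder and list matches. Note this is only an approximation: `find -name` is not Copilot's exact glob engine.

```shell
# Mirror the intended layout and check which files the extensions cover.
mkdir -p scratch/apps/web/src scratch/packages/ui
touch scratch/apps/web/src/App.tsx scratch/packages/ui/Button.ts scratch/README.md
# Lists App.tsx and Button.ts but not README.md:
find scratch/apps/web scratch/packages/ui \( -name '*.ts' -o -name '*.tsx' \) -type f
```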
3.3. Chat modes - .github/chatmodes/*.chatmode.md
What it is: Config files that set the agent’s operational mode for a dialogue: a short description, the model (if needed) and a list of allowed tools.
Why: To separate work phases (planning/frontend/DBA/security) and restrict tools in each phase. This makes outcomes more predictable.
File structure:
---
description: "Plan - analyze code/specs and propose a plan; read-only tools"
model: GPT-4o
tools: ["search/codebase"]
---
In this mode:
- Produce a structured plan with risks and unknowns
- Do not edit files; output a concise task list instead
How it works:
- The chat mode applies to the current chat in the IDE.
- If you activate a prompt file, its frontmatter takes precedence over the chat mode (it can change the model and narrow `tools`).
- Effective allowed tools: chat mode tools, limited by prompt tools and CLI `--allow`/`--deny` flags.
Management and switching:
- In the IDE (VS Code):
  - Open the Copilot Chat panel.
  - In the top bar, choose the desired chat mode from the dropdown (the list is built from `.github/chatmodes/*.chatmode.md` plus built-in modes).
  - The mode applies only to this thread. To change it, select another mode or create a new thread with the desired mode.
  - Check the active mode in the header/panel of the conversation; the References will show the `*.chatmode.md` file.
- In the CLI (a bit hacky; better via prompts):
  - There is usually no dedicated CLI flag to switch modes; encode the desired constraints in the prompt file frontmatter and/or via `--allow-tool`/`--deny-tool` flags.
  - You can instruct in the first line: "Use the i18n chat mode." If the version supports it, the agent may switch; if not, the prompt frontmatter will still enforce tools.
- Without switching the mode: run a prompt with the required `tools:` in its frontmatter - it limits tools regardless of chat mode.
Diagnostics: if the agent uses "extra" tools or does not see needed ones - check: (1) which chat mode is selected; (2) tools in the prompt frontmatter; (3) CLI --allow/--deny flags; (4) References in the response (visible *.chatmode.md/*.prompt.md files).
3.4. Prompt files - .github/prompts/*.prompt.md
What it is: Scenario files for repeatable tasks. They consist of YAML frontmatter (config) and a body (instructions/steps/acceptance criteria). They are invoked in chat via /name or via CLI.
When to use: When you need a predictable, automatable process: PR review, test generation, implementing a feature from a spec, etc.
Frontmatter structure:
- `description` - short goal of the scenario.
- `mode` - `ask` (Q&A, no file edits) · `edit` (local edits in open files) · `agent` (multistep process with tools).
- `model` - desired model profile.
- `tools` - list of allowed tools for the scenario (limits even what the chat mode allowed).
Execution algorithm (sequence)
- Where to run:
  - In chat: type `/prompt-name` and arguments in the message field.
  - In CLI: call `copilot` and pass the `/prompt-name …` line (interactively or via heredoc / the `-p` flag).
- Context collection: Copilot builds the execution context in the following order: repo instructions → path instructions (`applyTo`) → chat mode → prompt frontmatter (the prompt frontmatter has the highest priority and can narrow tools/change the model).
- Parameter parsing (where and how):
  - In chat: parameters go in the same message after the name, for example: `/security-review prNumber=123 target=apps/web`.
  - In CLI: parameters go in the same `/…` line on stdin or after the `-p` flag.
  - Inside the prompt file they are available as `${input:name}`. If a required parameter is missing, the prompt can ask for it textually in the dialog.
- Resolving tool permissions:
  - Effective allowed tools: chat mode tools, limited by prompt tools and CLI `--allow`/`--deny` flags.
  - If a tool is denied, the corresponding step is skipped or requires confirmation/a change of policy.
Executing steps from the prompt body: the agent strictly follows the Steps order, doing only what is permitted by policies/tools (searching the codebase, generating diffs, running tests, etc.). For potentially risky actions, it requests confirmation.
Validation gates: at the end, the prompt runs checks (build/tests/lint/typecheck, output format checks). If a gate fails - the agent returns a list of issues and proposes next steps (without auto-merging/writing changes).
Where the result appears (what and where you see it):
- Main response - in the chat panel (IDE) or on stdout (CLI): tables, lists, textual reports, code blocks with `diff`.
- File changes - in your working tree: in the IDE you see a diff/suggested patches; in the CLI files change locally (if allowed by tools).
- Additional artifacts - e.g., a PR comment if GitHub tools are allowed and the prompt specifies it.
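Putting the parameter mechanics together: a minimal prompt body might read as follows (the `prNumber` and `target` inputs here are placeholders for illustration, not part of this guide's templates):

```markdown
---
mode: 'agent'
description: 'Demo: consume parameters passed after the slash command'
---
Review PR ${input:prNumber} and limit the scan to ${input:target}.
If prNumber was not provided, ask the user for it before proceeding.
```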
Output format and checks (recommended)
- Always specify the output format (for example, table "issue | file | line | severity | fix").
- Add validation gates: build/tests/lint/typecheck; require unified-diff for proposed changes; a TODO list for unresolved issues.
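To make such gates concrete, a prompt can ask the agent to run a small gate script. This is only a sketch with stand-in commands (`true`); substitute your real `npm run …` gates:

```shell
#!/usr/bin/env sh
# Sketch of a validation-gate runner: each argument is one gate command;
# stop at the first failure and report it.
run_gates() {
  for gate in "$@"; do
    if $gate; then
      echo "PASSED: $gate"
    else
      echo "FAILED: $gate"
      return 1
    fi
  done
  echo "All gates passed"
}

# Stand-in gates; replace with e.g. "npm run lint" "npm run test".
run_gates true true
```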
Example of a complete prompt file
---
mode: 'agent'
model: GPT-4o
tools: ['search/codebase']
description: 'Implement a feature from a spec'
---
Goal: Implement the feature described in @docs/feature.spec.md.
Steps:
1) Read @docs/feature.spec.md and produce a short implementation plan (bullets)
2) List files to add/modify with paths
3) Propose code patches as unified diff; ask before installing new deps
4) Generate minimal tests and run them (report results)
Validation gates:
- Build, tests, lint/typecheck must pass
- Output includes the final diff and a TODO list for anything deferred
- If any gate fails, return a remediation plan instead of "done"
Anti-patterns
- Watered-down descriptions: keep `description` to 1–2 lines.
- Missing output format.
- Too many tools: allow only what is needed (`tools`).
Quick start
- Chat: `/implement-from-spec`
- CLI: `copilot <<<'/implement-from-spec'` or `copilot -p "Run /implement-from-spec"`
3.5. Context files - specs/context/memory
What it is: Helper Markdown files (not special types) that you @-mention in dialogue/prompt. Typically stored as documentation.
- `docs/*.spec.md` - precise problem statements (goal, acceptance, edge cases, non-goals).
- `docs/*.context.md` - short references (API policies, security, UI styleguide, SLA).
- `docs/*.memory.md` - a "decision log" with dates and reasons so the agent does not reopen old disputes.
Example:
# Feature: Export report to CSV
Goal: Users can export the filtered table to CSV.
Acceptance criteria:
- "Export CSV" button on /reports
- Server generates file ≤ 5s for 10k rows
- Column order/headers match UI; locale-independent values
Edge cases: empty values, large numbers, special characters
Non-goals: XLSX, multi-column simultaneous filters
3.6. MCP - .vscode/mcp.json
What it is: Configuration for Model Context Protocol servers (for example, GitHub MCP) which enable tools for the agent.
Why: So the agent can read PRs/issues, run tests, interact with DB/browser - within allowed permissions.
Example:
{
  "servers": {
    "github-mcp": {
      "type": "http",
      "url": "https://api.githubcopilot.com/mcp"
    }
  }
}
Security. Connect only trusted servers; use allow/deny tool lists in prompts/chat modes/CLI.
3.7. General context merge order and priorities (rules & tools)
- Instructions: `copilot-instructions.md` plus all `*.instructions.md` whose `applyTo` matches current paths. Matching instructions are added to the common context.
- Chat mode: restricts the toolset and (if needed) the model for the session.
- Prompt frontmatter: has the highest priority; can limit tools and override the model.
- Context: anything you @-mention is guaranteed to be considered by the model.
Diagnostics. Check the References section in outputs - it shows which instruction files were considered and which prompt was run.
3.8. Example: full i18n cycle with Goman MCP (create/update/prune)
Below is the exact process and templates on how to ensure: (a) when creating UI components localization keys are created/updated in Goman; (b) when removing components - unused entries are detected and (after confirmation) deleted.
Code snippets and frontmatter are in English.
3.8.1. MCP config - connect Goman
/.vscode/mcp.json
{
  "servers": {
    "goman-mcp": {
      "type": "http",
      "url": "https://mcp.goman.live/mcp",
      "headers": {
        "apiKey": "<YOUR_API_KEY>",
        "applicationid": "<YOUR_APPLICATION_ID>"
      }
    }
  }
}
3.8.2. Repo/Path rules - enforce i18n by default
/.github/instructions/frontend.instructions.md (addition)
---
applyTo: "apps/web/**/*.{ts,tsx}"
---
- All user-facing strings **must** use i18n keys (no hardcoded text in JSX/TSX)
- Key naming: `<ui_component_area>.<name>` (e.g., `ui_button_primary.label`)
- When creating components, run `/i18n-component-scaffold` and commit both code and created keys
- When deleting components, run `/i18n-prune` and confirm removal of unused keys
3.8.3. Chat mode - limited i18n tools
/.github/chatmodes/i18n.chatmode.md
---
description: "i18n - manage localization keys via Goman MCP; enforce no hardcoded strings"
model: GPT-4o
tools:
- "files"
- "goman-mcp:*"
---
In this mode, prefer:
- Creating/updating keys in Goman before writing code
- Checking for existing keys and reusing them
- Producing a table of changes (created/updated/skipped)
3.8.4. Prompt - scaffold component + keys in Goman
/.github/prompts/i18n-component-scaffold.prompt.md
---
mode: 'agent'
model: GPT-4o
tools: ['files','goman-mcp:*']
description: 'Scaffold a React component with i18n keys synced to Goman'
---
Inputs: componentName, namespace (e.g., `ui.button`), path (e.g., `apps/web/src/components`)
Goal: Create a React component and ensure all user-visible strings use i18n keys stored in Goman.
Steps:
1) Plan the component structure and list all user-visible strings
2) For each string, propose a key under `${namespace}`; reuse if it exists
3) Using Goman MCP, create/update translations for languages: en, be, ru (values may be placeholders)
4) Generate the component using `t('<key>')` and export it; add a basic test
5) Output a Markdown table: key | en | be | ru | action(created/updated/reused)
Validation gates:
- No hardcoded literals in the produced .tsx
- Confirm Goman actions succeeded (report tool responses)
- Tests and typecheck pass
Example component code:
import { t } from '@/i18n';
import React from 'react';
type Props = { onClick?: () => void };
export function PrimaryButton({ onClick }: Props) {
return (
<button aria-label={t('ui.button.primary.aria')} onClick={onClick}>
{t('ui.button.primary.label')}
</button>
);
}
3.8.5. Prompt - prune unused keys when removing components
/.github/prompts/i18n-prune.prompt.md
---
mode: 'agent'
model: GPT-4o
tools: ['files','goman-mcp:*']
description: 'Find and prune unused localization keys in Goman after code deletions'
---
Inputs: pathOrDiff (e.g., a deleted component path or a PR number)
Goal: Detect keys that are no longer referenced in the codebase and remove them from Goman after confirmation.
Steps:
1) Compute the set of removed/renamed UI elements (scan git diff or provided paths)
2) Infer candidate keys by namespace (e.g., `ui.<component>.*`) and check code references
3) For keys with **zero** references, ask for confirmation and delete them via Goman MCP
4) Produce a Markdown table: key | status(kept/deleted) | reason | notes
Validation gates:
- Never delete keys that still have references
- Require explicit confirmation before deletion
- Provide a rollback list of deleted keys
3.8.6. Prompt - sync and check missing translations (optional)
/.github/prompts/i18n-sync.prompt.md
---
mode: 'agent'
model: GPT-4o
tools: ['files','goman-mcp:*']
description: 'Sync new/changed i18n keys and check for missing translations'
---
Goal: Compare code references vs Goman and fill gaps.
Steps:
1) Scan code for `t('...')` keys under provided namespaces
2) For missing keys in Goman - create them (placeholder text ok)
3) For missing languages - create placeholders and report coverage
4) Output coverage table: key | en | be | de | missing
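Step 1 of this prompt can be approximated with plain `grep`/`sed` (an illustration only, not Goman tooling; the demo file is created inline so the command is self-contained):

```shell
# Sketch: extract t('...') keys from source files, as step 1 would.
mkdir -p demo-src
cat > demo-src/PrimaryButton.tsx <<'EOF'
export const label = t('ui.button.primary.label');
export const aria = t('ui.button.primary.aria');
EOF
# Match t('...') calls, strip the wrapper, and deduplicate the keys.
grep -rhoE "t\('[^']+'\)" demo-src | sed -E "s/^t\('(.+)'\)$/\1/" | sort -u
```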
4) How to use this (IDE and CLI)
4.1. In VS Code / other IDE
- Open Copilot Chat - choose Agent/Edit/Ask in the dropdown.
- For prompt files just type `/file-name` without extension (e.g. `/security-review`).
- Add context using @-mentions of files and directories.
- Switch chat mode (Plan/Frontend/DBA) when the task changes.
4.2. In Copilot CLI (terminal)
- Example install: `npm install -g @github/copilot`, then run `copilot`.
- Interactively: "Run `/implement-from-spec` on @docs/feature.spec.md".
- Programmatically/in CI: `copilot -p "Implement feature from @docs/feature.spec.md" --deny-tool "shell(rm*)"`.
- Add/restrict tools with flags: `--allow-all-tools`, `--allow-tool`, `--deny-tool` (global or by pattern, e.g. `shell(npm run test:*)`).
4.3. Cookbook commands for CLI (chat modes and prompts)
Below are ready recipes. All commands should run from the repository root and respect your deny/allow lists.
A. Run a prompt file in an interactive session
copilot
# inside the session (enter the line as-is)
/security-review prNumber=123
B. Run a prompt file non-interactively (heredoc)
copilot <<'EOF'
/security-review prNumber=123
EOF
C. Pass prompt file parameters
copilot <<'EOF'
/implement-from-spec path=@docs/feature.spec.md target=apps/web
EOF
Inside the prompt you can read the values as `${input:target}` and `${input:path}`.
D. Run a prompt with safe tool permissions
copilot --allow-tool "shell(npm run test:*)" \
--deny-tool "shell(rm*)" \
<<'EOF'
/security-review prNumber=123
EOF
E. Use a chat mode (specialized mode) in the CLI
copilot
# inside the session - ask to switch to the required mode and run the prompt
Use the i18n chat mode.
/i18n-component-scaffold componentName=PrimaryButton namespace=ui.button path=apps/web/src/components
If your client supports selecting the mode via a menu - choose i18n before running the prompt. If not - specify the constraints in the prompt frontmatter (`tools`) and rules in the prompt body.
F. Send file links/diffs as context
copilot <<'EOF'
Please review these changes:
@apps/web/src/components/PrimaryButton.tsx
@docs/feature.spec.md
/security-review prNumber=123
EOF
G. Change the model for a specific run
We recommend specifying the model in the prompt frontmatter. If supported, you can also pass a model flag at runtime:
copilot --model GPT-4o <<'EOF'
/implement-from-spec
EOF
H. i18n cycle with Goman MCP (CHAT)
Run sequentially in a chat thread:
/i18n-component-scaffold componentName=PrimaryButton namespace=ui.button path=apps/web/src/components
/i18n-prune pathOrDiff=@last-diff
/i18n-sync namespace=ui.button
What you get:
- resulting tables/reports in the chat panel;
- code changes in your working tree (IDE shows diffs);
- no CLI commands for Goman MCP are required here.
5) Context engineering: how not to "dump" excess context
- Split sessions by phases: Plan → Implementation → Review/Tests. Each phase has its own Chat Mode.
- Attach only necessary instructions: use path-specific `*.instructions.md` instead of dumping everything.
- Project memory: record short ADRs in `project.memory.md` - this reduces agent "forgetting" between tasks.
- Context helpers: keep frequent references (API/security/UI) in `*.context.md` and link to them from prompt files.
- Focus on the task: in prompt files always state the goal, steps, and output format (table, diff, checklist).
6) Security and tool management
- Require explicit confirmation before running commands/tools. In CI use `--deny-tool` by default and add local allow lists.
- Permission patterns: allow only what is necessary (`shell(npm run test:*)`, `playwright:*`); deny dangerous patterns (`shell(rm*)`).
- Secrets: never put keys in prompts or instructions; use GitHub Environments or local secret managers and `.env` with `.gitignore`.
- Any MCP - only from trusted origins; review the code/config before enabling.
- Patch checks: require unified diffs and explanations in prompt files - this makes review easier.
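For the `.env` point above, a small idempotent guard (a local convention, not a Copilot feature) keeps the file out of version control:

```shell
# Ensure .env exists locally and is ignored by git; safe to run repeatedly.
touch .env
grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore
```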
7) CI/CD recipe (optional example)
Ensure "everything builds": run Copilot CLI in a dry/safe mode to produce a comment for the PR.
# .github/workflows/ai-review.yml
name: AI Review (Copilot CLI)
on:
  pull_request:
    types: [opened, synchronize, reopened]
jobs:
  ai_review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      - name: Install Copilot CLI
        run: npm install -g @github/copilot
      - name: Run security review prompt (no dangerous tools)
        env:
          PR: ${{ github.event.pull_request.number }}
        run: |
          copilot -p "Run /security-review with prNumber=${PR}" \
            --deny-tool "shell(rm*)" --deny-tool "shell(curl*)" \
            --allow-tool "shell(npm run test:*)" \
            --allow-tool "github:*" \
            > ai-review.txt || true
      - name: Comment PR with results
        if: always()
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          gh pr comment ${{ github.event.pull_request.number }} --body-file ai-review.txt
Tip: keep tight deny/allow lists; do not give the agent "full freedom" in CI.
8) Small scenarios and tips that might be useful
- From idea to PR: `/plan` → discuss the plan → `/implement-from-spec` → local tests → PR → `/security-review`.
- Maintenance: `/refactor-slice` for local improvements without behavior changes.
- Tests: `/test-gen` for new modules + manual additions for edge cases.
- Gradual rollout: start with 1–2 prompt files and one chat mode; expand later.
9) Quality checks (validation gates)
In each prompt file, define "what counts as done":
- Output format: risk table, unified-diff, checklist.
- Automated checks: build, unit/integration tests, lint/typecheck.
- Manual check: "OK to merge?" with rationale and residual risks.
10) Anti-patterns and hacks
- Anti-pattern: one huge instructions.md. Prefer multiple `*.instructions.md` files with `applyTo`.
- Anti-pattern: generic words instead of rules. Prefer concrete commands/steps.
- Anti-pattern: running dangerous shell commands without a gate. Use deny/allow lists and manual confirmation.
- Anti-pattern: forgetting specs/memory. Maintain `feature.spec.md` and `project.memory.md`.
- Anti-pattern: mixing tasks in one session. Create a Chat Mode per phase.
11) Implementation checklist
- Add `.github/copilot-instructions.md` (at least 5–8 bullets about build/tests/style).
- Create 1–2 `*.instructions.md` files with `applyTo` (frontend/backend or workflows).
- Add `plan.chatmode.md` and one prompt (for example, `implement-from-spec.prompt.md`).
- Create `docs/feature.spec.md` and `docs/project.memory.md`.
- Enable MCP (GitHub MCP at minimum) via `.vscode/mcp.json`.
- Run the workflow in VS Code: `/implement-from-spec` → verify → PR.
- (Optional) Add a simple AI review in CI via Copilot CLI with strict deny/allow lists.
12) Questions and answers (FAQ)
Q: How to ensure Copilot "sees" my instructions?
A: Check the response's summary/References; also keep rules short and concrete.
Q: Can I pass parameters dynamically into prompt files?
A: Yes, typically via placeholder variables (like `${input:prNumber}`) or simply via the text query when running /prompt in chat.
Q: Where to store secrets for MCP?
A: In GitHub Environments or local secret managers; not in .prompt.md/.instructions.md.
Q: Which to choose: Chat Mode vs Prompt File?
A: Chat Mode defines the "frame" (model/tools/role). Prompt File is a "scenario" within that frame.
13) Next steps
- Add a second prompt for your most frequent manual process.
- Make `project.memory.md` mandatory after all architecture decisions.
- Gradually move collective knowledge into `*.context.md` and reference it from prompt files.
Appendix A - Quickstart templates
All keys, paths, and flags match the docs (Oct 28, 2025).
/.github/copilot-instructions.md - repository-wide rules
# Repository coding standards
- Build: `npm ci && npm run build`
- Tests: `npm run test` (coverage ≥ 80%)
- Lint/Typecheck: `npm run lint && npm run typecheck`
- Commits: Conventional Commits; keep PRs small and focused
- Docs: update `CHANGELOG.md` in every release PR
/.github/instructions/frontend.instructions.md - path-specific rules
---
applyTo: "apps/web/**/*.{ts,tsx},packages/ui/**/*.{ts,tsx}"
---
- React: function components and hooks
- State: Zustand; data fetching with TanStack Query
- Styling: Tailwind CSS; avoid inline styles except dynamic cases
- Testing: Vitest + Testing Library; avoid unstable snapshots
/.github/instructions/backend.instructions.md - path-specific rules
---
applyTo: "services/api/**/*.{ts,js},packages/server/**/*.{ts,js}"
---
- HTTP: Fastify; version APIs under `/v{N}`
- DB access: Prisma; migrations via `prisma migrate`
- Security: schema validation (Zod), rate limits, audit logs
- Testing: integration tests via `vitest --config vitest.integration.ts`
/.github/instructions/actions.instructions.md - GitHub Actions
---
applyTo: ".github/workflows/**/*.yml"
---
- Keep jobs small; reuse via composite actions
- Cache: `actions/setup-node` + built-in cache for npm/pnpm
- Secrets: only through GitHub Environments; never hardcode
/.github/chatmodes/plan.chatmode.md - custom chat mode
---
description: "Plan - analyze code/specs and propose a plan; read-only tools"
model: GPT-4o
tools:
- "search/codebase"
---
In this mode:
- Produce a structured plan with risks and unknowns
- Do not edit files; output a concise task list instead
/.github/prompts/security-review.prompt.md - prompt file
---
mode: 'agent'
model: GPT-4o
tools: ['search/codebase']
description: 'Perform a security review of a pull request'
---
Goal: Review PR ${input:prNumber} for common security issues.
Checklist:
- Authentication/authorization coverage
- Input validation and output encoding (XSS/SQLi)
- Secret management and configuration
- Dependency versions and known CVEs
Output:
- A Markdown table: issue | file | line | severity | fix
- If trivial, include a unified diff suggestion
/.github/prompts/implement-from-spec.prompt.md - prompt file
---
mode: 'agent'
model: GPT-4o
tools: ['search/codebase']
description: 'Implement a feature from a spec'
---
Your task is to implement the feature described in @docs/feature.spec.md.
Steps:
1) Read @docs/feature.spec.md and summarize the plan
2) List files to add or modify
3) Propose code changes; ask before installing new dependencies
4) Generate minimal tests and run them
Validation gates:
- Build, tests, lint/typecheck must pass
- Provide a TODO list for anything deferred
/.github/prompts/refactor-slice.prompt.md - prompt file
---
mode: 'agent'
model: GPT-4o
description: 'Refactor a specific code slice without changing behavior'
---
Goal: Improve readability and reduce side effects in @src/feature/* while keeping behavior unchanged.
Criteria: fewer side effects, clearer structure, all tests pass.
/.github/prompts/test-gen.prompt.md - prompt file
---
mode: 'agent'
model: GPT-4o-mini
description: 'Generate tests for a given file/module'
---
Ask the user to @-mention the target file; generate unit/integration tests and edge cases.
/docs/feature.spec.md - spec skeleton
# Feature: Export report to CSV
Goal: Users can export the filtered table to CSV.
Acceptance criteria:
- "Export CSV" button on /reports
- Server generates file ≤ 5s for 10k rows
- Column order/headers match UI; locale-independent values
Edge cases: empty values, large numbers, special characters
Non-goals: XLSX, multi-column simultaneous filters
/.vscode/mcp.json - minimal MCP config
{
  "servers": {
    "github-mcp": {
      "type": "http",
      "url": "https://api.githubcopilot.com/mcp"
    }
  }
}
Appendix B - Operational extras (CLI & CI examples)
These examples complement Appendix A; they cover runtime/automation usage and do not duplicate templates above.
Copilot CLI - safe tool permissions (interactive/CI)
# Start an interactive session in your repo
copilot
# Allow/deny specific tools (exact flags per GitHub docs)
copilot --allow-tool "shell(npm run test:*)" --deny-tool "shell(rm*)"
# Run a prompt file non-interactively (example)
copilot <<'EOF'
/security-review prNumber=123
EOF
GitHub Actions - comment review results on a PR
name: AI Security Review (Copilot CLI)
on:
  pull_request:
    types: [opened, synchronize, reopened]
jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      - name: Install Copilot CLI
        run: npm install -g @github/copilot
      - name: Run security review prompt
        env:
          PR: ${{ github.event.pull_request.number }}
        run: |
          # Unquoted heredoc delimiter so ${PR} expands
          copilot --allow-tool "shell(npm run test:*)" --deny-tool "shell(rm*)" <<EOF
          /security-review prNumber=${PR}
          EOF
      - name: Post results
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          gh pr comment ${{ github.event.pull_request.number }} --body "Copilot review completed. See artifacts/logs for details."
Sources
Adding repository custom instructions for GitHub Copilot
How to build reliable AI workflows with agentic primitives and context engineering
🙌 PS:
Thank you for reading to the end! If the material was useful, we would be very glad if you:
- 💬 Leave a comment or question,
- 📨 Suggest an idea for the next article,
- 🚀 Or simply share it with friends!
Technology becomes more accessible when it is understood. And you have already made the first important step 💪
See you in the next article! Thank you for your support!