hagishun
I Confused Copilot Coding Agent with Agentic Workflows — Turns Out the Guardrails Are the Point


This article is for developers who are already familiar with GitHub Actions and Copilot. The goal is to clarify how Agentic Workflows differs in design philosophy from what you might already know.

Background

In February 2026, GitHub announced "Agentic Workflows" as a technical preview.

https://github.blog/2026-02-16-automate-repository-tasks-with-github-agentic-workflows/

When I heard "AI that automates repository tasks," I jumped in to try it — and immediately got confused. This article documents that confusion along with what I actually learned by getting my hands dirty.

My Initial Mistake: Is Coding Agent = Agentic Workflows?

I asked Copilot Agent Mode in VS Code to walk me through experiencing GitHub Agentic Workflows. It guided me through this flow:

  1. Set up a Python Todo app in a repository
  2. Create a GitHub Issue
  3. Assign Copilot Coding Agent to the Issue
  4. A PR gets created automatically
  5. GitHub Actions runs the tests

It was genuinely useful. Just by assigning Copilot to an Issue, a PR with test code was auto-generated.

But this was "Copilot Coding Agent" — not "Agentic Workflows."

I only realized this after re-reading the official announcement. They're completely different things.

Note: Copilot Coding Agent (assign to Issue → auto-generate PR) is an existing feature. GitHub Agentic Workflows (gh-aw), announced in February 2026, is a separate mechanism entirely.

What GitHub Agentic Workflows Actually Is

In one sentence

A system where you write "what you want done" in Markdown, and an AI agent continuously automates repository tasks based on that intent.

How it differs from GitHub Actions

| | GitHub Actions (YAML) | Agentic Workflows (Markdown) |
| --- | --- | --- |
| How you write it | Define "steps" in YAML | Write "intent" in Markdown |
| Who executes | Runs defined steps in order | AI interprets intent and acts autonomously |
| Decision-making | None (just if/else branching) | AI reads context and judges |
| Handling unknowns | Only predefined patterns | AI adapts to novel situations |
| Best for | Build, test, deploy | Repetitive tasks requiring judgment |

Actions and Agentic Workflows coexist — one doesn't replace the other. In my own repository, test.yml (Actions) and issue-triage.md (Agentic Workflows) ran side by side without conflict.
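For context, the Actions half of that pairing is an ordinary CI file. A minimal sketch of what my `test.yml` amounts to (the job contents here are illustrative, not the exact file):

```yaml
# .github/workflows/test.yml — plain GitHub Actions CI,
# running alongside issue-triage.md without conflict
name: Test
on:
  push:
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pytest
      - run: pytest
```

Deterministic steps like these stay in YAML; only the judgment-heavy tasks move to a `.md` workflow.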

How it differs from Copilot Coding Agent

| | Copilot Coding Agent | Agentic Workflows |
| --- | --- | --- |
| Trigger | Manually assign to an Issue | Auto-triggered by schedule or events |
| Purpose | One-off coding tasks | Continuous repository automation |
| Output | Creates a PR | Comments, labels, Issues, PRs, and more |
| How it's defined | Instructions in Issue body | `.github/workflows/*.md` files |

Let's Walk Through It

Setup

```bash
# Install the gh-aw CLI extension
gh extension install github/gh-aw

# Initialize the repository
gh aw init

# Create a workflow template
gh aw new issue-triage
```

Writing the Workflow

I created .github/workflows/issue-triage.md:

```markdown
---
on:
  issues:
    types: [opened, reopened]

permissions:
  contents: read
  issues: read

safe-outputs:
  add-comment:
    max: 1
  add-labels:

tools:
  github:
---

# Issue Triage

When a new Issue is created, automatically triage it.

## What to do

1. Read the Issue title and body to understand the content
2. Apply the appropriate label from the following:
   - `bug` - Bug report
   - `enhancement` - Feature request
   - `question` - Question
   - `documentation` - Documentation-related
3. Add a comment to the Issue:
   - Briefly summarize the Issue content
   - Mention any relevant source files if applicable
   - Suggest a priority level (high / medium / low)

## Rules

- Respond in English
- Keep comments polite and concise
- If unclear, apply the `question` label and ask for clarification
```

The key is the two-layer structure: YAML (frontmatter) + Markdown (intent):

  • Frontmatter: When to run, what it can read, what it's allowed to do
  • Markdown body: What you want the AI to do (natural language)

Compile and Push

```bash
# Generate a lock file (.lock.yml) from the Markdown
gh aw compile

# Push to the repository
git add -A && git commit -m "feat: add Issue triage workflow" && git push
```

Results

After creating an Issue, the following happened automatically.

Issue created: "I want the list command output to show timestamps"

What the AI did automatically:

  • ✅ Applied the enhancement label (correctly identified as a feature request)
  • ✅ Posted a comment automatically
  • ✅ Identified the relevant source file (todo.py) down to specific line numbers
  • ✅ Suggested an implementation direction
  • ✅ Assessed priority as "medium"

All from just writing "here's what I want" in natural language. The AI read the repository context and made its own judgments.

Guardrails: The Most Important Part of the Design

What stuck with me most after trying this was how strict the guardrails are.

Precisely because you're handing the AI ambiguous intent, a system to prevent unintended behavior is essential. Agentic Workflows implements this through three layers of guardrails.

1. permissions (input constraints)

Controls what the AI can "read."

```yaml
permissions:
  contents: read    # Can read code, but cannot write
  issues: read      # Can read Issues, but cannot close them
```

2. safe-outputs (output constraints)

Explicitly grants what the AI is "allowed to do."

```yaml
safe-outputs:
  add-comment:      # Can add one comment max
    max: 1
  add-labels:       # Can add labels
  # create-pull-request ← Not listed, so PRs are off-limits
```

Real experience: when I initially wrote `issues: write`, the compile step rejected it with an error:

```text
strict mode: write permission 'issues: write' is not allowed for security reasons.
Use 'safe-outputs.add-comment', 'safe-outputs.add-labels' to perform write operations safely.
```

Instead of "write access to do anything," the design enforces individually scoped output permissions like "only allow adding comments."
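Conversely, if you did want a workflow to open PRs, you would opt in explicitly rather than request blanket write access. A hedged sketch building on the `create-pull-request` key mentioned above (verify option names against the current gh-aw schema before relying on this):

```yaml
safe-outputs:
  add-comment:
    max: 1
  add-labels:
  create-pull-request:   # explicit opt-in; each output capability is granted individually
```

The point is that every write-shaped action is a separate, named grant, so widening the blast radius is always a deliberate diff in the frontmatter.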

3. Sandboxed Execution

  • Network isolation
  • PRs are never auto-merged (humans must always review)

Why Guardrails Matter

| What the AI might otherwise do | Guardrail |
| --- | --- |
| Close all Issues without permission | `close-issue` not in `safe-outputs` → blocked |
| Directly modify code | `permissions: contents: read` → read-only |
| Send data to external APIs | Network isolation |

Allow ambiguity, but limit the blast radius — this is the design philosophy unique to agentic systems, and it's fundamentally different from CI/CD.

Where I Got Stuck

1. Confusing Copilot Coding Agent with Agentic Workflows

The flow that Copilot Agent Mode guided me through was "Actions + Coding Agent" — a completely different thing from Agentic Workflows (gh-aw). Interestingly, the AI assistant itself couldn't accurately distinguish between the two.

2. Secrets configuration is required

`gh aw init` → `gh aw compile` → push is not enough. You also need to configure a token for the AI engine.

```bash
# Create a Fine-grained PAT (personal account, with Copilot Requests permission)
# Note: Resource owner must be a personal account, not an Org
gh aw secrets set COPILOT_GITHUB_TOKEN --value "<PAT>"
```

3. A UI trap when creating the PAT

To get the "Copilot Requests" permission to appear on the Fine-grained PAT creation screen:

  • Set Resource owner to your personal account (it won't appear under an Organization)
  • Set Repository access to "Public Repositories" (required for the option to show)

This is because Copilot licenses are issued at the individual account level.

6 Use Case Patterns

From the official documentation:

  1. Continuous triage — Auto-classify Issues and apply labels (what I tried today)
  2. Continuous documentation — Auto-update README as code changes
  3. Continuous simplification — Identify refactoring opportunities and create PRs
  4. Continuous test improvement — Evaluate test coverage and auto-add tests
  5. Continuous quality maintenance — Investigate CI failures and suggest fixes
  6. Continuous reporting — Periodically generate repository health reports
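As a sketch of pattern 6, a schedule-triggered workflow has the same two-layer shape as the event-triggered triage file above, just with a cron trigger in the frontmatter. The contents here are hypothetical (the `create-issue` safe-output and cron syntax mirror the gh-aw conventions shown earlier; check the current docs before copying):

```markdown
---
on:
  schedule:
    - cron: "0 8 * * 1"   # every Monday morning

permissions:
  contents: read
  issues: read

safe-outputs:
  create-issue:
    max: 1
---

# Weekly Repository Report

Summarize open Issues, recent CI failures, and stale PRs,
then open a single Issue with the findings and suggested next steps.
```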

Summary

| What I learned | Details |
| --- | --- |
| Agentic Workflows ≠ Coding Agent | Completely different — it's about writing intent in Markdown for continuous automation |
| Not a replacement for Actions | Use Actions for CI/CD, Agentic Workflows for tasks requiring judgment |
| Guardrails are the core design | Three-layer defense: permissions + safe-outputs + sandbox |
| Setup gotchas | Secrets configuration, UI constraints when creating PATs |

GitHub Agentic Workflows is currently in technical preview. The vision it points toward: you open your repository in the morning and Issues are already triaged, CI failures are explained, and documentation is up to date.

For Enterprise adoption, the next questions will be around governance: who is allowed to create Agentic Workflows, and how far should permissions go.
