We lint code. We run tests. We scan for vulnerabilities. But most teams shipping AI agents don't check whether those agents follow any governance rules.
I built a GitHub Action that does exactly that. It scans your Python files on every PR and tells you what's missing - audit trails, kill switches, human oversight. Takes 30 seconds to set up.
## The workflow
Add this file as `.github/workflows/ai-governance.yml`:

```yaml
name: AI Agent Governance
on: [pull_request]

jobs:
  compliance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: jagmarques/asqav-compliance@v1
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
```
That's the whole thing. No config files, no tokens to generate, no dashboard to sign up for.
## What it does
When a PR is opened, the action scans every Python file that imports an AI agent framework. It looks for 10 frameworks out of the box - LangChain, OpenAI, Anthropic, CrewAI, AutoGen, LlamaIndex, Haystack, Semantic Kernel, Google GenAI, and Smol Agents.
For each file, it checks five categories:
- Audit trail - are you logging what the agent does?
- Policy enforcement - rate limits, timeouts, scope restrictions?
- Revocation - can you shut the agent down in an emergency?
- Human oversight - is there a human-in-the-loop for risky actions?
- Error handling - are agent calls wrapped in try/except?
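The action's detection logic isn't published in this post, but the general shape of this kind of scan is simple: check whether a file imports an agent framework, then look for signals of each governance category. Here's a minimal sketch, where the regex patterns are illustrative assumptions of mine, not the action's actual rules:

```python
import re

# Hypothetical heuristics -- the real action may use different patterns.
FRAMEWORK_IMPORTS = re.compile(
    r"^\s*(?:from|import)\s+(langchain|openai|anthropic|crewai|autogen|"
    r"llama_index|haystack|semantic_kernel|google\.generativeai|smolagents)",
    re.MULTILINE,
)

CATEGORY_PATTERNS = {
    "audit_trail": re.compile(r"\blogging\.|\blogger\b"),
    "policy_enforcement": re.compile(r"\btimeout\b|rate_limit|max_retries"),
    "revocation": re.compile(r"kill_switch|\brevoke|\bshutdown\b|\bdisable"),
    "human_oversight": re.compile(r"\bapproval\b|human_in_the_loop|\bconfirm"),
    "error_handling": re.compile(r"\btry\b.*?\bexcept\b", re.DOTALL),
}


def scan_source(source: str):
    """Return per-category coverage for one file,
    or None if no agent framework is imported."""
    if not FRAMEWORK_IMPORTS.search(source):
        return None
    return {name: bool(pat.search(source)) for name, pat in CATEGORY_PATTERNS.items()}
```

A file with no framework import is skipped entirely, which is why the report counts "agent files scanned" rather than all Python files.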
Then it posts a report directly on the PR as a comment.
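Posting a PR comment from CI goes through GitHub's REST API (pull request comments use the issues endpoint). A minimal sketch of building that request, assuming a raw `urllib` call rather than whatever client the action actually uses, with the function name being my own:

```python
import json
import urllib.request


def build_comment_request(repo: str, pr_number: int, report_md: str, token: str):
    """Build the POST request that adds a markdown comment to a PR.

    PR comments go through the issues endpoint:
    POST /repos/{owner}/{repo}/issues/{issue_number}/comments
    """
    url = f"https://api.github.com/repos/{repo}/issues/{pr_number}/comments"
    body = json.dumps({"body": report_md}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
            "Content-Type": "application/json",
        },
    )
```

The `GITHUB_TOKEN` that Actions provides automatically has permission to comment on the PR that triggered the workflow, which is why no extra token setup is needed.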
## What the output looks like
Here's a real example. Three agent files, two gaps:
## AI Agent Governance Report
| Metric | Value |
|-------------------------|--------------------------------|
| Compliance Score | 60/100 |
| Agent files scanned | 3 |
| Frameworks detected | langchain, openai, crewai |
### Governance Checks
| Category | Status | Details |
|----------------------|--------|------------------------------|
| Audit Trail | PASS | 3/3 files covered |
| Policy Enforcement | PASS | 3/3 files covered |
| Revocation Capability| GAP | 2/3 files missing coverage |
| Human Oversight | GAP | 3/3 files missing coverage |
| Error Handling | PASS | 3/3 files covered |
### Recommendations
- Add a kill switch or revocation mechanism so agents can be disabled in an emergency.
- Add human-in-the-loop approval flows for high-risk agent actions.
Each gap comes with a recommendation. Your team can see exactly what to fix before merging.
## Block PRs that fail
If you want to enforce compliance as a gate:
```yaml
- uses: jagmarques/asqav-compliance@v1
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    fail-on-gaps: 'true'
```
Now the check fails if any governance gap is found. The PR can't be merged until the gaps are resolved.
You can also scope it to a specific directory:
```yaml
- uses: jagmarques/asqav-compliance@v1
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    scan-path: 'src/agents'
    fail-on-gaps: 'true'
```
## What regulations does this map to?
The five checks aren't random. They map to real requirements:
- EU AI Act - Article 14 (human oversight), Article 15 (accuracy and robustness)
- DORA - ICT risk management, incident response, operational resilience
- ISO 42001 - AI management system controls
You don't have to care about compliance to find this useful. Even if you just want to make sure every agent has error handling and a way to shut it down - that's a reasonable engineering standard.
## The scoring
Each category is worth 20 points. If 2 out of 3 agent files have audit trails, you get ~13 points for that category. Total score is 0-100.
- 80-100: solid governance
- 50-79: some gaps to fix
- Below 50: you've got work to do
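The scoring rule is easy to reproduce. Here's a sketch assuming simple linear scaling per category; the action's exact rounding may differ, but the 2-of-3 example above works out to about 13.3 points:

```python
# Five categories, 20 points each, scaled by the fraction of agent
# files covered in that category.
CATEGORY_WEIGHT = 20


def category_score(files_covered: int, files_total: int) -> float:
    return CATEGORY_WEIGHT * files_covered / files_total


def total_score(coverage: dict) -> int:
    """coverage maps category name -> (files_covered, files_total)."""
    return round(sum(category_score(c, t) for c, t in coverage.values()))
```

So a repo with three agent files that all pass audit trail, policy enforcement, and error handling, but where only one has revocation and none have human oversight, lands in the 60s: enough to see the shape of the problem at a glance.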
## Try it
Add the workflow file above to any repo that uses AI agents. Open a PR. You'll get a report in about 30 seconds.
The action is open source and free: github.com/jagmarques/asqav-compliance