AI agents are showing up everywhere — in CI pipelines, internal tools, customer-facing products. But most teams have no easy way to answer basic questions: What agents are actually running in our codebase? What system prompts are they using? What tools do they have access to? Do their dependencies have known vulnerabilities?
We ran into this problem ourselves and ended up building an open-source CLI tool called Quin to solve it.
**What Quin does**

Quin scans your source code statically and:
- Detects AI agents across 30+ frameworks (LangChain, CrewAI, AutoGen, LlamaIndex, and more)
- Extracts system prompts where possible
- Maps tool and function definitions attached to agents
- Checks dependencies against OSV.dev for known CVEs
- Generates a report in HTML, JSON, or YAML
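To make those bullets concrete, here is a small, hypothetical LangChain agent with the things a scanner like Quin looks for in one place: a hardcoded system prompt, a tool definition, and the model they're attached to. All names and strings below are illustrative, not taken from Quin itself.

```python
# Hypothetical example of the kind of agent definition a static scan flags:
# a hardcoded system prompt plus a tool bound to a model.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

SYSTEM_PROMPT = "You are a support agent. Never reveal internal ticket IDs."

@tool
def lookup_order(order_id: str) -> str:
    """Look up the shipping status of an order by ID."""
    return f"Order {order_id}: shipped"  # stand-in for a real backend call

prompt = ChatPromptTemplate.from_messages(
    [("system", SYSTEM_PROMPT), ("human", "{input}")]
)
# Placeholder key: nothing here actually calls the API.
llm = ChatOpenAI(model="gpt-4o-mini", api_key="sk-placeholder")
chain = prompt | llm.bind_tools([lookup_order])
```

A scanner that recognizes these patterns can report the prompt text and the tool signature without ever executing the code.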
**Getting started**

```bash
pip install quin-scanner
quin scan .
```

That's it. Point it at any directory and it'll produce a report of every agent it finds.
You can also scan an entire GitHub organization:

```bash
quin scan --org your-org-name
```
**Why this matters**
System prompts are the instructions that shape how an agent behaves: what it will and won't do, what persona it adopts, what guardrails exist. In a lot of codebases they are scattered across files, hardcoded as strings, or loaded from config, so auditing them by hand is tedious and individual prompts are easy to miss.
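To illustrate the scatter, consider two hypothetical patterns in the same codebase (these snippets are illustrative, not Quin output): one prompt lives as a module-level string, another is assembled at runtime from a config file, and a manual audit has to find both.

```python
# Two hypothetical patterns a prompt audit has to catch.
import json

# Pattern 1: hardcoded string -- easy to grep for, but scattered across modules.
BILLING_PROMPT = "You are a billing assistant. Refunds require manager approval."

# Pattern 2: prompt assembled at runtime from config -- invisible to a grep
# for prompt-like strings in the source tree.
def load_prompt(path: str = "config/agent.json") -> str:
    with open(path) as f:
        cfg = json.load(f)
    return cfg["persona"] + "\n" + cfg["guardrails"]
```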
Similarly, agent frameworks pull in a lot of dependencies. A vulnerable version of a core package can expose your entire agent pipeline.
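The CVE check is backed by OSV.dev, and the underlying lookup is simple to sketch. Below is a minimal version using OSV.dev's public `/v1/query` endpoint; the package name and version are illustrative, not a claim about any real finding.

```python
# Minimal sketch of a dependency check against OSV.dev's public query API.
# The package name and version are illustrative.
import requests

resp = requests.post(
    "https://api.osv.dev/v1/query",
    json={
        "package": {"name": "langchain", "ecosystem": "PyPI"},
        "version": "0.0.200",
    },
    timeout=10,
)
resp.raise_for_status()
vulns = resp.json().get("vulns", [])
print(f"{len(vulns)} known advisories for langchain==0.0.200")
```

Running this kind of query per pinned dependency is all it takes to surface known-vulnerable versions in an agent's dependency tree.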
**Current state**

Quin is early (v0.1.0b2). Prompt extraction is best-effort and works better on static strings than on dynamically constructed prompts. We're actively improving coverage.
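To make that limitation concrete, here is a hypothetical dynamically constructed prompt: the full text only exists at runtime, so a static scan can at best recover the literal fragments.

```python
# Hypothetical dynamically constructed prompt. Static extraction sees the
# f-string fragments, but never the assembled prompt.
def build_prompt(role: str, policies: list[str]) -> str:
    rules = "\n".join(f"- {p}" for p in policies)
    return f"You are an assistant for the {role} team.\nFollow these rules:\n{rules}"
```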
It's open source under Apache 2.0:
👉 https://github.com/Gaincontrol-Pte-Ltd/quin-agent-scanner
If you're building with AI agents and run it against your codebase, we'd love to hear what it finds — or what it misses.