Yesterday a supply chain attack hit litellm — 97 million monthly downloads. One pip install. SSH keys, AWS credentials, API tokens, git secrets, crypto wallets — all silently exfiltrated in under an hour.
This is TA05 in AAISAF — a framework I published today.
The Problem
Every company that deployed an AI system in 2023–2025 created an attack surface their security team has never seen.
Prompt injection. RAG pipeline poisoning. Agent-to-agent manipulation. MCP server exploitation. Voice AI bypass. Supply chain attacks on AI dependencies.
Existing frameworks tell you what to worry about. Nobody told you how to actually test for it.
OWASP LLM Top 10 — vulnerability categories, no testing methodology
MITRE ATLAS — adversary mapping, no practitioner guidance
NIST AI RMF — governance structure, no attack techniques
We built the missing layer.
What AAISAF Is
AAISAF (AI Security Assessment Framework) is an open-source, technique-level methodology for assessing AI system security.
Structured like MITRE ATT&CK — tactic → technique → sub-technique — applied to AI systems.
10 tactic categories
87 assessment techniques
9 domain checklists
6 compliance framework mappings
3 assessment types (30min / 1-2 day / 5-10 day)
5-level maturity model
Each technique includes attack description and prerequisites, AISS severity score (0.0–10.0), detection guidance, remediation steps, and mandatory evidence (CVE, documented incident, or peer-reviewed research).
Two Attack Surfaces With Zero Prior Coverage
TA10 — MCP Server & Tool Security
Model Context Protocol is Anthropic's standard for connecting AI to external tools. Released November 2024. Now the de facto integration standard with thousands of production deployments globally.
CVE-2025-6514 (CVSS 9.6). 1,467 exposed servers on the internet. Zero frameworks covering it.
We built 12 techniques:
MCP Attack Surface
├── Tool Poisoning via Malicious Description (AISS 8.1)
├── Rug Pull Attack (AISS 8.4)
├── Tool Shadowing (AISS 8.0)
├── Cross-Origin Injection via MCP Resource (AISS 8.3)
├── Privilege Escalation via Tool Chain (AISS 8.7)
├── SSRF via MCP Tool (AISS 7.2)
├── Data Exfiltration via Tool Output (AISS 7.5)
├── MCP Auth Bypass (AISS 9.1)
├── Malicious Server Registration (AISS 8.5)
├── Tool Argument Injection (AISS 7.0)
├── Transport Layer Exploitation (AISS 7.3)
└── Consent Fatigue Exploitation (AISS 5.8)
I run MCP servers in production as part of a 13-agent AI orchestration system. These techniques came from understanding the architecture from the inside.
TA06 — Voice AI Exploitation
Millions of AI phone agents handle customer calls daily across healthcare, finance, customer service, and sales. Real-time. Autonomous. Trusted by default because it sounds human.
No security framework had mapped the attack techniques against them.
9 techniques:
Voice AI Attack Surface
├── Voice Prompt Injection via Speech (AISS 7.0)
├── Synthetic Voice Spoofing / Deepfake (AISS 8.5)
├── Conversation Flow Bypass (AISS 5.5)
├── Audio Adversarial Examples (AISS 7.2)
├── Credential Harvesting via Voice Agent (AISS 8.3)
├── DTMF Signal Injection (AISS 6.8)
├── Voice Agent Vishing (AISS 8.7)
├── STT Pipeline Exploitation (AISS 5.8)
└── Real-Time Voice Cloning in Active Call (AISS 9.0)
I build and operate production voice agents on Retell AI infrastructure. Every technique here comes from first-hand knowledge of where these systems break.
The AISS Scoring System
Standard CVSS doesn't capture AI-specific risk dimensions.
We built AISS — AI Impact Severity Score — CVSS-compatible 0.0–10.0 with five additional metrics:
Autonomy Impact — can this attack trigger autonomous harmful action?
Cascade Potential — single agent to system-wide propagation risk?
Persistence — ephemeral to permanent compromise?
Data Sensitivity Exposure — what does the attacker actually access?
Financial Impact Potential — direct and indirect loss estimation
Every one of the 87 techniques is scored. Boards understand it. Compliance teams can document against it.
Compliance Mappings
Every technique maps to:
OWASP LLM Top 10 (2025)
MITRE ATLAS
NIST AI RMF + AI 600-1 (GenAI Profile)
ISO/IEC 42001
EU AI Act — high-risk system requirements hit August 2026 (5 months)
Australian Privacy Act, Essential Eight, VAISS/AI6, SOCI Act
Quick Start
bashgit clone https://github.com/Jbermingham1/aaisaf
1. Identify your system type (A–G: chatbot, RAG, agentic, multi-agent, voice, MCP, composite)
2. Choose your assessment type (30-min / standard / deep)
3. Work through the relevant checklists
4. Score findings using AISS
5. Map to compliance requirements
6. Report
## Repository Structure
aaisaf/
├── framework/
│ ├── tactics/ # 10 tactic overviews with attack trees
│ ├── techniques/ # 87 individual technique files
│ ├── compliance/ # 6 compliance mapping documents
│ └── maturity/ # 5-level maturity model
├── assessments/
│ ├── checklists/ # 9 domain checklists
│ └── scoring/ # AISS specification and templates
└── references/
├── glossary.md
├── cve-index.md
└── bibliography.md
What's Next
ares-scanner — open-source tooling that automates the AAISAF methodology. The framework tells you what to test. The scanner runs the tests.
Contributions welcome. If you've encountered AI attack techniques not in the framework — open a PR. The goal is for this to become the living standard the community maintains.
CC BY-SA 4.0. Free forever. No vendor pitch.
GitHub: github.com/Jbermingham1/aaisaf
Top comments (0)