If you ship AI-assisted code in 2026, three regulatory shifts have changed the ground under your feet.
In December 2025, OWASP published the Top 10 for Agentic Applications. In April 2026, Microsoft released the Agent Governance Toolkit. In August 2026, the EU AI Act high-risk obligations take effect. ISO 42001 has become the AI management system standard auditors expect. NIST AI RMF is the framework most US agencies and primes will reference. The Colorado AI Act starts enforcement in June 2026. Tool qualification frameworks (DO-178C and DO-330 for avionics, IEC 62304 for medical devices, ISO 26262 for automotive, CMMC for defense) treat AI tooling with the same scrutiny they applied to legacy code generators.
That is a lot of paper. The good news is that most of it points at the same operational pattern. You need to know what your AI did, you need to enforce policy at the tool surface, you need evidence you can hand to a third party, and you need a retention story.
This post is a working checklist that maps each of those frameworks to actual Akmon commands. Map to your own controls as needed.
## The shape of the work
Almost every AI compliance program asks for five things, in different language.
- A documented inventory of AI systems, with risk classifications and owners.
- A policy framework, ideally enforced at runtime, not only documented.
- An audit trail of agent activity, with the integrity to be admissible.
- A retention story for that audit trail.
- A way to extract evidence on demand, in a format a non-engineer can read.
If you build those five, you are most of the way to compliance with most frameworks. The frameworks layer on specific controls that fit your risk profile.
## ISO 42001, the management system standard
ISO 42001 is a management system standard, like ISO 27001 but for AI. It does not tell you how to build the agent. It tells you how to govern the work.
The relevant controls for AI coding agents:
- A.5 AI policies. Document who owns what.
- A.6 Internal organization. Roles and responsibilities for AI.
- A.7 Operational planning and control. Including evidence of operations.
- A.8 Performance evaluation. Continuous monitoring and review.
The translation to engineering work, with the Akmon command that produces the evidence:
| Control | What you actually do | Akmon command |
|---|---|---|
| A.5 | Maintain an AI inventory with risk classification | An internal doc, ideally rendered from the agents that emit AGEF |
| A.6 | Assign owners for each agent | A field in the project's AKMON.md plus your CMDB |
| A.7 | Run policy at the tool boundary, log every call | `akmon audit verify`, `akmon evidence verify` |
| A.8 | Review evidence regularly, close the loop | `akmon slo verify`, `akmon slo trend` |
The trust pipeline already does most of A.7 and A.8. The rest is documentation.
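What `akmon audit verify` checks is, conceptually, a tamper-evident chain. The exact AGEF chaining scheme is not shown in this post, so treat the following as a minimal sketch, assuming each JSONL record carries a `prev` field holding the SHA-256 of the previous raw line:

```python
import hashlib
import json

def verify_chain(lines):
    """Return the index of the first record whose `prev` field does not
    match the hash of the preceding raw line, or None if the chain holds."""
    prev_hash = None
    for i, raw in enumerate(lines):
        if json.loads(raw).get("prev") != prev_hash:
            return i
        prev_hash = hashlib.sha256(raw.encode("utf-8")).hexdigest()
    return None

# Build a three-record chain, then tamper with the middle record.
lines, prev = [], None
for kind in ("SessionStart", "ToolCall", "SessionEnd"):
    raw = json.dumps({"kind": kind, "prev": prev}, sort_keys=True)
    lines.append(raw)
    prev = hashlib.sha256(raw.encode("utf-8")).hexdigest()

tampered = [lines[0], lines[1].replace("ToolCall", "ToolCa11"), lines[2]]
```

Any edit to a record breaks every link after it, which is what makes the log admissible rather than merely present.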
## EU AI Act Article 12, the recording requirement
For high-risk AI systems, the EU AI Act requires automatic recording of events that are relevant to the operation of the system. This is not a logging best practice. It is an obligation.
The recording must:
- Capture events automatically.
- Cover the operating life of the system.
- Be of sufficient detail to investigate incidents.
- Be retained for a period appropriate to the system's purpose.
The translation to engineering work:
| Requirement | What you actually do | Akmon evidence |
|---|---|---|
| Automatic recording | Record on every tool call, every model call, every policy decision | .akmon/audit/<session>.jsonl |
| Coverage | Make sure every session produces a record | Index by session ID, alarm on gaps |
| Sufficient detail | Inputs, outputs, decisions, parent and child IDs | AGEF event kinds, content-addressed objects |
| Retention | Store records for the contractual or legal minimum | Lifecycle policy on your storage |
Article 12 does not say "use AGEF". It says automatic, sufficient, retained. AGEF satisfies those structurally.
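The coverage row is the one teams most often miss. A sketch of the gap check, assuming you can list the session IDs that ran (from your CI or orchestrator) and the sessions with audit files on disk; the `s-...` IDs here are hypothetical:

```python
def audit_gaps(sessions_ran, audit_sessions):
    """Sessions that executed but left no audit record are the gaps
    Article 12 expects you to alarm on."""
    return sorted(set(sessions_ran) - set(audit_sessions))

ran = ["s-001", "s-002", "s-003"]
recorded = ["s-001", "s-003"]  # e.g. stems of .akmon/audit/*.jsonl
```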
## NIST AI RMF, the risk management framework
NIST AI RMF is voluntary, US-flavored, and well respected. The four functions are Govern, Map, Measure, Manage.
For AI coding agents:
- Govern: assign accountability for AI tools used in development.
- Map: classify the risk of each agent and document tools it can use.
- Measure: monitor for misbehavior, track metrics.
- Manage: respond to incidents, retire systems that fail.
If you have ISO 42001 covered, NIST AI RMF is mostly a vocabulary translation. Same data, different headers in the report.
## SOC 2 for AI engineering controls
For SOC 2 audits, AI coding agents tend to fall under CC4 (monitoring) and CC7 (system operations). Some auditors are starting to ask about CC2.3 (communications about responsibilities) for AI-specific roles.
What auditors want to see:
- A documented set of AI controls, with owners.
- Evidence that the controls run, not just that they exist.
- Logs that demonstrate operations and exceptions.
- Incident history with root cause and remediation.
The Akmon trust pipeline maps cleanly. SLO verify gives you the "controls run". Evidence verify gives you the "logs that demonstrate operations". Replay gives you the "incident history" framing.
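For the weekly review, a one-screen rollup of the session log is enough. A sketch that counts AGEF events by kind; the `ToolCall` and `PolicyDeny` kind names are illustrative, not the canonical AGEF vocabulary:

```python
import json
from collections import Counter

def summarize_session(jsonl_lines):
    """Roll a raw audit log into something a non-engineer can read."""
    kinds = Counter(json.loads(line)["kind"] for line in jsonl_lines)
    return {
        "events": sum(kinds.values()),
        "by_kind": dict(kinds),
        "policy_denials": kinds.get("PolicyDeny", 0),
    }

log = [json.dumps({"kind": k}) for k in
       ("SessionStart", "ToolCall", "ToolCall", "PolicyDeny", "SessionEnd")]
```

The "controls run" evidence an auditor wants is exactly this, produced on a schedule rather than reconstructed before the audit.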
## Tool qualification frameworks
This is where Akmon is most differentiated. Generic AI agents do not address tool qualification at all. Akmon does.
### DO-178C and DO-330 (aerospace and avionics)
DO-330 covers tool qualification. AI tools used in development have to either qualify or be classified as not affecting the certified output. Akmon's evidence chain is the artifact you need to make the case. The tool qualification kit (TQK) typically wants a deterministic procedure and recorded artifacts. Replay against recorded providers and tools is the closest thing in the AI space to a deterministic procedure.
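The record-and-replay idea fits in a few lines. Akmon's actual replay machinery is not shown here; this is a sketch of the pattern, keying recorded provider responses by a content hash of the request so a replay run is deterministic and never touches the live model:

```python
import hashlib
import json

class RecordingProvider:
    """Wrap a live provider call, taping each (request -> response)
    pair so a later replay returns identical responses offline."""

    def __init__(self, live_call):
        self.live_call = live_call
        self.tape = {}

    def _key(self, request):
        canonical = json.dumps(request, sort_keys=True)
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

    def call(self, request):
        key = self._key(request)
        if key not in self.tape:
            self.tape[key] = self.live_call(request)
        return self.tape[key]

    def replayer(self):
        tape = dict(self.tape)
        def replay(request):
            # A missing key means the replayed run diverged from the tape.
            return tape[self._key(request)]
        return replay
```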
### IEC 62304 (medical device software)
IEC 62304 cares about the software life cycle. AI assistance in development is part of that life cycle. The evidence Akmon produces fits the V&V records expected at most safety classifications. The redaction flow is critical for protected health information.
### ISO 26262 (automotive)
For ASIL-rated software, traceability is mandatory. AGEF's content-addressed events plus replay give you a defensible answer when an auditor asks where a particular line came from. The spec workflow (akmon spec) is the right entry point for high-ASIL changes.
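Content addressing is what makes the traceability answer defensible: the event's identifier is derived from its bytes, so the ID cannot stay valid while the content is altered. AGEF's exact canonicalization is not specified in this post; the common pattern hashes a canonical JSON encoding:

```python
import hashlib
import json

def event_id(event):
    """Content address: sha256 over canonical JSON, so the same event
    always gets the same ID and any mutation changes it."""
    canonical = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()

edit = {"kind": "FileEdit", "path": "src/brake.c", "line": 42}
```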
### CMMC (defense)
CMMC level 2 and above care about access control and audit. Akmon's policy profiles, the explicit deny posture in prod, and the audit chain map to several practices in AC, AU, and CM domains. Local-first execution (Ollama or your hosted endpoint inside a controlled environment) keeps controlled unclassified information inside the boundary.
## OWASP Top 10 for Agentic Applications
Published December 2025. The list is technical, not regulatory, but it has become the shared vocabulary for failure modes. If you have not mapped your agent's risks to the list, do it.
The most relevant items for an AI coding agent:
- LLM01 prompt injection. Tool inputs that contain hidden instructions. Mitigated by policy at the tool boundary, the `prod` profile, and constrained `web_fetch` allow lists.
- LLM03 sensitive information disclosure. Mitigated by `akmon redact` for outputs and by careful provider choice.
- LLM06 excessive agency. Mitigated by the `prod` policy profile and by team-specific packs that lock the tool surface.
- LLM08 vector and embedding weaknesses. Where your agent uses retrieval, the events emit `RetrievalCall` records, so you can audit what was retrieved.
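Auditing retrieval then becomes a filter over the session log. A sketch, assuming each `RetrievalCall` record carries a `source` field (the field name is an assumption, not the AGEF schema):

```python
import json

def retrieved_sources(jsonl_lines):
    """List every source a RetrievalCall pulled into the agent's context."""
    sources = []
    for line in jsonl_lines:
        event = json.loads(line)
        if event.get("kind") == "RetrievalCall":
            sources.append(event.get("source"))
    return sources

log = [
    json.dumps({"kind": "ToolCall", "tool": "web_fetch"}),
    json.dumps({"kind": "RetrievalCall", "source": "docs/internal-api.md"}),
    json.dumps({"kind": "RetrievalCall", "source": "https://example.com/spec"}),
]
```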
A control map document lives in the docs. The mapping is concrete: each control points at a command or a configuration knob.
## A short, copyable checklist
If you have one afternoon:
- Make an inventory. List your agents, owners, and risk levels in a one page document.
- Stand up Akmon in your repo. Choose the `staging` profile. Add one organization pack.
- Run a session, then run the trust pipeline. Confirm the three exit codes are `0`.
- Map three controls each to ISO 42001 A.7, EU AI Act Article 12, and SOC 2 CC7. Use the AGEF event kinds as evidence.
- Set a retention policy. Pick a number, document it, automate the lifecycle.
- Schedule a weekly evidence review. One person, one hour, one summary.
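The retention step above is a small piece of automation. A sketch that selects audit files past the window, given `(path, mtime)` pairs; whether you archive or delete them is the policy decision you documented:

```python
def past_retention(audit_files, retention_days, now):
    """Return the paths whose modification time is past the retention
    window -- the candidates for archival or deletion."""
    cutoff = now - retention_days * 86400
    return [path for path, mtime in audit_files if mtime < cutoff]

NOW = 1_700_000_000  # a fixed "now" keeps the example deterministic
files = [
    ("s-old.jsonl", NOW - 400 * 86400),
    ("s-new.jsonl", NOW - 10 * 86400),
]
```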
If you have a quarter:
- Cover all repos with the same Akmon binary and policy framework.
- Build a control map that ties each AGEF event kind and each command to a control across frameworks.
- Add `slo trend` to detect regressions across recent sessions.
- Add a customer-facing surface (a small page in your help center) that explains what your AI coding agent records.
- Walk your auditor through a sample session. The first one will tell you what is missing.
## What this gets you
What you get: an audit trail that survives a third-party review, a faster path through customer security questionnaires, and a much shorter incident loop.
What you do not get: a guarantee that the model behaves. The model is the model. The job of governance is to make the consequences of misbehavior bounded, observable, and provable.
If you want a place to start, install Akmon and run the trust pipeline on one session. The repo is at github.com/radotsvetkov/akmon. The format is at github.com/radotsvetkov/agef. The site is at radotsvetkov.github.io/akmon.
The next article in this series goes deep on the redaction workflow, which is the part of the kit you reach for the day before an external review.