august 2 is closer than you think: what the eu ai act actually requires from your agents
most compliance content on the EU AI Act reads like it was written by the same legal team that drafted the regulation. here is what it actually means if you are a developer shipping AI systems in the EU.
the hard deadline is august 2, 2026. if your system touches any of the Annex III categories like credit scoring, employment screening, education access, or critical infrastructure management, you are classified as high-risk. that is not a gray area. the law is explicit. conformity assessment, CE marking, and EU database registration are not optional.
the part most dev teams are missing is that "technical documentation" in the act is not a readme. it means documented architecture, training data provenance, model cards, human oversight procedures, and audit logs that a notified body can inspect. orrick's 6-step breakdown is the clearest public summary of what that looks like in practice: https://www.orrick.com/en/Insights/2025/11/The-EU-AI-Act-6-Steps-to-Take-Before-2-August-2026
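to make that concrete, here is a minimal sketch of what "documentation as an artifact" could look like in code. the field names are my own illustration of the categories listed above, not the Act's Annex IV wording, and `TechnicalDocs` is a hypothetical helper, not a real library:

```python
from dataclasses import dataclass

@dataclass
class TechnicalDocs:
    """Hypothetical checklist of the documentation surfaces named above."""
    architecture_overview: str            # system design, components, data flows
    training_data_provenance: list[str]   # sources, licenses, collection dates
    model_cards: list[str]                # per-model: intended use, limits, metrics
    oversight_procedures: str             # who can intervene, how, and when
    audit_log_location: str               # where decision records live

    def missing(self) -> list[str]:
        # any empty field is a gap a notified body would ask about
        return [name for name, value in vars(self).items() if not value]

docs = TechnicalDocs("", [], [], "", "s3://audit-logs/")
print(docs.missing())
# → ['architecture_overview', 'training_data_provenance', 'model_cards', 'oversight_procedures']
```

the point is not the dataclass, it is that "do we have this documented" becomes a question your CI can answer instead of a question legal asks you in july.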
so what does an audit actually look like?
i've been running AI audits for early-stage teams through BizSuite. the $997 version covers four surfaces: model behavior documentation, data handling alignment across GDPR and the Act, human oversight gaps, and logging and traceability for Annex III compliance. turnaround is 48 hours. it is not a notified body certification (that costs tens of thousands of euros and takes months), but it tells you what you would fail on before you pay that bill.
the teams in the worst shape right now are the ones who shipped an agent product in 2024 or 2025 and never documented training decisions, prompt injection vectors, or override mechanisms. those decisions are baked in. reconstructing the audit trail retroactively is painful. the ones doing it right started the documentation process before the model was in production.
three things you can do before august 2 regardless of budget.
first, classify your system honestly. use the Act's Annex III list and the AI Office's self-assessment tool. if you are high-risk, you need to know now, not in july.
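the classification check can be as simple as a set intersection. the category names below are shorthand for the Annex III areas mentioned earlier, not the Act's legal wording, so treat this as a first-pass screen, not legal advice:

```python
# illustrative shorthand for the Annex III areas named in this post;
# the real classification depends on the Act's full Annex III text
ANNEX_III_AREAS = {
    "credit_scoring",
    "employment_screening",
    "education_access",
    "critical_infrastructure",
}

def is_high_risk(use_cases: set[str]) -> bool:
    """Flag the system if any declared use case lands in an Annex III area."""
    return bool(use_cases & ANNEX_III_AREAS)

print(is_high_risk({"chat_support", "credit_scoring"}))  # → True
print(is_high_risk({"chat_support"}))                    # → False
```

the hard part is being honest in the `use_cases` set: if your agent ranks job applicants as a side effect, that belongs in the set.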
second, get your logging in order. every inference path that affects a person's life (a loan, a job, a benefit, an enrollment) needs a traceable decision record. if your current stack cannot produce that, it is not a compliance problem, it is an architecture problem.
third, document your human oversight mechanism. the act requires it to be real, not a checkbox. if your human in the loop is an alert nobody checks, a notified body will catch it.
if you want a structured look at where you stand, the BizSuite AI audit starts here: https://getbizsuite.com/ai-audit