The EU AI Act is now in force and rolling out in phases; several obligations arrive before full application in 2026–2027 (see the EU’s implementation timeline and this key-dates overview). The European Commission’s AI Office will oversee providers — with a special focus on general-purpose AI (GPAI) — and may request information or evaluate models. For GPAI in particular, the Commission published obligations fact pages and a voluntary GPAI Code of Practice to reduce ambiguity while Article 53/55 duties phase in.
Below is the working checklist we use at Pynest to make AI systems shippable across jurisdictions without turning every release into a legal fire drill.
Start with scope: are you a provider or a deployer, and is it GPAI?
The Act differentiates between providers (who place on the market or put into service) and deployers (who use AI systems). If you provide or fine-tune GPAI models, you face specific duties such as technical documentation, a copyright policy, and a summary of training content (see the Commission’s GPAI obligations; also note the GPAI guidelines). For “systemic-risk” GPAI (the most capable models), add risk assessment/mitigation, incident reporting, and robust cybersecurity.
Pynest practice. We maintain a live registry that tags each model integration by role (provider/deployer), category (GPAI vs. task-specific), and jurisdiction exposure. That single view drives which obligations and controls apply.
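For illustration, here is a minimal sketch of what such a registry entry and obligation mapping could look like. `ModelEntry`, `obligations_for`, and the duty strings are hypothetical names, not our production schema:

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"

class Category(Enum):
    GPAI = "gpai"
    TASK_SPECIFIC = "task-specific"

@dataclass
class ModelEntry:
    name: str
    role: Role
    category: Category
    jurisdictions: set[str]  # e.g. {"EU", "US"}

def obligations_for(entry: ModelEntry) -> list[str]:
    """Map a registry entry to the obligation set it triggers."""
    duties: list[str] = []
    if entry.role is Role.PROVIDER and entry.category is Category.GPAI:
        # Article 53-style provider duties
        duties += ["technical documentation", "copyright policy",
                   "training-content summary"]
    if "EU" in entry.jurisdictions:
        duties.append("transparency disclosures")
    return duties
```

Because the mapping is a function of the entry, re-tagging a model (say, deployer to provider) immediately surfaces the new duty set.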
Inventory first: models, datasets, prompts, and RAG sources
The fastest path to non-compliance is not knowing what you run. We treat inventory as a product:
- Model catalog. Version, provider, fine-tuning status, eval scores, intended use, and contact owner.
- Data contracts. One per RAG source: lineage, freshness, completeness rules, retention, and allowed uses; we block flows when a contract fails ("red card" UX).
- Prompt & tool registry. Approved prompts, tool call scopes, and high-risk actions requiring manual checks.
- Decision logs. “Who-what-when-why” for changes, refusals, and overrides.
This aligns with the NIST AI Risk Management Framework (Govern/Map/Measure/Manage) and its implementation guidance (AI RMF 1.0).
Pynest practice. Our HR knowledge assistant runs only on sources with signed data contracts; when freshness or lineage breaks, the UI blocks answers and routes a task to the data owner. That reduced “quiet” content drift and rework in audits.
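A minimal sketch of that gating logic, with hypothetical names (`DataContract`, `check_contract`) standing in for our actual contract service:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DataContract:
    source: str
    owner: str
    max_staleness: timedelta
    last_refreshed: datetime
    lineage_verified: bool

class ContractViolation(Exception):
    """Raised before retrieval runs, so the UI can show the 'red card'
    and route a task to the data owner."""

def check_contract(contract: DataContract, now: datetime) -> None:
    if not contract.lineage_verified:
        raise ContractViolation(
            f"{contract.source}: lineage unverified; notify {contract.owner}")
    if now - contract.last_refreshed > contract.max_staleness:
        raise ContractViolation(
            f"{contract.source}: stale beyond {contract.max_staleness}; "
            f"notify {contract.owner}")
```

The point of failing closed here is that a blocked answer is auditable; a silently stale answer is not.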
Build the technical documentation once — and keep it living
Article 53 expects providers of GPAI to “draw up technical documentation” and share what downstream users need without disclosing IP (see Article 53 explainer). The Commission’s GPAI Code of Practice ships a model-documentation form (Transparency chapter) you can adopt now.
Pynest practice. We maintain a single documentation bundle per model integration:
- Model card (capabilities, limits, evals, safety scope).
- Data sheet / RAG card (sources, contracts, copyright policy, summary of training content when applicable).
- Safety case (refusal policy, escalation paths, abuse channels).
- Operational runbook (SLAs, rollback, change approvals, incident playbooks).
Because legal, security, and product all review the same bundle, updates don't fork across teams.
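One way to represent the bundle so tooling can flag lapsed sign-offs; the `DocBundle` shape below is an illustrative sketch, not our real schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DocBundle:
    """One bundle per model integration; every team reviews the same object."""
    model_card: str    # path to capabilities/limits/evals doc
    rag_card: str      # sources, contracts, copyright policy
    safety_case: str   # refusal policy, escalation paths
    runbook: str       # SLAs, rollback, incident playbooks
    last_reviewed: dict[str, date] = field(default_factory=dict)  # team -> date

def stale_reviews(bundle: DocBundle, today: date,
                  max_age_days: int = 90) -> list[str]:
    """Return the teams whose sign-off has lapsed."""
    return [team for team, reviewed in bundle.last_reviewed.items()
            if (today - reviewed).days > max_age_days]
```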
Treat copyright & training-data transparency as production requirements
The Act expects a copyright policy and — for GPAI — a summary of training data. The GPAI Code of Practice provides templates; the Commission’s fact page clarifies Article 53 transparency duties. Downstream deployers benefit too: clearer provenance reduces takedown risk and support burden.
Pynest practice. For content-generating assistants, we embed citation hints and disallow outputs that cannot be traced to permitted sources. Legal reviews dropped from ad-hoc to scheduled because the bundle above makes copyright posture explicit.
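A simplified sketch of such a provenance guard, assuming citations arrive as source IDs (the names here are hypothetical):

```python
def enforce_provenance(answer: str, citations: list[str],
                       permitted_sources: set[str]) -> str:
    """Block generated content that cannot be traced to permitted sources."""
    if not citations:
        return "Refused: no citation to a permitted source."
    untraced = [c for c in citations if c not in permitted_sources]
    if untraced:
        return f"Refused: sources outside copyright policy: {untraced}"
    return answer
```

The guard runs after generation but before display, so every shipped answer carries a provenance trail by construction.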
Log decisions, not just predictions: auditability by design
Transparency obligations extend beyond user disclosure: you must show that humans can oversee and trace system decisions (see transparency overview). We log decision context (inputs, retrieved sources, tools called), policy gates triggered, human approvals, and reasons for refusal. This both satisfies audits and shortens incident investigations.
Pynest practice. In our sales-engineering “Answer Desk,” every security answer includes linked sources and a policy-decision record. During RFP reviews, that trail removes back-and-forth with compliance and preserves speed.
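A minimal sketch of what a policy-decision record might contain, using illustrative field names rather than our production schema:

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable record per answer: who, what, when, why."""
    request_id: str
    inputs_digest: str                  # hash of the prompt, not raw PII
    retrieved_sources: list[str]
    tools_called: list[str]
    policy_gates_triggered: list[str]
    human_approver: str | None = None
    refusal_reason: str | None = None
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(record: DecisionRecord) -> None:
    # Append-only JSON lines; substitute your audit store of choice.
    with open("decision_log.jsonl", "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```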
Control access and risk like you would for money movement
GPAI oversight is tightening; the AI Office can evaluate models and request information. Treat tool calls and data access as “financial transactions”:
- Short-lived identities for agents; least privilege scopes; JIT elevation for high-risk tasks.
- Session recording/logging for destructive or sensitive actions.
- Change previews & rollbacks for batch operations.
- Jurisdictional separation: for the EU and Middle East, we keep regional vector indexes and storage.
Pynest practice. When our HR assistant touches salary or PII, access auto-expires, the session is recorded, and a human approval step is enforced. That design cut legal review time and reduced rework across EU audits.
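A simplified sketch of that pattern: short-lived credentials plus an approval gate on high-risk scopes. Names, TTLs, and scope labels are illustrative:

```python
import time
import uuid
from typing import Callable

SESSION_TTL_SECONDS = 300            # access auto-expires
HIGH_RISK_SCOPES = {"salary", "pii"}

def issue_agent_credential(scopes: set[str]) -> dict:
    """Short-lived, least-privilege credential for one agent session."""
    return {"token": uuid.uuid4().hex,
            "scopes": scopes,
            "expires_at": time.time() + SESSION_TTL_SECONDS}

def authorize(credential: dict, scope: str, human_approved: bool,
              recorder: Callable[[str], None]) -> bool:
    """Gate a tool call on expiry, scope, and human approval for high-risk data."""
    if time.time() > credential["expires_at"]:
        return False                      # credential lapsed; re-issue via JIT
    if scope not in credential["scopes"]:
        return False                      # least privilege: scope not granted
    if scope in HIGH_RISK_SCOPES:
        recorder(f"high-risk access: {scope}")  # session-recording hook
        return human_approved                   # enforced approval step
    return True
```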
Use a stage-gate cadence the CFO and CISO can support
We run GenAI initiatives on a 15/45/90 rhythm with explicit cost caps and quality thresholds:
- 15 days: one workflow, one metric, cost ceiling (tokens/infra), refusal logic hardened.
- 45 days: baseline vs. after, error costs captured, quality above threshold.
- 90 days: either integrate (meets hurdle rate) or shut down.
This mirrors the shift from pilots to production across the market; obligations for GPAI begin August 2, 2025, with broader enforcement ramping toward 2026–2027 (see GPAI guidelines and the EU timeline).
Pynest practice. Our support copilot moved to production after three cycles with a documented ROI; because logs, data contracts, and copyright policy were already in the bundle, compliance added no net delay.
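A minimal sketch of how a 15/45/90 gate check can be encoded so the decision is mechanical; thresholds and field names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class GateMetrics:
    day: int                  # 15, 45, or 90
    spend_usd: float          # tokens + infra to date
    cost_ceiling_usd: float
    quality_score: float      # e.g. eval pass rate, 0..1
    quality_threshold: float

def gate_decision(m: GateMetrics) -> str:
    """15/45/90 cadence: continue only while cost and quality both hold."""
    if m.spend_usd > m.cost_ceiling_usd:
        return "shut down: cost ceiling exceeded"
    if m.quality_score < m.quality_threshold:
        return "shut down: quality below threshold"
    return "integrate" if m.day >= 90 else "continue to next gate"
```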
When to use the GPAI Code of Practice
If you are a GPAI provider (or fine-tune GPAI) and want a lower-friction path to demonstrate compliance, the Commission's GPAI Code of Practice offers a voluntary route now, covering Transparency and Copyright for all GPAI, with an extra Safety & Security chapter for systemic-risk models. It won't replace your internal governance, but it standardizes what auditors and customers will ask for.
Pynest practice. We borrowed the Code’s documentation form for our internal bundle, so if we ever switch to “provider” posture on a model, our paperwork already speaks the regulator’s language.
What to brief the board on (in one slide)
- Scope & role: Which uses make us a provider (incl. GPAI) vs. a deployer?
- Obligations & timing: Which Article 53/55-like duties apply this quarter vs. 2026–2027?
- Controls in place: Inventory, data contracts, decision logging, copyright policy, jurisdiction separation.
- Stage-gates: 15/45/90 cadence with cost and quality thresholds.
- Assurance: Alignment with NIST AI RMF to keep language consistent across global audits.
About me
Here are a few of my recent features in major outlets:
- Inc.com: https://www.inc.com/john-brandon/how-to-break-up-with-bad-technology/91237809
- InformationWeek: https://www.informationweek.com/it-leadership/it-leadership-takes-on-agi
- CIO.com: https://www.cio.com/article/4033751/what-parts-of-erp-will-be-left-after-ai-takes-over.html
- CIO.com: https://www.cio.com/article/4064316/31-of-it-leaders-waste-half-their-cloud-spend.html
- CIO.com: https://www.cio.com/article/4059042/it-leaders-see-18-reduction-in-it-workforces-within-2-years.html
- CSOonline.com: https://www.csoonline.com/article/4062720/ai-coding-assistants-amplify-deeper-cybersecurity-risks.html
- The Epoch Times: https://www.theepochtimes.com/article/why-more-farmers-are-turning-to-ai-machines-5898960
- CMSWire: https://www.cmswire.com/digital-experience/what-sits-at-the-center-of-the-digital-experience-stack/