<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alexander Paris</title>
    <description>The latest articles on DEV Community by Alexander Paris (@supra-dev).</description>
    <link>https://dev.to/supra-dev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3837306%2Fbebe1628-af85-4bac-8052-7b48ddf67084.jpg</url>
      <title>DEV Community: Alexander Paris</title>
      <link>https://dev.to/supra-dev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/supra-dev"/>
    <language>en</language>
    <item>
      <title>EU AI Act + LangChain: What You Actually Need to Build Before August 2026</title>
      <dc:creator>Alexander Paris</dc:creator>
      <pubDate>Sun, 29 Mar 2026 06:43:39 +0000</pubDate>
      <link>https://dev.to/supra-dev/eu-ai-act-langchain-what-you-actually-need-to-build-before-august-2026-pah</link>
      <guid>https://dev.to/supra-dev/eu-ai-act-langchain-what-you-actually-need-to-build-before-august-2026-pah</guid>
      <description>&lt;p&gt;The EU AI Act high-risk enforcement deadline is August 2, 2026. That is 126 days from today.&lt;/p&gt;

&lt;p&gt;If you're &lt;strong&gt;running AI agents in production&lt;/strong&gt; — especially on LangChain, CrewAI, or any tool-calling framework — and you're serving EU customers or operating in the EU, you are likely subject to obligations you probably haven't operationalized yet.&lt;/p&gt;

&lt;p&gt;This is not a legal article. It's a technical one. Here's what Articles 9, 13, and 14 actually require you to build.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The three articles that matter for agent developers&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Article 9&lt;/strong&gt; — Risk Management System&lt;br&gt;
Not a document. A running system that continuously identifies, estimates, and evaluates risks across the lifecycle of the AI system. For agent developers, this means: logging every tool call, every decision, every output — in a way you can query after the fact.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Article 13&lt;/strong&gt; — Transparency and provision of information&lt;br&gt;
Every interaction must be traceable. The system must be able to explain what happened, when, and why. For LangChain agents, this means structured metadata per tool invocation — not just application logs.&lt;/p&gt;
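&lt;p&gt;As an illustrative sketch (not SupraWall's internals — every field name here is an assumption), the kind of queryable, structured per-invocation record Articles 9 and 13 point toward can be as small as a dataclass appended to a JSON-lines file:&lt;/p&gt;

```python
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ToolCallRecord:
    """One structured, queryable record per tool invocation."""
    call_id: str
    agent_id: str
    tool_name: str
    inputs: dict
    output_summary: str
    decided_at: str  # ISO 8601, UTC

def record_tool_call(agent_id, tool_name, inputs, output_summary):
    rec = ToolCallRecord(
        call_id=str(uuid.uuid4()),
        agent_id=agent_id,
        tool_name=tool_name,
        inputs=inputs,
        output_summary=output_summary,
        decided_at=datetime.now(timezone.utc).isoformat(),
    )
    # One JSON object per line, so the trail stays greppable and queryable
    with open("tool_calls.jsonl", "a") as f:
        f.write(json.dumps(asdict(rec)) + "\n")
    return rec
```

&lt;p&gt;The point is structure at the invocation level: you can answer "which agent called which tool, with which inputs, when" without parsing free-form application logs.&lt;/p&gt;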

&lt;p&gt;&lt;strong&gt;Article 14&lt;/strong&gt; — Human oversight&lt;br&gt;
High-risk AI systems must be designed so a human can intervene, override, and halt them. For agents, this means you need REQUIRE_APPROVAL policies on sensitive tool categories — not just after-the-fact monitoring.&lt;/p&gt;
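&lt;p&gt;A minimal sketch of that policy shape (&lt;code&gt;SENSITIVE_CATEGORIES&lt;/code&gt; and &lt;code&gt;guarded_call&lt;/code&gt; are hypothetical names, not a SupraWall API): sensitive tool categories refuse to run until a human has signed off.&lt;/p&gt;

```python
# Hypothetical approval gate -- not a real API, just the shape of the idea
SENSITIVE_CATEGORIES = {"email", "payments", "database_write"}

class ApprovalRequired(Exception):
    """Raised before execution so a human can intervene (Article 14)."""

def guarded_call(tool_name, category, fn, *args, approved=False, **kwargs):
    # Halt sensitive calls up front; after-the-fact monitoring is not oversight
    if category in SENSITIVE_CATEGORIES and not approved:
        raise ApprovalRequired(f"{tool_name} needs human sign-off before it runs")
    return fn(*args, **kwargs)
```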

&lt;p&gt;&lt;strong&gt;What most LangChain deployments are missing right now&lt;/strong&gt;&lt;br&gt;
Most production LangChain setups have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Application-level logging (what the user sent, what the LLM returned)&lt;/li&gt;
&lt;li&gt;Some prompt-level filtering&lt;/li&gt;
&lt;li&gt;Maybe a token budget set in the LLM client&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What they're missing:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tool call-level audit trail&lt;/strong&gt; — a tamper-evident, append-only record of every tool invocation with inputs, outputs, timestamp, and agent context. Not just logs — logs can be edited. You need RSA-signed chains.&lt;/p&gt;
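&lt;p&gt;Hash chaining is the core of that tamper evidence. Here's a stdlib sketch using plain SHA-256 links — a production system would additionally sign each link (e.g. with an RSA key) so the chain can't simply be rebuilt by whoever edits the log:&lt;/p&gt;

```python
import hashlib
import json

def append_entry(chain, entry):
    """Append-only: each link binds its entry to the hash of the previous link."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    payload = json.dumps(entry, sort_keys=True)
    link_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"entry": entry, "prev": prev_hash, "hash": link_hash})
    return chain

def verify_chain(chain):
    """Recompute every link; editing any past entry breaks all later hashes."""
    prev_hash = "genesis"
    for link in chain:
        payload = json.dumps(link["entry"], sort_keys=True)
        if link["hash"] != hashlib.sha256((prev_hash + payload).encode()).hexdigest():
            return False
        prev_hash = link["hash"]
    return True
```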

&lt;p&gt;&lt;strong&gt;Policy enforcement at the execution boundary&lt;/strong&gt; — before the tool runs, not after. GDPR, DORA, and the AI Act all care about what actually executed, not what you intended.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Credential isolation&lt;/strong&gt; — agents that see plaintext API keys in their context are a live credential theft vector. JIT injection means the agent requests a capability; it never receives the underlying secret.&lt;/p&gt;
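&lt;p&gt;The pattern, roughly (class and method names here are made up for illustration): secrets live in a vault outside the agent's context, the agent holds only an opaque handle, and the secret is resolved at the execution boundary:&lt;/p&gt;

```python
import os
import uuid

class CapabilityVault:
    """Illustrative JIT-injection sketch; the agent never sees the raw secret."""
    def __init__(self):
        self._secrets = {}  # handle -> secret, held outside the agent's context

    def grant(self, env_var):
        # Hand the agent an opaque capability handle, not the key itself
        handle = "cap_" + uuid.uuid4().hex
        self._secrets[handle] = os.environ[env_var]
        return handle

    def call_with(self, handle, fn):
        # Resolve the secret only at the execution boundary, just in time
        return fn(self._secrets[handle])
```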

&lt;p&gt;&lt;strong&gt;Fail-closed defaults&lt;/strong&gt; — if your compliance check times out, what happens? Most middleware silently degrades to "allow." That's worse than no check, because you have a false paper trail.&lt;/p&gt;
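&lt;p&gt;Fail-closed is easy to state and easy to get wrong. A sketch of the correct default (the &lt;code&gt;is_allowed&lt;/code&gt; helper is hypothetical): run the check against a deadline, and treat a timeout or any engine failure as a denial, never as an allow:&lt;/p&gt;

```python
import concurrent.futures

def is_allowed(check, tool_name, args, timeout_s=0.05):
    """check() is your (possibly remote) policy engine; any failure means deny."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(check, tool_name, args)
    try:
        return bool(future.result(timeout=timeout_s))
    except Exception:
        # Timeout, network error, engine crash: deny. Silently allowing here
        # is what produces the false paper trail.
        return False
    finally:
        pool.shutdown(wait=False)  # don't block the caller on a hung check
```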

&lt;p&gt;&lt;strong&gt;A concrete implementation pattern&lt;/strong&gt;&lt;br&gt;
Here's the minimal compliant pattern for a LangChain agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;suprawall&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;secure_agent&lt;/span&gt;

&lt;span class="c1"&gt;# Your existing agent — unchanged
&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;create_react_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# One line. Every tool call is now policy-checked,
# vault-protected, and audit-logged.
&lt;/span&gt;&lt;span class="n"&gt;secured_agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;secure_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ag_your_key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What this gives you:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Every tool call intercepted before execution&lt;/li&gt;
&lt;li&gt;Policy engine runs in &amp;lt;2ms (deterministic, not probabilistic)&lt;/li&gt;
&lt;li&gt;Credentials injected at runtime — agent sees capability, not secret&lt;/li&gt;
&lt;li&gt;RSA-signed audit trail written append-only per interaction&lt;/li&gt;
&lt;li&gt;Hard budget cap with circuit breaker — no infinite loops&lt;/li&gt;
&lt;/ul&gt;
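&lt;p&gt;That last point is worth making concrete. A budget circuit breaker is a few lines of state (this sketch is illustrative, not SupraWall's implementation): once cumulative spend crosses the cap, the breaker opens and every further call fails fast:&lt;/p&gt;

```python
class BudgetExceeded(Exception):
    pass

class CircuitBreaker:
    """Hard cap on cumulative tool-call spend; trips permanently once crossed."""
    def __init__(self, max_cost):
        self.max_cost = max_cost
        self.spent = 0.0
        self.tripped = False

    def charge(self, cost):
        if self.tripped:
            raise BudgetExceeded("breaker is open; no further calls")
        self.spent += cost
        if self.spent > self.max_cost:
            self.tripped = True  # a looping agent stops here, every time
            raise BudgetExceeded(f"spent {self.spent:.2f} over cap {self.max_cost:.2f}")
```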

&lt;p&gt;&lt;strong&gt;The deadline is real this time&lt;/strong&gt;&lt;br&gt;
GDPR took years before meaningful enforcement. The AI Act is different: the AI Office is actively staffing, the prohibited practices have been enforceable since February 2025, and high-risk obligations kick in on a fixed date with specific technical documentation requirements.&lt;/p&gt;

&lt;p&gt;126 days is enough time to instrument properly. It is not enough time to build the audit infrastructure from scratch while also shipping product.&lt;/p&gt;

&lt;p&gt;→ SupraWall is open-source (Apache 2.0). &lt;br&gt;
Early beta access at supra-wall.com or &lt;a href="https://github.com/wiserautomation/SupraWall" rel="noopener noreferrer"&gt;https://github.com/wiserautomation/SupraWall&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>langchain</category>
      <category>webdev</category>
      <category>agents</category>
    </item>
    <item>
      <title>How to Make Your LangChain Agent EU AI Act Compliant in 5 Minutes</title>
      <dc:creator>Alexander Paris</dc:creator>
      <pubDate>Sat, 21 Mar 2026 16:19:29 +0000</pubDate>
      <link>https://dev.to/supra-dev/how-to-make-your-langchain-agent-eu-ai-act-compliant-in-5-minutes-305c</link>
      <guid>https://dev.to/supra-dev/how-to-make-your-langchain-agent-eu-ai-act-compliant-in-5-minutes-305c</guid>
      <description>&lt;p&gt;The EU AI Act requires human oversight (Article 14), audit logging (Article 12), and risk management (Article 9) for production AI agents. Most LangChain deployments have none of these. If your agent is touching customer data, sending emails, executing financial transactions, or interacting with any external system, you are likely already non-compliant. Fines can reach €35 million or 7% of global annual turnover for the most serious violations. The good news: you can add all three compliance pillars in under 5 minutes with a single middleware integration. Here's exactly how.&lt;/p&gt;




&lt;h2&gt;
  
  
  The 3-Line Problem
&lt;/h2&gt;

&lt;p&gt;Most LangChain agents in production look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.agents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;AgentExecutor&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;create_openai_functions_agent&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;

&lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;create_openai_functions_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;executor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;AgentExecutor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;executor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;input&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Send a follow-up email to all leads from last quarter&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Clean, functional, and dangerously non-compliant. Here's what's missing:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No audit trail.&lt;/strong&gt; You have no record of what the agent decided, which tools it called, what data it accessed, or when. Article 12 of the EU AI Act mandates automatic logging of all events necessary to trace the AI system's decisions throughout its lifecycle. A plain LangChain executor writes nothing to a compliance-grade log.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No human oversight.&lt;/strong&gt; Article 14 requires that high-risk AI systems allow human operators to monitor and intervene in real time. If your agent decides to bulk-email 10,000 leads at 2 AM, nothing stops it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No policy engine.&lt;/strong&gt; Article 9 demands a risk management system that identifies, analyzes, and mitigates risks specific to your deployment. There's no mechanism here to evaluate whether a particular tool call is permissible before it executes.&lt;/p&gt;

&lt;p&gt;This is the three-line problem: three lines of executor code, three articles of the EU AI Act violated.&lt;/p&gt;




&lt;h2&gt;
  
  
  The 5-Minute Fix
&lt;/h2&gt;

&lt;p&gt;Install the integration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;langchain-suprawall
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now wrap your executor:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.agents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;AgentExecutor&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;create_openai_functions_agent&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_suprawall&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;SuprawallMiddleware&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;RiskLevel&lt;/span&gt;

&lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;create_openai_functions_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;executor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;AgentExecutor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;middleware&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="nc"&gt;SuprawallMiddleware&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SUPRAWALL_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="n"&gt;risk_level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;RiskLevel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HIGH&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;require_human_oversight&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="c1"&gt;# Article 14
&lt;/span&gt;            &lt;span class="n"&gt;audit_retention_days&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;730&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;       &lt;span class="c1"&gt;# Article 12
&lt;/span&gt;        &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;executor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;input&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Send a follow-up email to all leads from last quarter&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's the entire change. Let's break down what each parameter actually does.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;api_key&lt;/code&gt;&lt;/strong&gt; connects to the SupraWall compliance backend, which is where your audit logs are stored, your policies are evaluated, and your human-in-the-loop notifications are dispatched.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;risk_level=RiskLevel.HIGH&lt;/code&gt;&lt;/strong&gt; tells SupraWall how aggressively to apply its policy engine. &lt;code&gt;HIGH&lt;/code&gt; maps directly to the EU AI Act's high-risk classification (Annex III), which applies to agents making decisions in HR, credit, critical infrastructure, law-enforcement-adjacent systems, and customer-facing automation. At this level, every tool call is evaluated against your policy ruleset before execution. If you're unsure which level applies to you, start with &lt;code&gt;HIGH&lt;/code&gt; — you can always downgrade after a legal review.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;require_human_oversight=True&lt;/code&gt;&lt;/strong&gt; activates Article 14 compliance. Any tool call classified as high-risk by SupraWall's policy engine will pause execution and dispatch a real-time notification to your designated compliance officer (via Slack, email, or webhook — configurable in your SupraWall dashboard). The agent cannot proceed until the oversight action is resolved.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;audit_retention_days=730&lt;/code&gt;&lt;/strong&gt; sets log retention to two years (730 days), which aligns with the EU AI Act's post-market monitoring requirements under Article 72 and is a conservative baseline for high-risk systems. Every tool call, decision, approval, denial, and error is stored with a tamper-evident timestamp and cryptographic chain of custody.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Happens When a High-Risk Tool Is Called
&lt;/h2&gt;

&lt;p&gt;Let's trace exactly what happens when your agent tries to call &lt;code&gt;send_email&lt;/code&gt; with the above setup.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The agent decides to call &lt;code&gt;send_email(to="leads@...", subject="Follow-up", body="...")&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;SupraWall middleware intercepts the call &lt;em&gt;before&lt;/em&gt; execution. The tool does not run yet.&lt;/li&gt;
&lt;li&gt;SupraWall evaluates the call against your policy ruleset. &lt;code&gt;send_email&lt;/code&gt; is classified as a high-risk tool (external communication, potential PII exposure).&lt;/li&gt;
&lt;li&gt;Because &lt;code&gt;require_human_oversight=True&lt;/code&gt;, SupraWall dispatches a Slack message to your compliance officer: &lt;em&gt;"Agent requested to call &lt;code&gt;send_email&lt;/code&gt; to 847 recipients. Approve or deny?"&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;The compliance officer clicks Approve or Deny in Slack. SupraWall logs: the action taken, the timestamp (ISO 8601, UTC), and the approver's identity (pulled from their Slack/SSO profile).&lt;/li&gt;
&lt;li&gt;If approved, the tool executes normally. If denied, the agent receives a structured error and can respond accordingly.&lt;/li&gt;
&lt;/ol&gt;
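&lt;p&gt;In code terms, the flow above reduces to a small wrapper (all names here are illustrative, not the middleware's real API): classify the tool, block on the human decision, and on denial hand the agent a structured error rather than an exception:&lt;/p&gt;

```python
HIGH_RISK_TOOLS = {"send_email"}

def run_tool(tool_name, fn, args, get_decision):
    """get_decision blocks until the Slack/email/webhook approval resolves."""
    if tool_name in HIGH_RISK_TOOLS:
        decision = get_decision(tool_name, args)   # execution pauses here
        if decision != "approve":
            # Deny branch: structured error the agent can respond to
            return {"status": "denied", "tool": tool_name,
                    "reason": "human oversight denied this call"}
    return {"status": "ok", "result": fn(**args)}  # approve branch
```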

&lt;p&gt;Compare this to LangChain's built-in &lt;code&gt;HumanApprovalCallbackHandler&lt;/code&gt;. That approach pauses the agent and prints a prompt to your terminal — whoever is watching the terminal must type &lt;code&gt;y&lt;/code&gt; or &lt;code&gt;n&lt;/code&gt;. There's no log of who approved, no timestamp beyond your shell history, no integration with your existing compliance tooling, and no way to reconstruct the audit trail later. That's not Article 14 compliance; that's a debug flag.&lt;/p&gt;

&lt;p&gt;SupraWall turns human oversight from a developer convenience into a compliance-grade system of record.&lt;/p&gt;




&lt;h2&gt;
  
  
  Generating an Audit Report
&lt;/h2&gt;

&lt;p&gt;When your auditor (internal or external) asks for evidence of compliance, this is the two-line answer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_suprawall&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;AuditReporter&lt;/span&gt;

&lt;span class="n"&gt;reporter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;AuditReporter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SUPRAWALL_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="n"&gt;report&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;reporter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;start_date&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2025-01-01&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;end_date&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2025-03-31&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pdf&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;report&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;save&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;q1_audit.pdf&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The generated PDF includes: a full chronological log of every tool call made by your agent, the policy evaluation result for each call, the human oversight decisions (with approver identities and timestamps), any policy violations or near-misses flagged by the risk engine, and a summary attestation table mapping your deployment to the specific EU AI Act articles it satisfies.&lt;/p&gt;

&lt;p&gt;This is what you hand to an auditor. It answers the three questions every EU AI Act auditor will ask: &lt;em&gt;What did the system do? Who approved it? Can you prove it?&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;langchain-suprawall&lt;/code&gt; package is available now on PyPI (&lt;code&gt;pip install langchain-suprawall&lt;/code&gt;). The full tutorial — including how to configure your policy ruleset, set up Slack/webhook notifications, and handle multi-agent deployments where multiple executors share a single compliance context — is at &lt;a href="https://suprawall.ai/blog/eu-ai-act-langchain" rel="noopener noreferrer"&gt;suprawall.ai/blog/eu-ai-act-langchain&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The EU AI Act's high-risk provisions have enforcement teeth. The compliance window is narrower than most teams realize. Five minutes and one middleware import is a reasonable place to start.&lt;/p&gt;

</description>
      <category>python</category>
      <category>langchain</category>
      <category>ai</category>
      <category>compliance</category>
    </item>
  </channel>
</rss>
