<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Act Navigator</title>
    <description>The latest articles on DEV Community by Act Navigator (@actnavigator).</description>
    <link>https://dev.to/actnavigator</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3870572%2F020f9c19-60b6-4715-9bbc-e03e90dc2a4b.png</url>
      <title>DEV Community: Act Navigator</title>
      <link>https://dev.to/actnavigator</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/actnavigator"/>
    <language>en</language>
    <item>
      <title>Trying to apply the EU AI Act to a real product (and why it’s harder than it looks)</title>
      <dc:creator>Act Navigator</dc:creator>
      <pubDate>Thu, 09 Apr 2026 21:24:10 +0000</pubDate>
      <link>https://dev.to/actnavigator/trying-to-apply-the-eu-ai-act-to-a-real-product-and-why-its-harder-than-it-looks-556n</link>
      <guid>https://dev.to/actnavigator/trying-to-apply-the-eu-ai-act-to-a-real-product-and-why-its-harder-than-it-looks-556n</guid>
      <description>&lt;p&gt;I didn’t expect to spend this much time thinking about the EU AI Act.&lt;/p&gt;

&lt;p&gt;It started with a pretty innocent question. We’re building a product that uses AI, and at some point I figured I should probably check if this whole regulation thing applies to us.&lt;/p&gt;

&lt;p&gt;I assumed I’d skim a couple of articles, maybe read a summary, and move on with my life.&lt;/p&gt;

&lt;p&gt;That didn’t happen.&lt;/p&gt;




&lt;p&gt;At first, everything looks very clean. You’ve got categories, definitions, risk levels… it almost feels reassuring. Like someone has already done the hard thinking for you.&lt;/p&gt;

&lt;p&gt;Then you try to apply it to your actual product.&lt;/p&gt;

&lt;p&gt;That’s where things start to wobble a bit.&lt;/p&gt;

&lt;p&gt;I found myself going back and forth on questions that &lt;em&gt;should&lt;/em&gt; be simple, but somehow aren’t:&lt;/p&gt;

&lt;p&gt;Are we even in scope here?&lt;br&gt;
What exactly counts as an “AI system” in what we’ve built?&lt;br&gt;
If we’re using third-party models, is that our problem or someone else’s?&lt;br&gt;
Are we supposed to be documenting things already, or is that a problem for future-us?&lt;/p&gt;

&lt;p&gt;The more I read, the less binary it all felt.&lt;/p&gt;




&lt;p&gt;A lot of the content out there is solid, to be fair. But it’s clearly written either by lawyers or for companies with actual compliance teams.&lt;/p&gt;

&lt;p&gt;Which makes sense. Just not particularly helpful when you’re a small team trying to ship product without accidentally becoming a case study in EU regulation.&lt;/p&gt;

&lt;p&gt;I wasn’t looking for a perfect interpretation of the law. I just wanted a rough answer to:&lt;/p&gt;

&lt;p&gt;“Where do we stand, realistically?”&lt;/p&gt;




&lt;p&gt;The thing that tripped me up the most is how broad some of the definitions are.&lt;/p&gt;

&lt;p&gt;“AI system” sounds obvious until you try to draw the line in your own codebase.&lt;/p&gt;

&lt;p&gt;Does a plain rules engine count?&lt;br&gt;
What about something that just calls an LLM API?&lt;br&gt;
What if AI is only a small feature and not the core of the product?&lt;/p&gt;

&lt;p&gt;You can make a reasonable argument in multiple directions, which is… not ideal when you’re trying to make decisions.&lt;/p&gt;




&lt;p&gt;At some point I realised I’d spent enough time thinking about this that I might as well try to structure it.&lt;/p&gt;

&lt;p&gt;So I built a small internal tool.&lt;/p&gt;

&lt;p&gt;Not because I thought “the world needs this”, but because I needed a way to stop going in circles.&lt;/p&gt;

&lt;p&gt;It’s honestly very simple under the hood. No magic. No models. Just a rule-based flow that mirrors how the regulation is structured.&lt;/p&gt;

&lt;p&gt;You answer a few questions about what your system actually does — how it’s used, where it’s used, how it interacts with people — and it walks you through what that &lt;em&gt;might&lt;/em&gt; mean in terms of risk classification.&lt;/p&gt;

&lt;p&gt;It’s basically a decision tree, dressed up to feel slightly less like a spreadsheet.&lt;/p&gt;
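&lt;p&gt;To make that concrete, here’s a deliberately tiny sketch of what “a rule-based flow mirroring the regulation’s structure” can look like. Everything here is illustrative: the field names and the three yes/no questions are my simplification, not the tool’s actual logic and certainly not legal advice — the real tiers hinge on far more nuance than a few booleans.&lt;/p&gt;

```python
# Hypothetical sketch of a rule-based risk triage, loosely mirroring the
# EU AI Act's tiers (prohibited / high-risk / limited / minimal).
# Illustrative only -- not legal advice; the real regulation has far more
# nuance than these three booleans.
from dataclasses import dataclass


@dataclass
class SystemProfile:
    uses_prohibited_practice: bool  # e.g. social scoring
    in_annex_iii_area: bool         # e.g. hiring, credit, education
    interacts_with_people: bool     # e.g. chatbots, generated content


def classify(profile: SystemProfile) -> str:
    """Walk the tiers from strictest to lightest; first match wins."""
    if profile.uses_prohibited_practice:
        return "prohibited"
    if profile.in_annex_iii_area:
        return "high-risk"
    if profile.interacts_with_people:
        return "limited-risk (transparency obligations)"
    return "minimal-risk"


# A product whose only AI surface is a chatbot feature lands in the
# limited-risk tier under this simplified flow.
print(classify(SystemProfile(False, False, True)))
```

&lt;p&gt;The interesting property is that ordering does the legal reasoning for you: stricter tiers are checked first, so a system that is both “Annex III” and “talks to people” resolves to high-risk, the same way the regulation layers obligations.&lt;/p&gt;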




&lt;p&gt;The interesting part wasn’t building it. It was where it breaks.&lt;/p&gt;

&lt;p&gt;Edge cases are everywhere.&lt;/p&gt;

&lt;p&gt;Things like SaaS products that embed third-party AI don’t map neatly to “provider” vs “deployer”. Features that are only partially AI-driven sit in this weird grey zone. And some of the definitions in the regulation are clearly written to be flexible, which means you eventually have to rely on judgment anyway.&lt;/p&gt;

&lt;p&gt;So it’s not perfect. It’s not meant to be.&lt;/p&gt;

&lt;p&gt;But it did help me go from “this is vague and slightly stressful” to “ok, I think we’re roughly here”.&lt;/p&gt;

&lt;p&gt;Which, honestly, was enough.&lt;/p&gt;




&lt;p&gt;I cleaned it up a bit and put a version online here:&lt;br&gt;
&lt;a href="https://www.actnavigator.com" rel="noopener noreferrer"&gt;https://www.actnavigator.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If nothing else, it’s a quicker way to think through the EU AI Act without needing to read 200 pages of legal text (which I can confirm is not how most founders want to spend their evenings).&lt;/p&gt;




&lt;p&gt;Curious how others are handling this.&lt;/p&gt;

&lt;p&gt;Are you already doing something about the AI Act, or is it still sitting somewhere between “important” and “we’ll deal with it later”?&lt;/p&gt;

&lt;p&gt;Because I have a feeling a lot of us are in that exact spot.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>startup</category>
      <category>saas</category>
      <category>europe</category>
    </item>
  </channel>
</rss>
