<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Fredrik</title>
    <description>The latest articles on DEV Community by Fredrik (@paracta).</description>
    <link>https://dev.to/paracta</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3824253%2Fc3299e6b-917f-4213-b111-fb734d76afdd.png</url>
      <title>DEV Community: Fredrik</title>
      <link>https://dev.to/paracta</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/paracta"/>
    <language>en</language>
    <item>
      <title>Read more about how high risk systems are regulated, here: https://paracta.com/annex-iii-ai-act</title>
      <dc:creator>Fredrik</dc:creator>
      <pubDate>Tue, 17 Mar 2026 09:42:37 +0000</pubDate>
      <link>https://dev.to/paracta/read-more-about-how-high-risk-systems-are-regulated-here-httpsparactacomannex-iii-ai-act-4enl</link>
      <guid>https://dev.to/paracta/read-more-about-how-high-risk-systems-are-regulated-here-httpsparactacomannex-iii-ai-act-4enl</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/paracta" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3824253%2Fc3299e6b-917f-4213-b111-fb734d76afdd.png" alt="paracta"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/paracta/are-you-using-high-risk-ai-without-realizing-it-4nm3" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Are You Using High-Risk AI Without Realizing It?&lt;/h2&gt;
      &lt;h3&gt;Fredrik ・ Mar 17&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#euaiact&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#tutorial&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;



&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
        &lt;div class="c-embed__cover"&gt;
          &lt;a href="https://paracta.com/annex-iii-ai-act" class="c-link align-middle" rel="noopener noreferrer"&gt;
            &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fgpt-engineer-file-uploads%2FcRkIB2qgGyMQIybXk4W5RhEMyGH3%2Fsocial-images%2Fsocial-1773267223400-Screenshot_2026-03-11_at_23.11.28.webp" height="204" class="m-0" width="389"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="c-embed__body"&gt;
        &lt;h2 class="fs-xl lh-tight"&gt;
          &lt;a href="https://paracta.com/annex-iii-ai-act" rel="noopener noreferrer" class="c-link"&gt;
            Paracta — EU AI Act Classification Platform
          &lt;/a&gt;
        &lt;/h2&gt;
          &lt;p class="truncate-at-3"&gt;
            Classify and document AI systems under the EU AI Act. Structured questionnaire, classification results, and review-ready documentation. A Swedish compliance technology company.
          &lt;/p&gt;
        &lt;div class="color-secondary fs-s flex items-center"&gt;
            &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fparacta.com%2Fparacta-logo.png" width="512" height="512"&gt;
          paracta.com
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


</description>
      <category>euaiact</category>
      <category>ai</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Are You Using High-Risk AI Without Realizing It?</title>
      <dc:creator>Fredrik</dc:creator>
      <pubDate>Tue, 17 Mar 2026 09:41:44 +0000</pubDate>
      <link>https://dev.to/paracta/are-you-using-high-risk-ai-without-realizing-it-4nm3</link>
      <guid>https://dev.to/paracta/are-you-using-high-risk-ai-without-realizing-it-4nm3</guid>
      <description>&lt;h1&gt;Are You Using High-Risk AI Without Realizing It? 25 Real Examples from the EU AI Act&lt;/h1&gt;

&lt;p&gt;When people hear about the EU AI Act, they often assume it’s mainly about large tech companies building advanced AI systems.&lt;/p&gt;

&lt;p&gt;In reality, many of the rules focus on &lt;strong&gt;how AI is used in everyday decisions&lt;/strong&gt; — especially when those decisions affect people’s lives.&lt;/p&gt;

&lt;p&gt;The regulation introduces a category called &lt;strong&gt;high-risk AI systems&lt;/strong&gt;, which come with stricter requirements.&lt;/p&gt;

&lt;p&gt;The tricky part is that many of these use cases are more common than you might think.&lt;/p&gt;




&lt;h2&gt;What does “high-risk AI” actually mean?&lt;/h2&gt;

&lt;p&gt;An AI system is considered high-risk when it is used in contexts where decisions can significantly impact individuals.&lt;/p&gt;

&lt;p&gt;This includes areas like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;hiring
&lt;/li&gt;
&lt;li&gt;education
&lt;/li&gt;
&lt;li&gt;finance
&lt;/li&gt;
&lt;li&gt;healthcare
&lt;/li&gt;
&lt;li&gt;public services
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These use cases are listed in &lt;strong&gt;Annex III of the EU AI Act&lt;/strong&gt;, and they define where companies need to be more careful.&lt;/p&gt;




&lt;h2&gt;25 real-world examples&lt;/h2&gt;

&lt;p&gt;Here are some examples that help make this more concrete.&lt;/p&gt;

&lt;h3&gt;Hiring and workplace decisions&lt;/h3&gt;

&lt;p&gt;AI that filters job applicants or ranks candidates is considered high-risk.&lt;br&gt;&lt;br&gt;
The same goes for systems that evaluate employee performance or monitor behavior.&lt;/p&gt;




&lt;h3&gt;Education systems&lt;/h3&gt;

&lt;p&gt;If an AI system is used to grade exams or determine admissions, it falls into the high-risk category.&lt;/p&gt;




&lt;h3&gt;Financial decisions&lt;/h3&gt;

&lt;p&gt;Credit scoring is one of the most obvious examples.&lt;br&gt;&lt;br&gt;
If AI determines whether someone gets a loan, that system is high-risk.&lt;/p&gt;

&lt;p&gt;Insurance pricing and approvals can fall into the same category.&lt;/p&gt;




&lt;h3&gt;Public sector use&lt;/h3&gt;

&lt;p&gt;AI systems used to determine access to benefits or allocate public housing are also high-risk.&lt;/p&gt;

&lt;p&gt;These systems directly affect people’s access to essential services.&lt;/p&gt;




&lt;h3&gt;Law enforcement&lt;/h3&gt;

&lt;p&gt;Predictive policing tools and facial recognition systems are included here.&lt;/p&gt;

&lt;p&gt;These are some of the most heavily discussed use cases in the regulation.&lt;/p&gt;




&lt;h3&gt;Healthcare&lt;/h3&gt;

&lt;p&gt;AI used in diagnosis or treatment recommendations is high-risk.&lt;/p&gt;

&lt;p&gt;These systems can influence medical decisions, which raises the bar significantly.&lt;/p&gt;




&lt;h3&gt;Infrastructure and safety&lt;/h3&gt;

&lt;p&gt;AI systems controlling energy grids or traffic systems also fall into this category.&lt;/p&gt;

&lt;p&gt;Failures in these systems can have wide-reaching consequences.&lt;/p&gt;




&lt;h2&gt;What about everyday AI tools?&lt;/h2&gt;

&lt;p&gt;Most companies are not building high-risk systems.&lt;/p&gt;

&lt;p&gt;Common use cases like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;chatbots
&lt;/li&gt;
&lt;li&gt;document summarization
&lt;/li&gt;
&lt;li&gt;marketing tools
&lt;/li&gt;
&lt;li&gt;recommendation systems
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;are typically limited or minimal risk under the Act.&lt;/p&gt;

&lt;p&gt;But that doesn’t mean they can be ignored.&lt;/p&gt;




&lt;h2&gt;The real challenge for companies&lt;/h2&gt;

&lt;p&gt;The biggest issue isn’t identifying obvious high-risk systems.&lt;/p&gt;

&lt;p&gt;It’s realizing that:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;you may already be using more AI systems than you think.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Many companies have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;internal models
&lt;/li&gt;
&lt;li&gt;third-party APIs
&lt;/li&gt;
&lt;li&gt;embedded AI features
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;without a clear overview.&lt;/p&gt;




&lt;h2&gt;Why documentation matters&lt;/h2&gt;

&lt;p&gt;Even if your systems are not high-risk, you still need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;understand what AI systems you use
&lt;/li&gt;
&lt;li&gt;assess their risk level
&lt;/li&gt;
&lt;li&gt;document your reasoning
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This becomes especially important as regulation evolves.&lt;/p&gt;




&lt;h2&gt;A simple starting point&lt;/h2&gt;

&lt;p&gt;A practical approach is to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;list all AI systems used in your company
&lt;/li&gt;
&lt;li&gt;identify which ones might fall under high-risk categories
&lt;/li&gt;
&lt;li&gt;document how they are used
&lt;/li&gt;
&lt;/ol&gt;
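&lt;p&gt;Those three steps can be sketched as a tiny inventory script. This is a minimal illustration, not a legal classification: the area names and the matching rule below are placeholder assumptions, and anything it flags still needs a proper assessment.&lt;/p&gt;

```python
from dataclasses import dataclass

# Illustrative subset of Annex III areas; placeholder names, not legal text.
ANNEX_III_AREAS = {"hiring", "education", "credit scoring", "public benefits"}

@dataclass
class AISystem:
    name: str
    purpose: str  # free-text note on how the system is used (step 3)
    area: str     # business area the system operates in

def flag_possible_high_risk(systems):
    """Step 2: flag systems whose area matches a high-risk category.

    A triage heuristic only; flagged systems need a real assessment.
    """
    return [s for s in systems if s.area in ANNEX_III_AREAS]

# Step 1: list every AI system in use.
inventory = [
    AISystem("cv-screener", "ranks incoming job applications", "hiring"),
    AISystem("support-bot", "answers customer questions", "customer support"),
]

flagged = flag_possible_high_risk(inventory)
for s in flagged:
    print(f"{s.name}: review under Annex III ({s.area})")
```

&lt;p&gt;Even this much covers steps 1 and 2; step 3 is keeping the purpose notes current as systems change.&lt;/p&gt;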

&lt;p&gt;If you want a more detailed breakdown of high-risk AI systems and examples, we put together a full guide here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://paracta.com/25-high-risk-ai-examples" rel="noopener noreferrer"&gt;https://paracta.com/25-high-risk-ai-examples&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We also built a small tool to help companies classify and document their AI systems:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://paracta.com" rel="noopener noreferrer"&gt;https://paracta.com&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;Final thought&lt;/h2&gt;

&lt;p&gt;The EU AI Act is less about advanced AI technology and more about &lt;strong&gt;how AI is applied in real-world decisions&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And for many companies, the first step isn’t compliance — it’s simply understanding:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where are we actually using AI today?&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>euaiact</category>
      <category>ai</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>What Actually Counts as an AI System Under the EU AI Act?</title>
      <dc:creator>Fredrik</dc:creator>
      <pubDate>Sat, 14 Mar 2026 16:46:10 +0000</pubDate>
      <link>https://dev.to/paracta/what-actually-counts-as-an-ai-system-under-the-eu-ai-act-3n85</link>
      <guid>https://dev.to/paracta/what-actually-counts-as-an-ai-system-under-the-eu-ai-act-3n85</guid>
      <description>&lt;h1&gt;What Actually Counts as an AI System Under the EU AI Act?&lt;/h1&gt;

&lt;p&gt;If you’ve been following the EU AI Act discussions, one question keeps coming up in conversations with founders and engineers:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“Does our software actually count as an AI system?”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The regulation sounds like it’s about big AI labs or advanced machine learning systems. But when you read the definition more carefully, you realize the scope is much broader.&lt;/p&gt;

&lt;p&gt;A lot of everyday software features may already fall into the category of an AI system.&lt;/p&gt;

&lt;p&gt;Understanding where that boundary is matters, because once a system qualifies as AI under the Act, the next step is determining its &lt;strong&gt;risk classification and documentation requirements&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;The legal definition&lt;/h2&gt;

&lt;p&gt;The EU AI Act defines an AI system roughly like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A machine-based system that infers from input data how to generate outputs such as predictions, recommendations, content, or decisions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The key word in that definition is &lt;strong&gt;infers&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In other words, the system is not just executing fixed logic — it is deriving outputs based on patterns in data.&lt;/p&gt;

&lt;p&gt;That distinction ends up being the line between traditional software and AI systems.&lt;/p&gt;




&lt;h2&gt;Scenario 1: Using an LLM API&lt;/h2&gt;

&lt;p&gt;Let’s say your product calls an API like OpenAI, Anthropic, or another language model provider.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;generating summaries
&lt;/li&gt;
&lt;li&gt;answering questions
&lt;/li&gt;
&lt;li&gt;analyzing user input
&lt;/li&gt;
&lt;li&gt;extracting information from documents
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even though you didn’t train the model, &lt;strong&gt;you are still deploying an AI system&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The AI Act distinguishes between:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;providers&lt;/strong&gt; (companies that build models)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;deployers&lt;/strong&gt; (companies that use them)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most SaaS companies fall into the deployer category.&lt;/p&gt;




&lt;h2&gt;Scenario 2: A chatbot in your product&lt;/h2&gt;

&lt;p&gt;A chatbot powered by a language model is clearly an AI system.&lt;/p&gt;

&lt;p&gt;The regulation doesn’t automatically make that high-risk, but it does introduce &lt;strong&gt;transparency obligations&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For example, users must be informed that they are interacting with an AI system, unless that is already obvious from the context.&lt;/p&gt;

&lt;p&gt;In most SaaS or customer support contexts this will likely fall under &lt;strong&gt;limited or minimal risk&lt;/strong&gt;, but it still counts as AI.&lt;/p&gt;




&lt;h2&gt;Scenario 3: Machine learning models&lt;/h2&gt;

&lt;p&gt;If your product uses machine learning — even something simple — it almost certainly qualifies.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;churn prediction models
&lt;/li&gt;
&lt;li&gt;fraud detection
&lt;/li&gt;
&lt;li&gt;recommendation engines
&lt;/li&gt;
&lt;li&gt;classification models
&lt;/li&gt;
&lt;li&gt;personalization algorithms
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The important question is not whether the system uses neural networks or fancy architectures.&lt;/p&gt;

&lt;p&gt;It’s whether the system &lt;strong&gt;infers outputs from data rather than executing deterministic logic&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;Scenario 4: Recommendation systems&lt;/h2&gt;

&lt;p&gt;Recommendation systems appear everywhere:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;e-commerce product suggestions
&lt;/li&gt;
&lt;li&gt;content feeds
&lt;/li&gt;
&lt;li&gt;personalization features
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These systems usually rely on machine learning or statistical inference, which means they qualify as AI systems.&lt;/p&gt;

&lt;p&gt;However, the &lt;strong&gt;risk classification depends heavily on context&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A product recommendation engine is likely minimal risk.&lt;/p&gt;

&lt;p&gt;A system recommending medical treatments would be something very different.&lt;/p&gt;




&lt;h2&gt;Scenario 5: Rule-based automation&lt;/h2&gt;

&lt;p&gt;This is where things get blurry.&lt;/p&gt;

&lt;p&gt;Many companies assume their automation tools count as AI, but often they don’t.&lt;/p&gt;

&lt;p&gt;Examples that usually &lt;strong&gt;do not qualify&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;simple if/then logic
&lt;/li&gt;
&lt;li&gt;scripted automation
&lt;/li&gt;
&lt;li&gt;workflow rules
&lt;/li&gt;
&lt;li&gt;deterministic business logic
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These systems execute predefined instructions.&lt;/p&gt;

&lt;p&gt;They don’t infer outputs.&lt;/p&gt;

&lt;p&gt;However, once statistical models or adaptive logic are introduced, that boundary can shift quickly.&lt;/p&gt;
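&lt;p&gt;That boundary can be shown with a deliberately toy contrast. Everything here is made up for illustration (the amounts, the "model", the thresholds), and real classification depends on the system as a whole: the point is only that a fixed rule executes logic a developer wrote, while even a trivial statistical step derives its decision boundary from data.&lt;/p&gt;

```python
# Deterministic business logic: the threshold is written by a human.
def rule_based_review(amount: float) -> bool:
    return amount > 10_000  # fixed, auditable rule; nothing is inferred

# A deliberately trivial stand-in for a statistical model: the decision
# boundary is derived from patterns in past data, not written in code.
def fit_threshold(past_flagged_amounts: list[float]) -> float:
    return sum(past_flagged_amounts) / len(past_flagged_amounts)

def learned_review(amount: float, threshold: float) -> bool:
    return amount > threshold

historical = [2_000.0, 5_000.0, 8_000.0]
threshold = fit_threshold(historical)  # 5_000.0, derived from the data

print(rule_based_review(9_500.0))          # False: under the fixed rule
print(learned_review(9_500.0, threshold))  # True: over the learned threshold
```

&lt;p&gt;Swap the averaged threshold for any fitted model and the same distinction holds: the behavior comes from the data, not from the code.&lt;/p&gt;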




&lt;h2&gt;Scenario 6: Robotic Process Automation (RPA)&lt;/h2&gt;

&lt;p&gt;Traditional RPA tools typically follow scripted steps and therefore don’t qualify as AI systems.&lt;/p&gt;

&lt;p&gt;But many modern RPA pipelines include AI components such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;document recognition
&lt;/li&gt;
&lt;li&gt;classification models
&lt;/li&gt;
&lt;li&gt;anomaly detection
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those components may fall under the AI system definition even if the surrounding workflow does not.&lt;/p&gt;




&lt;h2&gt;Scenario 7: Analytics dashboards&lt;/h2&gt;

&lt;p&gt;Classic analytics and BI tools generally fall outside the scope.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SQL queries
&lt;/li&gt;
&lt;li&gt;dashboards
&lt;/li&gt;
&lt;li&gt;reporting tools
&lt;/li&gt;
&lt;li&gt;visualizations
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These tools summarize data but don’t infer predictions or decisions.&lt;/p&gt;

&lt;p&gt;However, predictive analytics models — forecasting outcomes based on patterns — may qualify.&lt;/p&gt;




&lt;h2&gt;Why this matters for companies&lt;/h2&gt;

&lt;p&gt;This classification question isn’t just theoretical.&lt;/p&gt;

&lt;p&gt;Many companies are discovering that they already have &lt;strong&gt;multiple AI systems running inside their products or internal workflows&lt;/strong&gt;, often without realizing it.&lt;/p&gt;

&lt;p&gt;Examples I’ve seen:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;internal document processing pipelines
&lt;/li&gt;
&lt;li&gt;support chatbots
&lt;/li&gt;
&lt;li&gt;recommendation algorithms
&lt;/li&gt;
&lt;li&gt;fraud detection models
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each of these may need to be inventoried and assessed.&lt;/p&gt;




&lt;h2&gt;A practical approach&lt;/h2&gt;

&lt;p&gt;One practical rule that has emerged in many teams:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If there’s uncertainty, document it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Even if a system ultimately falls outside the AI Act, recording the reasoning behind that decision is useful.&lt;/p&gt;

&lt;p&gt;In practice this often leads to maintaining an &lt;strong&gt;AI system inventory&lt;/strong&gt; inside the company.&lt;/p&gt;




&lt;h2&gt;How we started thinking about it&lt;/h2&gt;

&lt;p&gt;When we began mapping our own AI systems, we realized how quickly the list grows.&lt;/p&gt;

&lt;p&gt;Between APIs, internal models, and product features, it’s easy to lose track.&lt;/p&gt;

&lt;p&gt;That’s part of why we started building &lt;strong&gt;Paracta&lt;/strong&gt; — a small tool to help companies classify and document their AI systems in a structured way.&lt;/p&gt;

&lt;p&gt;If you want to read a deeper breakdown of the definition and examples, we wrote a full guide here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://paracta.com/what-is-an-ai-system-eu-ai-act" rel="noopener noreferrer"&gt;https://paracta.com/what-is-an-ai-system-eu-ai-act&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And if you’re exploring ways to document AI systems under the regulation, you can check out:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://paracta.com" rel="noopener noreferrer"&gt;https://paracta.com&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;Final thought&lt;/h2&gt;

&lt;p&gt;The EU AI Act isn’t just about advanced AI labs.&lt;/p&gt;

&lt;p&gt;It’s about &lt;strong&gt;how everyday software products use AI&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And the first step for most teams is simply answering a basic question:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What AI systems are we actually running?&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>euaiact</category>
      <category>aiact</category>
    </item>
  </channel>
</rss>
