DEV Community

Fredrik
What Actually Counts as an AI System Under the EU AI Act?

If you’ve been following the EU AI Act discussions, one question keeps coming up in conversations with founders and engineers:

“Does our software actually count as an AI system?”

The regulation sounds like it’s about big AI labs or advanced machine learning systems. But when you read the definition more carefully, you realize the scope is much broader.

A lot of everyday software features may already fall into the category of an AI system.

Understanding where that boundary is matters, because once a system qualifies as AI under the Act, the next step is determining its risk classification and documentation requirements.


The legal definition

The EU AI Act (Article 3(1)) defines an AI system roughly like this:

A machine-based system that operates with some degree of autonomy and infers, from the input it receives, how to generate outputs such as predictions, recommendations, content, or decisions.

The key word in that definition is infers.

In other words, the system is not just executing fixed logic — it is deriving outputs based on patterns in data.

That distinction ends up being the line between traditional software and AI systems.
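To make the distinction concrete, here is a deliberately minimal Python sketch. All names and numbers are invented for illustration. The first function executes fixed logic written by a developer; the second derives its decision boundary from data, which is the kind of inference the definition points at.

```python
# Deterministic logic: the output follows a fixed, predefined rule.
def discount_rule(order_total: float) -> bool:
    # Hypothetical business rule: orders over 100 get a discount.
    return order_total > 100.0


# Inference from data: the decision boundary is derived from
# observed examples rather than written down by a developer.
def fit_threshold(past_totals: list[float]) -> float:
    # A toy "model": learn a cutoff from historical data (here, the mean).
    return sum(past_totals) / len(past_totals)


def discount_model(order_total: float, threshold: float) -> bool:
    return order_total > threshold


history = [40.0, 60.0, 200.0]
learned = fit_threshold(history)  # threshold comes from data, not code
```

A real ML model is obviously far more complex, but the regulatory question tracks this same structural difference: is the output rule written by hand, or derived from data?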


Scenario 1: Using an LLM API

Let’s say your product calls an API from OpenAI, Anthropic, or another language model provider.

For example:

  • generating summaries
  • answering questions
  • analyzing user input
  • extracting information from documents

Even though you didn’t train the model, you are still deploying an AI system.

The AI Act distinguishes between:

  • providers (companies that build models)
  • deployers (companies that use them)

Most SaaS companies fall into the deployer category.


Scenario 2: A chatbot in your product

A chatbot powered by a language model is clearly an AI system.

The regulation doesn’t automatically make that high-risk, but it does introduce transparency obligations.

For example, users must be informed that they are interacting with an AI system.

In most SaaS or customer support contexts this will likely fall under limited or minimal risk, but it still counts as AI.


Scenario 3: Machine learning models

If your product uses machine learning — even something simple — it almost certainly qualifies.

Examples:

  • churn prediction models
  • fraud detection
  • recommendation engines
  • classification models
  • personalization algorithms

The important question is not whether the system uses neural networks or fancy architectures.

It’s whether the system infers outputs from data rather than executing deterministic logic.


Scenario 4: Recommendation systems

Recommendation systems appear everywhere:

  • e-commerce product suggestions
  • content feeds
  • personalization features

These systems usually rely on machine learning or statistical inference, which means they qualify as AI systems.

However, the risk classification depends heavily on context.

A product recommendation engine is likely minimal risk.

A system recommending medical treatments would be something very different.


Scenario 5: Rule-based automation

This is where things get blurry.

Many companies assume their automation tools count as AI, but often they don’t.

Examples that usually do not qualify:

  • simple if/then logic
  • scripted automation
  • workflow rules
  • deterministic business logic

These systems execute predefined instructions.

They don’t infer outputs.

However, once statistical models or adaptive logic are introduced, that boundary can shift quickly.
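As an illustration of how that boundary shifts, here is a hedged sketch (all names and numbers are hypothetical): the same workflow step, first with a fixed rule, then with a threshold computed from recent data. The adaptive variant starts to look like statistical inference even though the surrounding code barely changes.

```python
from statistics import mean, stdev

# Fixed workflow rule: flag any invoice above a hard-coded limit.
FLAG_LIMIT = 10_000.0


def flag_invoice_fixed(amount: float) -> bool:
    return amount > FLAG_LIMIT  # predefined instruction, no inference


# Adaptive variant: the limit is derived from recent invoices
# (mean plus two standard deviations), i.e. a simple statistical model.
def flag_invoice_adaptive(amount: float, recent: list[float]) -> bool:
    limit = mean(recent) + 2 * stdev(recent)
    return amount > limit
```

Swapping a constant for a data-derived threshold is a one-line change in the code, but it moves the component from "deterministic business logic" toward "infers outputs from data".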


Scenario 6: Robotic Process Automation (RPA)

Traditional RPA tools typically follow scripted steps and therefore don’t qualify as AI systems.

But many modern RPA pipelines include AI components such as:

  • document recognition
  • classification models
  • anomaly detection

Those components may fall under the AI system definition even if the surrounding workflow does not.


Scenario 7: Analytics dashboards

Classic analytics and BI tools generally fall outside the scope.

Examples:

  • SQL queries
  • dashboards
  • reporting tools
  • visualizations

These tools summarize data but don’t infer predictions or decisions.

However, predictive analytics models — forecasting outcomes based on patterns — may qualify.
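To show where that line sits, a hypothetical sketch: the first function summarizes past data (descriptive, out of scope), while the second fits a linear trend to it and extrapolates (predictive, potentially in scope). Both functions and their inputs are made up for illustration.

```python
# Descriptive analytics: summarizes data, infers nothing new.
def monthly_total(sales: list[float]) -> float:
    return sum(sales)


# Predictive analytics: fits a least-squares trend line to past
# months and forecasts the next one; the output is inferred
# from patterns in the data rather than looked up.
def forecast_next(sales: list[float]) -> float:
    n = len(sales)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(sales) / n
    slope = (
        sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, sales))
        / sum((x - mean_x) ** 2 for x in xs)
    )
    intercept = mean_y - slope * mean_x
    return intercept + slope * n  # extrapolate one step ahead


sales = [100.0, 110.0, 120.0, 130.0]
```

Same data, different character of output: one reports what happened, the other generates a prediction.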


Why this matters for companies

This classification question isn’t just theoretical.

Many companies are discovering that they already have multiple AI systems running inside their products or internal workflows, often without realizing it.

Examples I’ve seen:

  • internal document processing pipelines
  • support chatbots
  • recommendation algorithms
  • fraud detection models

Each of these may need to be inventoried and assessed.


A practical approach

One practical rule that has emerged in many teams:

If there’s uncertainty, document it.

Even if a system ultimately falls outside the AI Act, recording the reasoning behind that decision is useful.

In practice this often leads to maintaining an AI system inventory inside the company.
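A lightweight inventory can be as simple as one structured record per system, including the reasoning behind each classification. This is just one possible shape, not something the Act prescribes; the field names and example systems here are made up.

```python
from dataclasses import dataclass


@dataclass
class AISystemRecord:
    name: str
    description: str
    qualifies_as_ai: bool  # does it infer outputs from data?
    role: str              # "provider" or "deployer"
    risk_category: str     # e.g. "minimal", "limited", "high", "unclear"
    reasoning: str         # why we classified it this way


inventory = [
    AISystemRecord(
        name="support-chatbot",
        description="LLM-backed chat widget for customer support",
        qualifies_as_ai=True,
        role="deployer",
        risk_category="limited",
        reasoning="Uses a third-party LLM; transparency obligations apply.",
    ),
    AISystemRecord(
        name="billing-workflow",
        description="If/then rules that route invoices for approval",
        qualifies_as_ai=False,
        role="deployer",
        risk_category="n/a",
        reasoning="Deterministic business logic; no inference from data.",
    ),
]
```

Note that the second entry is documented even though the answer is "not AI"; recording that reasoning is the point.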


How we started thinking about it

When we began mapping our own AI systems, we realized how quickly the list grows.

Between APIs, internal models, and product features, it’s easy to lose track.

That’s part of why we started building Paracta — a small tool to help companies classify and document their AI systems in a structured way.

If you want to read a deeper breakdown of the definition and examples, we wrote a full guide here:

https://paracta.com/what-is-an-ai-system-eu-ai-act

And if you’re exploring ways to document AI systems under the regulation, you can check out:

https://paracta.com


Final thought

The EU AI Act isn’t just about advanced AI labs.

It’s about how everyday software products use AI.

And the first step for most teams is simply answering a basic question:

What AI systems are we actually running?
