
Fredrik


Are You Using High-Risk AI Without Realizing It? 25 Real Examples from the EU AI Act

When people hear about the EU AI Act, they often assume it’s mainly about large tech companies building advanced AI systems.

In reality, many of the rules focus on how AI is used in everyday decisions — especially when those decisions affect people’s lives.

The regulation introduces a category called high-risk AI systems, which come with stricter requirements.

The tricky part is that many of these use cases are more common than you might think.


What does “high-risk AI” actually mean?

An AI system is considered high-risk when it is used in contexts where its decisions can significantly affect people's health, safety, or fundamental rights.

This includes areas like:

  • hiring
  • education
  • finance
  • healthcare
  • public services

These use cases are listed in Annex III of the EU AI Act, and they define where companies need to be more careful.


25 real-world examples

Here are some examples that help make this more concrete.

Hiring and workplace decisions

AI that filters job applicants or ranks candidates is considered high-risk.

The same goes for systems that evaluate employee performance or monitor behavior.


Education systems

If an AI system is used to grade exams or determine admissions, it falls into the high-risk category.


Financial decisions

Credit scoring is one of the most obvious examples.

If AI determines whether someone gets a loan, that system is high-risk.

Insurance pricing and approvals can fall into the same category.


Public sector use

AI systems used to determine access to benefits or allocate public housing are also high-risk.

These systems directly affect people’s access to essential services.


Law enforcement

Predictive policing tools and facial recognition systems are included here.

These are some of the most heavily discussed use cases in the regulation.


Healthcare

AI used in diagnosis or treatment recommendations is high-risk.

These systems can influence medical decisions, which raises the bar significantly.


Infrastructure and safety

AI systems controlling energy grids or traffic systems also fall into this category.

Failures in these systems can have wide-reaching consequences.


What about everyday AI tools?

Most companies are not building high-risk systems.

Common use cases like these are typically low or minimal risk:

  • chatbots
  • document summarization
  • marketing tools
  • recommendation systems

But that doesn’t mean they can be ignored.


The real challenge for companies

The biggest issue isn’t identifying obvious high-risk systems.

It’s realizing that:

you may already be using more AI systems than you think.

Many companies have:

  • internal models
  • third-party APIs
  • embedded AI features

without a clear overview.


Why documentation matters

Even if your systems are not high-risk, you still need to:

  • understand what AI systems you use
  • assess their risk level
  • document your reasoning

This becomes especially important as regulation evolves.


A simple starting point

A practical approach is to:

  1. list all AI systems used in your company
  2. identify which ones might fall under high-risk categories
  3. document how they are used
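The three steps above can be sketched as a simple inventory. This is an illustrative sketch, not a compliance tool: the `HIGH_RISK_DOMAINS` set below is a hypothetical, simplified subset of the Annex III categories, and all class and function names are our own inventions.

```python
from dataclasses import dataclass

# Hypothetical subset of Annex III domains; the actual list in the
# EU AI Act is longer and more nuanced. Illustrative only.
HIGH_RISK_DOMAINS = {
    "hiring", "education", "credit-scoring", "insurance",
    "public-benefits", "law-enforcement", "healthcare",
    "critical-infrastructure",
}

@dataclass
class AISystem:
    name: str
    vendor: str      # "internal" or the third-party provider
    domain: str      # what decisions the system influences
    notes: str = ""  # document the reasoning behind the classification

    @property
    def likely_high_risk(self) -> bool:
        # Step 2: flag systems whose domain matches a high-risk category
        return self.domain in HIGH_RISK_DOMAINS

def inventory_report(systems: list[AISystem]) -> str:
    # Step 3: produce a documented overview of how each system is used
    lines = []
    for s in systems:
        flag = "review as HIGH-RISK" if s.likely_high_risk else "low/minimal"
        lines.append(f"{s.name} ({s.vendor}), {s.domain}: {flag}. {s.notes}")
    return "\n".join(lines)

# Step 1: list all AI systems used in the company
systems = [
    AISystem("CV screener", "third-party API", "hiring",
             notes="Ranks applicants, so it falls under Annex III."),
    AISystem("Support chatbot", "internal", "customer-support",
             notes="Makes no significant decisions about individuals."),
]
print(inventory_report(systems))
```

Even a rough inventory like this forces the useful question: for each system, can you say what it decides and why you classified it the way you did?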

If you want a more detailed breakdown of high-risk AI systems and examples, we put together a full guide here:

https://paracta.com/25-high-risk-ai-examples

We also built a small tool to help companies classify and document their AI systems:

https://paracta.com


Final thought

The EU AI Act is less about advanced AI technology and more about how AI is applied in real-world decisions.

And for many companies, the first step isn’t compliance — it’s simply understanding:

Where are we actually using AI today?
