The Uncomfortable Truth About Third-Party AI
There is a widespread misconception circulating in European boardrooms right now: "We don't build AI, so the AI Act doesn't really apply to us."
This is wrong, and it could be an expensive mistake.
The EU AI Act distinguishes between providers (those who develop or place AI systems on the market) and deployers (those who use AI systems under their authority). If your company uses ChatGPT for customer support, Microsoft Copilot for document drafting, an AI-powered applicant tracking system for hiring, or any SaaS product with embedded AI features, you are a deployer. And deployers have real, enforceable obligations under the EU AI Act, most of which apply from August 2, 2026.
The penalty for getting this wrong? Up to €15 million or 3% of global annual turnover, whichever is higher.
What Exactly Is a "Deployer"?
Article 3(4) of the EU AI Act defines a deployer as any natural or legal person that uses an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.
This definition is deliberately broad. You become a deployer the moment your organization uses an AI system in a professional context — regardless of whether you built it, bought it, or simply signed up for a free tier.
Some examples that make companies deployers without them realizing it:
- Using AI-powered recruitment tools like HireVue, Pymetrics, or any ATS with AI screening — this is likely high-risk AI under Annex III
- Deploying AI chatbots for customer service, even if they run on GPT-4 or Claude under the hood
- Using AI credit scoring or fraud detection tools provided by your fintech vendor
- Running AI-assisted medical diagnostics through third-party software in clinical settings
- Applying AI for employee monitoring or performance evaluation through HR platforms
In each case, the AI provider has its own set of obligations. But that does not eliminate yours.
The Five Core Deployer Obligations
Here is what the EU AI Act actually requires from deployers of high-risk AI systems. These are not suggestions — they are legally binding requirements.
1. Use the System According to Instructions
Article 26(1) requires deployers to use high-risk AI systems in accordance with the instructions of use provided by the provider. This sounds trivial, but it has teeth.
If your provider's documentation says their AI hiring tool should only be used as a screening aid with human review of all decisions, and your HR team starts auto-rejecting candidates based solely on AI scores — you are in violation. The provider documented the intended use. You deviated from it. The liability is yours.
Practical step: Obtain and actually read the instructions of use for every AI system your organization deploys. Store them centrally. Ensure the people operating these systems know the documented boundaries.
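To make this concrete, here is a minimal sketch of such a register in Python. The record fields, names, and document location are illustrative assumptions, not anything the AI Act prescribes:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class InstructionsRecord:
    """One entry in a central register of provider instructions of use."""
    system_name: str
    provider: str
    document_location: str   # path or URL to the stored instructions
    version: str
    reviewed_on: date        # when someone last actually read them
    documented_limits: list[str] = field(default_factory=list)

register = [
    InstructionsRecord(
        system_name="CV screening tool",
        provider="ExampleVendor GmbH",          # hypothetical vendor
        document_location="compliance/ai/cv-screening/instructions-v3.pdf",
        version="3.0",
        reviewed_on=date(2026, 3, 1),
        documented_limits=["screening aid only",
                           "human review of all rejections"],
    ),
]
```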
2. Assign Human Oversight
Article 26(2) requires deployers to assign human oversight of high-risk AI systems to individuals who have the competence, training, and authority to fulfil that role.
This means someone in your organization must be specifically responsible for monitoring each high-risk AI system's operation. That person needs to understand how the system works at a functional level, be trained on what to watch for, and have the actual authority to override or stop the system when something goes wrong.
A common failure mode: companies designate a junior employee who technically "monitors" the AI system but has no authority to intervene when it produces questionable outputs. This does not satisfy the requirement.
Practical step: For each high-risk AI system, formally document who provides human oversight. Verify they have appropriate training and decision-making authority. This should be part of your AI governance framework.
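A lightweight way to document this is one record per system that captures competence, training, and authority together. A sketch follows; the gap check is our own heuristic for internal review, not a legal test:

```python
from dataclasses import dataclass

@dataclass
class OversightAssignment:
    system: str
    person: str
    trained_on_system: bool  # completed training specific to this system
    can_override: bool       # authority to overrule individual outputs
    can_halt: bool           # authority to take the system out of service

def oversight_gap(a: OversightAssignment) -> bool:
    """Internal heuristic: monitoring without authority is not oversight."""
    return not (a.trained_on_system and a.can_override and a.can_halt)

assignment = OversightAssignment(
    system="AI credit scoring", person="j.doe",
    trained_on_system=True, can_override=True, can_halt=False,
)
assert oversight_gap(assignment)  # no authority to halt, so the gap flags
```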
3. Ensure Input Data Quality
Article 26(4) states that deployers who exercise control over the input data must ensure that input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system.
If your AI credit scoring tool takes customer data that you collect and manage, you are responsible for the quality and representativeness of that data. Feeding the system incomplete, biased, or outdated data is your liability, not the provider's.
Practical step: Audit the data pipelines feeding your high-risk AI systems. Document what data goes in, where it comes from, how it is cleaned, and whether it adequately represents the population the system will affect.
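One simple way to operationalize part of that audit is to compare the category shares in your input data against the population the system is meant to serve. A minimal sketch, with illustrative categories and an assumed 5-point tolerance:

```python
def representativeness_gaps(input_counts: dict[str, int],
                            reference_shares: dict[str, float],
                            tolerance: float = 0.05) -> dict[str, float]:
    """Flag categories whose share of the input data deviates from the
    reference population by more than `tolerance` (absolute difference)."""
    total = sum(input_counts.values())
    gaps = {}
    for category, expected in reference_shares.items():
        observed = input_counts.get(category, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[category] = round(observed - expected, 3)
    return gaps

# Example: age bands in loan applications vs. the customer base served
print(representativeness_gaps(
    input_counts={"18-34": 120, "35-54": 700, "55+": 180},
    reference_shares={"18-34": 0.30, "35-54": 0.45, "55+": 0.25},
))  # every band here drifts by more than 5 points
```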
4. Monitor Operation and Report Issues
Article 26(5) requires deployers to monitor the operation of high-risk AI systems based on the instructions of use and, when relevant, inform providers or distributors about serious incidents or malfunctions.
You cannot simply deploy an AI system and forget about it. You need ongoing monitoring — checking whether the system performs as expected, whether its outputs remain within acceptable bounds, and whether any patterns suggest drift, bias, or malfunction.
When something goes wrong, you have a legal obligation to report it. A serious incident is one that leads to death or serious harm to a person's health, serious and irreversible disruption of critical infrastructure, infringement of Union law obligations protecting fundamental rights, or serious harm to property or the environment. If you identify one, you must immediately inform the provider and, subsequently, the importer or distributor and the relevant market surveillance authority. The timeline is tight: you need to act as soon as you establish a causal link between the AI system and the incident.
Practical step: Establish a monitoring protocol for each high-risk AI system. Define what "normal operation" looks like and what triggers an investigation. Create an incident reporting process before you need one.
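As a sketch of what such a protocol might reduce to in practice, the following compares observed metrics against documented baselines. The metric names, baseline, and threshold are placeholders you would define per system:

```python
from dataclasses import dataclass

@dataclass
class MonitoringRule:
    metric: str       # e.g. share of applications the system auto-rejects
    baseline: float   # value observed during acceptance testing
    max_drift: float  # deviation that triggers an investigation

def drifted(rules: list[MonitoringRule],
            observed: dict[str, float]) -> list[str]:
    """Return the metrics that have moved outside their agreed bounds."""
    return [r.metric for r in rules
            if abs(observed.get(r.metric, r.baseline) - r.baseline)
            > r.max_drift]

rules = [MonitoringRule("rejection_rate", baseline=0.32, max_drift=0.10)]
alerts = drifted(rules, {"rejection_rate": 0.47})
if alerts:
    # open an internal incident; if it turns out to be a serious incident,
    # the provider and the market surveillance authority must be informed
    print("investigate:", alerts)
```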
5. Conduct a Fundamental Rights Impact Assessment
Article 27 requires certain deployers to carry out a fundamental rights impact assessment (FRIA) before deploying high-risk AI systems. This applies to public bodies, private entities providing public services, and deployers of certain Annex III systems such as credit scoring and life or health insurance risk assessment.
Even if your organization is not legally required to conduct a FRIA, doing one is good practice. It forces you to think systematically about who your AI systems affect, what could go wrong, and what safeguards are in place. Regulators will likely view a voluntary FRIA favorably if questions arise about your compliance posture.
Practical step: Use the risk assessment frameworks relevant to your sector. Document your assessment and keep it updated as your use of the AI system evolves.
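As a note-taking aid, here is a skeleton whose fields loosely follow the elements Article 27 asks a FRIA to cover. Treat it as an illustrative structure, not a legally complete template:

```python
from dataclasses import dataclass, field

@dataclass
class FRIARecord:
    """Documentation skeleton for a fundamental rights impact assessment."""
    system: str
    deployment_process: str     # how and in which process the AI is used
    period_and_frequency: str
    affected_groups: list[str]  # categories of persons likely affected
    risks_of_harm: list[str]
    oversight_measures: list[str]
    mitigations: list[str] = field(default_factory=list)

fria = FRIARecord(
    system="credit scoring",
    deployment_process="consumer loan application triage",
    period_and_frequency="continuous, every application",
    affected_groups=["loan applicants"],
    risks_of_harm=["discriminatory rejection patterns"],
    oversight_measures=["loan officer reviews all rejections"],
)
```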
Transparency Obligations Apply to All Deployers
Even if your AI systems are not classified as high-risk, you still have transparency obligations under Article 50.
If you deploy an AI system that interacts directly with people — a chatbot, a virtual assistant, an AI phone agent — you must inform those people that they are interacting with an AI system. The exception is only when this is "obvious from the circumstances and the context of use," which is a higher bar than most companies assume.
If you use AI to generate or manipulate images, audio, or video content (deepfakes or synthetic media), you must disclose that the content was AI-generated. This applies even to marketing materials that use AI-generated imagery.
Practical step: Audit every customer-facing AI touchpoint. Implement clear, visible disclosures. "This conversation is powered by AI" is a reasonable starting point for chatbots.
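A minimal sketch of how such a disclosure might be wired into a chatbot, assuming a generic `bot_reply` callback. The class and names are hypothetical, not any particular vendor's API:

```python
class DisclosedChatbot:
    """Wraps an existing bot so each new session opens with an AI disclosure.
    `bot_reply` stands in for whatever generates the actual answers."""

    DISCLOSURE = "This conversation is powered by AI."

    def __init__(self, bot_reply):
        self.bot_reply = bot_reply
        self.disclosed: set[str] = set()

    def reply(self, session_id: str, user_message: str) -> list[str]:
        messages = []
        if session_id not in self.disclosed:
            messages.append(self.DISCLOSURE)  # disclose before the first answer
            self.disclosed.add(session_id)
        messages.append(self.bot_reply(user_message))
        return messages

bot = DisclosedChatbot(lambda msg: f"(model answer to: {msg})")
print(bot.reply("s1", "Where is my order?"))  # disclosure, then the answer
print(bot.reply("s1", "Thanks!"))             # answer only
```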
The Supply Chain Problem
Here is where things get genuinely difficult for deployers: you are dependent on your AI providers for much of the information you need to fulfil your own obligations.
To comply with your deployer obligations, you need:
- Instructions of use from the provider
- Information about the system's intended purpose and limitations
- Details about the training data (at least at a high level)
- Technical documentation sufficient to understand how the system works
- Information about the system's accuracy, robustness, and cybersecurity measures
The AI Act places obligations on providers to supply this information. But in practice, many AI vendors — especially larger ones — are still working out how to deliver this documentation in a usable format. Some SaaS providers with embedded AI features may not even have classified their systems under the AI Act yet.
This creates a real problem: your compliance depends partly on your vendors' compliance.
What to do about it:
- Audit your vendor contracts now. Do they include AI Act compliance commitments? If not, negotiate amendments before August 2026.
- Request documentation proactively. Ask vendors for their AI Act compliance documentation, intended use descriptions, and risk classifications. If they cannot provide it, that is a red flag. A simple way to track what each vendor has supplied is sketched after this list.
- Build contract language that requires vendors to notify you of material changes to their AI systems, provide updated documentation, and cooperate in the event of regulatory inquiries.
- Have a contingency plan. If a critical vendor cannot demonstrate compliance, you need to know your alternatives. Switching AI tools under regulatory pressure is far worse than planning ahead.
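One lightweight way to track the vendor side is a per-vendor gap list. The artefact names below are our shorthand for what the sections above describe, not terms from the Act:

```python
REQUIRED_ARTEFACTS = [
    "instructions_of_use",
    "intended_purpose_statement",
    "risk_classification",
    "accuracy_and_robustness_information",
    "change_notification_clause_in_contract",
]

def vendor_gaps(received: dict[str, set[str]]) -> dict[str, list[str]]:
    """For each vendor, list the artefacts still missing."""
    return {vendor: [a for a in REQUIRED_ARTEFACTS if a not in artefacts]
            for vendor, artefacts in received.items()}

print(vendor_gaps({
    "ats-vendor": {"instructions_of_use", "risk_classification"},
    "chatbot-saas": set(),  # has supplied nothing yet: a red flag
}))
```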
The AI Literacy Requirement Hits Deployers Too
Article 4 of the EU AI Act — which has been enforceable since February 2025 — requires deployers to ensure that their staff and other persons dealing with the operation and use of AI systems on their behalf have a sufficient level of AI literacy.
This is not a vague aspiration. It is an enforceable obligation, and it already applies today.
For deployers, AI literacy means your people need to understand what the AI systems they work with actually do, what their limitations are, and how to interpret their outputs appropriately. A recruiter using an AI screening tool needs to understand that the tool's scores are probabilistic, not deterministic. A loan officer using AI credit scoring needs to understand that the system can produce biased outcomes if not properly monitored.
Generic "AI awareness" training does not satisfy this requirement. The training needs to be tailored to the specific AI systems your people use and the context in which they use them.
Practical step: Build an AI literacy program tied to the specific AI tools your organization deploys. Document the training, track completion, and update it when systems change.
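A minimal sketch of such tracking, assuming a simple in-memory log keyed by person and tool (names and dates are illustrative):

```python
from datetime import date

# (person, ai_tool) -> date of last completed, tool-specific training
training_log: dict[tuple[str, str], date] = {
    ("recruiter_1", "cv-screening"): date(2026, 1, 15),
}

def needs_training(staff: list[str], tool: str,
                   valid_since: date) -> list[str]:
    """People using `tool` whose training is missing or predates the last
    material change to the system."""
    return [p for p in staff
            if training_log.get((p, tool), date.min) < valid_since]

# If the vendor shipped a major model update on 2026-03-01, everyone
# needs to be re-trained on the changed behaviour:
print(needs_training(["recruiter_1", "recruiter_2"], "cv-screening",
                     valid_since=date(2026, 3, 1)))
```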
Five Months Left: What to Do Now
The August 2, 2026 deadline is approximately five months away. If your organization has not started addressing deployer obligations, here is a practical prioritization:
Month 1: Inventory
Complete an AI systems inventory. Every AI system your organization uses, including embedded AI in SaaS tools. Classify each by risk level.
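A sketch of what that inventory could look like as a data structure. The risk buckets follow the Act's broad tiers, but the fields and example entries are illustrative:

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "prohibited"      # Article 5 practices
    HIGH = "high"                  # Annex III use cases, regulated products
    TRANSPARENCY = "transparency"  # Article 50 (chatbots, synthetic media)
    MINIMAL = "minimal"

@dataclass
class AISystem:
    name: str
    vendor: str
    embedded_in: str | None   # the SaaS product it ships inside, if any
    use_case: str
    risk: RiskLevel

inventory = [
    AISystem("CV screening", "ExampleVendor", "our ATS",
             "rank incoming applications", RiskLevel.HIGH),
    AISystem("support chatbot", "chatbot-saas", None,
             "answer customer questions", RiskLevel.TRANSPARENCY),
]

high_risk = [s for s in inventory if s.risk is RiskLevel.HIGH]
```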
Month 2: Gap analysis
For each high-risk system, map your current practices against the five deployer obligations above. Identify the gaps.
Month 3: Vendor engagement
Contact every AI vendor supplying high-risk or significant AI systems. Request their compliance documentation. Begin contract negotiations where needed.
Month 4: Implementation
Implement human oversight assignments, monitoring protocols, incident reporting processes, and data quality procedures for high-risk systems. Roll out AI literacy training.
Month 5: Verification
Test your processes. Run a tabletop exercise simulating a regulatory inquiry. Verify your documentation is complete and accessible. Conduct an internal compliance check.
This timeline is tight but achievable for organizations that commit resources to it. If you have already started, you are ahead of the majority — research suggests only 18% of organizations have fully implemented AI governance frameworks despite widespread operational AI use.
The Bottom Line
Using third-party AI does not insulate you from regulatory responsibility. The EU AI Act was deliberately designed to create accountability across the entire AI value chain — from the company that builds the model to the organization that deploys it to make decisions affecting real people.
The good news is that deployer obligations, while real, are manageable. They boil down to: know what AI you use, use it properly, watch it carefully, keep records, and make sure your people understand what they are working with.
The bad news is that "we assumed our vendor handled all of that" will not be an acceptable answer when a market surveillance authority comes asking questions.
Need help mapping your deployer obligations? AktAI helps organizations identify, classify, and manage their AI compliance requirements — including the deployer-specific obligations that most compliance tools overlook. Start with a free assessment.