Every major industry is quietly embedding AI into its transaction layer. Property valuations. Insurance underwriting. Lending decisions. Medical diagnostics. Hiring algorithms. These AI systems make millions of consequential decisions per day, and almost none of them are being audited for regulatory compliance.
The EU AI Act goes into enforcement on August 2, 2026. Colorado's AI Act follows close behind. And the AI systems making your industry's highest-stakes decisions? Most of them have never been scanned for compliance with the regulations that now govern them.
I built AIR Blackbox, an open-source CLI tool that scans Python AI projects for EU AI Act technical requirements. It checks six articles covering risk management, data governance, documentation, record-keeping, human oversight, and robustness. One install, one scan, one report.
But here is what I have been thinking about: the same compliance gap exists in every industry where AI makes decisions that affect people's lives, money, or access to services. Let me walk through where this matters most.
The Pattern
Every industry below follows the same structure:
- A massive, traditionally analog market goes digital
- AI gets embedded into the decision-making layer
- Those AI decisions fall under high-risk classification in the EU AI Act, Colorado SB 205, or both
- Nobody is auditing the AI layer for compliance
- The regulatory deadline is months away
The opportunity is not in any single industry. It is in the compliance infrastructure that sits underneath all of them.
1. Tokenized Real Estate and Real-World Assets ($1.4T projected 2026)
Tokenization platforms use AI for property valuation, investor risk scoring, automated KYC/AML, and fraud detection. Every one of those AI systems makes "consequential decisions" about people's investments.
The compliance gap: platforms spend 30% of their budget on securities and blockchain compliance. They spend 0% on auditing the AI systems that actually make the decisions.
Regulatory exposure: EU AI Act (essential financial services, Annex III), MiCA (crypto-asset service providers), Colorado SB 205 (consequential financial decisions), SEC (securities compliance).
There are 80+ tokenization service providers globally. Zero of them have AI governance tooling.
2. Insurance and InsurTech
AI underwriting models decide who gets coverage and at what price. Claims processing AI determines whether your claim gets paid or denied. Fraud detection AI flags (or misses) suspicious activity.
These are textbook high-risk AI systems under the EU AI Act. Insurance is explicitly called out in Annex III as an essential service where AI decisions require full compliance.
The compliance gap: InsurTech companies have built sophisticated ML models for pricing and claims, but the governance layer (documentation, audit trails, human oversight, bias detection) is often bolted on as an afterthought, if it exists at all.
Market size: The global InsurTech market is projected to exceed $150B by 2030. Every AI-driven underwriting model in Europe needs to satisfy Articles 9-15 by August 2026.
3. Lending and Credit Decisioning (FinTech)
AI credit scoring determines who gets a loan, what interest rate they pay, and whether they get approved or denied. This is one of the most heavily regulated AI use cases on the planet.
The compliance gap: traditional banks have compliance departments. FinTech startups building AI lending products often don't. They have ML engineers building scoring models and growth teams optimizing approval rates, but the Article 12 audit trail? The Article 14 human override? Often missing.
Regulatory exposure: EU AI Act (credit and financial services, Annex III), Colorado SB 205 (consequential credit decisions), Fair Lending laws (US), Consumer Duty (UK).
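To make the gap concrete, here is a minimal sketch of what an Article 12-style decision log and an Article 14-style human-review gate could look like in a lending flow. This is an illustrative example, not AIR Blackbox code or any real lending API: `CreditDecision`, `decide`, and the thresholds are all hypothetical.

```python
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class CreditDecision:
    applicant_id: str
    score: float
    approved: bool
    needs_human_review: bool
    timestamp: str

def decide(applicant_id: str, score: float,
           threshold: float = 0.6, review_band: float = 0.05) -> CreditDecision:
    """Approve above the threshold, but route borderline scores to a human
    reviewer instead of auto-deciding (Article 14-style oversight), and write
    every decision to an audit log (Article 12-style record-keeping)."""
    # Scores close to the cutoff are exactly where model error matters most,
    # so they get escalated rather than auto-approved or auto-denied.
    borderline = abs(score - threshold) < review_band
    decision = CreditDecision(
        applicant_id=applicant_id,
        score=score,
        approved=score >= threshold and not borderline,
        needs_human_review=borderline,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # One structured, timestamped record per decision is the minimum
    # an auditor will ask for.
    logging.getLogger("audit").info(json.dumps(asdict(decision)))
    return decision
```

The point is not the ten lines of code; it is that many FinTech scoring pipelines ship without even this much.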
4. Healthcare AI and MedTech
Diagnostic AI (radiology, pathology, dermatology), treatment recommendation engines, clinical decision support, mental health chatbots, and drug interaction checkers. Every one of these makes decisions that directly affect patient outcomes.
The compliance gap: the FDA has a pathway for AI/ML-based Software as a Medical Device (SaMD). But FDA clearance does not cover EU AI Act compliance. A diagnostic AI can be FDA-cleared AND non-compliant with the EU AI Act simultaneously. Two separate compliance requirements, two separate audit processes, and most companies are only thinking about one of them.
Regulatory exposure: EU AI Act (safety component in medical devices, Annex I + Annex III for health-related AI), MDR (Medical Devices Regulation), FDA (US), state-level healthcare AI transparency laws.
5. HR Tech and Recruitment AI
Resume screening AI, candidate scoring models, automated interview analysis, workforce analytics, and performance management AI. Employment is one of the most explicitly regulated AI categories globally.
This is one of the first verticals where enforcement has already started. New York City's Local Law 144 requires bias audits for automated employment decision tools. The EU AI Act classifies employment AI as high-risk. Colorado SB 205 covers AI making employment decisions.
The compliance gap: many HR Tech vendors market "AI-powered hiring" without the infrastructure to prove their models are free of bias, properly documented, or equipped with human oversight. The marketing moved faster than the governance.
6. Autonomous Vehicles and Mobility
Self-driving vehicle AI, fleet management optimization, route planning, driver safety scoring, and predictive maintenance. The AI is making life-safety decisions at highway speed.
The compliance gap: automotive OEMs have strong safety testing cultures. But the EU AI Act introduces requirements beyond safety testing: documentation standards, audit trails, and transparency obligations that traditional automotive compliance processes were not designed to cover.
Regulatory exposure: EU AI Act (safety component in vehicles, Annex I), existing vehicle type-approval regulations, emerging UNECE regulations for autonomous driving.
7. EdTech and AI Tutoring
Adaptive learning platforms, automated grading, student performance prediction, dropout risk scoring, and AI-generated educational content. Education is explicitly listed in the EU AI Act's high-risk categories.
The compliance gap: EdTech companies have moved aggressively into AI-powered personalization, but few have the documentation, bias detection, or human oversight mechanisms that the EU AI Act requires. A student scoring model that determines course placement is a high-risk AI system, whether the company building it realizes that or not.
8. Legal Tech
Contract analysis AI, legal research assistants, case outcome prediction, document review automation, and AI-generated legal briefs. It is ironic that the tools lawyers use may themselves be non-compliant.
The compliance gap: legal AI tools process privileged information and influence case strategy. The EU AI Act classifies AI used in the administration of justice as high-risk. Most legal AI vendors focus on accuracy and speed, not on the governance infrastructure that regulators will demand.
The Common Thread
Every industry above has the same profile:
- AI is making decisions that materially affect people
- Regulations now classify those AI systems as high-risk
- The compliance infrastructure (documentation, audit trails, human oversight, bias detection, robustness testing) is either incomplete or absent
- The enforcement deadline is months away
- Nobody is scanning the AI layer
What AIR Blackbox Does
AIR Blackbox is an open-source CLI tool that scans Python AI projects for compliance with six EU AI Act technical requirements:
```shell
pip install air-blackbox
air-blackbox scan .
```
It checks:
- Article 9: Risk management system
- Article 10: Data governance
- Article 11: Technical documentation
- Article 12: Record-keeping and audit trails
- Article 14: Human oversight
- Article 15: Accuracy, robustness, cybersecurity
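Conceptually, a static compliance scan boils down to checking whether a project contains the artifacts each control requires. The sketch below is a deliberately naive illustration of that idea, not AIR Blackbox's actual rule engine: the control names, file patterns, and `find_gaps` function are all hypothetical.

```python
from pathlib import Path

# Hypothetical heuristic, NOT the tool's real detection logic: a control is
# flagged as missing when no file in the project matches its patterns.
CONTROL_PATTERNS = {
    "Article 11 (documentation)": ["README*", "docs/**/*.md"],
    "Article 12 (record-keeping)": ["**/audit*.py", "**/*logging*"],
}

def find_gaps(project_root: str) -> list[str]:
    """Return the controls for which no matching file was found."""
    root = Path(project_root)
    gaps = []
    for control, patterns in CONTROL_PATTERNS.items():
        if not any(any(root.glob(p)) for p in patterns):
            gaps.append(control)
    return gaps
```

A real scanner needs far more than filename matching, but the shape of the output is the same: a list of controls with no evidence behind them.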
With Phase 2 (shipping now), it maps results across the EU AI Act, ISO 42001, NIST AI RMF, and Colorado SB 205 simultaneously. One scan, four frameworks.
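A cross-framework mapping like that can be thought of as a crosswalk table keyed by EU AI Act article. The sketch below is illustrative only: it is not AIR Blackbox's actual crosswalk, and the ISO 42001, NIST AI RMF, and Colorado SB 205 labels are paraphrased descriptions, not official control identifiers.

```python
# Hypothetical crosswalk: each EU AI Act finding fans out to the related
# areas of the other three frameworks, so one scan reports against all four.
CROSSWALK = {
    "EU AI Act Art. 9": {
        "ISO 42001": "risk management clauses",
        "NIST AI RMF": "MAP / MANAGE functions",
        "Colorado SB 205": "risk management policy duty",
    },
    "EU AI Act Art. 12": {
        "ISO 42001": "operational logging controls",
        "NIST AI RMF": "MEASURE function",
        "Colorado SB 205": "impact assessment records",
    },
}

def frameworks_for(article: str) -> list[str]:
    """List the frameworks a single article-level finding maps onto."""
    return sorted(CROSSWALK.get(article, {}))
```

One finding in the scan, four rows in the report.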
The scanner does not care what industry you are in. It cares whether your AI system has the technical controls that regulators require. A property valuation model and a credit scoring model need the same six compliance checks. The regulations are the same. The scan is the same.
The Bigger Vision
I think of AIR Blackbox as the compliance verification layer for AI transactions. The same way a financial audit verifies that accounting standards are met, an AIR Blackbox scan verifies that AI governance standards are met.
Every tokenized real estate transaction should carry an AIR Blackbox evidence bundle proving the valuation AI was audited. Every AI lending decision should reference a signed compliance report. Every autonomous vehicle software update should pass a governance scan before deployment.
The industries are different. The compliance requirement is the same. And right now, with 4 months until the EU AI Act's August 2026 deadline, most of these industries are flying blind.
Try It
AIR Blackbox is open-source and free:
```shell
pip install air-blackbox
air-blackbox scan your-project/
```
GitHub: github.com/air-blackbox
Website: airblackbox.ai
If you are building AI systems in any of the industries above, scan your project. The results might surprise you.
Disclaimer: AIR Blackbox scans for technical requirements. This is not a certified compliance test. It is a starting point to identify potential gaps. Consult a qualified attorney for legal compliance guidance.
I'm Jason Shotwell, the builder behind AIR Blackbox. I write about AI governance, open-source compliance tooling, and the race to August 2026. Follow me on Dev.to for more.