
Mohammed Ali Chherawalla


EU AI Act-Ready Local AI for Credit Scoring Mobile Apps in 2026 (Cost, Timeline & How It Works)

Short answer: Credit scoring AI systems can be structured to avoid the EU AI Act's high-risk classification by limiting decision scope and maintaining a human-in-the-loop architecture. Wednesday scopes this in a one-week discovery sprint before any code is written.

Your credit scoring model is on the EU AI Act's high-risk list. Your legal team needs a conformity assessment before the next product release. Your engineering team built the model without one.

The conformity assessment is not optional and it's not a rubber stamp. It requires technical documentation, explainability evidence, human review pathways, and audit logging that most mobile credit apps were not built to produce.

What decisions determine whether this project ships in 6 weeks or 18 months?

Four decisions determine whether the conformity assessment your legal team needs takes 6 weeks or forces a rebuild.

Explainability requirement. The Act requires that individuals affected by high-risk AI decisions receive a meaningful explanation of the factors that drove the outcome. "The model assessed your application and it was declined" is not compliant. The app has to generate a human-readable explanation that references specific decision inputs - income level, debt ratio, payment history - in language a borrower can understand and act on. If your current model is a black-box ensemble, the explainability work changes the model architecture, not just the UI.
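The reason-code pattern that satisfies this requirement is easiest to see with an interpretable model. A minimal sketch, assuming a linear scoring model; the feature names, weights, and wording below are illustrative, not a real scoring model:

```python
# Sketch: turning an interpretable model's inputs into a human-readable
# adverse-action explanation. Feature names, weights, and wording are
# illustrative assumptions, not a real scoring model.

WEIGHTS = {  # signed contribution of each normalized input to the score
    "income_level": 0.9,
    "debt_to_income_ratio": -1.4,
    "missed_payments_12m": -2.1,
}

REASON_TEXT = {
    "income_level": "Your reported income was below the level required for this product.",
    "debt_to_income_ratio": "Your debt is high relative to your income.",
    "missed_payments_12m": "Your recent payment history includes missed payments.",
}

def explain_decline(applicant: dict, top_n: int = 2) -> list[str]:
    """Return the top factors that pushed the score down, in plain language."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    # The most negative contributions are the main reasons for the decline.
    negative = sorted(
        (n for n in contributions if contributions[n] < 0),
        key=lambda n: contributions[n],
    )
    return [REASON_TEXT[n] for n in negative[:top_n]]

applicant = {"income_level": 0.4, "debt_to_income_ratio": 0.8, "missed_payments_12m": 2}
print(explain_decline(applicant))
```

With a black-box ensemble, the per-feature contributions above would have to come from an attribution method instead of raw coefficients, which is exactly why the explainability work can change the model architecture.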

Training data documentation. High-risk AI under the Act requires documentation of training datasets, including their sources, known limitations, and steps taken to correct for bias. If your model was trained on third-party bureau data, you need the bureau's dataset documentation as part of your compliance file. If the bureau can't provide it, you need an alternative training data source or an independent audit of the data you used.
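A compliance file is easier to assemble when dataset documentation is machine-readable from the start. A minimal sketch of a datasheet-style record covering the fields named above; the schema and field names are assumptions, not a prescribed format:

```python
# Sketch: a machine-readable "datasheet" record for a training dataset,
# capturing sources, known limitations, and bias-mitigation steps.
# Field names and values are illustrative assumptions.

from dataclasses import dataclass, asdict

@dataclass
class DatasetDatasheet:
    name: str
    source: str                      # e.g. bureau name or internal pipeline
    collection_period: str
    known_limitations: list[str]
    bias_mitigations: list[str]      # steps taken to correct for bias
    documentation_available: bool    # can the provider supply its own docs?

sheet = DatasetDatasheet(
    name="bureau_credit_history_v3",
    source="third-party credit bureau",
    collection_period="2019-2024",
    known_limitations=["thin-file applicants underrepresented"],
    bias_mitigations=["reweighting by age band"],
    documentation_available=False,
)

# If the provider cannot document the data, flag it for an independent audit.
needs_audit = not sheet.documentation_available
print(asdict(sheet)["name"], "needs independent audit:", needs_audit)
```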

Human review pathway. High-risk credit AI requires that affected applicants have access to a human review of contested decisions. The app needs a mechanism for the applicant to trigger that review, and your operations team needs a defined workflow for handling it within a timeframe your legal team approves. Both the app mechanism and the ops workflow need to exist before the conformity assessment, not after.
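The minimal state that pathway has to track can be sketched as follows; the field names and the 14-day window are illustrative assumptions, and the real SLA is whatever your legal team approves:

```python
# Sketch: the minimal state a contested-decision review needs to track,
# with an SLA deadline. The 14-day window is an illustrative assumption.

from dataclasses import dataclass
from datetime import datetime, timedelta

REVIEW_SLA_DAYS = 14  # assumption; legal approves the real window

@dataclass
class ReviewRequest:
    application_id: str
    requested_at: datetime
    status: str = "pending"          # pending -> in_review -> resolved

    @property
    def due_by(self) -> datetime:
        return self.requested_at + timedelta(days=REVIEW_SLA_DAYS)

    def is_overdue(self, now: datetime) -> bool:
        return self.status != "resolved" and now > self.due_by

req = ReviewRequest("app-1042", requested_at=datetime(2026, 1, 5))
print(req.due_by.date())                       # SLA deadline
print(req.is_overdue(datetime(2026, 1, 25)))   # past the window
```

The app-side trigger creates the `ReviewRequest`; the ops-side workflow is what moves it to `resolved` before `due_by`. Both halves have to exist before the conformity assessment.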

On-device vs server. A credit scoring model running on the applicant's device processes their financial data locally, without transmitting it to your servers. The data minimization argument for on-device is strong. The constraint is model size - credit scoring models that meet the explainability requirement often require more parameters than simpler classification tasks. The tradeoff between on-device data minimization and server-side model capability has to be resolved with your compliance team, not just your engineering team.
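The model-size constraint becomes concrete with a back-of-the-envelope memory estimate. A rough sketch, where the bit widths are typical for int4/int8 quantization and the runtime overhead factor is an assumption:

```python
# Sketch: rough on-device memory footprint for a quantized model, to make
# the on-device vs server tradeoff concrete. The overhead factor is an
# illustrative assumption for runtime buffers and activations.

def model_memory_gb(params_billions: float, bits_per_weight: int,
                    overhead: float = 1.2) -> float:
    """Approximate RAM needed to hold the weights, with runtime overhead."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return round(bytes_total * overhead / 1e9, 2)

# A 3B-parameter model at 4-bit vs 8-bit quantization:
print(model_memory_gb(3, 4))   # ~1.8 GB
print(model_memory_gb(3, 8))   # ~3.6 GB
```

Numbers like these are what the compliance and engineering teams need to weigh together: a model small enough to fit on-device may not be the same model that produces the explanations the Act requires.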

Most teams spend 4-6 months discovering these decisions by building the wrong version first. A team that has shipped this before compresses that to 1 week.

On-Device AI vs. Cloud AI: What's the Real Difference?

| Factor | On-Device AI | Cloud AI |
| --- | --- | --- |
| Data transmission | None; data never leaves the device | All inputs sent to external server |
| Compliance | No BAA/DPA required for the inference step | Requires BAA (HIPAA) or DPA (GDPR) |
| Latency | Under 100 ms on Neural Engine | 300 ms–2 s (network + server queue) |
| Cost at scale | Fixed; one-time integration | Variable; $0.001–$0.01 per query |
| Offline capability | Full functionality, no connectivity needed | Requires active internet connection |
| Model size | 1B–7B parameters (quantized) | Unlimited (GPT-4, Claude 3, etc.) |
| Data sovereignty | Device-local, no cross-border transfer | Depends on server region and DPA chain |

The right choice depends on your compliance constraints, query volume, and task complexity. Wednesday scopes this in the first week — before any code is written.
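The cost row in the table above lends itself to a quick break-even check. A sketch using the per-query range from the table and a mid-range integration cost as an assumed input:

```python
# Sketch: break-even query volume between a fixed on-device integration
# cost and per-query cloud pricing. The $25K figure is an assumed
# mid-range integration cost, not a quote.

def breakeven_queries(integration_cost: float, cost_per_query: float) -> int:
    """Queries at which cumulative cloud spend matches the fixed cost."""
    return round(integration_cost / cost_per_query)

# $25K integration vs $0.001 and $0.01 per query:
print(breakeven_queries(25_000, 0.001))  # 25,000,000 queries
print(breakeven_queries(25_000, 0.01))   # 2,500,000 queries
```

Past a few million queries, the fixed-cost side of the table tends to win; below that, per-query pricing can be cheaper, which is why query volume is one of the scoping inputs.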

Why is Wednesday the right team for on-device AI?

We built Off Grid because we hit every one of these problems in production. Off Grid is the fastest-growing on-device AI application in the world, with 50,000+ users running it today.

It's open source, with 1,650+ stars on GitHub and contributors from across the world. It has been cited in peer-reviewed clinical research on offline mobile edge AI.

Every decision named above - model choice, platform, server boundary, compliance posture - we have made before, at scale, for real deployments.

How long does the integration take, and what does it cost?

The engagement is four sprints. Each sprint is fixed-price. Each sprint has a named deliverable your team can put on a roadmap.

Discovery (Week 1, $5K): We resolve the four decisions - model, platform, server boundary, compliance posture. Deliverable: a 1-page architecture doc your CTO can take to the board and your Privacy Officer can take to Legal.

Integration (Weeks 2-3, $5K-$10K): We ship the on-device model into your app behind a feature flag. Deliverable: a working build your QA team can test against real workflows.

Optimization (Weeks 4-5, $5K-$10K): We hit the performance and compliance targets from the discovery doc. Deliverable: benchmarks signed off by your team.

Production hardening (Week 6, $5K): Edge cases, OS version coverage, app store and compliance review readiness. Deliverable: shippable build.

4-6 weeks total. $20K-$30K total.

Money back if we don't hit the benchmarks. We have never had to issue a refund.

"Wednesday Solutions' team is very methodical in their approach. They have a unique style of working. They score very well in terms of the scalability, stability, and security of what they build." - Sachin Gaikwad, Founder & CEO, Buildd

Is on-device AI right for your organization?

Worth 30 minutes? We'll walk you through what your version of the four decisions looks like, what a realistic scope and timeline would be for your app, and what your compliance posture and on-device target mean in practice.

You'll leave with enough to run a planning meeting next week. No pitch deck.

If we're not the right team, we'll tell you who is.

Book a call with the Wednesday team

Frequently Asked Questions

Q: Does the EU AI Act classify credit scoring AI as high-risk?

It depends on the decision scope. EU AI Act Annex III lists specific use cases that qualify as high-risk. Systems making or materially influencing consequential individual decisions fall under high-risk requirements. Systems structured as decision-support tools with mandatory human review can often avoid the classification.

Q: What technical requirements does the EU AI Act impose on on-device AI?

High-risk systems require: risk management, data governance, technical documentation, operational logging, user transparency, human oversight, and accuracy standards. On-device AI satisfies data sovereignty requirements more cleanly than cloud, but the other requirements apply regardless of deployment mode.
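Of those requirements, operational logging is the one that most directly touches app code. A minimal sketch of a tamper-evident decision log entry; the field names and hash-chaining scheme are illustrative assumptions, not a prescribed format:

```python
# Sketch: an append-only, tamper-evident decision log record, where each
# entry is hash-chained to the previous one. Field names are illustrative
# assumptions, not a prescribed format.

import hashlib
import json
from datetime import datetime, timezone

def log_decision(application_id: str, model_version: str,
                 inputs: dict, outcome: str, prev_hash: str) -> dict:
    """Build a log record chained to the previous entry's hash."""
    record = {
        "application_id": application_id,
        "model_version": model_version,
        "inputs": inputs,
        "outcome": outcome,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    # Hash the canonical JSON form so any later edit is detectable.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

entry = log_decision("app-1042", "scorer-v3.1",
                     {"debt_to_income_ratio": 0.8}, "declined", "0" * 64)
print(entry["hash"][:8])
```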

Q: How long does it take to ship an EU AI Act-compliant credit scoring AI app?

4–6 weeks for technical integration. Compliance documentation — risk management, technical docs, conformity assessment — adds 2–4 weeks in parallel if you don't have a compliance team already familiar with the Act.

Q: What does EU AI Act-compliant on-device AI cost?

$20K–$30K for technical integration across four fixed-price sprints. Compliance documentation scope varies by system classification.

Q: Can an on-device AI system avoid EU AI Act registration?

General-purpose AI models deployed as product components are subject to transparency obligations but may not require conformity assessment if the overall system is not high-risk. Classification depends on use case, decision scope, and affected population — resolved in the discovery sprint.
