Short answer: Indian healthcare companies can deploy AI in mobile apps under DPDP by running inference on-device. No personal data leaves the device, which addresses data localisation and cross-border transfer structurally. Explicit consent for processing health data is still required, though, and the consent architecture is where most of these projects stall.
Your legal team has classified health data as sensitive personal data under DPDP, requiring explicit consent for every AI processing activity. Your app's current consent flow wasn't designed for that granularity.
The gap is architectural, not just legal. A consent flow built for general app permissions can't be retrofitted to capture purpose-specific consent for each AI processing activity. The rebuild is smaller than it looks if it's designed correctly from the start of the AI feature work rather than added after the compliance review.
What decisions determine whether this project ships in 6 weeks or 18 months?
Four decisions determine whether your health AI features ship with a consent architecture your legal team approves, or get held back for a 3-month consent framework rebuild.
Sensitive data processing basis. Health data under DPDP requires explicit, specific consent before any AI processing can occur. The consent has to name the specific processing activity - "your symptom description will be processed by an AI model on your device to suggest possible conditions" - not just reference AI generally. Your legal team needs to draft the consent language for each AI feature before that feature ships, not after the first user complaint.
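As a rough illustration, purpose-specific consent can be modelled as a per-purpose ledger the app checks before every model call. This is a minimal TypeScript sketch; the type names, purposes, and record fields are hypothetical, not a DPDP-mandated schema.

```typescript
// Hypothetical sketch of purpose-specific consent capture.
type AiPurpose = "symptom-analysis" | "medication-check";

interface ConsentRecord {
  purpose: AiPurpose;
  noticeText: string; // the exact wording the patient saw and agreed to
  grantedAt: Date;
}

class ConsentLedger {
  private records: ConsentRecord[] = [];

  grant(purpose: AiPurpose, noticeText: string): void {
    this.records.push({ purpose, noticeText, grantedAt: new Date() });
  }

  // AI processing for a purpose is allowed only if that exact purpose
  // has an explicit grant on record; a general "AI" toggle is not enough.
  isAllowed(purpose: AiPurpose): boolean {
    return this.records.some((r) => r.purpose === purpose);
  }
}
```

Storing the notice text alongside each grant matters: it lets the app show legal exactly what wording a given patient consented to, per feature.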
Offline vs server processing. Indian healthcare apps frequently serve patients in tier-2 and tier-3 cities where connectivity is intermittent. An offline-first AI model that processes health data on the patient's device solves both problems at once: the DPDP cross-border transfer question and the connectivity problem. A server-based model solves neither. For a healthcare app serving patients outside metro areas, offline-first is the architecture that matches the actual usage environment.
Data principal rights. Patients have the right under DPDP to access what your app has inferred about them and to request correction or erasure of their data. If your AI model runs locally and builds a health profile over time - symptom patterns, condition likelihood scores, medication interactions - your app needs a local data management interface. Patients have to be able to see the profile, understand it, and delete it with a single action.
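One way to think about the local data management interface: the inferred profile lives in a single device-local store with a view method (access) and a one-action erase, and correction flows can build on the same store. A minimal sketch with hypothetical field names:

```typescript
// Hypothetical sketch of a device-local AI health profile with
// patient-facing access (view) and one-action erasure.
interface HealthProfile {
  symptomPatterns: string[];
  conditionScores: Record<string, number>; // e.g. { migraine: 0.7 }
}

class LocalProfileStore {
  private profile: HealthProfile | null = null;

  // Called by the on-device model as it builds the profile over time.
  record(symptom: string, scores: Record<string, number>): void {
    if (this.profile === null) {
      this.profile = { symptomPatterns: [], conditionScores: {} };
    }
    this.profile.symptomPatterns.push(symptom);
    Object.assign(this.profile.conditionScores, scores);
  }

  // Access: everything the model has inferred, shown to the patient as-is.
  view(): HealthProfile | null {
    return this.profile;
  }

  // Erasure: a single action deletes the entire inferred profile.
  eraseAll(): void {
    this.profile = null;
  }
}
```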
Consent revocation. DPDP requires that consent be revocable at any time. If a patient revokes consent for AI processing of their health data, the app has to stop all AI processing immediately and delete any locally stored inferences that were generated with that consent. The revocation mechanism is not a setting in a privacy menu - it's a data deletion function that has to be built into the app's data layer before the AI feature ships.
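A sketch of what "revocation as a data deletion function" could mean inside the data layer: revoking a purpose both blocks further inference and deletes the inferences stored under that consent, in one operation. All names here are illustrative, and the model call is a stand-in.

```typescript
// Hypothetical sketch: revocation wired into the data layer, not a
// settings toggle. Revoke = stop processing + erase stored inferences.
class AiDataLayer {
  private consented = new Set<string>();            // purposes with active consent
  private inferences = new Map<string, string[]>(); // purpose -> stored outputs

  grant(purpose: string): void {
    this.consented.add(purpose);
  }

  infer(purpose: string, input: string): string | null {
    if (!this.consented.has(purpose)) return null;  // processing blocked
    const result = `inference(${input})`;           // stand-in for the model call
    const list = this.inferences.get(purpose) ?? [];
    list.push(result);
    this.inferences.set(purpose, list);
    return result;
  }

  // Both halves must happen together from the app's point of view.
  revoke(purpose: string): void {
    this.consented.delete(purpose);   // stop all further AI processing
    this.inferences.delete(purpose);  // delete inferences generated under consent
  }

  storedCount(purpose: string): number {
    return this.inferences.get(purpose)?.length ?? 0;
  }
}
```

Putting the delete inside `revoke` rather than in UI code is the point: no screen in the app can revoke consent without also triggering the erasure.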
Most teams spend 4-6 months discovering these decisions by building the wrong version first. A team that has shipped this before compresses that to 1 week.
On-Device AI vs. Cloud AI: What's the Real Difference?
| Factor | On-Device AI | Cloud AI |
|---|---|---|
| Data transmission | None — data never leaves the device | All inputs sent to external server |
| Compliance | No BAA/DPA required for inference step | Requires BAA (HIPAA) or DPA (GDPR) |
| Latency | Typically under 100ms on a device NPU (e.g. Apple Neural Engine) | 300ms–2s (network + server queue) |
| Cost at scale | Fixed — one-time integration | Variable — $0.001–$0.01 per query |
| Offline capability | Full functionality, no connectivity needed | Requires active internet connection |
| Model size | 1B–7B parameters (quantized) | Unlimited (GPT-4, Claude 3, etc.) |
| Data sovereignty | Device-local, no cross-border transfer | Depends on server region and DPA chain |
The right choice depends on your compliance constraints, query volume, and task complexity. Wednesday scopes this in the first week — before any code is written.
Why is Wednesday the right team for on-device AI?
We built Off Grid because we hit every one of these problems in production. Off Grid is the fastest-growing on-device AI application in the world, with 50,000+ users running it today.
It's open source, with 1,650+ stars on GitHub and contributors from across the world. It has been cited in peer-reviewed clinical research on offline mobile edge AI.
Every decision named above - model choice, platform, server boundary, compliance posture - we have made before, at scale, for real deployments.
How long does the integration take, and what does it cost?
The engagement is four sprints. Each sprint is fixed-price. Each sprint has a named deliverable your team can put on a roadmap.
Discovery (Week 1, $5K): We resolve the four decisions - model, platform, server boundary, compliance posture. Deliverable: a 1-page architecture doc your CTO can take to the board and your Privacy Officer can take to Legal.
Integration (Weeks 2-3, $5K-$10K): We ship the on-device model into your app behind a feature flag. Deliverable: a working build your QA team can test against real workflows.
Optimization (Weeks 4-5, $5K-$10K): We hit the performance and compliance targets from the discovery doc. Deliverable: benchmarks signed off by your team.
Production hardening (Week 6, $5K): Edge cases, OS version coverage, app store and compliance review readiness. Deliverable: shippable build.
4-6 weeks total. $20K-$30K total.
Money back if we don't hit the benchmarks. We have not had to refund.
"Retention improved from 42% to 76% at 3 months. AI recommendations rated 'highly relevant' by 87% of users." - Jackson Reed, Owner, Vita Sync Health
Is on-device AI right for your organization?
Worth 30 minutes? We'll walk you through what your version of the four decisions looks like, what a realistic scope and timeline would be for your app, and what your compliance posture and on-device target mean in practice.
You'll leave with enough to run a planning meeting next week. No pitch deck.
If we're not the right team, we'll tell you who is.
Book a call with the Wednesday team
Frequently Asked Questions
Q: How does on-device AI help healthcare companies comply with India's DPDP Act?
DPDP requires consent for processing personal data and restricts cross-border transfers. An on-device model processing data locally satisfies the transfer restriction structurally: data that never leaves the device is never transferred across a border, so the transfer provisions don't come into play.
Q: Does DPDP require a Data Fiduciary to register AI processing?
Significant Data Fiduciaries must conduct DPIAs for high-risk processing. AI systems processing sensitive personal data — financial, health, or identity data — likely require a DPIA. On-device AI reduces DPIA scope because local processing eliminates third-party processor involvement.
Q: How long does DPDP-compliant on-device AI take for healthcare?
4–6 weeks. Consent and disclosure documentation required under DPDP runs in parallel with the build. Wednesday has shipped on-device AI for Indian fintech and healthcare and is familiar with DPDP requirements.
Q: What does DPDP-compliant on-device AI cost?
$20K–$30K across four fixed-price sprints, money back if benchmarks aren't met.
Q: Can on-device AI process Aadhaar or financial data under DPDP?
Processing sensitive personal data requires explicit consent and documented purpose. On-device processing reduces risk surface — data is processed locally, and only the inference result (not raw data) is used by the app.