Lovable Just Proved Everything We Found in Our 600K Line Audit
Last week, we published an article about a production SaaS built entirely on Lovable - 600,000 lines of AI-generated code with security gaps the founder never knew existed. The article hit 250,000 views on Reddit and 19,000 impressions on LinkedIn.
This week, Lovable proved every finding right.
What happened
On April 20, security researcher @weezerOSINT created a free Lovable account, made five API calls, and accessed another user's source code, database credentials, AI chat histories, and customer data. No hacking required. No exploits. Just a free account and basic API requests.
The vulnerability - a Broken Object Level Authorization (BOLA) flaw, ranked number one on OWASP's API Security Top 10 - affected every project created before November 2025. The researcher extracted real names, real companies, real LinkedIn profiles from a Danish nonprofit's project. Database credentials. Stripe keys. Full AI conversations where founders had pasted error logs, discussed business logic, and shared passwords with the AI assistant.
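The flaw class is simple to state: the API authenticated the caller but never checked whether the caller owned the object being requested. Here is a minimal sketch of what a BOLA bug and its fix look like - every name below is illustrative, not Lovable's actual code:

```python
# Minimal sketch of a BOLA (Broken Object Level Authorization) flaw.
# All names are illustrative -- this is not Lovable's actual code.

PROJECTS = {
    "proj_1": {"owner": "alice", "secrets": "STRIPE_KEY=sk_live_..."},
    "proj_2": {"owner": "bob",   "secrets": "DB_URL=postgres://..."},
}

def get_project_vulnerable(requesting_user: str, project_id: str) -> dict:
    """BOLA: the caller is authenticated, but ownership is never checked."""
    return PROJECTS[project_id]  # any logged-in user can read any project

def get_project_fixed(requesting_user: str, project_id: str) -> dict:
    """Fixed: authorization is enforced per object, not just per endpoint."""
    project = PROJECTS[project_id]
    if project["owner"] != requesting_user:
        raise PermissionError("403 Forbidden")  # object-level check
    return project
```

The vulnerable version is what a "200 OK" on someone else's project implies; the fixed version is what produces the "403 Forbidden" the researcher saw on newly created projects.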
Companies including Uber, Zendesk, and Deutsche Telekom use Lovable, according to its latest funding announcement. If employees at these companies used Lovable for internal tools or prototypes before November 2025, sensitive corporate data may have been exposed.
The researcher reported this to Lovable through HackerOne on March 3. Lovable closed the report without escalation. The vulnerability stayed open for 48 days.
When the researcher went public, Lovable's response cycled through three stages in a single day: first they denied a breach and called it "intentional behavior," then blamed unclear documentation, then shifted responsibility to HackerOne for not escalating the report.
The platform made it worse
Early free-tier users on Lovable couldn't create private projects at all; privacy required upgrading to a paid plan. Every free-tier project was public by default - including code, AI chat history, and any credentials discussed in conversation.
Lovable switched to private by default in December 2025. But in February 2026, while updating backend permissions, they accidentally re-enabled access to chats on public projects. The March 3 bug report flagged exactly this. It was closed without action.
When Lovable finally patched the issue, they only fixed new projects. Older projects - potentially thousands of them, many still active - remained exposed. A newly created project correctly returned "403 Forbidden" when queried. An older project returned "200 OK" with full access to everything.
Why this matters if you built on Lovable
This is not a theoretical risk. If you built an app on Lovable before November 2025, assume the following has been publicly accessible:
- Your full source code
- Every message you sent to the AI, including any credentials you pasted
- Your database schema and connection strings
- Your Supabase, Stripe, and third-party API keys embedded in the code
- Your users' data if the AI helped you build database tables with personal information
As the researcher put it: "People tell the AI what they want to build. They paste error logs. They discuss their business logic. They share credentials. Lovable stores all of it and exposes all of it."
What we already found - and what just got confirmed
In our audit of a 600,000-line Lovable codebase, we flagged several issues that this breach validates at a platform level:
We found: Every AI function routed through Lovable's proprietary gateway.
The breach confirms: Your entire project - code, conversations, credentials - lives on infrastructure you don't control. When that infrastructure has a security flaw, you have no visibility and no ability to respond.
We found: Passwords stored in plain text with TODO comments to fix later.
The breach confirms: Founders share credentials with the AI during development. Those conversations were stored and exposed. Every password, API key, and database connection string discussed in chat was readable by any free account.
We found: RLS policies wide open for months before a fix migration.
The breach confirms: Lovable's own authorization was broken for 48 days after being reported. The platform responsible for generating your security policies couldn't secure its own API endpoints.
We found: No tests, no verification, no safety net.
The breach confirms: There was no automated system catching unauthorized access to user data. The vulnerability was found by a single external researcher, not by Lovable's own security testing.
The bigger picture
The data across the entire AI coding category is consistent. Between 40% and 62% of AI-generated code contains security vulnerabilities, depending on the study. AI-written code produces flaws at 2.74 times the rate of human-written code. A Q1 2026 assessment of over 200 vibe-coded applications found that 91.5% contained at least one vulnerability traceable to AI hallucination.
Lovable is not uniquely insecure. It is representatively insecure. The same patterns exist across every AI coding platform. The same incentive structure applies: growth is rewarded, security is an afterthought. Lovable hit $4 million ARR in four weeks, raised $330 million at a $6.6 billion valuation, and grew enterprise adoption by 340% year over year. That kind of growth doesn't leave room for the slow, careful security work that production software requires.
87% of Fortune 500 companies have adopted at least one vibe coding platform. Financial services and healthcare show the lowest adoption at 34% and 28% respectively - which suggests the market itself recognises the risk, even if individual founders don't.
What to do right now
If you built on Lovable before November 2025:
- Assume your source code and AI chat history have been read by unknown parties. Act accordingly.
- Rotate every credential that was ever mentioned in a Lovable chat - Supabase keys, Stripe keys, database passwords, API tokens. All of them. Today.
- Check whether your Supabase database contains user data that was exposed through the source code. If your connection strings were leaked, your database may have been accessed directly.
- Set all projects to private immediately.
- If you have paying users, consider whether you need to notify them.
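Before rotating, it helps to know what actually leaked. A rough first pass is to grep your exported code and chat transcripts for common credential shapes. The patterns below are illustrative, not exhaustive - dedicated secret scanners such as gitleaks or trufflehog are far more thorough:

```python
import re

# Rough, illustrative patterns -- extend for your own stack.
# Stripe live secret keys start with "sk_live_"; Supabase keys are JWTs
# (three base64url segments starting "eyJ"); Postgres URLs embed passwords.
PATTERNS = {
    "stripe_secret_key": re.compile(r"sk_live_[0-9a-zA-Z]{10,}"),
    "postgres_url": re.compile(r"postgres(?:ql)?://\S+:\S+@\S+"),
    "jwt_like_token": re.compile(
        r"eyJ[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}"
    ),
}

def find_leaked_credentials(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, match) pairs found in a code or chat export."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits
```

Anything this turns up in a pre-November-2025 project should be treated as compromised and rotated, regardless of whether you can prove it was accessed.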
If you're still building on any AI platform:
- Never paste credentials into AI chat. Use environment variables and reference them by name, not value.
- Assume your AI conversations are stored and potentially accessible. Treat every message as if it could be read by a stranger.
- Get your code into your own GitHub repository. Don't rely solely on the platform's storage.
- Get an independent security review before going to production. The platform that wrote the code cannot objectively audit it - and as this breach shows, the platform itself may not be secure.
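The "reference credentials by name, not value" rule is worth making concrete. In this sketch (the variable name `STRIPE_SECRET_KEY` is just a convention, not anything platform-specific), the code and any chat transcript about it only ever contain the name - the value lives in the deployment environment:

```python
import os

# Anti-pattern: a literal secret in code or pasted into an AI chat.
# stripe_key = "sk_live_..."   # anyone who sees the code/chat sees the key

# Better: read the secret by name at runtime. Code and chat transcripts
# then only ever contain the variable name, never the value.
def get_stripe_key() -> str:
    key = os.environ.get("STRIPE_SECRET_KEY")
    if key is None:
        raise RuntimeError("STRIPE_SECRET_KEY is not set")
    return key
```

When you ask an AI assistant to debug payment code, you can now share this function verbatim without sharing the key.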
If you have paying users on an AI-built app:
You have a responsibility to your users. Their data may have been exposed not through your code, but through the platform you built on. Consider what steps you need to take to secure their information going forward.
The pattern continues
We said it in our original audit and it bears repeating: AI is consistently good at the visible - UI, features, user flows - and consistently weak at the invisible - security, testing, architecture, vendor dependencies.
This week, the invisible became visible. The question for every founder building on these platforms is not whether the same issues exist in your codebase. They almost certainly do. The question is whether you discover them through a planned review or through an incident.
One of those options is significantly cheaper than the other.
We audit AI-generated codebases and migrate apps off Lovable, Replit, and Bolt to production infrastructure. If you want to know what's in your code before someone else finds out, book a free discovery call at inigra.eu.