Yesterday I wrote about the Vercel incident and walked through why you need to rotate your "non-sensitive" environment variables today. I thought that would be the week's security post.
Then I woke up to @weezerOSINT's disclosure about Lovable, and now I am starting to wonder if someone out there is just running an end-to-end test on the mythos of the modern AI-dev stack.
Two days. Two incidents. Totally different root causes. Same uncomfortable conclusion.
What dropped
The short version: security researcher @weezerOSINT made a free Lovable account and was able to read other users' source code, database credentials, AI chat histories, and customer data. Any free account. Every project created before November 2025.
The screenshot making the rounds shows a response from api.lovable.dev/GetProjectMessagesOutputBody.json with another user's prompts, AI reasoning traces, task lists, and project IDs sitting there in plain JSON. The bug is Broken Object Level Authorization (BOLA) on Lovable's own platform API, not the more familiar "the generated app shipped without Supabase Row Level Security (RLS)" story we got in February.
The part that actually made me set my coffee down: the report was filed through Lovable's bug bounty program 48 days ago, marked as a duplicate of an earlier informative report, and left open. At the time of the disclosure it reportedly still worked.
Forty. Eight. Days.
Why this one hits different
The February Lovable wave was a story about generated apps. The takeaway was "audit the output" — a thing developers already know how to do, at least in principle. You could imagine a fix: better defaults, RLS on by default in the scaffolds, a linter that yells at you when a table is public.
This one is a story about the platform itself. The thing you trusted to hold your code, your keys, your customer data — the control plane, not the output — had a missing auth check on a production API endpoint for at least seven weeks after someone told them about it.
Stack this next to the Vercel situation and a pattern starts to emerge. In the Vercel case, the breach came through a third-party AI tool that had been granted a Workspace OAuth scope that went further than anyone audited. In the Lovable case, it is the platform's own API failing to check "is this caller allowed to see this object." Different failure modes, same underlying theme: the trust boundaries in the AI-assisted-dev stack are drawn with marker, and the marker is washing off in the rain.
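BOLA is easy to state and easy to ship: the endpoint checks that the caller is logged in, but never checks that the caller owns the object they asked for. A minimal sketch of the failure mode and the fix, with hypothetical names and data (this is not Lovable's actual code):

```python
# Minimal sketch of Broken Object Level Authorization (BOLA).
# Hypothetical data model and handlers -- not Lovable's actual code.

PROJECTS = {
    "proj-1": {"owner": "alice", "messages": ["here is my key: sk_live_..."]},
    "proj-2": {"owner": "bob", "messages": ["fix the schema bug"]},
}

def get_project_messages_vulnerable(caller: str, project_id: str) -> list:
    # The caller is authenticated somewhere upstream, but nothing here asks
    # "does this caller own this project?" -- any account can read any project.
    project = PROJECTS[project_id]
    return project["messages"]

def get_project_messages_fixed(caller: str, project_id: str) -> list:
    project = PROJECTS[project_id]
    # Object-level check: tie the object back to the caller before returning it.
    if project["owner"] != caller:
        raise PermissionError("403: caller does not own this project")
    return project["messages"]

# Any free account can fetch another user's chat history:
leaked = get_project_messages_vulnerable("mallory", "proj-1")
print(leaked)

# The fixed handler refuses:
try:
    get_project_messages_fixed("mallory", "proj-1")
except PermissionError as exc:
    print(exc)
```

The fix is one line per endpoint, which is exactly why it gets skipped: nothing in a demo, a test of your own projects, or a happy-path integration suite will ever notice it is missing.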
The vibe-coding angle
Here is the thing that will keep me up tonight. When you vibe-code an app, you do not type process.env.STRIPE_KEY into a .env file and move on. You paste the key into the chat so the AI can wire it up. You paste the database URL into the chat to fix a schema bug. You paste a sample customer record into the chat to get the types right.
Every one of those messages lives in the project's chat history. The disclosed endpoint returned chat histories. So it is not just "your generated app is exposed" — it is "every secret you ever mentioned in a conversation with Lovable is sitting in a JSON response that any free account could fetch."
If you have built on Lovable, go read your own chat history right now, with the eyes of an attacker. Search for sk-, postgres://, Bearer, anything that looks like a secret. Every match is a key to rotate at the source. Not rename. Rotate. Revoke at the provider and reissue.
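If you can export or copy out your chat history, you can automate that search. A rough sketch of a secret scan; the patterns are illustrative starting points, not an exhaustive list:

```python
import re

# Rough patterns for secret-shaped strings. Illustrative, not exhaustive --
# extend with your own providers' key formats.
SECRET_PATTERNS = {
    "sk-style API key": re.compile(r"\bsk[-_][A-Za-z0-9_-]{8,}"),
    "postgres URL": re.compile(r"postgres(?:ql)?://\S+"),
    "bearer token": re.compile(r"Bearer\s+[A-Za-z0-9._-]{16,}"),
}

def scan_for_secrets(text: str) -> list:
    """Return (pattern name, match) pairs for anything secret-shaped."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

# Example: a pasted-into-chat transcript, the way vibe-coding sessions go.
chat_export = """
user: the db is postgres://admin:hunter2@db.example.com:5432/prod
user: use this key, sk-live-abc123def456ghi789
user: the header is Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.payload.sig
"""
for name, match in scan_for_secrets(chat_export):
    print(f"{name}: {match}")
```

Every hit is a credential to treat as burned: revoke it at the provider and reissue, regardless of whether you can prove anyone fetched it.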
What I actually think is going on
I do not think someone is literally targeting the AI-dev ecosystem on a two-day schedule for dramatic effect. What I think is happening is that this category of tools grew very fast, shipped a lot of features, pointed their best engineers at the next feature rather than the last one, and is now discovering that "trust boundaries" is a feature that does not show up in a demo.
The vibe-coding productivity is real. I still use these tools. I will still use them next week. But I am going to stop pretending that a platform saying "secure by default" counts for anything until I see a disclosure track record that backs it up. Forty-eight days on a report with the title "Broken Object Level Authorization on Lovable API leads to unauthorized access to user data and project source code" is, to use a technical term, a lot.
If you are shipping on Lovable right now
Short version, because I already wrote the long version yesterday for Vercel and the shape is the same:
Rotate anything a Lovable project ever touched. Revoke at the upstream provider, not just in the Lovable dashboard. Audit your chat histories for pasted secrets. Turn on RLS on every Supabase table while you are in there. If personal data was exposed, talk to a lawyer today about your disclosure obligations, because "we used an AI app builder" is not going to hold up in front of a regulator.
Two days. Two hacks. Maybe it is the start of a trend, maybe it is the week from hell, maybe someone really is testing the mythos. Either way, rotate your keys and get back to building.
Source for the disclosure: @weezerOSINT on X. If you have audited a Lovable project in the last day and found something worth sharing, the comments are open.