Lovable is facing claims that users were able to access other people's source code, database credentials, AI chat histories, Stripe IDs, and customer data, and the company's response only made the whole thing look worse. Instead of sounding like a business that understood why people were furious, Lovable tried to shrink the story down to a visibility issue. It said it did not suffer a data breach. It said its documentation around what "public" means was unclear. It said chat messages used to be visible on public projects and that code visibility on public projects was intentional. For people looking at screenshots showing readable chats, code, credentials, and live project data, that answer is not reassuring. It sounds like a company trying to rename the problem after users had already seen enough.
That is the part that makes this so ugly. Lovable is not some little hobby toy that never promised anyone anything. It has been selling security hard. Its Trust Center says security is foundational. Its security page says customer data is not accessible across accounts, permissions are enforced server-side, secrets are encrypted at rest, and customer prompts, code, and workspace data are not used to train its models. Its privacy policy says it maintains a 24/7 incident response team. In March and April, the company was publishing security-facing material about pentesting, enterprise readiness, and what serious teams should expect before trusting AI development tools. Once a company talks like that, it does not get much room to hide behind cheap wording when users start posting screenshots that make the platform look unsafe.
The claims spreading around are specific enough that this does not read like random panic. A HackerOne screenshot shows a report filed on March 3 describing Broken Object Level Authorization (BOLA) on the Lovable API and unauthorized access to user data and project source code. BOLA is the classic multi-tenant failure: the API confirms that a caller is logged in but never checks that the caller actually owns the object being requested, so anyone who can guess or enumerate IDs can read other people's records (a generic sketch of the flaw and its fix follows below). Other screenshots say a free Lovable account could read another user's code, database credentials, AI chat history, and customer data. The posts also claim newer projects were patched while older ones stayed exposed, that a report was marked duplicate, and that users were effectively pushed toward making projects private even though privacy sits behind a paid tier. Lovable can argue over labels all it wants. People looking at that are going to call it a breach.
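To make the vulnerability class concrete, here is a minimal sketch of what a BOLA flaw and its server-side fix look like in an Express-style API. Lovable's actual stack, routes, and schema are not public, so everything here is invented for illustration: the route shapes, the `db` helper, the project fields, and the `x-user-id` header standing in for real session auth.

```typescript
// Generic BOLA illustration. All names here are hypothetical;
// nothing below describes Lovable's real API.
import express from "express";

interface Project {
  id: string;
  ownerId: string;
  visibility: "private" | "public";
  sourceCode: string;
  chatHistory: string[];
  dbCredentials: string; // secrets like this should never appear in any response
}

// Hypothetical data layer standing in for a real database.
const db = {
  projects: new Map<string, Project>(),
  findProject(id: string): Project | undefined {
    return this.projects.get(id);
  },
};

const app = express();

// VULNERABLE: authenticates nothing about ownership. Any caller who
// can guess or enumerate project IDs reads anyone's project in full.
app.get("/api/v1/projects/:id", (req, res) => {
  const project = db.findProject(req.params.id);
  if (!project) return res.status(404).json({ error: "not found" });
  return res.json(project); // leaks code, chats, and credentials
});

// FIXED: object-level authorization enforced server-side, per request.
// Owners see everything; public projects return only fields meant to
// be public; everyone else gets a 403.
app.get("/api/v2/projects/:id", (req, res) => {
  const userId = req.header("x-user-id"); // stand-in for real session auth
  const project = db.findProject(req.params.id);
  if (!project) return res.status(404).json({ error: "not found" });

  if (project.ownerId === userId) {
    return res.json(project);
  }
  if (project.visibility === "public") {
    const { id, sourceCode } = project;
    return res.json({ id, sourceCode }); // trimmed public view, no secrets
  }
  return res.status(403).json({ error: "forbidden" });
});

app.listen(3000);
```

The fix is boring and per-object, which is exactly the point: authorization has to run on every request for every record, not once at login.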
Lovable was also not walking into this with a clean record. There were already warnings. Earlier reporting had raised questions about exposed user records, weak vulnerability detection, and data leaks tied to Lovable-built or Lovable-hosted apps. That history matters because it changes how people read this story. If a company with a spotless security reputation gets hit with one messy allegation wave, some people may still give it the benefit of the doubt. Lovable did not have that luxury. The doubt was already there. This just gave it a much nastier shape.
There is also a big difference between an insecure app built with an AI tool and a platform-level trust problem. Lovable has more room to defend itself when the story is that someone used the platform badly and shipped something sloppy. That is not what these screenshots are making people think about. They are making people think about whether one user could look into another user’s work through the platform itself. That is a different kind of failure. Once users start believing their source code, prompt history, credentials, and project data may not be safely separated from someone else’s, the product has a much deeper problem than a bad app on the edge of the ecosystem.
The company's answer makes that worse, not better. Saying "public meant public" does not solve the trust problem. It tells users either that the platform shipped visibility behavior they clearly did not understand, that the documentation failed to explain it properly, or that Lovable simply did not take the trust boundary seriously enough to begin with. None of those readings helps the company. They all point in the same direction: a product asking for this much trust was not built carefully enough. If "public" is a real trust boundary, it has to be enforced deliberately, field by field, rather than left to whatever a handler happens to serialize (a sketch of that kind of explicit boundary follows below).
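One way to make a "public" boundary hard to get wrong is to encode it in the type system: the public view of a project is a separate type, so private fields cannot leak into a public response by accident. The types below are hypothetical, not Lovable's schema; they just show the design idea.

```typescript
// Hypothetical model of an explicit trust boundary. The public view is
// its own type, so adding a private field to it is a visible, reviewable
// code change rather than an accidental spread of the full record.

type Visibility = "private" | "public";

interface ProjectRecord {
  id: string;
  ownerId: string;
  visibility: Visibility;
  sourceCode: string;
  chatHistory: string[]; // never part of the public view
  dbCredentials: string; // never part of any API response
}

// The only fields a non-owner may ever see.
interface PublicProjectView {
  id: string;
  sourceCode: string;
}

function toPublicView(p: ProjectRecord): PublicProjectView {
  // Explicit field picks, no object spread: nothing leaks by default.
  return { id: p.id, sourceCode: p.sourceCode };
}

function serializeFor(
  viewerId: string | null,
  p: ProjectRecord
): ProjectRecord | PublicProjectView | null {
  if (viewerId === p.ownerId) return p;                   // owner: full record
  if (p.visibility === "public") return toPublicView(p);  // public: trimmed view
  return null;                                            // everyone else: nothing
}
```

Under a design like this, "chat messages used to be visible on public projects" is not a documentation gap; it is a field someone deliberately put inside the public view.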
That is the larger problem running through all of this. Anyone can make an AI business like Lovable now. The barrier is low, the hype is loud, and the money comes fast. A company can wrap itself in AI branding, talk about enterprise security, raise money, and start asking for serious trust before it has earned any of it. That is why there are so many companies like this now. They are easy to make, easy to market, and easy to turn into a business model. Security gets treated like a feature page, not like the first job.
People are not putting harmless scraps into these products. They are putting source code, prompts, credentials, schemas, customer records, business logic, internal tools, and live project data into them. That is what makes companies like this dangerous. They are not just selling a toy or a novelty. They are asking people to pour real work into systems that may have been built first around speed, hype, growth, and valuation, with protection expected to catch up later.
Lovable also fits another pattern people are getting tired of. The company wants to sound serious when it is asking for trust and sound technical when it is asking for confidence, but once the story turns ugly the response starts sounding small, legalistic, and evasive. Users are angry because they are looking at a platform that wanted to be treated like a serious place to build real things, while answering a security scandal in a way that sounds like it is trying to manage optics first. That is exactly the kind of response that makes people hate businesses like this.
The Lovable data breach is not just an embarrassing story for one AI company. It is one more example of a category that keeps rewarding the wrong priorities. Too many AI businesses are being made because they can be made. Too many are being run by people who care more about growth than protection. Too many want trust first and responsibility later. Companies like this should not exist in their current form, and if they are going to ask for source code, credentials, prompts, customer data, and real project logic, they should be built and operated by people who treat security as the first requirement, not the cleanup step after the screenshots start spreading.
Sean Doyle
Sean is a tech author and security researcher with more than 20 years of experience in cybersecurity, privacy, malware analysis, analytics, and online marketing. He focuses on clear reporting, deep technical investigation, and practical guidance that helps readers stay safe in a fast-moving digital landscape. His work continues to appear in respected publications, including articles written for Private Internet Access. Through Botcrawl and his ongoing cybersecurity coverage, Sean provides trusted insights on data breaches, malware threats, and online safety for individuals and businesses worldwide.