
I Scanned a Typical AI-Generated SaaS App. It Had 53 Security Vulnerabilities.

Brian Gage · 8 min read

Every week, someone posts on Reddit: “I built this with Cursor in a weekend!” A week later, they're back asking why their app got hacked. I built a tool to prevent that — and used it to see exactly how bad the problem is.

The Problem Nobody Talks About

AI coding tools are incredible. Cursor, Lovable, Bolt, Replit — they can build a full SaaS app in hours. But there's a catch that the “vibe coding” community doesn't like to discuss: AI-generated code is consistently insecure.

A 2025 Veracode study found that 45% of AI-generated code contains security vulnerabilities. Not edge cases. Not theoretical risks. Real, exploitable holes that attackers actively look for.

The Experiment

I created a realistic AI-generated SaaS app — the kind of code Cursor or Bolt would produce for a typical startup. Express backend, Supabase database, Stripe payments, user authentication. About 47 files.

Then I scanned it with XploitScan, running all 131 security rules.

The Results: Grade D (35/100)

53 security vulnerabilities found in under 3 seconds.

  • 16 Critical
  • 22 High
  • 14 Medium
  • 1 Low

21 Hardcoded Secrets

The most common issue by far. API keys, database credentials, and encryption keys directly in the source code. Anyone who can see your code (or your git history) can steal these.

What AI gets wrong: AI tools often hardcode credentials because that's what they were trained on. They don't always separate secrets into environment variables.
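The fix is mechanical: read every secret from the environment and fail fast if one is missing. A minimal sketch, assuming a hypothetical `STRIPE_SECRET_KEY` variable name:

```javascript
// Read a required secret from the environment instead of hardcoding it.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    // Fail fast at startup, not at the first payment attempt.
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Simulate a configured environment for this demo:
process.env.STRIPE_SECRET_KEY = "sk_test_placeholder";

// Bad (what AI tools often emit): const stripeKey = "sk_live_abc123...";
// Better:
const stripeKey = requireEnv("STRIPE_SECRET_KEY");
```

Pair this with a `.env` file that is listed in `.gitignore`, so the secret never enters your git history in the first place.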

5 Injection Vulnerabilities

SQL injection, XSS, and command injection. Classic web app vulnerabilities that have existed for 20+ years — and AI tools still produce them.

Example finding — SQL Injection:

db.query(`SELECT * FROM products WHERE name LIKE '%${userInput}%'`)

Fix: Use parameterized queries — db.query('SELECT * FROM products WHERE name LIKE ?', ['%' + userInput + '%'])

16 Configuration Issues

CORS set to allow all origins, missing security headers, debug mode in production, no rate limiting on login endpoints. These are the “I didn't know I needed to do that” issues.

Why this matters: AI tools build features, not security controls. They'll create a login endpoint but won't add rate limiting to prevent brute-force attacks.
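In production you'd reach for a maintained package like express-rate-limit, but the underlying idea fits in a few lines. A sketch of an in-memory limiter (window size and attempt cap are illustrative choices, not prescriptions):

```javascript
// Minimal in-memory rate limiter: at most `max` hits per key per `windowMs`.
function createRateLimiter({ windowMs, max }) {
  const hits = new Map(); // key -> array of hit timestamps

  return function isAllowed(key, now = Date.now()) {
    // Keep only hits that are still inside the sliding window.
    const recent = (hits.get(key) || []).filter(t => now - t < windowMs);
    if (recent.length >= max) {
      hits.set(key, recent);
      return false; // over the limit: reject this login attempt
    }
    recent.push(now);
    hits.set(key, recent);
    return true;
  };
}

// Example policy: 5 login attempts per IP per 15 minutes.
const allowLogin = createRateLimiter({ windowMs: 15 * 60 * 1000, max: 5 });
```

An in-memory map resets on restart and doesn't share state across servers, which is why real deployments back this with Redis or a library — but even this sketch stops a naive brute-force script.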

Missing Payment Security

Stripe webhook endpoints without signature verification. Anyone can send fake payment events to your app, marking orders as paid without actually paying. This one finding alone could cost a business thousands of dollars.

The Compliance Impact

Every finding maps to compliance frameworks. If you're building a B2B SaaS and your customers ask about SOC2 or ISO 27001, these issues need to be fixed first.

  • SOC2: 6 of 11 controls had issues
  • ISO 27001: 7 of 13 controls affected
  • OWASP Top 10: 5 of 10 categories flagged

What You Should Do

  1. Scan your code before you deploy. You can do this for free at xploitscan.com — drag and drop your project files. No signup required.
  2. Never hardcode secrets. Use environment variables. Always. Check your git history too — if secrets were ever committed, they're still there.
  3. Add authentication to every API route. AI tools often create routes without auth checks. Every endpoint that reads or writes user data needs verification.
  4. Verify payment webhooks. If you use Stripe, always verify the signature with stripe.webhooks.constructEvent().
  5. Don't trust AI-generated security code. AI tools are great at features but consistently bad at security. Treat every AI-generated app as if it has vulnerabilities — because it probably does.
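For step 3, the cheapest fix is one reusable middleware instead of per-route checks you might forget. A hypothetical Express-style sketch — `verifyToken` is an assumed helper standing in for your auth provider's SDK (e.g. Supabase's `supabase.auth.getUser()`):

```javascript
// Reusable auth middleware: reject any request without a valid Bearer token.
// verifyToken(token) is an assumed helper that returns a user object or null.
function requireAuth(verifyToken) {
  return function (req, res, next) {
    const header = req.headers["authorization"] || "";
    const token = header.startsWith("Bearer ") ? header.slice(7) : null;
    const user = token ? verifyToken(token) : null;
    if (!user) {
      res.statusCode = 401;
      return res.end("Unauthorized");
    }
    req.user = user; // downstream handlers can rely on req.user
    next();
  };
}
```

Mounting it once (e.g. `app.use('/api', requireAuth(verifyToken))`) protects every route under that prefix, so a new AI-generated endpoint can't silently ship without an auth check.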

Try It Yourself

See what XploitScan finds in your code:

Would your app pass a security scan?

131 security rules. Plain-English results. Free to start.

Scan Now — Free