Why Traditional SAST Tools Fail on AI-Generated Code
Semgrep, SonarQube, Snyk, and Checkmarx were designed for hand-written enterprise code. Here's why they miss the bugs that Cursor, Bolt, and Lovable ship by default — and what to use instead.
Every time I show someone XploitScan, I get the same question: “Isn't this just Semgrep?”
It's a fair question. There are already a half-dozen well-funded SAST (Static Application Security Testing) tools out there — Semgrep, SonarQube, Snyk Code, Checkmarx, Veracode, GitHub CodeQL. Some of them are excellent. Most of them have been refined over a decade by serious security teams. So why build another one?
The short answer: those tools were built for a world where humans wrote the code, slowly, with code review. AI-generated code breaks every assumption that world was built on. The bugs are different. The volume is different. The audience is different. And when I run the existing tools on a typical Cursor-generated SaaS app, they all fail in the same specific ways.
Here's what they get wrong, with specific examples.
1. They Optimize for Recall, Not Precision
Traditional SAST tools were built for enterprise security teams. The audience is a security engineer triaging findings before a pen test or compliance audit. That engineer wants to see everything that could possibly be a vulnerability, even if 95% of it turns out to be noise. Missing a real bug is career-ending. False positives are just Tuesday.
So the tools tune toward recall. Run SonarQube and Semgrep on a typical 50-file Cursor project and you'll get something like this:
```
sonarqube: 1,247 issues
           312 bugs
           891 code smells
           44 vulnerabilities
           most common: "Cognitive Complexity is 16, refactor"

semgrep:   2,103 findings
           1,840 style/maintainability
           263 "potential" issues
           12 actual security findings
           0 specific to AI-generated patterns
```
Two thousand findings. The non-technical founder who built the app with Cursor opens the report, sees two thousand red marks, panics, closes the tab, and ships the app anyway. The 12 actual security issues are buried under 2,091 style nits.
XploitScan inverts this. We tune for precision: only show findings that are real, exploitable, AI-specific patterns. On the same project:
```
xploitscan: 11 findings
            7 CRITICAL
            3 HIGH
            1 MEDIUM
            all explained in plain English
            all with copy-paste fixes
```
Eleven findings. Seven of them are critical. The founder reads them, fixes them, and ships a safer app. Same code. Different audience. Different priorities.
2. They Don't Have Rules for AI-Specific Bugs
Most SAST rule packs were written between 2010 and 2020, targeting the bugs that humans were making at the time: SQL injection from string concatenation, XSS from raw template output, command injection from shell exec. Those bugs still exist, but AI assistants introduce a whole new class of bugs that the existing rule packs simply don't check for.
Some examples I find in nearly every Cursor/Bolt/Lovable project that none of the major SAST tools flag by default:
- Stripe webhook handlers without signature verification — the canonical “walked into the room and gave away $10,000” bug. AI tools generate `const event = req.body` and call it a day.
- Clerk/Auth0/Supabase webhook handlers without signature verification — same pattern, but for auth events. Attackers can mint fake user records.
- Admin API routes without auth middleware — the AI builds the dashboard, builds the API, forgets the `requireAdmin` check. Anyone with the URL can hit them.
- CORS wildcards on credentialed endpoints — `Access-Control-Allow-Origin: *` with `credentials: include`. Browsers reject this on real requests, but it indicates a complete misunderstanding of CORS.
- Hardcoded API keys in committed `.env.example` files — the AI “helpfully” fills in the example with the user's real keys.
- SSRF via user-controlled fetch URLs — when the AI builds a “preview link” or “import from URL” feature, it almost never validates the target.
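The SSRF item is the easiest of these to guard against and the most consistently skipped. Here's a minimal sketch of the validation, with a hypothetical function name; a production guard should also resolve DNS and check the resulting IP, since hostnames can point anywhere:

```javascript
// Hedged sketch of the URL check AI tools usually skip when generating
// an "import from URL" feature. Blocks non-HTTP schemes and obvious
// private/loopback hosts. `isSafeFetchTarget` is a hypothetical name.
function isSafeFetchTarget(rawUrl) {
  let url;
  try {
    url = new URL(rawUrl); // Node's global WHATWG URL parser
  } catch {
    return false; // not a parseable URL at all
  }
  if (url.protocol !== "http:" && url.protocol !== "https:") {
    return false; // rejects file:, gopher:, etc.
  }
  const host = url.hostname;
  if (
    host === "localhost" ||
    host === "169.254.169.254" || // cloud metadata endpoint
    /^127\./.test(host) ||
    /^10\./.test(host) ||
    /^192\.168\./.test(host) ||
    /^172\.(1[6-9]|2\d|3[01])\./.test(host)
  ) {
    return false; // loopback or private network target
  }
  return true;
}
```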
None of these are exotic. They're the bugs you'd find by reading the code for ten minutes if you knew what to look for. They're also the bugs that don't appear in OWASP Top 10 examples, so the legacy SAST rule packs ignore them. XploitScan's 131 rules are specifically tuned for patterns I've seen AI tools produce.
3. The Output Is Written for Security Engineers
Open a Snyk or Checkmarx report and you'll see findings titled things like “CWE-352: Cross-Site Request Forgery (CSRF)” with descriptions full of acronyms (CSP, CORS, MITM, RCE, HSTS) and links to CVE entries. That's exactly what a security engineer wants — they already know what those mean and they need the formal classification for their audit report.
That is exactly the wrong shape for a non-technical founder who built their app with Cursor. They open the report, see CWE-352, don't know what CWE means, don't know what CSRF means, don't know what to do, and close the tab.
XploitScan writes every finding twice: a one-line plain-English summary of what an attacker can actually do (“Anyone with the URL can mark orders as paid without paying”), and a copy-paste fix snippet. The CWE/OWASP mapping is still there for compliance reports, but it's not the first thing the user sees.
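As an example of the shape such a fix snippet might take, consider the missing-auth-middleware finding (“Anyone with the URL can hit your admin API”). This is a framework-agnostic sketch with hypothetical function names, since the real snippet depends on your stack:

```javascript
// Hypothetical, framework-agnostic sketch of the "missing auth check" fix.
// `session` stands in for whatever your auth library (Clerk, Auth0,
// Supabase Auth, ...) attaches to the request.

// Before: the AI-generated route trusts anyone who knows the URL.
//   app.get("/api/admin/users", listAllUsers);

// After: a guard that every admin handler runs first.
function requireAdmin(session) {
  if (!session || session.role !== "admin") {
    return { status: 403, body: { error: "Forbidden" } };
  }
  return null; // null means "allowed, continue to the handler"
}

function adminListUsers(session, db) {
  const denied = requireAdmin(session);
  if (denied) return denied;
  return { status: 200, body: db.users };
}
```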
4. They Can't Be Run by Non-Security People
Setting up Checkmarx or Veracode is a multi-day procurement and onboarding process. Self-hosted SonarQube requires you to stand up a server, configure authentication, and integrate it with your build system. Even Semgrep — which is genuinely friendly compared to the others — requires you to know what a SAST tool is, install a CLI, pick a rule pack, and read the SARIF output.
The audience for AI-generated code includes a lot of people who built their first app last weekend. They have never seen a CLI before. They don't know what SARIF is. Asking them to install Semgrep and pick a rule pack is asking them to learn an entire discipline before they can ship a side project.
XploitScan has three flows on purpose: paste your code in the browser, drag a folder, or paste a public GitHub URL. No install. No config. No rule pack picking. Results in 5 seconds. The CLI exists for people who want it, but the default flow is “upload and read.”
5. They Charge Like Enterprise Software
Snyk Code starts at $25/developer/month with a minimum seat count. Checkmarx is enterprise pricing (read: contact sales, $40k+/year minimum). Veracode is similar. Semgrep's open source version is free, but the rule packs that actually catch the AI bugs are part of the paid Semgrep Pro tier.
That pricing makes sense if you're a 50-engineer security team protecting a Fortune 500 codebase. It does not make sense for an indie hacker shipping a $9/month side project. The economics literally don't work — the security tool would cost more than the app makes.
XploitScan's free tier gives you 5 scans a day with the 30 most important rules. Pro is $29/month for unlimited scans, all 131 rules, PDF reports, SBOM, compliance mapping, and webhook integrations. Same product, different audience, different price point.
When You Should Still Use a Traditional SAST Tool
I'm not arguing the existing tools are bad. They're very good at what they were built for. Use them when:
- You have a security engineer or AppSec team triaging findings
- You need formal compliance audits (SOC 2, FedRAMP, PCI-DSS) with full audit trails
- You have a large hand-written legacy codebase with the bugs the existing tools were designed to find
- You need data-flow analysis across many files (Semgrep Pro and CodeQL are particularly good at this)
- Your codebase is in a language XploitScan doesn't deeply support yet
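The data-flow point deserves a concrete illustration. In this hypothetical two-function example, no single line matches a naive “user input next to SQL string” pattern, yet user input still reaches a query; tracking that source-to-sink path across files is exactly what taint-tracking engines like CodeQL are built for:

```javascript
// Hypothetical example of why cross-file data-flow analysis matters.
// No single line here looks dangerous in isolation, but user input
// still reaches a SQL string: source → helper → sink.

// routes/search.js — the taint source (user-controlled input)
function handleSearch(req) {
  return buildQuery(req.query.term); // looks innocent by itself
}

// db/helpers.js — the sink, one hop away from the source
function buildQuery(term) {
  // A pattern rule that only fires on `req.query` next to string
  // concatenation misses this; a data-flow engine follows the value.
  return "SELECT * FROM products WHERE name = '" + term + "'";
}
```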
For a Cursor-built SaaS, an indie hacker side project, or a Y Combinator demo app shipping next week — XploitScan is the safety net. For a Fortune 500 monolith with a dedicated AppSec team — keep using Semgrep or CodeQL. They're not competitors; they're complements aimed at completely different audiences.
Try It on Your Own Code
If you've got a project shipped with Cursor, Bolt, Lovable, or Replit, the fastest way to see what I'm talking about is to just run a scan:
```
npx xploitscan scan .
```
No install, no signup, runs entirely on your machine. If you find the same Stripe webhook bug I described above, the previous post walks through the 4-line fix.
Want to see what XploitScan actually finds? The demo page loads a pre-scanned vulnerable SaaS app so you can see the report format without uploading anything.
See the live demo →