The Day I Found 47 Security Issues in My AI-Built App
A cautionary tale for every vibe coder who's about to ship
I built my SaaS in 3 weeks with Cursor. Auth, payments, user dashboard, API — the whole thing. It worked. Users signed up. Revenue trickled in.
I felt invincible.
Then I ran a security audit on my own codebase.
The Audit That Changed Everything
47 findings. That's what VCX reported.
I expected a few warnings. Maybe some missing error handling. Not 47 actual issues scattered across the app I was about to trust with real user data.
Let me show you the worst ones.
Finding #1: The Auth Bypass I Didn't Know Existed
SEC-AUTH-003 · CRITICAL
src/lib/middleware.ts · line 23
// Admin check
if (user.role === 'admin') {
  return next()
}
// Issue: No return statement after admin check
// Non-admin users proceed to protected route
My AI had written admin middleware that looked correct at a glance. But nothing after the if block rejected non-admin users — no 403, no early return — so they fell straight through to the protected routes.
Impact: Any user could access admin endpoints.
How I missed it: I'd never written admin middleware before. The code looked right. I didn't know what to look for.
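The fix boils down to rejecting non-admins explicitly. Here's a stripped-down sketch of the corrected logic — the types and the plain '403 Forbidden' string are simplified stand-ins for the real framework objects, not the actual middleware:

```typescript
// Simplified sketch of the corrected admin check.
type User = { role: string };

function adminOnly(user: User, next: () => string): string {
  if (user.role === 'admin') {
    return next();
  }
  // The missing piece: without an explicit rejection here,
  // non-admin users fell through to the protected route.
  return '403 Forbidden';
}
```

One line of defense, and it only works if it's actually there.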
Finding #2: The SQL Injection in My User Search
SEC-SQLI-001 · CRITICAL
src/lib/db/queries.ts · line 87
WHERE email LIKE '%${searchTerm}%'
Classic SQL injection, generated by an AI that didn't understand the security context. My search feature was wide open.
Impact: Full database access to anyone who typed the right search query.
How I missed it: I'd asked Cursor to "add a user search feature." It worked perfectly in testing. I never thought about what happens when the search term isn't just a name.
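The parameterized version keeps the search term out of the SQL string entirely. This sketch assumes a node-postgres-style client where queries are `{ text, values }` pairs; the helper name is mine, not from the actual codebase:

```typescript
// Hypothetical helper: builds a parameterized user search.
// The search term travels as a bound parameter ($1), never as part
// of the SQL text, so quotes in user input can't change the query.
function buildUserSearch(searchTerm: string): { text: string; values: string[] } {
  return {
    text: "SELECT * FROM users WHERE email LIKE '%' || $1 || '%'",
    values: [searchTerm],
  };
}
```

Even a hostile input like `'; DROP TABLE users; --` just becomes a string someone searched for.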
Finding #3: The Hardcoded Secret
SEC-SECRET-002 · HIGH
src/lib/payments.ts · line 12
const STRIPE_SECRET = 'sk_live_...'
My AI had hardcoded the Stripe secret key. Not in an env variable. In the actual code.
Impact: Anyone with repo access (including future contractors, or if I ever made it public) would have my production Stripe key.
How I missed it: The code worked. The payments went through. I never needed to check how the key was being loaded.
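The five-minute fix: read the key from the environment and fail fast if it's missing. A sketch, assuming an env var named `STRIPE_SECRET_KEY` (the name is mine — use whatever your deploy config calls it):

```typescript
// Load the Stripe key from the environment instead of the source.
// Throwing at startup when it's missing beats silently passing
// undefined to the Stripe client at request time.
function loadStripeSecret(env: Record<string, string | undefined>): string {
  const key = env.STRIPE_SECRET_KEY;
  if (!key) {
    throw new Error('STRIPE_SECRET_KEY is not set');
  }
  return key;
}

// Usage: const STRIPE_SECRET = loadStripeSecret(process.env);
```

And rotate the old key — once a live secret has been committed, it lives in your git history forever.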
Finding #4: The Missing Rate Limiting
SEC-RATE-001 · MEDIUM
src/api/auth/login.ts · line 1
// No rate limiting detected on authentication endpoint
// Vulnerable to brute force attacks
I had a login endpoint with no rate limiting. At all. Anyone could hammer it with password attempts forever.
Impact: Credential stuffing attacks, account takeovers.
How I missed it: Rate limiting isn't something you think about when you're building features. It's invisible until it isn't.
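The core idea fits in a few lines. This is a minimal fixed-window sketch, in-memory and single-process — a real deployment would use a shared store like Redis, or an off-the-shelf middleware — but it shows what "rate limiting" actually means:

```typescript
// Minimal fixed-window rate limiter (illustrative, single-process).
// Counts login attempts per key and refuses once the cap is hit.
const attempts = new Map<string, { count: number; windowStart: number }>();
const WINDOW_MS = 15 * 60 * 1000; // 15-minute window
const MAX_ATTEMPTS = 5;

function allowLogin(ip: string, now: number = Date.now()): boolean {
  const entry = attempts.get(ip);
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    // New window for this key: reset the count.
    attempts.set(ip, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_ATTEMPTS;
}
```

Five failed attempts in fifteen minutes, then a 429. That alone turns a brute-force attack from thousands of guesses per minute into a handful.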
The Pattern I Couldn't See
Here's what scared me most: these weren't random bugs.
Each one followed a pattern:
- I asked the AI for a feature
- The AI delivered working code
- I tested the happy path
- I moved on
I never asked "what could go wrong?" because I didn't know what could go wrong. I was a founder shipping a product, not a security engineer auditing code.
That's the vibe coder trap: you can't review what you don't understand.
What I Did Next
I spent 2 days fixing every finding. The auth bypass took 30 seconds (added the missing return). The SQL injection took 10 minutes (parameterized queries). The hardcoded secret took 5 minutes (moved to an env var).
None of the fixes were hard. The hard part was knowing they existed.
The New Rule
I have one rule now:
Before any deployment, I run VCX.
Not because I'm paranoid. Because I'm realistic. My AI writes great code most of the time. But "most of the time" isn't good enough when real user data is on the line.
If You're a Vibe Coder, Read This
You probably have issues in your codebase right now. Not because you're bad at this. Because AI-generated code is opaque. It works. It passes tests. And it hides vulnerabilities in plain sight.
Run an audit. See what's there. Fix what matters.
Your users will never know what you prevented. That's the point.
Try VCX free — no credit card required
AI builds your code. VCX checks it.