Nobody checks your user count before scanning for vulnerabilities
It started with a $10 charge from Google Cloud on my bank statement. That might not sound like much, but Google Cloud usually costs me less than a dollar a month. There was no reason for it to spike.
A few days earlier, I’d received an email from a security researcher, asking if Fastbooks had a vulnerability disclosure or bug bounty program. It was polite, professional — the kind of email you’d expect from someone doing this the right way. I replied honestly: Fastbooks is a weekend project with no revenue, no bounty program, and the best I could offer was an acknowledgement on the website if he found something worth reporting.
I didn’t think much of it at the time. But staring at that $10 charge, the email suddenly felt different. I pulled up the Firestore usage graphs. And there it was: a massive spike in reads that had no business being there.
The trust problem
Like a lot of Firebase projects, Fastbooks let the web client talk to Firestore directly. We had Security Rules in place — users could only read and write within their own organization. That part worked fine.
But Security Rules answer “is this user allowed to read this document?” They don’t answer “has this user read 50,000 documents in the last minute?” A malicious actor with a valid account — even a free one — could hammer our reads as much as they wanted. And we’d foot the bill.
That’s exactly what happened. Someone was making an enormous number of read requests, likely probing the API surface. Every read was technically authorized. The volume was not.
There’s something uncomfortable about this. You build rules to keep bad actors out, and then the problem turns out to be someone who’s technically in. It’s not an intrusion. It’s abuse. And Firebase gives you almost nothing to deal with it.
The frustrating part is that none of this should have surprised me. I’ve seen this pattern at work — it’s not obscure knowledge. But I assumed nobody would bother targeting a small product I build in the evenings with barely any users. Turns out, the people scanning for vulnerabilities don’t check your user count first.
What saved me from something worse
The thing that kept this from being a disaster was an architectural decision I made at the start of this project, for entirely different reasons. I’d built a useDB hook that every component used for data access, backed by a dual-layer cache that reduced redundant reads. I also had a Cloud Function that verified organization membership server-side, so the organization ID was never something the client could spoof.
This meant all data access already flowed through a single layer. I didn’t need to hunt through dozens of components making ad-hoc Firestore calls. I just needed to put a gate in front of the one choke point that already existed.
Moving everything server-side
I moved all Firestore access behind callable Cloud Functions. Instead of the client querying Firestore directly, it now calls functions like firestoreFind, firestoreQuery, firestoreSave. These handle authentication, verify organization membership, and — critically — enforce rate limits before touching the database.
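The shape of that gate can be sketched in plain TypeScript. This is a hedged illustration, not the real Fastbooks code: the check and handler types are stand-ins, and the real versions would talk to Firebase Auth and Firestore rather than plain objects.

```typescript
// A sketch of the gate every callable passes through. All names and shapes
// here are illustrative assumptions, not the actual Fastbooks implementation.
interface CallContext {
  uid: string | null; // authenticated user, if any
  orgId: string;      // organization the request claims to act on
}

// A check returns null on success or an error code on failure.
type Check = (ctx: CallContext) => string | null;

// Wrap a data operation so that every check runs, in order, before it.
function makeHandler(
  checks: Check[],
  run: (ctx: CallContext) => unknown
): (ctx: CallContext) => { error?: string; data?: unknown } {
  return (ctx) => {
    for (const check of checks) {
      const err = check(ctx);
      if (err) return { error: err }; // reject before any database work
    }
    return { data: run(ctx) };
  };
}

// Illustrative checks: authentication first, then a stubbed membership lookup.
const requireAuth: Check = (ctx) => (ctx.uid ? null : "unauthenticated");
const requireMembership =
  (members: Record<string, string>): Check =>
  (ctx) =>
    ctx.uid && members[ctx.uid] === ctx.orgId ? null : "permission-denied";
```

The point is the ordering: authentication, membership, and rate limiting all happen before the wrapped operation ever touches the database.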
The rate limiting works at three levels. A global cap per user catches runaway loops or scripted requests. Tighter per-operation limits control how often you can call each function — bulk reads are more restricted than single-document lookups. And the tightest limits are scoped to the specific query being run, so you can’t repeatedly hammer the same expensive call.
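The three levels above amount to a small table of limits. The scope keys, windows, and ceilings in this sketch are illustrative assumptions, not Fastbooks' real numbers:

```typescript
// Illustrative limit table for the three levels described above.
interface RateLimit {
  windowMs: number; // length of the fixed window
  max: number;      // requests allowed per window
}

const LIMITS: Record<string, RateLimit> = {
  // Level 1: global per-user cap, catches runaway loops and scripted requests
  user: { windowMs: 60_000, max: 300 },
  // Level 2: per-operation caps; bulk reads are tighter than single lookups
  "op:firestoreQuery": { windowMs: 60_000, max: 60 },
  "op:firestoreFind": { windowMs: 60_000, max: 120 },
  // Level 3: per-query cap, the tightest scope of all
  query: { windowMs: 60_000, max: 20 },
};

// A request must clear every level that applies to it.
function applicableLimits(op: string): RateLimit[] {
  return [LIMITS["user"], LIMITS[`op:${op}`], LIMITS["query"]].filter(
    (l): l is RateLimit => l !== undefined
  );
}
```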
There’s a subtlety here that’s easy to miss. Cloud Functions are serverless — each invocation can land on a different instance, instances spin up and die unpredictably, and there’s no shared memory between them. You can’t just keep a counter in a variable. It would only track requests hitting that one instance, while every other instance counted from zero.
So the rate limit state lives in Firestore itself. Each limit level gets a document keyed by a hash of the scope (user, operation, query). Inside, there’s a counter and a bucket timestamp. When a request comes in, a Firestore transaction reads the current count for the active time window, checks it against the limit, and atomically increments it. If the window has rolled over, the counter resets. If it hasn’t and the count exceeds the limit, the request gets a resource-exhausted error with a Retry-After hint before any actual data reads happen.
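The counter logic inside that transaction can be sketched without Firestore at all. Here a plain Map stands in for the rate-limit documents, and tryConsume plays the role of the transaction body; the window size and ceiling are illustrative:

```typescript
// Fixed-window counter sketch. A Map stands in for the Firestore documents;
// in the real system this read-check-increment runs inside a transaction so
// concurrent invocations can't race past the limit.
interface Bucket {
  windowStart: number; // start of the window this count belongs to
  count: number;       // requests seen in that window
}

const WINDOW_MS = 60_000; // illustrative window length
const MAX_PER_WINDOW = 5; // illustrative ceiling

const buckets = new Map<string, Bucket>();

// Returns true if the request may proceed; false maps to resource-exhausted.
function tryConsume(scopeKey: string, now: number): boolean {
  const windowStart = Math.floor(now / WINDOW_MS) * WINDOW_MS;
  const bucket = buckets.get(scopeKey);
  // First request, or the window has rolled over: reset the counter to 1.
  if (!bucket || bucket.windowStart !== windowStart) {
    buckets.set(scopeKey, { windowStart, count: 1 });
    return true;
  }
  if (bucket.count >= MAX_PER_WINDOW) return false; // over the limit, reject early
  bucket.count += 1;
  return true;
}
```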
Yes, this means every rate-limited request costs a couple of extra Firestore operations just for the rate limiting to work. That’s the trade-off. You’re spending a few reads and writes per request to prevent thousands of unauthorized ones. At our scale it’s negligible. If it ever isn’t, you’d move the counters to something faster — Redis, Memorystore, or even Firebase Realtime Database, which has lower latency for this kind of increment-and-check pattern. But for now, keeping it all in Firestore means no extra infrastructure to manage.
I also added a collection allowlist. Not every collection is accessible, and not every operation is allowed on every collection. Audit logs are read-only. You can’t write to them through the API. The idea is simple: reduce the surface area so there’s less to abuse even if someone is poking around.
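An allowlist like that boils down to a map from collection to permitted operations. The collection names here are hypothetical stand-ins for the real configuration:

```typescript
// Hypothetical allowlist; collection names and permitted operations are
// illustrative, not Fastbooks' actual schema.
type Op = "read" | "write";

const ALLOWLIST: Record<string, Op[]> = {
  invoices: ["read", "write"],
  customers: ["read", "write"],
  auditLogs: ["read"], // read-only: writes never reach this collection via the API
};

// Unknown collections are denied outright, shrinking the abusable surface.
function isAllowed(collection: string, op: Op): boolean {
  return ALLOWLIST[collection]?.includes(op) ?? false;
}
```

Denying by default matters as much as the entries themselves: a collection you forgot to list is unreachable, not accidentally open.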
The side door I almost forgot
After building all of this, I realized there was still a gap. A stupid one.
The Cloud Functions use the Firebase Admin SDK, which bypasses Security Rules entirely. But our old rules were still in place:
allow read, write: if request.auth.uid != null && getUserOrg() == orgId;
Someone could skip the Cloud Functions entirely, open a browser console, import the Firestore SDK, and call getDocs() directly. No rate limiting. No allowlist. No audit logging.
Since every legitimate operation now goes through Cloud Functions, I could do something that would have been unthinkable before: deny all direct client access.
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if false;
    }
  }
}
That’s it. The Admin SDK ignores Security Rules, so our functions keep working. But anyone hitting Firestore directly from the client gets a hard denial. No exceptions.
I also found an old database service file during cleanup — still had direct Firestore imports, marked as deprecated, not wired up anywhere. Dead code with direct database access sitting in the repo like a loaded gun in a drawer. I deleted it. Should have done it months ago.
The boring plumbing that matters
One wrinkle I didn’t anticipate: Cloud Functions communicate over JSON, which can’t represent Firestore Timestamps or query constraints natively. I ended up building a transport layer that serializes these types with __type markers on the way out and deserializes them on the way in. Query constraints get extracted from the Firestore SDK’s internal representation and reconstructed server-side with the Admin SDK.
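The Timestamp half of that transport layer can be sketched in a few lines. The __type marker comes from the post, but the exact field names and envelope shape here are assumptions; on the server, the deserialized values would be rebuilt as Admin SDK Timestamps rather than plain objects:

```typescript
// Minimal sketch of the __type envelope for Timestamps. Shapes are
// illustrative assumptions, not the actual transport format.
interface TimestampLike {
  seconds: number;
  nanoseconds: number;
}

type Wire = { __type: "timestamp"; seconds: number; nanoseconds: number };

function isTimestampLike(v: unknown): v is TimestampLike {
  return typeof v === "object" && v !== null && "seconds" in v && "nanoseconds" in v;
}

// Client side: replace Timestamps with a tagged, JSON-safe object.
function serializeValue(v: unknown): unknown {
  if (isTimestampLike(v)) {
    return { __type: "timestamp", seconds: v.seconds, nanoseconds: v.nanoseconds };
  }
  return v; // plain JSON values pass through untouched
}

// Server side: recognize the tag and rebuild the original type.
function deserializeValue(v: unknown): unknown {
  if (typeof v === "object" && v !== null && (v as Wire).__type === "timestamp") {
    const w = v as Wire;
    return { seconds: w.seconds, nanoseconds: w.nanoseconds };
  }
  return v;
}
```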
It’s unglamorous work. The kind of code nobody will ever praise you for writing. But it’s the reason the rest of the system holds together.
What this made me think about
If you’re building on Firebase and your client talks to Firestore directly, you’re trusting every authenticated user to be well-behaved. Security Rules protect you from unauthorized access. They won’t protect your wallet from authorized abuse.
The lesson isn’t technical. I already knew client-side throttling is a suggestion and server-side rate limiting is a wall. I knew that once all access goes through Cloud Functions, you can lock Security Rules to if false and the only path to your data is the one you control. I knew all of it. The lesson is that “it’s too small to be a target” is the most dangerous assumption you can make — because the people scanning for vulnerabilities don’t check your usage or who you are first.
Fastbooks is a small project. No revenue, no bug bounty, no budget for a runaway Firestore bill. But a suspicious email and thirty minutes of checking graphs turned into a week of rebuilding. And the system that came out the other side is genuinely better.
Sometimes the best security improvements come from the scares that don’t quite become incidents. The near-miss that makes you look at what you built and think, honestly, how did I ever think this was enough?