How We Helped a SaaS Company Cut Google Cloud Costs by 40% Without Downtime
Complete case study on Google Cloud cost optimization with real results and zero downtime
Cloud costs can silently kill growing companies.
One of our enterprise clients -- a global B2B SaaS platform -- came to us with a simple problem:
> "Our Google Cloud bill is out of control. Can you help?"
The result?
- We reduced their GCP bill by 40% in just 6 weeks
- With zero downtime
- No team disruption
- While improving performance and observability
Here's exactly what we did -- and how we can help other companies do the same.
The Problem: High Costs, No Visibility
Before we stepped in:
- Their Firestore reads exceeded 30M/day
- Cloud Run instances were scaling inefficiently
- BigQuery jobs were unpartitioned and expensive
- Logs were stored indefinitely
- No clear breakdown of per-feature or per-customer cost
Most teams focus on building -- not optimizing.
That's where we come in.
Our 6-Step Optimization Process
1. Full GCP Billing Audit
We reviewed:
- Project-level costs
- Service usage breakdown
- Logs ingestion/storage
- Idle services
- VPC egress and ingress patterns
- Container scaling behavior
Tools used: billing export to BigQuery + Data Studio dashboards
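With the billing export landing in BigQuery, the audit boils down to aggregating cost rows by service to find the biggest spenders. Here is a minimal sketch of that aggregation in Python, using simplified dicts in place of real billing-export rows (the service names and dollar amounts below are hypothetical, not the client's data):

```python
from collections import defaultdict

def top_cost_drivers(rows, n=3):
    """Aggregate billing-export-style rows by service and return the
    n most expensive services, highest first."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["service"]] += row["cost"]
    return sorted(totals.items(), key=lambda kv: -kv[1])[:n]

# Rows shaped loosely like the GCP billing export (hypothetical numbers):
rows = [
    {"service": "Cloud Firestore", "cost": 180.0},
    {"service": "BigQuery", "cost": 95.0},
    {"service": "Cloud Firestore", "cost": 40.0},
    {"service": "Cloud Run", "cost": 60.0},
]
print(top_cost_drivers(rows, n=2))
# [('Cloud Firestore', 220.0), ('BigQuery', 95.0)]
```

In practice the same `GROUP BY service` rollup runs as SQL against the export table, but the logic is identical.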
2. Firestore Cost Optimization
| Before | After |
|---|---|
| Redundant reads per user session | Batched + cached queries |
| Multiple `onSnapshot` listeners | Moved to `get()` where possible |
| No TTL for old documents | Scheduled purges via Cloud Tasks |
Firestore costs alone dropped by 60%.
3. BigQuery Cost Control
- Partitioned tables by date
- Clustered large tables on customer ID
- Reused scheduled queries instead of running the same raw queries repeatedly
- Auto-cancelled queries running longer than 10 seconds
- Added UDFs for business logic reuse
BigQuery costs dropped by 45%
Dashboards refreshed faster
4. Cloud Run & Cloud Functions Tuning
- Set concurrency to 40+ where safe
- Tuned idle CPU allocation down to 0.25 vCPU where possible
- Switched to reserved min instances for consistent workloads
- Auto-scaled background jobs based on custom metrics
Latency improved, costs reduced
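The concurrency setting is the lever here: Cloud Run needs roughly enough instances to hold all in-flight requests, and raising per-instance concurrency divides that number down. A Little's-law estimate (the request rate and latency below are illustrative, not the client's figures):

```python
import math

def instances_needed(rps, avg_latency_s, concurrency):
    """Rough Cloud Run instance count: in-flight requests
    (rps * latency) divided by requests each instance handles at once."""
    return math.ceil(rps * avg_latency_s / concurrency)

# 200 req/s at 300 ms average latency:
print(instances_needed(200, 0.3, 1))   # 60 instances at concurrency 1
print(instances_needed(200, 0.3, 40))  # 2 instances at concurrency 40
```

The "where safe" caveat in the list above is real: concurrency 40+ only works if one request's CPU or memory spike can't starve the other 39 sharing the instance.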
5. Log Retention & Observability
- Moved from "store everything forever" to:
  - a 30-day default retention
  - 1 year for audit logs only
- Exported essential logs to BigQuery for long-term use
- Set up threshold-based alerting instead of logging everything
Logging costs reduced by 80%
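As a sanity check on that figure, Cloud Logging bills per GiB ingested beyond a free allotment. A sketch using assumed pricing of $0.50/GiB with a 50 GiB/project free tier (verify both against current pricing), applied to the before/after volumes from the results table:

```python
# Assumed rates -- check current Cloud Logging pricing before relying on them.
PRICE_PER_GIB = 0.50
FREE_GIB = 50  # assumed per-project monthly free allotment

def monthly_logging_cost(gib_ingested):
    """Monthly ingestion cost: only volume above the free tier is billed."""
    return max(0, gib_ingested - FREE_GIB) * PRICE_PER_GIB

before = monthly_logging_cost(600)  # 600 GB/month before the cleanup
after = monthly_logging_cost(120)   # 120 GB/month after
print(before, after)  # 275.0 35.0
```

Note the cost drops faster than the volume: the free tier covers a larger share of the smaller bill.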
6. Monitoring and Alerts
- Added custom dashboards in Cloud Monitoring
- Tracked per-customer usage in BigQuery
- Cost anomaly detection via budget alerts + Slack notifications
Now they know exactly which features and customers generate the most cost.
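The anomaly-detection half of this step is conceptually simple: flag any day whose spend sits well above the trailing window's baseline, then route the flag to Slack. A minimal statistical sketch (the daily-spend history is made up for the example; a real setup would read it from the billing export):

```python
from statistics import mean, stdev

def is_cost_anomaly(daily_costs, today, k=3.0):
    """Flag today's spend if it is more than k standard deviations
    above the trailing window's mean -- the idea behind pairing
    budget alerts with a Slack notification."""
    mu, sigma = mean(daily_costs), stdev(daily_costs)
    return today > mu + k * sigma

history = [400, 410, 395, 405, 398, 402, 407]  # last week's daily spend ($)
print(is_cost_anomaly(history, 404))  # False -- an ordinary day
print(is_cost_anomaly(history, 520))  # True  -- worth a Slack ping
```

GCP's built-in budget alerts fire on fixed thresholds; a relative check like this catches gradual creep that a static budget would miss until month-end.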
Final Outcome
| Metric | Before | After |
|---|---|---|
| Monthly GCP cost | $12,100 | $7,300 |
| Avg. Firestore reads | 32M/day | 10M/day |
| Cloud Run instances | Uncapped | Tuned concurrency |
| BigQuery cost | $3,000+ | $1,600 |
| Logging volume | 600 GB/month | 120 GB/month |
- Total savings: $4,800+/month
- Zero performance regression
- Improved security and cost observability
Enterprise-Grade Practices We Applied
- IAM audit and cleanup (least privilege)
- Enforced VPC Service Controls
- GitOps for infrastructure using Terraform
- Real-time billing dashboards
- FinOps-ready tagging for departments/customers
What the Client Said
> "They didn't just reduce our cost -- they gave us total visibility. We now understand how our platform scales, and we know what each feature really costs to run."
> -- Director of Engineering, Global SaaS Company
Want to Cut Google Cloud Costs Without Downtime?
We offer:
- Full GCP cost audit
- Firestore, BigQuery, Cloud Run, and Logging optimization
- Security + IAM cleanup
- DevOps pipeline enhancements
- Feature-level cost tracking for your SaaS