Snowflake Cost & Performance
Snowflake separates compute and storage, which gives fine-grained cost control — but also introduces new ways to overspend. This document covers performance optimization and cost governance strategies.
How Snowflake Charges
| Resource | Billing Unit |
|---|---|
| Compute (Virtual Warehouses) | Credits per second (minimum 60s) |
| Storage | Per TB per month |
| Data Transfer | Per GB (cross-region / cloud) |
| Serverless features (Snowpipe, serverless Tasks, Search Optimization) | Credits based on usage |
Compute is the dominant cost driver. Warehouses bill even when idle if not suspended.
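A quick way to spot idle-billing risk is to list warehouses whose auto-suspend is set too high. This sketch uses the documented `SHOW` + `RESULT_SCAN` pattern; note that column names in `SHOW` output are lowercase and must be quoted, and the 300-second threshold is illustrative:

```sql
-- List warehouses that idle for more than 5 minutes before suspending
SHOW WAREHOUSES;

SELECT "name", "size", "auto_suspend"
FROM table(result_scan(last_query_id()))
WHERE "auto_suspend" > 300 OR "auto_suspend" IS NULL;
```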
Virtual Warehouse Configuration
Auto-Suspend
Always configure auto-suspend. The default is 600 seconds (10 minutes) of idle time.
For interactive BI workloads:
```sql
ALTER WAREHOUSE my_warehouse SET AUTO_SUSPEND = 60;
```
For batch ETL warehouses:
```sql
ALTER WAREHOUSE etl_warehouse SET AUTO_SUSPEND = 120;
```
A warehouse that runs for 5 minutes and then idles for 10 minutes before suspending pays for 15 minutes of compute to do 5 minutes of work: the idle time costs 2× the useful compute.
Auto-Resume
Enable auto-resume so warehouses start automatically on query:
```sql
ALTER WAREHOUSE my_warehouse SET AUTO_RESUME = TRUE;
```
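Auto-suspend and auto-resume are usually set together at creation time. A minimal sketch (warehouse name is illustrative):

```sql
CREATE WAREHOUSE IF NOT EXISTS my_warehouse
  WAREHOUSE_SIZE = 'SMALL'
  AUTO_SUSPEND = 60           -- suspend after 60s idle
  AUTO_RESUME = TRUE          -- wake on the next query
  INITIALLY_SUSPENDED = TRUE; -- do not start billing at creation
```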
Warehouse Sizing
Scaling up (larger warehouse size) helps with:
- Complex joins on large tables
- Large `GROUP BY` operations
- High-memory transformations
Scaling up does not help with:
- High concurrency (many simultaneous small queries)
- Simple filtered reads
For concurrency, use multi-cluster warehouses or separate dedicated warehouses per workload.
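A multi-cluster configuration for a concurrency-bound warehouse can be sketched as follows (multi-cluster warehouses require Enterprise edition; the warehouse name and cluster counts are illustrative):

```sql
-- Scale out (more clusters) rather than up (bigger clusters)
ALTER WAREHOUSE wh_bi SET
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 3
  SCALING_POLICY = 'STANDARD';
```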
Warehouse Isolation
Separate warehouses by workload type:
| Warehouse | Purpose | Size |
|---|---|---|
| wh_etl | dbt and batch transforms | M–XL |
| wh_bi | BI tool queries (Power BI, Tableau) | S–M |
| wh_adhoc | Analyst ad-hoc queries | S |
| wh_ingest | Snowpipe / loading | XS–S |
This prevents BI slowdowns caused by heavy ETL jobs.
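Provisioning separate warehouses is a few statements; a sketch for two of the workloads above (the role name is illustrative):

```sql
CREATE WAREHOUSE IF NOT EXISTS wh_etl
  WAREHOUSE_SIZE = 'MEDIUM' AUTO_SUSPEND = 120
  AUTO_RESUME = TRUE INITIALLY_SUSPENDED = TRUE;

CREATE WAREHOUSE IF NOT EXISTS wh_bi
  WAREHOUSE_SIZE = 'SMALL' AUTO_SUSPEND = 60
  AUTO_RESUME = TRUE INITIALLY_SUSPENDED = TRUE;

-- Route the BI tool's connections to their own warehouse
GRANT USAGE ON WAREHOUSE wh_bi TO ROLE bi_reader; -- role name is hypothetical
```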
Query Optimization
Clustering Keys
Snowflake stores tables as micro-partitions (typically 50–500 MB of uncompressed data each). Queries that filter on a well-clustered column skip most partitions.
Add a clustering key when:
- The table is large (> 1 TB)
- Queries consistently filter by the same column(s)
- Partition overlap is high (visible in `SYSTEM$CLUSTERING_INFORMATION`)
```sql
ALTER TABLE fct_orders CLUSTER BY (order_date);
```
Check clustering depth:
```sql
SELECT SYSTEM$CLUSTERING_INFORMATION('fct_orders', '(order_date)');
```
Avoid Full Table Scans
```sql
-- Bad: no filter on a large table
SELECT count(*) FROM fct_orders;

-- Good: partition-filtered scan
SELECT count(*) FROM fct_orders WHERE order_date >= '2024-01-01';
```
Always include a date or clustering column filter on large tables.
Use LIMIT During Development
```sql
-- Expensive: full scan during testing
SELECT * FROM fct_orders;

-- Cheap: development sample
SELECT * FROM fct_orders LIMIT 1000;
```
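`LIMIT` caps the rows returned but the rows are not a representative sample. When development work needs a random subset, Snowflake's `SAMPLE` clause is an alternative (the percentage here is illustrative):

```sql
-- Roughly 1% random sample of rows
SELECT * FROM fct_orders SAMPLE (1);
```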
Result Cache
Snowflake caches query results for 24 hours. Identical queries return instantly from cache.
Conditions for cache hit:
- Exact same SQL text
- Underlying data has not changed
- The same warehouse is not required (the result cache is account-level, and serving a hit consumes no warehouse compute)
Do not artificially bypass the cache (e.g., adding random comments to queries).
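Conversely, when you genuinely need to bypass the cache (e.g., when benchmarking warehouse performance), use the documented session parameter instead of editing the SQL text:

```sql
ALTER SESSION SET USE_CACHED_RESULT = FALSE;
-- ... run benchmark queries ...
ALTER SESSION SET USE_CACHED_RESULT = TRUE;
```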
Query Profile
Always check the Query Profile for slow queries:
- Run the query in Snowsight
- Click the query → Query Profile
- Look for:
  - `TableScan` with high bytes scanned → missing clustering or a missing partition filter
  - `HashJoin` with a large build side → a large table ended up on the build side of the join (check join order and dimension size)
  - Spill to local/remote disk → warehouse undersized for this query
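Spilling can also be found in bulk via `ACCOUNT_USAGE.QUERY_HISTORY`, which exposes the spill byte counts as columns (the 7-day window and row limit here are illustrative):

```sql
-- Queries that spilled to remote disk in the last 7 days
SELECT query_id, warehouse_name, warehouse_size,
       bytes_spilled_to_local_storage,
       bytes_spilled_to_remote_storage
FROM snowflake.account_usage.query_history
WHERE start_time >= dateadd(day, -7, current_timestamp)
  AND bytes_spilled_to_remote_storage > 0
ORDER BY bytes_spilled_to_remote_storage DESC
LIMIT 20;
```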
Cost Monitoring
Resource Monitors
Set spending limits at the account or warehouse level:
```sql
CREATE RESOURCE MONITOR bi_monitor
  WITH CREDIT_QUOTA = 100
  TRIGGERS ON 80 PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;

ALTER WAREHOUSE wh_bi SET RESOURCE_MONITOR = bi_monitor;
```
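Monitors can also reset on a schedule and be applied account-wide; a sketch of a monthly account-level quota (monitor name and quota are illustrative):

```sql
CREATE RESOURCE MONITOR account_monthly
  WITH CREDIT_QUOTA = 1000
       FREQUENCY = MONTHLY
       START_TIMESTAMP = IMMEDIATELY
  TRIGGERS ON 90 PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;

ALTER ACCOUNT SET RESOURCE_MONITOR = account_monthly;
```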
Query Cost Estimation
Before running expensive queries in production:
```sql
-- EXPLAIN returns the plan, including partitions and bytes assigned,
-- without executing the query
EXPLAIN SELECT sum(revenue) FROM fct_orders WHERE order_date >= '2024-01-01';
```
Monitor Spend with SNOWFLAKE.ACCOUNT_USAGE
```sql
-- Top credit-consuming warehouses, last 30 days
SELECT
    warehouse_name,
    sum(credits_used) AS total_credits
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time >= dateadd(day, -30, current_timestamp)
GROUP BY warehouse_name
ORDER BY total_credits DESC;
```
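The same schema can surface the heaviest individual queries. Elapsed time is a proxy for cost here; exact per-query credit attribution requires the newer `QUERY_ATTRIBUTION_HISTORY` view where available (window and limit are illustrative):

```sql
-- Longest-running queries in the last 7 days
SELECT query_id, user_name, warehouse_name,
       total_elapsed_time / 1000 AS elapsed_s,
       bytes_scanned
FROM snowflake.account_usage.query_history
WHERE start_time >= dateadd(day, -7, current_timestamp)
ORDER BY total_elapsed_time DESC
LIMIT 20;
```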
dbt + Snowflake Cost Rules
- Use `incremental` materialization for large tables (never `table` for multi-billion-row models)
- Set `+snowflake_warehouse` per model to route expensive models to larger warehouses
- Use `post-hook` to suspend warehouses after heavy batch jobs
- Enable `copy_grants` to avoid permission re-grants on full refreshes
```yaml
models:
  my_project:
    mart:
      +snowflake_warehouse: wh_etl_xl
    stg:
      +snowflake_warehouse: wh_etl
```
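At the model level, warehouse routing combines with incremental loading in the model file itself. A sketch (model name, unique key, and upstream ref are illustrative):

```sql
-- models/mart/fct_orders.sql (hypothetical model)
{{ config(
    materialized='incremental',
    unique_key='order_id',
    snowflake_warehouse='wh_etl_xl'
) }}

select * from {{ ref('stg_orders') }}
{% if is_incremental() %}
-- only process rows newer than what is already loaded
where order_date > (select max(order_date) from {{ this }})
{% endif %}
```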
Golden Rules
- Auto-suspend within 60–120 seconds for all warehouses
- Never run BI and ETL workloads on the same warehouse
- Clustering keys for tables > 1 TB queried by date
- Resource monitors on every production warehouse
- Monitor `WAREHOUSE_METERING_HISTORY` weekly
- Avoid `SELECT *` on large tables in production
Summary
Snowflake cost control requires:
- Warehouse discipline: right-sized, auto-suspended, isolated by workload
- Query hygiene: partition filters, result cache awareness, avoid full scans
- Spend visibility: resource monitors and metering history dashboards
- Incremental loading: avoid full table rebuilds for large models
Compute waste is almost always the largest problem. Fix warehouse configuration before optimizing individual queries.
Related Docs
- Warehouse Standards — dbt layer definitions and incremental materialization rules
- dbt Project Structure — setting `+snowflake_warehouse` per model layer
- dbt Testing Strategy — data quality testing patterns for Snowflake models
- Cloud vs On-Premise — decision framework for platform selection