Sagara

Trying Out Snowflake's Adaptive Warehouse — Auto-Scaling Compute Without Manual Sizing

This is an English translation of the original Japanese article:
https://dev.classmethod.jp/articles/snowflake-try-adaptive-warehouse/

Until now, Snowflake warehouses required you to manually select and manage a size (XS, S, M, L, etc.). With Adaptive Warehouse, compute resources automatically scale based on query characteristics. This eliminates the need for manual warehouse sizing and scale-up/scale-out configuration.

I learned about this feature from @tshowis's post:

https://x.com/tshowis/status/2044446136556282293

While release notes haven't been published yet, the official documentation has full details. The Public Preview supported regions include AWS Tokyo Region (AP Northeast 1), so those using the Tokyo region can try it right away. I tested how it works using TPC-H sample data and have summarized the steps and results below.

https://docs.snowflake.com/en/user-guide/warehouses-adaptive

Feature Overview

Adaptive Warehouse is a compute service that recognizes workloads and automatically allocates resources. It has a dedicated shared compute pool per account and automatically determines the optimal resources based on query characteristics.

Key features include:

  • No warehouse sizing required: No need to manually select or change sizes
  • Automatic per-query scaling: Small queries use fewer resources, large queries automatically use more
  • Online conversion: Converting existing standard warehouses to Adaptive Warehouses requires no downtime
  • Behavior controlled by 2 parameters

The two controllable parameters are:

| Parameter | Default | Description |
| --- | --- | --- |
| MAX_QUERY_PERFORMANCE_LEVEL | XLARGE | Upper performance limit per query (XSMALL–X4LARGE) |
| QUERY_THROUGHPUT_MULTIPLIER | 2 | Scale factor for concurrent execution (0 for unlimited, positive integer for cap) |

Setting QUERY_THROUGHPUT_MULTIPLIER to N ensures capacity for N queries to run concurrently at the MAX_QUERY_PERFORMANCE_LEVEL resource level.
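
If you later want to loosen or tighten these limits on an existing Adaptive Warehouse, an ALTER WAREHOUSE form mirroring the CREATE syntax used later in this article should work. Treat the exact syntax and the example values (LARGE, 4) below as my assumption and confirm against the official docs:

```sql
-- Assumed ALTER syntax, mirroring CREATE ADAPTIVE WAREHOUSE:
-- cap each query at LARGE-equivalent resources, with headroom
-- for 4 such queries to run concurrently
ALTER WAREHOUSE adaptive_wh
  SET MAX_QUERY_PERFORMANCE_LEVEL = LARGE
      QUERY_THROUGHPUT_MULTIPLIER = 4;
```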

Limitations

  • As of April 16, 2026, this is a Public Preview feature. Specifications may change before GA
  • Available regions (Public Preview): US West 2 (Oregon), EU West 1 (Ireland), AP Northeast 1 (Tokyo)
  • Required edition: Enterprise Edition or higher
  • Conversion from X5Large/X6Large warehouses to Adaptive Warehouse is not supported
  • Conversion between Adaptive Warehouses and Snowpark-optimized or Interactive warehouses (in either direction) is likewise not supported

Cost

Adaptive Warehouse uses a query-based billing model. Each query's cost is calculated based on the compute resources and software resources consumed. The total warehouse cost is the sum of all executed query costs.

Creating a warehouse itself incurs no cost — billing begins when queries are executed. For cost management, you can use Budgets, Resource Monitors, and views in SNOWFLAKE.ACCOUNT_USAGE for monitoring.
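
As one monitoring option, the SNOWFLAKE.ACCOUNT_USAGE.QUERY_ATTRIBUTION_HISTORY view breaks warehouse credits down per query, which fits the query-based billing model well. A sketch, assuming this view also covers Adaptive Warehouse queries:

```sql
-- Per-query compute credits over the last day, most expensive first
SELECT
    query_id,
    warehouse_name,
    credits_attributed_compute
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_ATTRIBUTION_HISTORY
WHERE start_time >= DATEADD(DAY, -1, CURRENT_TIMESTAMP())
ORDER BY credits_attributed_compute DESC;
```

Note that ACCOUNT_USAGE views can lag behind real time, so this is for cost review rather than live monitoring.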

Prerequisites

  • Snowflake: AWS US Oregon region, Enterprise Edition
    • I happened to have a trial account, so I tested on the US Oregon region

Preparation

Verify TPC-H Sample Data

For this test, I used the TPC-H sample data provided by default in Snowflake. TPC-H is a decision support benchmark dataset well-suited for evaluating large-scale data processing with complex joins and aggregations.

https://docs.snowflake.com/en/user-guide/sample-data-tpch

Multiple schemas with different scale factors (SF) are available in the SNOWFLAKE_SAMPLE_DATA database:

| Schema | Approximate Data Scale |
| --- | --- |
| TPCH_SF1 | Millions of records |
| TPCH_SF10 | Tens of millions of records |
| TPCH_SF100 | Hundreds of millions of records |
| TPCH_SF1000 | Billions of records |

To clearly observe the automatic scaling behavior, I used TPCH_SF1000 (billions of records).
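
To get a feel for the scale before benchmarking, you can count the largest table. In Snowflake a plain COUNT(*) is answered from table metadata, so this is cheap even at SF1000:

```sql
-- LINEITEM is the largest TPC-H table (~6 billion rows at SF1000)
SELECT COUNT(*) AS lineitem_rows
FROM SNOWFLAKE_SAMPLE_DATA.TPCH_SF1000.LINEITEM;
```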

First, verify that you can access the sample data:

-- Verify sample data access
USE SCHEMA SNOWFLAKE_SAMPLE_DATA.TPCH_SF1000;

-- Check table list
SHOW TABLES;

You should see 8 tables (LINEITEM, ORDERS, CUSTOMER, SUPPLIER, PART, PARTSUPP, NATION, REGION).


Disable Account-Level Cache

To accurately compare execution times before and after parameter tuning, disable the query result cache. With caching enabled, subsequent identical queries would be served from cache, making execution time comparison impossible.

https://docs.snowflake.com/en/sql-reference/parameters#use-cached-result

Disable caching at the account level:

USE ROLE ACCOUNTADMIN;

-- Disable query result cache at account level
ALTER ACCOUNT SET USE_CACHED_RESULT = false;

Verify the parameter is set to false:

-- Confirm the setting
SHOW PARAMETERS LIKE '%USE_CACHED_RESULT%';

⚠️ If you're using a production account, don't forget to re-enable caching after testing:

USE ROLE ACCOUNTADMIN;
ALTER ACCOUNT SET USE_CACHED_RESULT = true;
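
If an account-wide change is too invasive (for example, on a shared account), USE_CACHED_RESULT can also be set at the session level, which scopes the cache bypass to your own worksheet:

```sql
-- Disable the query result cache for the current session only
ALTER SESSION SET USE_CACHED_RESULT = FALSE;
```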

Hands-On Testing

1. Create an Adaptive Warehouse (XLARGE Setting)

First, create a new Adaptive Warehouse with MAX_QUERY_PERFORMANCE_LEVEL = XLARGE:

USE ROLE SYSADMIN;

-- Create Adaptive Warehouse with XLARGE setting
CREATE ADAPTIVE WAREHOUSE adaptive_wh_xlarge
  WITH MAX_QUERY_PERFORMANCE_LEVEL = XLARGE
       QUERY_THROUGHPUT_MULTIPLIER = 2;

Verify the configuration:

-- Check warehouse settings
SHOW WAREHOUSES LIKE 'adaptive_wh_xlarge';

Confirm that the type column shows ADAPTIVE.
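
If you prefer checking this programmatically rather than scanning the SHOW output by eye, the standard RESULT_SCAN pattern works:

```sql
SHOW WAREHOUSES LIKE 'adaptive_wh_xlarge';

-- Inspect the previous SHOW output as a result set
SELECT "name", "type"
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));
```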


2. Run TPC-H Queries with XLARGE Setting

Run representative TPC-H queries using adaptive_wh_xlarge.

Set QUERY_TAG so we can compare execution times later:

-- Set QUERY_TAG for later comparison
ALTER SESSION SET QUERY_TAG = 'adaptive_xlarge_m2';

USE WAREHOUSE adaptive_wh_xlarge;
USE SCHEMA SNOWFLAKE_SAMPLE_DATA.TPCH_SF1000;

TPC-H Q1: Aggregation Query (Full LINEITEM Table Scan)

First, run TPC-H Q1 (Pricing Summary Report) — one of the simplest TPC-H queries. It performs a full table scan and aggregation on the LINEITEM table (approximately 6 billion records at SF1000).

-- TPC-H Q1: Pricing Summary Report
-- Aggregates the LINEITEM table (~6 billion records at SF1000)
SELECT
    L_RETURNFLAG,
    L_LINESTATUS,
    SUM(L_QUANTITY)                                       AS SUM_QTY,
    SUM(L_EXTENDEDPRICE)                                  AS SUM_BASE_PRICE,
    SUM(L_EXTENDEDPRICE * (1 - L_DISCOUNT))               AS SUM_DISC_PRICE,
    SUM(L_EXTENDEDPRICE * (1 - L_DISCOUNT) * (1 + L_TAX)) AS SUM_CHARGE,
    AVG(L_QUANTITY)                                       AS AVG_QTY,
    AVG(L_EXTENDEDPRICE)                                  AS AVG_PRICE,
    AVG(L_DISCOUNT)                                       AS AVG_DISC,
    COUNT(*)                                              AS COUNT_ORDER
FROM
    LINEITEM
WHERE
    L_SHIPDATE <= DATEADD(DAY, -90, TO_DATE('1998-12-01'))
GROUP BY
    L_RETURNFLAG,
    L_LINESTATUS
ORDER BY
    L_RETURNFLAG,
    L_LINESTATUS;

Results after 5 runs:

| Run | Execution Time (sec) |
| --- | --- |
| 1 | 4.47 |
| 2 | 4.55 |
| 3 | 3.68 |
| 4 | 3.85 |
| 5 | 3.77 |

TPC-H Q5: Multi-Table Join Query

Next, run Q5 (Local Supplier Volume), which joins multiple tables:

-- TPC-H Q5: Local Supplier Volume
-- Complex query joining 6 tables
SELECT
    N.N_NAME,
    SUM(L.L_EXTENDEDPRICE * (1 - L.L_DISCOUNT)) AS REVENUE
FROM
    CUSTOMER   C,
    ORDERS     O,
    LINEITEM   L,
    SUPPLIER   S,
    NATION     N,
    REGION     R
WHERE
    C.C_CUSTKEY    = O.O_CUSTKEY
    AND L.L_ORDERKEY  = O.O_ORDERKEY
    AND L.L_SUPPKEY   = S.S_SUPPKEY
    AND C.C_NATIONKEY = S.S_NATIONKEY
    AND S.S_NATIONKEY = N.N_NATIONKEY
    AND N.N_REGIONKEY = R.R_REGIONKEY
    AND R.R_NAME      = 'ASIA'
    AND O.O_ORDERDATE >= TO_DATE('1994-01-01')
    AND O.O_ORDERDATE <  DATEADD(YEAR, 1, TO_DATE('1994-01-01'))
GROUP BY
    N.N_NAME
ORDER BY
    REVENUE DESC;

Results after 5 runs:

| Run | Execution Time (sec) |
| --- | --- |
| 1 | 5.74 |
| 2 | 3.17 |
| 3 | 3.03 |
| 4 | 3.03 |
| 5 | 2.83 |

3. Create a Comparison Adaptive Warehouse (SMALL Setting)

Create a separate Adaptive Warehouse with MAX_QUERY_PERFORMANCE_LEVEL = SMALL:

USE ROLE SYSADMIN;

-- Create Adaptive Warehouse with SMALL setting
CREATE ADAPTIVE WAREHOUSE adaptive_wh_small
  WITH MAX_QUERY_PERFORMANCE_LEVEL = SMALL
       QUERY_THROUGHPUT_MULTIPLIER = 2;

-- Check warehouse settings
SHOW WAREHOUSES LIKE 'adaptive_wh_small';

Confirm that the type column shows ADAPTIVE.


4. Run TPC-H Queries with SMALL Setting

Run the same Q1 and Q5 queries using adaptive_wh_small:

-- Set QUERY_TAG for SMALL setting runs
ALTER SESSION SET QUERY_TAG = 'adaptive_small_m2';

USE WAREHOUSE adaptive_wh_small;
USE SCHEMA SNOWFLAKE_SAMPLE_DATA.TPCH_SF1000;

TPC-H Q1: Aggregation Query

-- Q1 execution (MAX_QUERY_PERFORMANCE_LEVEL = SMALL)
SELECT
    L_RETURNFLAG,
    L_LINESTATUS,
    SUM(L_QUANTITY)                                       AS SUM_QTY,
    SUM(L_EXTENDEDPRICE)                                  AS SUM_BASE_PRICE,
    SUM(L_EXTENDEDPRICE * (1 - L_DISCOUNT))               AS SUM_DISC_PRICE,
    SUM(L_EXTENDEDPRICE * (1 - L_DISCOUNT) * (1 + L_TAX)) AS SUM_CHARGE,
    AVG(L_QUANTITY)                                       AS AVG_QTY,
    AVG(L_EXTENDEDPRICE)                                  AS AVG_PRICE,
    AVG(L_DISCOUNT)                                       AS AVG_DISC,
    COUNT(*)                                              AS COUNT_ORDER
FROM
    LINEITEM
WHERE
    L_SHIPDATE <= DATEADD(DAY, -90, TO_DATE('1998-12-01'))
GROUP BY
    L_RETURNFLAG,
    L_LINESTATUS
ORDER BY
    L_RETURNFLAG,
    L_LINESTATUS;

Results after 5 runs:

| Run | Execution Time (sec) |
| --- | --- |
| 1 | 21.57 |
| 2 | 31.73 |
| 3 | 27.73 |
| 4 | 21.74 |
| 5 | 34.35 |

TPC-H Q5: Multi-Table Join Query

-- Q5 execution (MAX_QUERY_PERFORMANCE_LEVEL = SMALL)
SELECT
    N.N_NAME,
    SUM(L.L_EXTENDEDPRICE * (1 - L.L_DISCOUNT)) AS REVENUE
FROM
    CUSTOMER   C,
    ORDERS     O,
    LINEITEM   L,
    SUPPLIER   S,
    NATION     N,
    REGION     R
WHERE
    C.C_CUSTKEY    = O.O_CUSTKEY
    AND L.L_ORDERKEY  = O.O_ORDERKEY
    AND L.L_SUPPKEY   = S.S_SUPPKEY
    AND C.C_NATIONKEY = S.S_NATIONKEY
    AND S.S_NATIONKEY = N.N_NATIONKEY
    AND N.N_REGIONKEY = R.R_REGIONKEY
    AND R.R_NAME      = 'ASIA'
    AND O.O_ORDERDATE >= TO_DATE('1994-01-01')
    AND O.O_ORDERDATE <  DATEADD(YEAR, 1, TO_DATE('1994-01-01'))
GROUP BY
    N.N_NAME
ORDER BY
    REVENUE DESC;

Results after 5 runs:

| Run | Execution Time (sec) |
| --- | --- |
| 1 | 42.18 |
| 2 | 38.49 |
| 3 | 38.41 |
| 4 | 38.69 |
| 5 | 36.97 |

5. Compare Execution Times Using INFORMATION_SCHEMA.QUERY_HISTORY

After completing runs on both warehouses, compare the two QUERY_TAG values using the INFORMATION_SCHEMA.QUERY_HISTORY() table function. Unlike the ACCOUNT_USAGE views, which can lag by up to 45 minutes, it reflects recent queries without delay.

View All 20 Individual Run Times

First, check all 20 individual execution records (Q1 and Q5 × 5 runs each × 2 warehouses). Queries containing L_RETURNFLAG in QUERY_TEXT are identified as Q1, and those containing N_NAME as Q5.

-- All 20 execution records (check execution times)
SELECT
    QUERY_TAG,
    WAREHOUSE_NAME,
    CASE
        WHEN QUERY_TEXT ILIKE '%L_RETURNFLAG%' THEN 'Q1'
        WHEN QUERY_TEXT ILIKE '%N_NAME%'       THEN 'Q5'
        ELSE 'OTHER'
    END                                       AS query_name,
    START_TIME,
    ROUND(EXECUTION_TIME / 1000, 2)           AS exec_sec
FROM TABLE(
    INFORMATION_SCHEMA.QUERY_HISTORY(
        END_TIME_RANGE_START => DATEADD(HOUR, -2, CURRENT_TIMESTAMP()),
        END_TIME_RANGE_END   => CURRENT_TIMESTAMP(),
        RESULT_LIMIT         => 1000
    )
)
WHERE WAREHOUSE_NAME   IN ('ADAPTIVE_WH_XLARGE', 'ADAPTIVE_WH_SMALL')
  AND QUERY_TAG        IN ('adaptive_xlarge_m2', 'adaptive_small_m2')
  AND EXECUTION_STATUS = 'SUCCESS'
  AND QUERY_TYPE       = 'SELECT'
ORDER BY QUERY_TAG, query_name, START_TIME;

| QUERY_TAG | WAREHOUSE_NAME | Query | Run | Execution Time (sec) |
| --- | --- | --- | --- | --- |
| adaptive_xlarge_m2 | ADAPTIVE_WH_XLARGE | Q1 | 1 | 4.47 |
| adaptive_xlarge_m2 | ADAPTIVE_WH_XLARGE | Q1 | 2 | 4.55 |
| adaptive_xlarge_m2 | ADAPTIVE_WH_XLARGE | Q1 | 3 | 3.68 |
| adaptive_xlarge_m2 | ADAPTIVE_WH_XLARGE | Q1 | 4 | 3.85 |
| adaptive_xlarge_m2 | ADAPTIVE_WH_XLARGE | Q1 | 5 | 3.77 |
| adaptive_xlarge_m2 | ADAPTIVE_WH_XLARGE | Q5 | 1 | 5.74 |
| adaptive_xlarge_m2 | ADAPTIVE_WH_XLARGE | Q5 | 2 | 3.17 |
| adaptive_xlarge_m2 | ADAPTIVE_WH_XLARGE | Q5 | 3 | 3.03 |
| adaptive_xlarge_m2 | ADAPTIVE_WH_XLARGE | Q5 | 4 | 3.03 |
| adaptive_xlarge_m2 | ADAPTIVE_WH_XLARGE | Q5 | 5 | 2.83 |
| adaptive_small_m2 | ADAPTIVE_WH_SMALL | Q1 | 1 | 21.57 |
| adaptive_small_m2 | ADAPTIVE_WH_SMALL | Q1 | 2 | 31.73 |
| adaptive_small_m2 | ADAPTIVE_WH_SMALL | Q1 | 3 | 27.73 |
| adaptive_small_m2 | ADAPTIVE_WH_SMALL | Q1 | 4 | 21.74 |
| adaptive_small_m2 | ADAPTIVE_WH_SMALL | Q1 | 5 | 34.35 |
| adaptive_small_m2 | ADAPTIVE_WH_SMALL | Q5 | 1 | 42.18 |
| adaptive_small_m2 | ADAPTIVE_WH_SMALL | Q5 | 2 | 38.49 |
| adaptive_small_m2 | ADAPTIVE_WH_SMALL | Q5 | 3 | 38.41 |
| adaptive_small_m2 | ADAPTIVE_WH_SMALL | Q5 | 4 | 38.69 |
| adaptive_small_m2 | ADAPTIVE_WH_SMALL | Q5 | 5 | 36.97 |

Aggregated Comparison

Check whether lowering MAX_QUERY_PERFORMANCE_LEVEL is reflected in execution_time. I also included queued_overload_time to observe any queuing trends.

-- Aggregate execution times by QUERY_TAG × query type for comparison
-- Only target the most recent 5 SELECT statements per QUERY_TAG × query_name
WITH ranked AS (
    SELECT
        QUERY_TAG,
        WAREHOUSE_NAME,
        CASE
            WHEN QUERY_TEXT ILIKE '%L_RETURNFLAG%' THEN 'Q1'
            WHEN QUERY_TEXT ILIKE '%N_NAME%'       THEN 'Q5'
            ELSE 'OTHER'
        END                  AS query_name,
        EXECUTION_TIME,
        TOTAL_ELAPSED_TIME,
        QUEUED_OVERLOAD_TIME,
        ROW_NUMBER() OVER (
            PARTITION BY QUERY_TAG,
                         CASE
                             WHEN QUERY_TEXT ILIKE '%L_RETURNFLAG%' THEN 'Q1'
                             WHEN QUERY_TEXT ILIKE '%N_NAME%'       THEN 'Q5'
                             ELSE 'OTHER'
                         END
            ORDER BY START_TIME DESC
        ) AS rn
    FROM TABLE(
        INFORMATION_SCHEMA.QUERY_HISTORY(
            END_TIME_RANGE_START => DATEADD(HOUR, -2, CURRENT_TIMESTAMP()),
            END_TIME_RANGE_END   => CURRENT_TIMESTAMP(),
            RESULT_LIMIT         => 1000
        )
    )
    WHERE WAREHOUSE_NAME   IN ('ADAPTIVE_WH_XLARGE', 'ADAPTIVE_WH_SMALL')
      AND QUERY_TAG        IN ('adaptive_xlarge_m2', 'adaptive_small_m2')
      AND EXECUTION_STATUS = 'SUCCESS'
      AND QUERY_TYPE       = 'SELECT'
)
SELECT
    QUERY_TAG,
    WAREHOUSE_NAME,
    query_name,
    COUNT(*)                                           AS run_count,
    ROUND(AVG(EXECUTION_TIME)       / 1000, 2)        AS avg_exec_sec,
    ROUND(MIN(EXECUTION_TIME)       / 1000, 2)        AS min_exec_sec,
    ROUND(MAX(EXECUTION_TIME)       / 1000, 2)        AS max_exec_sec,
    ROUND(AVG(TOTAL_ELAPSED_TIME)   / 1000, 2)        AS avg_elapsed_sec,
    ROUND(AVG(QUEUED_OVERLOAD_TIME) / 1000, 2)        AS avg_queued_overload_sec
FROM ranked
WHERE rn <= 5
GROUP BY QUERY_TAG, WAREHOUSE_NAME, query_name
ORDER BY QUERY_TAG, query_name;

Here are the summarized results in table format.

You can see that adaptive_small_m2 has longer avg_exec_sec than adaptive_xlarge_m2 (execution time increases as the performance cap is lowered).

| QUERY_TAG | WAREHOUSE_NAME | Query | Runs | Avg Exec (sec) | Min (sec) | Max (sec) | Avg Elapsed (sec) | Avg Queue Wait (sec) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| adaptive_xlarge_m2 | ADAPTIVE_WH_XLARGE | Q1 | 5 | 4.06 | 3.68 | 4.55 | 7.17 | 0.00 |
| adaptive_xlarge_m2 | ADAPTIVE_WH_XLARGE | Q5 | 5 | 3.56 | 2.83 | 5.74 | 6.86 | 0.00 |
| adaptive_small_m2 | ADAPTIVE_WH_SMALL | Q1 | 5 | 27.42 | 21.57 | 34.35 | 25.82 | 0.00 |
| adaptive_small_m2 | ADAPTIVE_WH_SMALL | Q5 | 5 | 38.95 | 36.97 | 42.18 | 42.72 | 0.00 |

Analysis

Both Q1 and Q5 showed significantly shorter execution times on the XLARGE warehouse compared to the SMALL warehouse.

| Query | XLARGE Avg Exec Time | SMALL Avg Exec Time | Ratio |
| --- | --- | --- | --- |
| Q1 | 4.06 sec | 27.42 sec | ~6.8x |
| Q5 | 3.56 sec | 38.95 sec | ~10.9x |

Q5 showed a larger ratio than Q1. Since Q5 involves JOINs across 6 tables, it's a more complex query where the difference in available resources becomes more pronounced.
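
As a quick sanity check, the ratios above can be reproduced directly from the averaged execution times:

```python
# Averaged execution times (sec) from the aggregated comparison
xlarge = {"Q1": 4.06, "Q5": 3.56}   # MAX_QUERY_PERFORMANCE_LEVEL = XLARGE
small  = {"Q1": 27.42, "Q5": 38.95} # MAX_QUERY_PERFORMANCE_LEVEL = SMALL

# SMALL-to-XLARGE slowdown ratio per query
ratios = {q: round(small[q] / xlarge[q], 1) for q in xlarge}
print(ratios)  # {'Q1': 6.8, 'Q5': 10.9}
```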

Testing Note: Warehouse Data Cache

There's an important point to keep in mind about this test. Snowflake warehouses have a data cache (local disk cache) separate from the query result cache. This mechanism retains micro-partitions read from remote storage on the warehouse's local SSD.

In this test, I disabled the query result cache at the account level with USE_CACHED_RESULT = false, but I did not disable the data cache. Therefore, when running the same query consecutively on the same warehouse, the 2nd run onward reads data from local SSD, making it faster than the first run.

Looking at the actual data, the XLARGE warehouse's Q5 first run took 5.74 seconds, while runs 2–5 dropped to 2.83–3.17 seconds, confirming the data cache effect.

Checking the query profile, the 2nd run shows a nonzero value for Percentage scanned from cache, confirming that the warehouse data cache was in effect.

SMALL Q5: 1st Run Query Profile


SMALL Q5: 2nd Run Query Profile


Since the purpose of this test was to confirm the "relative difference between SMALL and XLARGE," both warehouses were compared under the same conditions (same cache state), making the relative comparison valid. However, if you want to measure pure cold-start performance, note that you need to suspend and restart the warehouse to clear the data cache before measuring.
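
A sketch of that cold-start setup, assuming SUSPEND/RESUME behave for Adaptive Warehouses as they do for standard ones (suspending releases the compute and its local SSD cache):

```sql
-- Clear the warehouse data cache by cycling the warehouse
ALTER WAREHOUSE adaptive_wh_small SUSPEND;
ALTER WAREHOUSE adaptive_wh_small RESUME;
```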

Conclusion

I tested Snowflake's new Adaptive Warehouse feature using TPC-H sample data.

Comparing MAX_QUERY_PERFORMANCE_LEVEL at XLARGE vs. SMALL, I observed approximately a 6.8x difference for Q1 (aggregation query) and approximately a 10.9x difference for Q5 (6-table JOIN). More complex JOIN queries tend to show a more pronounced difference based on resource allocation, confirming that the MAX_QUERY_PERFORMANCE_LEVEL setting directly impacts performance.

The most impressive aspect was being able to control the "cost vs. performance balance" with just two parameters (MAX_QUERY_PERFORMANCE_LEVEL and QUERY_THROUGHPUT_MULTIPLIER) without thinking about warehouse sizes at all. Previously, you had to make specific size choices like "XL or XXL," but Adaptive Warehouse automates that while letting you simply specify an upper limit — a much simpler design.

This feature is available in Public Preview for Enterprise Edition and above. In addition to the US Oregon region (US West 2) used in this test, it's also available in EU West 1 (Ireland) and AP Northeast 1 (Tokyo). If you're using the Tokyo region, you can try it immediately with the same steps described here — give it a try!
