Introduction
dbt 1.9 was released in December 2024. dbt held Developer Day in March 2025, where it announced the beta of dbt 1.10, and then Launch Day on May 28th, where it announced the public beta of its new official VS Code extension and of the new Fusion engine, the successor to dbt Core. dbt 1.10 was finally released on June 16, 2025. Fusion and the VS Code extension are both currently in Preview. Throughout these events, dbt has consistently highlighted the themes of safety and security it wants to bring to dbt. In this article we will look at the safety and security features dbt 1.10 brings to the table, along with the full range of features and changes that version 1.10 has to offer. We will also learn about Fusion, which is many times faster than dbt Core and brings native SQL comprehension right into dbt, and we will wrap up with the new VS Code extension. Let's get started with the features of dbt Core 1.10!
Sample Mode (--sample="duration")
Large datasets take a significant time to build. You can use the fast --empty flag to catch invalid SQL, but it leaves you nothing to inspect since no data is read or written. The new sample mode complements --empty: with the --sample flag you build a subset of your data rather than the entire thing, so you can inspect the result and validate the output, helping you iterate rapidly and reduce warehouse spend. This makes it perfect for development and CI workflows as well as large time-based datasets.
You can use the --sample flag with the dbt run or dbt build commands to specify a trailing time window:
dbt run --select models/staging/builds --sample="10 days"
If you have an even larger model, you can set sample mode to hours:
dbt run --select models/staging/builds --sample="2 hours"
You can also set the sample window explicitly by specifying a start time and an end time:
dbt run --sample="{'start': '2025-01-01', 'end': '2025-01-10 16:00:00'}"
At the moment, sample mode only supports time-based sampling; it is expected to become more robust in the future. --sample filters refs and sources, but you can prevent a specific ref from being sampled by calling the render() method on it:
with source as (
    select * from {{ ref('programs').render() }}
),
...
Additionally, you can set a default sample window at the environment level so that you don't have to pass the --sample flag manually on each run. Since sampling is time-based, any ref that gets sampled, such as {{ ref('build_times') }}, needs event_time configured on that model, pointing at the field to use as the timestamp, as sketched below. You can read more about sample mode here.
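A minimal sketch of that event_time configuration in a properties file (the file path and the build_started_at column name are hypothetical):
# models/properties.yml
models:
  - name: build_times
    config:
      event_time: build_started_at # hypothetical timestamp column used for time-based sampling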
Microbatch Awareness
The microbatch strategy for incremental models was introduced in dbt 1.9 to help process large time-series datasets efficiently. dbt 1.10 adds a new batch object to the model Jinja context: batch.first() and batch.last() identify the batch boundaries, pre-hooks run only on the first batch, and post-hooks only on the final batch.
{{ config(
    materialized='incremental',
    incremental_strategy='microbatch',
    event_time='session_start',
    begin='2025-08-09',
    batch_size='day'
) }}

{% if batch.first() %}
    {{ log("batch.first") }}
{% endif %}

select * from {{ source("source_name", "table_name") }}

{% if batch.last() %}
    {{ log("batch.last") }}
{% endif %}
For incremental microbatch models, if your upstream models don't have event_time configured, dbt cannot automatically filter them during batch processing and will perform full table scans on every batch run.
To avoid this, configure event_time on every upstream model that should be filtered.
You can read more about microbatching incremental models here.
Calculate Source Freshness via a SQL Query
Now you can calculate source freshness by providing a custom SQL query. Before this, there were two ways to calculate source freshness: warehouse metadata tables and loaded_at_field.
version: 2
sources:
  - name: programs_source
    config:
      freshness:
        warn_after:
          count: 1
          period: day
        error_after:
          count: 3
          period: day
      loaded_at_query: |
        SELECT min(build_time) FROM raw.programs
When you define loaded_at_query, loaded_at_field should not be defined.
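For contrast, a hedged sketch of the loaded_at_field approach (the _etl_loaded_at column is hypothetical):
sources:
  - name: programs_source
    config:
      loaded_at_field: _etl_loaded_at # hypothetical column recording when each row was loaded
      freshness:
        warn_after:
          count: 1
          period: day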
Now, you can also define freshness directly at the model level for adaptive jobs:
models:
  - name: program_builds
    config:
      freshness:
        build_after:
          count: 5
          period: hour
          updates_on: any
Validation & Linting
Let's take a look at the validation and linting advancements in dbt 1.10: a YAML/JSON schema check that detects misspelled fields, deprecation of duplicate keys, deprecation of certain custom keys, and macro argument validation.
Input Validation
There is now a YAML/JSON schema check that detects misspelled fields in dbt_project.yml and other YAML files. For example, if you enter nane instead of name or desciption instead of description, dbt will emit a warning instead of failing silently. Because of this new spell-check, you can no longer set custom YAML properties like this:
models:
  - name: programs
    description: Running computer programs.
    a_custom_prop: true # a custom property
dbt will throw a warning for a_custom_prop and will stop supporting such properties in the future, preventing name collisions with new reserved properties. Previously, you could also define arbitrary custom properties directly under config. These should now be nested under the meta config property, which will be the sole location for custom properties:
models:
  - name: programs
    config:
      meta:
        a_custom_prop: true # a custom property
    columns:
      - name: my_column
        config:
          meta:
            another_custom_one: false
Just like meta, the freshness, tags, docs, group, and access properties are also moving under config. For example, freshness under config:
sources:
  - name: ecom
    schema: raw
    description: E-commerce data for the Jaffle Shop
    config:
      freshness:
        warn_after:
          count: 24
          period: hour
You will get a warning if two identical keys exist in the same YAML file. In a future version, dbt will stop supporting duplicate keys entirely. Previously, dbt would silently use the last configuration listed in the file.
# profiles.yml
example_profile:
  target: first_target
  outputs:
    ...
example_profile: # dbt would use only this profile key
  target: second_target
  outputs:
    ...
If you want to keep the unused duplicate around, you can move it to a separate YAML file.
validate_macro_args Flag
With the new validate_macro_args flag set to True, dbt checks whether the argument names you've documented in your YAML match the argument names in your macro definitions, whether the documented argument types match, and whether those types are valid according to the supported types, raising a warning when they don't. If no arguments are documented in the YAML, dbt infers them from the macro and includes them in the manifest.json file. This flag is disabled by default.
An example documented macro:
# macros/file.yml
version: 2
macros:
  - name: unit_conversion
    description: A macro to convert between units
    arguments:
      - name: column_name
        description: Column to convert
        type: string
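To turn the check on, the flag is set at the project level; a minimal sketch, assuming validate_macro_args lives in the flags dictionary of dbt_project.yml like other project flags:
# dbt_project.yml
flags:
  validate_macro_args: True # assumption: enabled via the flags dictionary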
Recording Hard Deletes
dbt 1.10 introduces the new hard_deletes configuration for snapshots. The previous invalidate_hard_deletes: true config can now be expressed as hard_deletes: invalidate instead, although the former is still supported for existing snapshots. Besides invalidate, which marks deleted records as invalid by setting their dbt_valid_to timestamp to the current time, you can also set hard_deletes to ignore (the default, which takes no action on deleted records) or to new_record. Setting hard_deletes to new_record records whenever a row disappears from the upstream source by adding a new record to the snapshot table with a new metadata column, dbt_is_deleted, set to True; when the record is restored, dbt_is_deleted is set to False. This allows you to retain a continuous snapshot history without gaps. You can use hard_deletes with the dbt-postgres, dbt-bigquery, dbt-snowflake, and dbt-redshift adapters.
# snapshots/schema.yml
snapshots:
  - name: my_snapshot
    config:
      hard_deletes: new_record # options are: 'ignore', 'invalidate', or 'new_record'
      strategy: timestamp
      updated_at: updated_at
    columns:
      - name: dbt_valid_from
        description: Timestamp when the record became valid.
      - name: dbt_valid_to
        description: Timestamp when the record stopped being valid.
      - name: dbt_is_deleted
        description: Indicates whether the record was deleted.
Note: It is advised not to use hard_deletes with an existing snapshot without migrating your data.
Orphaned Jinja Blocks
Starting with dbt 1.10, you will receive warnings for orphaned Jinja blocks like the endmacro tag in the code below. dbt will stop supporting them altogether in the future.
-- macros/new.sql
{% endmacro %} {# orphaned endmacro Jinja block #}
{% macro greeting() %}
Hello World!
{% endmacro %}
Catalogs Support
dbt 1.10 includes support for parsing the catalogs.yml file; the ability to define catalogs in any schema YAML file, just like groups, is coming soon. This update enables write integrations, an important milestone in dbt's journey toward supporting external catalogs for Apache Iceberg tables (Iceberg is a table format for huge analytic datasets). You'll be able to provide a config specifying a catalog integration for your producer model. You can create a catalogs.yml file at the root level of your dbt project. An example (from the docs) using Snowflake Horizon as the catalog is shown below:
catalogs:
  - name: catalog_horizon
    active_write_integration: snowflake_write_integration
    write_integrations:
      - name: snowflake_write_integration
        external_volume: dbt_external_volume
        table_format: iceberg
        catalog_type: built_in
      - name: databricks_glue_write_integration
        external_volume: databricks_external_volume_prod
        table_format: iceberg
        catalog_type: unity
You can add the catalog_name config parameter in dbt_project.yml, inside the .sql model file, or in a property file (in the model folder). An example iceberg_model.sql:
{{
  config(
    materialized='table',
    catalog_name='catalog_horizon'
  )
}}
select * from {{ ref('programming_languages') }}
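Equivalently, the same configuration could be set from a properties file rather than the .sql file; a sketch assuming the standard model config syntax applies:
models:
  - name: iceberg_model
    config:
      catalog_name: catalog_horizon # assumption: same config key as in the .sql example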
Finally, execute the dbt model with dbt run -s iceberg_model. Read more about catalogs and supported configurations here.
Artifact Enhancement
Hybrid projects allow you to seamlessly integrate complementary dbt Core and dbt workflows by automatically uploading dbt Core artifacts like run_results.json, manifest.json, catalog.json, and sources.json into dbt. This enables you to visualize and perform cross-project references to models defined in dbt Core projects. An invocation_started_at field has been added alongside the invocation_id field to make certain types of run-time calculations easier. The addition of invocation_started_at to run_results.json may require updates in downstream integrations.
Note that dbt now refers to their cloud offering as simply dbt instead of “dbt Cloud”.
Python 3.13 Compatibility
dbt 1.9 stopped supporting Python 3.8 and users were encouraged to upgrade their Python environment to a newer version to ensure compatibility with the latest features and to enhance overall performance. dbt 1.10 brings compatibility with the latest 3.13 runtime. This support is initially available for the Postgres adapter, with official support for more adapters coming soon. Learn more about Python 3.13.
dbt Engine Environment Variables
Currently, all environment variables defined by the dbt engine are prefixed with DBT_. Since Core users can create custom environment variables with any name, every newly added dbt engine environment variable risks breaking projects that already define a variable with the same name. The DBT_ENGINE prefix is now reserved specifically for dbt engine environment variables: variables with this prefix can still be set, but new custom ones can no longer be created.
Custom Output Path for Source Freshness
The --output and -o flags used to override the default path for sources.json have been deprecated. The target path for all artifacts can still be set per invocation with the --target-path flag, or for the environment with the DBT_TARGET_PATH environment variable. The default target path is the target/ folder, located relative to the dbt_project.yml of the active project.
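For example, to redirect artifacts away from the default target/ folder (the artifacts/ path here is just an illustration):
dbt source freshness --target-path artifacts/
export DBT_TARGET_PATH=artifacts/ # applies to every dbt invocation in this shell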
Warn Error Options
The warn_error_options options include and exclude have been deprecated and replaced with error and warn, respectively.
...
flags:
  warn_error_options:
    error: # Previously "include"
    warn: # Previously "exclude"
    silence: # To silence or ignore warnings
      - NoNodesForSelectionCriteria
dbt 1.10 also introduces the Deprecations setting for the warn (formerly exclude) option. When error is set to all or *, warn can optionally be set to exclude specific warnings from being treated as exceptions. If you are using --warn-error-options '{"error": "all"}' or passing the --warn-error flag to promote all warnings to errors, you can set "warn": ["Deprecations"] to continue treating deprecation warnings as warnings.
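A sketch of how that combination might look in dbt_project.yml:
flags:
  warn_error_options:
    error: all # promote all warnings to errors...
    warn:
      - Deprecations # ...except deprecation warnings, which stay warnings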
Spaces in Resource Names Disallowed
By default, spaces in the names of resources such as models are now blocked. The require_resource_names_without_spaces flag enforces using resource names without spaces. When this flag is set to True, dbt will raise an exception (instead of a deprecation warning) if it detects a space in a resource name e.g. models/model name with spaces.sql.
# dbt_project.yml
...
flags:
  require_resource_names_without_spaces: True
Like the other behavior changes in this release, this one is gated behind the require_resource_names_without_spaces behavior flag, giving existing projects a migration window.
Miscellany
The following are some of the miscellaneous changes coming with dbt 1.10:
- The --models / --model / -m flags will raise a warning
- source-freshness-run-project-hooks is now true by default; legacy workflows may need adjustment
- Combining --sample and --sample-window CLI params
- Saved queries support tags
- Deprecated {{ modules.itertools }} usage
- Deprecated overrides property for sources
- Supporting loaded_at_query and loaded_at_field on source and table configs
- Begin validating configs from model SQL files
- Cost management features
See full changelog here.
How To Upgrade
dbt is committed to providing backward compatibility for all 1.x versions, and these behavior changes are accompanied by behavior change flags to provide a migration window for existing projects. Starting in 2024, dbt began providing the functionality from new versions of dbt Core via release tracks with automatic upgrades. If you have selected the "Latest" release track in dbt, you already have access to everything included in dbt Core v1.10.
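If you manage dbt Core yourself instead, upgrading is typically a pip install away; a minimal sketch assuming the Postgres adapter:
python -m pip install --upgrade dbt-core dbt-postgres
dbt --version # confirm that the installed version is 1.10.x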
Fusion Engine
Fusion is the next-generation dbt engine: a ground-up rewrite of the dbt Core execution engine in Rust rather than the Python that dbt Core is written in. A few months ago, dbt Labs announced the acquisition of SDF Labs, and the two teams became one; dbt is using SDF's SQL comprehension technology to build Fusion. The dbt Fusion engine is currently in Preview. According to the docs, Fusion will eventually support the full dbt Core framework, a superset of dbt Core's capabilities, and the vast majority of existing dbt projects; dbt plans for Fusion to reach full feature parity with dbt Core ahead of Fusion's general availability.

Fusion fully comprehends your project's SQL, which enables precise column-level lineage and real-time, dialect-aware validation of your code without querying your warehouse. Fusion complements dbt Core's compilation step of rendering Jinja with a second phase in which it produces a logical plan for every rendered query in the project and validates it with static analysis. Fusion promises immensely faster parsing and full-project compilation that is twice as fast, with near-instant recompilation of single files in the VS Code extension, and more performance gains are expected before general availability.

Fusion also brings state-awareness, which ensures that models are rebuilt only when they need to process new data. State-awareness develops a sense of what has been materialized: it tracks which columns are used where and which source tables have fresh data. This avoids unnecessary builds, resulting in higher-velocity pipelines and cost savings; dbt says early customers are already seeing ~10% reductions in warehouse spend. Check out the dbt docs on Fusion to see the full list of features and other details.
VS Code Extension
The official dbt extension for VS Code is also in Preview. It is the only way to tap into the full power of the Fusion engine when developing locally in VS Code, Cursor, or Windsurf. The extension does not support dbt Core, and to use it your project must be running on Fusion. dbt says this extension is the best way to develop locally with the dbt Fusion engine. It brings many productivity features that improve the developer experience and streamline workflows directly inside VS Code, including the following:
- hovering over to see column types and schema information,
- seeing lineage at the column or table level as you develop, inside VS Code itself, without needing to run separate commands,
- viewing the compiled SQL code that your models will generate live continuously updated side-by-side with your source code as you write it,
- IntelliSense for smart autocompletion for model names, sources, tables, columns, macros, and functions,
- automatic refactoring to update references across your entire project instantly when a model or column is renamed,
- go-to-definition, allowing you to instantly jump to another model, ref, source, or macro,
- seeing CTE (Common Table Expression) output previews directly from inside your dbt model allowing for faster validation and rapid debugging,
- viewing detailed logs making it easier to spot and troubleshoot issues and audit performance,
- catching parsing errors without needing to query the data warehouse e.g. missing comma, and
- catching compilation errors without needing to query the data warehouse e.g. wrong data type.
Check out the dbt docs on the extension to see the full range of capabilities and other details.
Conclusion
That's it for this article. I hope you enjoyed reading about the latest features of dbt Core, as well as learning about its future in the new Fusion engine; dbt says the features introduced with version 1.10 are paving the way for Fusion. We also learned about the new VS Code extension. Now that you know about all of these, you can start putting them to use in your existing and future projects. Until next time, happy coding!