DEV Community

Augusto Kiniama Rosa

Originally published at blog.archetypeconsulting.com

The Unofficial Snowflake Monthly Release Notes: December 2025

Monthly Snowflake Unofficial Release Notes #New features #Previews #Clients #Behavior Changes

Welcome to the Unofficial Release Notes for Snowflake for December 2025! You’ll find all the latest features, drivers, and more in one convenient place.

As an unofficial source, I am excited to share my insights and thoughts. Let’s dive in! You can also find all of Snowflake’s releases here.

This December, we provide coverage up to release 9.40 (General Availability — GA). I hope to extend this eventually to private preview notices as well.

I would appreciate your suggestions on how to keep improving these monthly release notes. Feel free to comment below or chat with me on LinkedIn.

Behavior change bundle 2025_05 is now generally enabled for all customers; 2025_06 is enabled by default but can be opted out of until the next BCR deployment; and 2025_07 is disabled by default but may be opted into. A net-new bundle, 2026_01, is coming in January; no information is available about it yet.

What’s New in Snowflake

AI Updates (Cortex, ML, DocumentAI)

  • AI_REDACT for automated redaction of PII (GA), detects and redacts PII from unstructured text using a large language model (LLM). AI_REDACT recognizes categories like names and addresses, including partial PII like first or last names, replacing them with placeholders
  • CORTEX_AISQL_USAGE_HISTORY Account Usage view (GA), provides detailed information about the usage of Cortex AI Functions in your SQL queries, offering finer-grained insight into how AI features are used in your account
  • Semantic views: querying with standard SQL clauses (Preview), use standard SQL clauses in a SELECT statement to query a semantic view
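
AI_REDACT itself uses an LLM to detect PII, but the output style it produces (placeholders substituted for detected spans) can be illustrated with a deliberately naive, regex-based stand-in. Everything below (the patterns, the `toy_redact` name, the `[EMAIL]`/`[PHONE]` labels) is hypothetical and is not how AI_REDACT works internally:

```python
import re

# Toy sketch only: AI_REDACT uses an LLM to detect PII categories such as
# names and addresses; this regex stand-in merely illustrates the
# placeholder-substitution output style described in the release notes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def toy_redact(text: str) -> str:
    """Replace matched PII-like spans with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(toy_redact("Reach me at jane.doe@example.com or 555-123-4567."))
# Reach me at [EMAIL] or [PHONE].
```

The real function handles far subtler cases (partial PII such as a lone first or last name), which is exactly why an LLM is used instead of fixed patterns.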

Snowflake Applications (Container Services, Notebooks and Applications, Snowconvert)

  • Support for Streamlit in Snowflake container runtime (Preview), run your Streamlit in Snowflake apps on containers
  • SnowConvert AI 2.1.0 (New Features: IBM DB2: Implemented DECFLOAT transformation, Oracle: added support for transforming NUMBER to DECFLOAT using the Data Type Mappings feature, added a new report TypeMappings.csv that displays the data types that were changed using the Data Type Mappings feature; PowerBI: added support for the Transact connector pattern for queries and multiple properties in the property list for PowerBI; Teradata: added a new conversion setting Tables Translation which allows transforming all tables in the source code to a specific table type supported by Snowflake, enabled conversion of tables to Snowflake-managed Iceberg tables; SSIS: added support for full cache in SSIS lookup transformations; General: Added temporary credentials retrieval for AI Verification jobs, added summary cards for selection and result pages, implemented full support for the Git Service, added ‘verified by user’ checkboxes and bulk actions to the selection and results pages, added a dependency tag for AI Verification, implemented the generation of a SqlObjects Report; Improvements: RedShift: Optimized RedShift transformations to only add escape characters when necessary in LIKE conditions; SSIS: improved Microsoft.DerivedColumn migrations for SSIS; General: added the number of copied files to relevant outputs, changed some buttons to the footer for improved UI consistency; Fixes: Teradata: fixed transformation of bash variables substitution in scripts)
  • SnowConvert AI 2.0.86 (Improvements: RedShift: added support for the MURMUR3_32_HASH function, replaced Redshift epoch and interval patterns with Snowflake TO_TIMESTAMP; SSIS: added support for converting Microsoft SendMailTask to Snowflake SYSTEM, implemented SSIS event handler translation for OnPreExecute and OnPostExecute; SQL Server: enhanced transformation for the Round function with three arguments; Informatica: updated InfPcIntegrationTestBase to import the real implementation of translators and other necessary components; General: enhanced procedure name handling and improved identifier splitting logic; improved object name normalization in DDL extracted code, implemented a temporary variable to keep credentials in memory and retrieve the configuration file, updated the TOML Credential Manager; improved error suggestions, added missing path validations related to ETL, improved the application update mechanism, implemented an exception to be thrown when calling the ToToml method for Snowflake credentials, changed the log path and updated the cache path, implemented a mechanism to check for updates, merged the Missing Object References Report with ObjectReferences, changed values in the name and description columns of the ETL.Issues report, added support for Open Source and Converted models in AI Verification, added a new custom JSON localizer, added a dialog to appear when accepting changes if multiple code units are present in the same file, added a FileSystemService, added an expression in the ETL issues report for SSISExpressionCannotBeConverted; Fixes: SQL Server: fixed a bug that caused the report database to be generated incorrectly, fixed a bug that caused unknown Code Units to be duplicated during arrangement; General: fixed an issue that prevented the cancellation of AI Verification jobs, fixed an issue to support EAI in the AI specification file, fixed an issue where the progress number was not being updated, fixed the handling of application shutdowns during updates)
  • SnowConvert AI 2.0.57 (Improvements: SQL Server: enhanced SQL Server code extraction to return schema-qualified objects; General: enhanced Project Service and Snowflake Authentication for improved execution, removed GS validation from the client side, as it is now performed on the server side, implemented connection validation to block deployment, data migration, and data validation when a connection is unavailable, enhanced conversion to use the source dialect from project initialization, improved CodeUnitStatusMapper to accurately handle progress status in UI status determination, implemented batch insert functionality for enhanced object result processing; Fixes: resolved an issue where conversion settings were not being saved correctly, corrected the data validation select tree to properly skip folders, fixed content centering issues in the UI, normalized object names in AI Verification responses to prevent missing status entries in the catalog)

Data Lake Updates

  • Optimize existing semantic views or models with verified queries (Preview), Snowflake analyzes verified queries to enhance the semantic layer, enabling Cortex Analyst to accurately answer more questions beyond the existing queries
  • Private connectivity for Apache Iceberg™ REST catalog integrations (GA), configure an Apache Iceberg™ REST catalog integration for outbound private connectivity, enabling connections to external Iceberg REST catalogs like generic Iceberg REST, AWS Glue Data Catalog, and Databricks Unity Catalog via private endpoints instead of the public internet

Snowflake Applications (Container Services, Notebooks and Applications, Snowconvert)

  • Notebooks in Workspaces (Preview), the new notebook experience offers a fully-managed environment for data science and machine learning on Snowflake, combining the familiar Jupyter interface with enterprise-grade compute, governance, and collaboration. Notebooks run on a Container Runtime powered by Snowpark Container Services, with preconfigured containers optimized for AI/ML workloads, supporting CPUs, GPUs, parallel data loading, and distributed training APIs for popular ML packages. Key features include integration with Workspaces, improved compute and cost management, Jupyter compatibility, and an enhanced editing experience

Realtime Data (Hybrid, Interactive & Snowflake Postgres)

  • Snowflake Postgres (Preview), allows creating, managing, and using Postgres instances directly within Snowflake, each on a dedicated VM. Connect via any Postgres client, integrating Postgres's reliable transactional capabilities with the Snowflake platform
  • Interactive tables and interactive warehouses (GA), deliver low-latency query performance for high-concurrency workloads like real-time dashboards and APIs. This new Snowflake table type is optimized for interactive, low-latency queries. Interactive warehouses are designed for low-latency, interactive workloads, providing the best performance when querying these tables

Data Pipelines, Data Loading, Unloading Updates

  • Schema evolution support for Snowpipe Streaming with high-performance architecture, automatic schema evolution enables pipelines to adapt to schema drift in near real time, removing the need for manual DDL when new data attributes appear
  • Snowflake High Performance connector for Kafka (Preview), a high-performance Kafka connector for ingesting data into Snowflake tables. It leverages Snowflake’s Snowpipe Streaming for high throughput with low latency. Key features include transparent billing, Rust-based performance, in-flight transformations, server validation, and pre-clustering. PIPE objects manage and configure data ingestion
  • Default pipe for Snowpipe Streaming with high-performance architecture, simplifies data ingestion by removing the need to create a pipe manually with CREATE PIPE DDL statements. Users can start streaming immediately to a target table, as the default pipe is implicitly available for any table receiving streaming data
  • Snowpipe simplified pricing, an enhancement for Enterprise and Standard Snowflake accounts, offers a simpler, more predictable Snowpipe pricing model that can significantly reduce data ingestion costs. It charges a fixed 0.0037 credits per GB for Snowpipe ingestion. Text files like CSV and JSON are billed by uncompressed size, while binary files like Parquet and Avro are billed by their observed size
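
Under the stated flat rate, estimating monthly ingestion cost is simple arithmetic. The sketch below is a back-of-the-envelope helper; the function name and inputs are illustrative, and actual billing is metered by Snowflake, not computed client-side:

```python
SNOWPIPE_CREDITS_PER_GB = 0.0037  # flat rate stated for the simplified model

def estimate_snowpipe_credits(gb_text_uncompressed: float,
                              gb_binary_observed: float) -> float:
    """Rough Snowpipe credit estimate under the simplified pricing model.

    Text files (CSV, JSON) are billed by uncompressed size; binary files
    (Parquet, Avro) by observed size, per the release notes.
    """
    billable_gb = gb_text_uncompressed + gb_binary_observed
    return billable_gb * SNOWPIPE_CREDITS_PER_GB

# e.g. 500 GB of uncompressed CSV plus 200 GB of Parquet in a month
print(round(estimate_snowpipe_credits(500, 200), 2))  # 2.59
```

The point of the fixed rate is exactly this predictability: cost scales linearly with billable gigabytes, with no per-file or warehouse-sizing variables to model.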

Data Transformations

  • Dynamic tables: Support for dual warehouses, optimize performance and cost by assigning dedicated warehouses for resource-intensive initializations and reinitializations, while using different warehouses for other refreshes

Security, Privacy & Governance Updates

  • Copy tags when running a CREATE OR REPLACE TABLE command (Preview), copies the tags linked to the original table and its columns, so the new table and its columns share the same tags
  • Notifications for data quality incidents (Preview), automatically notifies you when a data quality incident occurs, which happens when a data metric function (DMF) violates an expectation or shows an anomaly
  • Network rules and policies support Google Cloud Private Service Connect IDs (GA), create Snowflake network rules and policies using Google Cloud Private Service Connect IDs
  • Trust Center: Detection findings and event-driven scanners (Preview), view new findings (detections) in your account. This preview introduces event-driven scanners, which continuously monitor your account for specific events, alongside the existing schedule-based scanners
  • Programmatic access tokens: removal of the single-role restriction for service users, for service users (users with TYPE=SERVICE or TYPE=LEGACY_SERVICE) you can now generate a programmatic access token that is not restricted to a single role
  • Private connectivity for internal stages on Google Cloud (GA)
  • WORM backups (GA), help organizations protect critical data from modification or deletion. Backups are point-in-time copies of Snowflake objects. You select which objects to back up (tables, schemas, or databases), the backup frequency, the retention period, and whether to add a lock to prevent early deletion, supporting regulatory compliance, recovery, and cyber resilience
  • Cost anomalies (GA), automatically detect cost anomalies from previous consumption levels, simplifying the identification of spikes or dips to optimize spending. Use it for account and organization-level anomalies

SQL, Extensibility & Performance Updates

  • Account Usage: New CATALOG_LINKED_DATABASE_USAGE_HISTORY view, displays the credit usage for catalog-linked databases. It includes compute and cloud services credit usage for each entity during an operation
  • Vector aggregate functions, enable element-wise operations on multiple VECTOR values. These functions aggregate vector columns by computing element-wise results across all vectors in a group. Vector aggregate functions are vital in machine learning and data science for tasks like calculating centroids, ranges, or averages in vector datasets. They ignore NULLs, preserve data types, and are optimized for vector data. The new functions are: VECTOR_SUM, VECTOR_MIN, VECTOR_MAX, and VECTOR_AVG
  • Access history improvements, access history lets you monitor the SQL statements executed in Snowflake, keeping track of Data Manipulation Language (DML), Data Query Language (DQL), and Data Definition Language (DDL) statements. Snowflake is expanding which SQL statements are included in the access history: added support for the listing, role, share, and session objects; added DQL command support for externally managed Apache Iceberg™ tables; enhanced support for database DDL commands, including the ALTER DATABASE command and commands related to database replication; enhanced DDL support for tables, including variations of ALTER TABLE and of ALTER TABLE…MODIFY COLUMN; enhanced support for file staging commands like GET and PUT
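
Conceptually, the new vector aggregates reduce a group of equal-length vectors element by element. A plain-Python sketch of what VECTOR_SUM and VECTOR_AVG compute, assuming "ignore NULLs" means whole NULL vectors are skipped (an assumption about granularity, and not Snowflake's actual implementation):

```python
def vector_sum(vectors):
    """Element-wise sum across a group of vectors, skipping NULL (None) rows."""
    rows = [v for v in vectors if v is not None]
    return [sum(col) for col in zip(*rows)]

def vector_avg(vectors):
    """Element-wise average across a group of vectors, skipping NULL (None) rows."""
    rows = [v for v in vectors if v is not None]
    return [sum(col) / len(rows) for col in zip(*rows)]

group = [[1.0, 2.0], None, [3.0, 6.0]]  # the NULL vector is ignored
print(vector_sum(group))  # [4.0, 8.0]
print(vector_avg(group))  # [2.0, 4.0]
```

This is the centroid use case from the notes: VECTOR_AVG over the embedding vectors of a group yields that group's centroid in one aggregation.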

Collaboration, Data Clean Rooms, Marketplace, Listings & Data Sharing

  • Auto-fulfillment for listings that span databases (GA), providers can create listings on databases referencing views or tables across multiple databases. Granting reference usage to a share allows a single listing to span databases, eliminating the need for a combined database per listing. This increases flexibility, simplifies integration, and ensures all related listings are auto-fulfilled together
  • Clean Rooms API Version: 12.2: updates to private preview features
  • Clean Rooms API Version: 12.3: The lookalike audience modeling template was removed from the clean rooms UI. It's now accessible as a custom template via the clean rooms API for adding, modifying, and running, with updates to private preview features

Open-Source Updates

  • terraform-snowflake-provider 2.12.0 (enables tighter enterprise security with the addition of new CRL and Proxy configuration fields. The migration tool has been significantly expanded to now support databases, roles, users, warehouses, and schemas, making IaC adoption much easier for existing environments. You also gain better control over serverless tasks and SCIM integrations with the addition of custom run_as_role support. Finally, check the migration guide before upgrading, as this version formally removes the deprecated account parameter)
  • Modin 0.32.0
  • Snowflake VS Code Extension 1.21.0 (Features: added Snowpark Migration Accelerator (SMA) AI Assistant support for Jupyter Notebooks (.ipynb files), added SnowConvert AI Assistant support for Python (.py), Scala (.scala), and Jupyter Notebook (.ipynb) files, added AI disclaimer header to SMA and SnowConvert AI Assistant windows with links to Privacy Policy, AI Terms, Acceptable Use Policy, and Terms of Service; Bug fixes: fixed statement boundary detection issue for stored procedures with many CASE WHEN clauses)
  • Streamlit 1.52.2 (minor updates)

Client, Drivers, Libraries and Connectors Updates

New features:

  • .NET Driver 5.2.0 (Added multi-targeting support. NuGet now selects the appropriate build based on the target framework and OS, added support for native Arrow structured types)
  • Go Snowflake Driver 1.18.1 (Included a shared library to collect telemetry to identify and prepare testing platforms for native Rust extensions)
  • JDBC Driver 3.28.0 (Introduced a shared library for extended telemetry to identify and prepare the testing platform for native Rust extensions, added the ability to choose the connection configuration in the auto configuration file by specifying the aws-oauth-file parameter in the JDBC URL, updated grpc-java to 1.77.0 to address CVE-2025-58057 from a transitive dependency, updated netty to 4.1.128.Final to address CVE-2025-59419)
  • Node.js 2.3.2 (Added support for Red Hat Enterprise Linux (RHEL) 9, added support for Node.js version 24, included a shared library to collect telemetry to identify and prepare testing platforms for native node addons)
  • PHP PDO Driver for Snowflake 3.4.0 (Added native OKTA authentication support, implemented a new CRL (Certificate Revocation List) checking mechanism)
  • Snowflake CLI 3.14.0 (Updated the snow streamlit deploy command to use the updated CREATE STREAMLIT syntax (FROM source_location) instead of the deprecated syntax (ROOT_LOCATION = ''))
  • Snowflake Python API 1.10.0 (Added support for the Streamlit resource, added support for the DECFLOAT data type)
  • Snowpark Library for Python 1.44.0 (New features: Added support for targeted delete-insert via the overwrite_condition parameter in DataFrameWriter.save_as_table; Improvements: Improved DataFrameReader to return columns in deterministic order when using INFER_SCHEMA, added a dependency on protobuf<6.34 (was <6.32))
  • Snowpark Library for Python 1.43.0 (New features: Added support for DataFrame.lateral_join, added support for the Private Preview feature Session.client_telemetry, added support for Session.udf_profiler, added support for functions.ai_translate, added support for the following iceberg_config options in DataFrameWriter.save_as_table and DataFrame.copy_into_table: target_file_size, partition_by, added support for the following functions in functions.py: String and Binary functions: base64_decode_binary, bucket, compress, day, decompress_binary, decompress_string, md5_binary, md5_number_lower64, md5_number_upper64, sha1_binary, sha2_binary, soundex_p123, strtok, truncate, try_base64_decode_binary, try_base64_decode_string, try_hex_decode_binary, try_hex_decode_string, unicode, uuid_string, Conditional expressions: booland_agg, boolxor_agg, regr_valy, zeroifnull, Numeric expressions: cot, mod, pi, square, width_bucket; Improvements: Enhanced DataFrame.sort() to support ORDER BY ALL when no columns are specified, removed the experimental warning from Session.cte_optimization_enabled; Snowpark pandas API updates: added support for DataFrame.groupby.rolling(), added support for mapping np.percentile with DataFrame and Series inputs to Series.quantile, added support for setting the random_state parameter to an integer when calling DataFrame.sample or Series.sample, added support for the following iceberg_config options in to_iceberg: target_file_size, partition_by; Improvements: enhanced autoswitching functionality from Snowflake to native pandas for methods with unsupported argument combinations: shift() with suffix or non-integer periods parameters, sort_index() with axis=1 or key parameters, sort_values() with axis=1, melt() with the col_level parameter, apply() with the result_type parameter for DataFrame, pivot_table() with sort=True, a non-string index list, a non-string columns list, a non-string values list, or an aggfunc dict with non-string values, fillna() with the downcast parameter or using limit together with value, dropna() with axis=1, asfreq() with the how parameter, fill_value parameter, normalize=True, or a freq parameter of week, month, quarter, or year, groupby() with axis=1, by!=None and level!=None, or by containing any non-pandas hashable labels, groupby_fillna() with the downcast parameter, groupby_first() with min_count>1, groupby_last() with min_count>1, groupby_shift() with the freq parameter; slightly improved the performance of agg, nunique, describe, and related methods on 1-column DataFrame and Series objects; added support for the following in faster pandas: groupby.apply, groupby.nunique, groupby.size, concat, copy, str.isdigit, str.islower, str.isupper, str.istitle, str.lower, str.upper, str.title, str.match, str.capitalize, str.__getitem__, str.center, str.count, str.get, str.pad, str.len, str.ljust, str.rjust, str.split, str.replace, str.strip, str.lstrip, str.rstrip, str.translate, dt.tz_localize, dt.tz_convert, dt.ceil, dt.round, dt.floor, dt.normalize, dt.month_name, dt.day_name, dt.strftime, dt.dayofweek, dt.weekday, dt.dayofyear, dt.isocalendar, rolling.min, rolling.max, rolling.count, rolling.sum, rolling.mean, rolling.std, rolling.var, rolling.sem, rolling.corr, expanding.min, expanding.max, expanding.count, expanding.sum, expanding.mean, expanding.std, expanding.var, expanding.sem, cumsum, cummin, cummax, groupby.groups, groupby.indices, groupby.first, groupby.last, groupby.rank, groupby.shift, groupby.cumcount, groupby.cumsum, groupby.cummin, groupby.cummax, groupby.any, groupby.all, groupby.unique, groupby.get_group, groupby.rolling, groupby.resample, to_snowflake, to_snowpark, resample.min, resample.max, resample.count, resample.sum, resample.mean, resample.median, resample.std, resample.var, resample.size, resample.first, resample.last, resample.quantile, resample.nunique; made faster pandas disabled by default (opt-in instead of opt-out); improved the performance of drop_duplicates by avoiding joins when keep!=False in faster pandas)
  • Snowpark Library for Scala and Java 1.18.0 (Added functions.try_to_date and functions.try_to_timestamp overloads with a format parameter; added support for the Any parameter type to Column.cast, equal_to, not_equal, gt, lt, leq, geq, equal_null, plus, minus, multiply, divide, and mod)
  • Snowpark Connect for Spark 1.7.0 (Snowpark Connect for Spark: Fix Parquet logical types (TIMESTAMP, DATE, DECIMAL) handling. Previously, Parquet files were read using physical types only (such as LongType for timestamps). Logical types can now be interpreted by returning proper types like TimestampType, DateType, and DecimalType. You can enable this by setting Spark configuration snowpark.connect.parquet.useLogicalType to true, use the output schema when converting Spark’s Row to Variant, handle empty JAVA_HOME, fix from_json function for MapType, support of configuration spark.sql.parquet.outputTimestampType for NTZ timezone; Snowpark Submit: Add support for --jars for pyspark workload, fix bug for Snowpark Submit JWT authentication)
  • Snowpark Connect for Spark 1.6.0 (Support any type as output or input type in the Scala map and flatmap functions, support joinWith, support any return type in Scala UDFs, support registerJavaFunction)
  • Snowpark ML 1.20.0 (New Model Registry features: vLLM is now supported as an inference back-end. The create_service API accepts a new argument, inference_engine_options, which allows you to specify the inference engine to use and other engine-specific options. To specify vLLM, set the inference_engine option to InferenceEngine.VLLM)
  • SQLAlchemy 1.8.0 (Added logging of the SQLAlchemy version and pandas (if used), added support for Python 3.14 and earlier)
  • Snowflake Connector for SharePoint 1.0.3 (Behavior changes: added progress logs in the event table for the entire ingestion process, unprocessed file updates and inserts are now visible through the PUBLIC.CONNECTOR_ERRORS view)

Bug fixes:

  • .NET Driver 5.2.0 (fixed CRL validation to reject newly downloaded CRLs when their NextUpdate value has expired, added exception handling to the session heartbeat to prevent network errors from disrupting background heartbeat checks, added retry support for HTTP 307/308 status codes, added the ability to specify non-string values in TOML configuration files. For example, port can now be specified as an integer)
  • .NET Driver 5.2.1 (fixed the extremely rare case where intermittent network issues during uploads to Azure Blob Storage prevented metadata updates)
  • Go Snowflake Driver 1.18.1 (Handled HTTP 307 and 308 responses in drivers to achieve better resiliency to backend errors, created a temporary directory only if needed during file transfers, fixed unnecessary user expansion for file paths during file transfers)
  • JDBC Driver 3.28.0 (Fixed an issue where connection and socket timeout were not propagated to the HTTP client, fixed Azure 503 retries and configured it with the putGetMaxRetries parameter)
  • Node.js 2.3.2 (Fixed the TypeScript definition for getResultsFromQueryId where queryId should be required and sqlText should be optional, bumped the dependency glob to address CVE-2025-64756, fixed a regression introduced in version 2.3.1 where SnowflakeHttpsProxyAgent was instantiated without the new keyword, breaking the driver when both OCSP was enabled and the HTTP_PROXY environment variable was used to set the proxy. This bug did not affect HTTPS_PROXY)
  • Node.js 2.3.3 (Replaced the glob dependency used in PUT queries with a custom wildcard matching implementation to address security issues, fixed misleading debug messages during login requests, fixed a bug in the build script that failed to include minicore binaries in the dist folder)
  • PHP PDO Driver for Snowflake 3.4.0 (Fixed the aarch64 build on macOS)
  • Snowflake CLI 3.13.1 (Fixed an issue with parsing the --vars values provided to snow dbt execute subcommands. This fix allows you to pass variables the same way as you would to the dbt CLI, such as --vars '{"key": "value"}')
  • Snowflake Connector for Python 4.1.1 (Relaxed the pandas dependency requirements for Python below 3.12, changed the CRL cache cleanup background task to a daemon thread to avoid blocking the main thread, fixed NO_PROXY issues with PUT operations)
  • Snowpark Library for Python 1.43.0 (Bug fixes: Fixed a bug where automatically generated temporary objects were not properly cleaned up, fixed a bug in SQL generation when joining two DataFrames created using DataFrame.alias with CTE optimization enabled, fixed a bug in XMLReader where finding the start position of a row tag could return an incorrect file position; Snowpark pandas API bug fixes: fixed a bug in DataFrameGroupBy.agg where func is a list of tuples used to set the names of the output columns, fixed a bug where converting a modin datetime index with a timezone to a numpy array with np.asarray would cause a TypeError, fixed a bug where Series.isin with a Series argument matched index labels instead of row positions)
  • Snowpark Connect for Spark 1.7.0 (Snowpark Connect for Spark: add support for Spark integral types, add support for Scala 2.13, introduce support for integral types overflow behind the snowpark.connect.handleIntegralOverflow configuration, add a configuration for using custom JAR files in UDFs, support Scala UDFs if UDFPacket lacks input types metadata, allow case classes as input and output types in the reduce function; Snowpark Submit: add support for Scala 2.13, add support for the --files argument)
  • Snowpark Connect for Spark 1.6.0 (Fix JSON schema inference issue for JSON reads from Scala, change return types of functions returning incorrect integral types, fix update fields bug with struct type, fix unbounded input decoder, fix struct function when the argument is unresolved_star, fix column name for Scala UDFs when the proto contains no function name, add support for PATTERN in Parquet format, handle error and errorIfExists write modes)
  • Snowpark ML 1.20.0 (Experiment Tracking bug fixes: exceeding the run metadata size limit in log_metrics or log_params issues a warning rather than raising an exception)
  • SQLAlchemy 1.8.2 (Aligned the maximum supported Python version with snowflake-connector-python at 3.13)
  • Snowflake Connector for Google Analytics Aggregate Data 2.2.2 (Fixed an issue where the report start date was calculated incorrectly when report ingestion exceeded 2 hours)
  • Snowflake Connector for SharePoint 1.0.3 (fixed internal table definitions that were causing connector application upgrade issues; files without extensions no longer break the ingestion process; change tracking on connector tables is no longer disabled when upgrading the connector application. We’ve also migrated broken Cortex Search indexes so that they refresh their data)
  • Snowflake Connector for SharePoint 1.0.4 (During the data synchronization of Microsoft 365 groups, group members are now retrieved only once for each group)
  • Snowflake Connector for SharePoint 1.0.5 (Fixed an issue that was causing empty values to be returned in the web_url column in the Cortex Search service responses)

Conclusion

This month’s release notes showcase Snowflake’s strong initiative to unify the entire data lifecycle, from ingestion and transformation to AI-driven application hosting, within a single, governed environment.

AI & Machine Learning are now maturing into production use, shifting emphasis from experimental features to enterprise-ready governance and usability. The GA of AI_REDACT introduces automated, LLM-powered PII protection for unstructured text, while new Cortex AI Usage History views offer essential visibility into AI activity. With Notebooks in Workspaces and Vector Aggregate Functions, data teams now have a fully managed setup for distributed training and complex ML tasks right alongside their data. Snowflake’s rapid expansion into app development continues, strengthening its role as a backend for low-latency applications.

The new Snowflake Postgres supports transactional workloads, while GA features like Interactive Tables and Warehouses deliver the performance needed for high-concurrency APIs and dashboards. In data engineering and governance, foundational processes are becoming more intelligent. Schema Evolution for Snowpipe Streaming reduces manual DDL changes during data drift. On the governance side, GA releases like WORM Backups and Cost Anomalies detection provide vital safeguards for compliance and budget control. Enhancements like SnowConvert, which now supports Informatica, DB2, and Oracle migrations, and the high-performance Kafka Connector, make migrating data into Snowflake easier than ever.

Enjoy the read.

I am Augusto Rosa, a Snowflake Data Superhero and Snowflake SME. I am also the Head of Data, Cloud, & Security Architecture at Archetype Consulting. You can follow me on LinkedIn.

Subscribe to my Medium blog https://blog.augustorosa.com for the most interesting Data Engineering and Snowflake news.
