Augusto Kiniama Rosa

Originally published at blog.archetypeconsulting.com

The Unofficial Snowflake Monthly Release Notes: October 2025

Monthly Snowflake Unofficial Release Notes #New features #Previews #Clients #Behavior Changes

Welcome to the Unofficial Release Notes for Snowflake for October 2025! You’ll find all the latest features, drivers, and more in one convenient place.

As an unofficial source, I am excited to share my insights and thoughts. Let’s dive in! You can also find all of Snowflake’s releases here.

This month, we provide coverage up to release 9.33 (General Availability — GA). I hope to extend this eventually to private preview notices as well.

I would appreciate your suggestions on how to keep improving these combined monthly release notes. Feel free to comment below or chat with me on LinkedIn.

Behavior change bundle 2025_05 is now generally enabled for all customers; 2025_06 is enabled by default but can be opted out of until the next BCR deployment; and 2025_07 is disabled by default but can be opted into.

What’s New in Snowflake

New Features

  • New OBJECT_VISIBILITY property (Preview), controls the discoverability of objects in the account, enabling users without explicit access privileges to find objects and request access. Currently, this property only affects Universal Search and its results
  • Hybrid table support for Microsoft Azure (GA)

Snowsight Updates

  • Using the database object explorer in Snowsight to create and manage semantic views (GA)
  • Query insights in Snowsight (GA), the Query Profile tab under Query History now displays insights about conditions that affect query performance. Each insight includes a message that explains how query performance might be affected and provides a general recommendation for next steps
  • Performance Explorer (Preview), monitor interactive metrics for SQL workloads. The metrics show the overall health of your Snowflake environment, query activity, changes to warehouses, and changes to tables

AI Updates (Cortex, ML, DocumentAI)

  • Snowflake-managed MCP server (Preview), lets AI agents securely retrieve data from Snowflake accounts without needing to deploy separate infrastructure. You can configure the MCP server to serve Cortex Analyst and Cortex Search as tools on the standards-based interface. MCP clients discover and invoke these tools, and retrieve data required for the application
  • Named scoring profiles for Cortex Search Services (GA), allow you to save and reuse scoring configurations when querying a Cortex Search Service. A scoring configuration consists of optional boost and decay functions, as well as an optional reranker setting
  • Verified query suggestions (Preview), now available in Snowsight. Cortex Analyst monitors incoming requests to surface queries for inclusion in a Verified Query Repository, allowing you to craft verified SQL responses for similar queries
  • Cortex Search Component Scores (Preview), access detailed scoring information for search results using Cortex Search Component Scores. Component scores allow developers to understand how search rankings are determined and debug search performance
  • CORTEX_EMBED_USER database role (GA), added a CORTEX_EMBED_USER database role in the SNOWFLAKE database to better manage access to Cortex embedding functions. Embedding functions, which convert text to a vector of numbers that represent the meaning of the text, include AI_EMBED, EMBED_TEXT_768, and EMBED_TEXT_1024
  • AI_EXTRACT AISQL function (GA), lets you extract information from text or document files using large language models. New features: table extraction support (extract tabular data from documents, which helps you analyze financial reports, data sheets, invoices, and other documents that contain tabular data); flexible response formats (define the response format using simple object schemas, arrays of questions, or JSON schemas that support both entity and table extraction); contextual guidance (provide context to the model using the optional description field, for example to help the model locate the correct table in a document); and output length limits (entity extraction returns a maximum of 512 tokens per question, while table extraction returns answers that are a maximum of 4096 tokens long)
  • Cross-region inference for US Commercial Gov, now available for US Commercial Government regions on AWS. Cross-region inference on US Commercial Gov securely routes your traffic only through regions operating under the same compliance tier. All processing occurs on FIPS-validated infrastructure, keeping your workloads compliant with security requirements
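To make the AI_EXTRACT items above a little more concrete, a call might look roughly like this. Note that the table, column, and question names are invented, and the argument names are based on the feature description, so verify them against the AI_EXTRACT reference before use:

```sql
-- Hypothetical sketch: entity extraction from a text column
-- using an array of questions as the response format.
SELECT AI_EXTRACT(
         text => review_text,
         responseFormat => ['customer_name', 'product', 'main_complaint']
       ) AS extracted
FROM customer_reviews;
```

The same function accepts document files, which is where the new table extraction and JSON schema response formats come into play.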

Snowflake Applications (Container Services, Notebooks and Applications, Snowconvert)

  • Snowpark Container Services in Google Cloud (GA)
  • Snowflake Native Apps: Shareback, securely request permission from consumers to share data back with you (the provider) or designated third parties. This powerful capability supports essential business needs such as compliance reporting, telemetry and analytics sharing, and data preprocessing by providing a secure, governed channel for data exchange

Data Transformations

  • dbt Projects on Snowflake: Recent improvements (Preview), the following functionality is now supported: dbt project failures show up as failed queries, compile on create, install deps on compile, the MONITOR privilege, and easier access to execution results

Data Lake Updates

  • Query data compaction jobs for Apache Iceberg™ tables, use the new ICEBERG_STORAGE_OPTIMIZATION_HISTORY view to query data compaction jobs for Apache Iceberg™ tables within the last year. This view includes a CREDITS_USED column, which you can use to monitor the cost of data compaction. Snowflake will start billing for data compaction of data files for Snowflake-managed Iceberg tables on October 20th, 2025
  • Partitioned writes for Apache Iceberg™ tables (GA), with partitioned write support for Iceberg tables, Snowflake improves compatibility with the wider Iceberg ecosystem and enables accelerated read queries from external Iceberg tools. You can now use Snowflake to create and write to both Snowflake-managed and externally managed Iceberg tables with partitioning schemes
  • Set a target file size for Apache Iceberg™ tables (GA), improves cross-engine query performance when you use an external Iceberg engine such as Apache Spark, Delta, or Trino that’s optimized for larger file sizes
  • Write support for externally managed Apache Iceberg™ tables and catalog-linked databases (GA), the following features are now GA: write operations for externally managed Iceberg tables, and catalog-linked databases that connect to external Iceberg REST catalogs
  • Catalog-linked databases: Auto-refresh for Apache Iceberg™ table creation, leverage auto-refresh for Iceberg table creation in catalog-linked databases to improve metadata consistency
  • Table optimization for Snowflake-managed Apache Iceberg™ tables (GA)
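Putting two of the Iceberg items above together, checking compaction spend and tuning the target file size could look roughly like the following sketch. The ACCOUNT_USAGE location of the view and the '128MB' value are assumptions on my part; confirm both against the release notes before relying on them:

```sql
-- Monitor data compaction credits for Snowflake-managed Iceberg tables
-- (view location assumed to be under SNOWFLAKE.ACCOUNT_USAGE).
SELECT table_name, start_time, credits_used
FROM SNOWFLAKE.ACCOUNT_USAGE.ICEBERG_STORAGE_OPTIMIZATION_HISTORY
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
ORDER BY credits_used DESC;

-- Set a larger target file size to favor external engines such as
-- Spark or Trino (parameter value format assumed for illustration).
ALTER ICEBERG TABLE my_db.my_schema.events SET TARGET_FILE_SIZE = '128MB';
```

With billing for compaction starting on October 20th, the history view is worth adding to your regular cost monitoring.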

Security, Privacy & Governance Updates

  • Hybrid table support for Tri-Secret Secure, enabling TSS support for hybrid tables requires a storage configuration known as Dedicated Storage Mode
  • Tri-Secret Secure supports private connectivity, Snowflake now supports privately connecting Tri-Secret Secure with your key management service, and you can create a private endpoint for your customer-managed key (CMK)
  • Lineage for stored procedures and tasks (GA), when you view the lineage graph in Snowsight, you can now obtain details about a stored procedure or task that resulted in a downstream object
  • Organization account in a hybrid organization, contains accounts in both regulated regions and non-regulated regions
  • CLIENT_POLICY parameter for authentication policies, create an authentication policy that sets the minimum version that is allowed for each specified client type. For more information, see the description of the CLIENT_POLICY parameter in the CREATE AUTHENTICATION POLICY command
  • Organization-level findings in the Trust Center, include the following information: The number of violations in the organization, The accounts with the most critical violations, and The number of violations for each account in the organization
  • Snowflake Notebooks replication (GA), notebooks are now replicated when they are part of a database included in a replication or failover group
  • AWS cross-region support for PrivateLink (GA), supports using PrivateLink to privately connect a VPC endpoint in one AWS region to your Snowflake account in another supported AWS region
  • Outbound network traffic to stages and volumes on Google Cloud Storage supports private connectivity (GA)
  • Snowflake-managed network rules (GA), Snowflake now provides the SNOWFLAKE.NETWORK_SECURITY schema, which contains a suite of Snowflake-managed (built-in) network rules. These network rules provide a secure, consistent, fast, and low-maintenance way to manage network security for popular SaaS and partner applications
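Based purely on the description of the CLIENT_POLICY parameter above, an authentication policy that pins minimum client versions might be sketched as follows. The property syntax and value format here are my guesses, not confirmed syntax; consult the CREATE AUTHENTICATION POLICY documentation for the real shape:

```sql
-- Hypothetical sketch: require minimum driver versions per client type.
-- Client type names and the pair syntax are assumptions for illustration.
CREATE AUTHENTICATION POLICY restrict_old_clients
  CLIENT_POLICY = (
    ('JDBC_DRIVER', '3.27.0'),
    ('ODBC_DRIVER', '3.11.0')
  );

-- Policies take effect once attached, for example at the account level.
ALTER ACCOUNT SET AUTHENTICATION POLICY restrict_old_clients;
```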

SQL, Extensibility & Performance Updates

  • Update to the 2025b release of the TZDB, Snowflake uses the Time Zone Database (TZDB) for time zone information and now uses the 2025b release
  • MERGE ALL BY NAME, when the target table and source have the same number of columns and the same names for all of the columns, you can simplify MERGE operations by using MERGE ALL BY NAME
  • Aliases for PIVOT and UNPIVOT columns, for PIVOT queries, use the AS clause to specify aliases for the pivot column names; for UNPIVOT queries, use the AS clause to specify aliases for column names that appear in the result of the UNPIVOT operation
  • New SQL parameter: ENABLE_GET_DDL_USE_DATA_TYPE_ALIAS, specifies whether the output returned by the GET_DDL function contains data type synonyms specified in the original DDL statement. This parameter is set to FALSE by default
  • Reference table columns in lambda expressions, table columns can now be referenced in lambda expressions when calling higher-order functions such as FILTER, REDUCE, and TRANSFORM
  • SEARCH function supports PHRASE and EXACT search modes, the SEARCH function now supports two new search modes in addition to the existing OR and AND modes
  • Snowflake Scripting CONTINUE handlers, these handlers can catch and handle exceptions without ending the Snowflake Scripting statement block that raised the exception
  • Snowflake Scripting user-defined functions (UDFs) (GA), create SQL UDFs that contain Snowflake Scripting procedural language. Snowflake Scripting UDFs can be called in a SQL statement, such as a SELECT or INSERT statement
  • Enforced join order with directed joins (GA), when you run join queries, you can now enforce the join order of the tables using the DIRECTED keyword. When you run a query with a directed join, the first, or left, table is scanned before the second, or right, table. For example, o1 INNER DIRECTED JOIN o2 scans the o1 table before the o2 table
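The directed-join bullet above, written out as a full query (the table and column names are invented for illustration):

```sql
-- Force the optimizer to scan orders before line_items.
-- DIRECTED fixes the scan order: the left table is read first.
SELECT o.order_id, li.amount
FROM orders AS o
INNER DIRECTED JOIN line_items AS li
  ON o.order_id = li.order_id;
```

This can be useful when you know the left side is highly selective and want to stop the optimizer from reordering the join.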
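The Snowflake Scripting UDF item can be sketched with a minimal function; the names here are illustrative:

```sql
-- A SQL UDF whose body uses Snowflake Scripting procedural language.
CREATE OR REPLACE FUNCTION safe_ratio(numerator FLOAT, denominator FLOAT)
RETURNS FLOAT
LANGUAGE SQL
AS
$$
BEGIN
  -- Procedural branching inside a SQL UDF.
  IF (denominator = 0) THEN
    RETURN NULL;
  END IF;
  RETURN numerator / denominator;
END;
$$;

-- Callable directly from a SQL statement, unlike a stored procedure:
SELECT safe_ratio(revenue, units) FROM sales;
```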

Data Clean Rooms Updates

  • 10.3 — Managed Account Invites Upon Request: Clean room users now need to reach out to their account representative to enable managed account invitations for their account. Each account will have a specific number of invitations available after requests are approved. This process is being implemented to ensure that users understand that initiating and accepting these invitations will result in separate billing invoices for these accounts
  • 10.4 — Supports linking external and Apache Iceberg™ views. Previously, linking external views or Iceberg views would result in failure of the clean room; now linking these view types in clean rooms is supported
  • 10.4 — Reference usage grants update: You can now include a dataset with a Snowflake policy defined in a different database than the source. To do so, you must grant your clean room access to that policy database to be able to link the data into a clean room
  • 10.4 — Fixes: If a clean room has a template that depends on a dataset that has become unavailable, previously the analysis would fail, and the clean room would become unusable in the UI. Now the template remains available, but the user is prompted to update the clean room to replace the missing dataset
  • 10.5 — Non-overlap Results & Messaging Improvements: Updated handling to ensure that non-overlap result percentage does not display above 100%; added updated messaging for non-overlap results being unavailable when filtering by a collaborator’s column
  • 10.5 — Jinja2 Library Upgrade: Updated Jinja2 templating library to version 3.1.6 with compatibility improvements
  • 10.7 — Enhanced error messaging: When IP addresses are blocked by network policies, enhanced error messages now provide better feedback to users.
  • 10.7 — Autodetection of modified or removed data sources: If a data source becomes unavailable after a clean room is created or configured, the edit flow in the UI now prompts the user to pick from a current list of available data objects and prompts for removal of unavailable data sources

Marketplace, Listings & Data Sharing

  • Organization user groups with organizational listings (Preview), providers can use organization user groups to assign consumers to organizational listings
  • Publishing and consuming public marketplace listings in VPS regions (Preview), Snowflake Marketplace version 2 listings in VPS deployments
  • Listings in government regions can be shared on the internal marketplace (Preview)

Open-Source Updates

  • terraform-snowflake-provider 2.10.0 (Promote features to stable, Compute pool (resource and data source), Git repository (resource and data source), Image repository (resource and data source), Listing (resource), Service (resource and data source), User programmatic access token (resource and data source), Stabilize authentication policies, Add new WIF authenticator, Add missing compute pool family instances, Use improved SHOW query for warehouse, Use the new generation syntax in warehouses, Fix handling results during account creation)
  • terraform-snowflake-provider 2.9.0 (Add new Oauth authorization code flow, Add new Oauth client credentials flow, Fix token authenticator for the token field)
  • terraform-snowflake-provider 2.8.0 (Add use_private_link_endpoint option to storage integrations and other bug fixes)
  • Modin 0.32.0 ()
  • Snowflake VS Code Extension 1.20.0 (Added Snowpark Migration Accelerator (SMA) IA Assistant support for Scala EWIs, Added Snowpark Migration Accelerator (SMA) IA Assistant support for Snowpark Connect EWIs)
  • Streamlit 1.51.0 (Features & Improvements: [AdvancedLayouts] Add width to st.plotly_chart, Automatically hide row indices in st.dataframe when row selection is active, [AdvancedLayouts] Add width to st.vega_lite_chart, Add codeTextColor config & update linkColor, [AdvancedLayouts] Add height to st.vega_lite_chart, [AdvancedLayouts] Add width to st.pydeck_chart, Use key as main identity for st.color_picker, Add type argument to st.popover to match st.button, [AdvancedLayouts] Add width to st.altair_chart, Add cursor kwarg to st.write_stream, Preload slow-compiling Python modules in streamlit hello, Reusable Custom Themes via theme.base config, streamlit run with no args runs streamlit_app.py, Allow st.feedback to have a default initial value, [AdvancedLayouts] Add height to st.altair_chart, [AdvancedLayouts] Modernize height parameter for st.pydeck_chart, [AdvancedLayouts] Update width & height for st.map, [AdvancedLayouts] Modernize width/height for st.scatter_chart, [AdvancedLayouts] Modernize width/height for st.area_chart & st.bar_chart, Custom Dark Theme — add light/dark configs for theme & theme.sidebar, Use key as main identity for st.segmented_control, Use key as main identity for st.radio, Use key as main identity for st.audio_input, Add pinned parameter to MultiselectColumn, Use key as main identity for st.slider & st.select_slider, Custom Dark Theme — support light/dark inheritance & new session message, Use key as main identity for st.chat_input, Add support for auto color to MultiselectColumn using chart colors, Allow configuring color for ProgressColumn, Use key as main identity for st.feedback & st.pills, [AdvancedLayouts] Add stretch height to st.dataframe, Custom Dark Theme — theme & sidebar creation, Custom Dark Theme — main & settings menu updates, Add API for st.space, Add st.components.v2.components namespace & classes; Bug Fixes: Make slider thumbs not overshoot the track, Fix Vega chart unrecognized dataset error, Add AbortController for async upload operations, Fix Plotly chart flickering by adding overflow hidden, Fix Pills not showing selected value(s) if disabled, Make Python Altair code thread-safe, Fix file watcher issue with common path check, Fix showErrorDetails config parsing for deprecation warnings, Make sure error message is explicitly shown for 500 errors, Make fuzzy search case insensitive, Fix DataFrame content width horizontal alignment, Fix pyplot/image width regression in fragments & containers)

Client, Drivers, Libraries, and Connectors Updates

New features:

  • Snowflake Connector for Google Analytics Aggregate Data 2.1.0 (Behavior changes: The connector creates additional tables in destination schema. The tables are used to store the configuration of the connector. The tables have _SFSDKEXPORT_V1 suffix; New features: IMPORT_STATE procedure was added. The procedure can be used to recover a configuration of the reports, schedules, and history of the ingestions after the connector was uninstalled.)
  • Snowflake Connector for Google Analytics Aggregate Data 2.2.0 (Behavior changes: Revoked the USAGE privilege on the STATE schema from the ADMIN application role)
  • Snowflake Connector for ServiceNow® V2 5.26.0 (Behavior changes: Custom journal tables are currently disabled. They’ll be restored with new functionality in a future release, When you pause the connector, only worker tasks are forcefully canceled. Other tasks keep running until they finish, so pausing might take a bit longer; New features: The connector now lets you use NOT LIKE and NOT IN operators for row filtering, so you can filter your data more flexibly during ingestion)
  • .NET Driver 5.0.0 (BCR changes: Removed the log4net dependency and enabled delegated logging, Upgraded the AWS SDK library to v4, Removed some internal classes from the public API; New features: Implemented a new CRL (Certificate Revocation List) checking mechanism, Enabling CRLs improves security by checking for revoked certificates during the TLS handshake process, Added support for TLS 1.3. The default negotiated version of TLS is either TLS 1.2 or TLS 1.3, and the server decides which one to establish, Removed noisy log messages)
  • Ingest Java SDK 4.3.1 (Enhanced cloud security: Snowpipe Streaming now fully supports server-side encryption with Amazon Web Services (AWS) Key Management Service (SSE-KMS) configured on your external AWS S3 and Google Cloud Storage volumes. This enhancement ensures that data uploaded during ingestion uses your required, higher-grade KMS encryption policy, moving beyond the previously hardcoded default encryption)
  • JDBC Driver 3.27.0 (Added retries for HTTP responses 307 and 308 to handle internal IP redirects, PAT creation with the execute method now returns a ResultSet, Bumped netty to 4.1.127.Final to address CVE-2025-58056 and CVE-2025-58057, Added support for Interval Year-Month and Day-Time types in JDBC, Added support for Decfloat types in JDBC, Implemented a new CRL (Certificate Revocation List) checking mechanism)
  • JDBC Driver 3.27.1 (Upgraded aws-sdk to 1.12.792 and added STS dependency, Added RHEL 9 support, Added support for identity impersonation when using workload identity federation: For Google Cloud Platform, added the workloadIdentityImpersonationPath connection parameter for authenticator=WORKLOAD_IDENTITY allowing workloads to authenticate as a different identity through transitive service account impersonation, and For AWS, added the workloadIdentityImpersonationRole connection parameter for authenticator=WORKLOAD_IDENTITY allowing workloads to authenticate through transitive IAM role impersonation, Bumped grpc-java to 1.76.0 to address CVE-2025-58056 from transient dependency)
  • Snowflake Connector for Google Analytics Raw Data 1.8.0 ()
  • Node.js 3.2.1 (Added the workloadIdentityAzureClientId configuration option, allowing you to customize the Azure Client for WORKLOAD_IDENTITY authentication, Added the workloadIdentityImpersonationPath configuration option for authenticator=WORKLOAD_IDENTITY, allowing workloads to use service account impersonation)
  • ODBC 3.11.0 (Added support for workload identity federation in the AWS, Azure, Google Cloud, and Kubernetes platforms, Added the workload_identity_provider connection parameter, Added WORKLOAD_IDENTITY to the values for the authenticator connection parameter, Added the following configuration parameters: DisableTelemetry to disable telemetry, SSLVersionMax to specify the maximum SSL version, Added the PRIV_KEY_BASE64 and PRIV_KEY_PWD connection parameters that allow passing a base64-encoded private key)
  • ODBC 3.12.0 (Improved performance of the multi-threaded bulk fetching workflow)
  • Snowflake Connector for Python 3.18.0 (Added support for pandas conversion for Day-time and Year-Month Interval types)
  • Snowflake Connector for Python 4.0.0 (BCR changes: Configuration files writable by a group or others now raise a ConfigSourceError with detailed permission information, preventing potential credential tampering, Reverted changing the exception type in case of token expired scenario for Oauth authenticator back to DatabaseError; New features: Implemented a new CRL (Certificate Revocation List) checking mechanism, Added the workload_identity_impersonation_path parameter to support service account impersonation for Workload Identity Federation. Impersonation is available only for Google Cloud and AWS workloads, Added the oauth_credentials_in_body parameter to support sending OAuth client credentials in a connection request body, Added an option to exclude botocore and boto3 dependencies during installation by setting the SNOWFLAKE_NO_BOTO environment variable to true, Added the ocsp_root_certs_dict_lock_timeout connection parameter to set the timeout (in seconds) for acquiring the lock on the OCSP root certs dictionary. The default value is -1, which represents no timeout)
  • Snowpark Library for Python 1.42.0 (Snowpark Python DB-API is now generally available, To access this feature, use DataFrameReader.dbapi() to read data from a database table or query into a DataFrame using a DB-API connection)
  • Snowpark Library for Python 1.41.0 (New features: Added a new function service in snowflake.snowpark.functions that allows users to create a callable representing a Snowpark Container Services (SPCS) service, Added a new function group_by_all() to the DataFrame class, Added connection_parameters parameter to DataFrameReader.dbapi() (Public Preview) method to allow passing keyword arguments to the create_connection callable, Added support for Session.begin_transaction, Session.commit, and Session.rollback, Added support for the following functions in functions.py: Geospatial functions, Added a parameter to enable and disable automatic column name aliasing for interval_day_time_from_parts and interval_year_month_from_parts functions; Improvements: The default maximum length for inferred StringType columns during schema inference in DataFrameReader.dbapi is now increased from 16 MB to 128 MB in parquet file–based ingestion; Dependency updates: Updated dependency of snowflake-connector-python>=3.17,<5.0.0; Snowpark pandas API updates: Added support for the dtypes parameter of pd.get_dummies, Added support for nunique in df.pivot_table, df.agg, and other places where aggregate functions can be used, Added support for DataFrame.interpolate and Series.interpolate with the "linear", "ffill"/"pad", and "backfill"/"bfill" methods. These use the SQL INTERPOLATE_LINEAR, INTERPOLATE_FFILL, and INTERPOLATE_BFILL functions (Public Preview); Improvements: Improved performance of Series.to_snowflake and pd.to_snowflake(series) for large data by uploading data via a parquet file. You can control the dataset size at which Snowpark pandas switches to parquet with the variable modin.config.PandasToSnowflakeParquetThresholdBytes, Enhanced autoswitching functionality from Snowflake to native pandas for methods with unsupported argument combinations: get_dummies() with dummy_na=True, drop_first=True, or custom dtype parameters, cumsum(), cummin(), cummax() with axis=1 (column-wise operations), skew() with axis=1 or numeric_only=False parameters, round() with decimals parameter as a Series, corr() with method!=pearson parameter, Set cte_optimization_enabled to True for all Snowpark pandas sessions, Add support for an expanded list for the faster pandas, Reuse row count from the relaxed query compiler in get_axis_len)
  • Snowpark Library for Python 1.40.0 (New features: Added a new module snowflake.snowpark.secrets that provides Python wrappers for accessing Snowflake Secrets within Python UDFs and stored procedures that execute inside Snowflake, Conditional expression functions, Semi-structured and structured date functions, String & binary functions, Differential privacy functions, Context functions, Geospatial functions; Snowpark pandas API updates: Dependency updates: Updated the supported modin versions to >=0.36.0 and <0.38.0 (was >=0.35.0 and <0.37.0), New features: Added support for DataFrame.query for DataFrames with single-level indexes, Added support for DataFrameGroupby.__len__ and SeriesGroupBy.__len__; Improvements: Hybrid execution mode is now enabled by default. Certain operations on smaller data now automatically execute in native pandas in-memory. Use from modin.config import AutoSwitchBackend; AutoSwitchBackend.disable() to turn this off and force all execution to occur in Snowflake, Added a session parameter pandas_hybrid_execution_enabled to enable/disable hybrid execution as an alternative to using AutoSwitchBackend, Removed an unnecessary SHOW OBJECTS query issued from read_snowflake under certain conditions, When hybrid execution is enabled, pd.merge, pd.concat, DataFrame.merge, and DataFrame.join can now move arguments to backends other than those among the function arguments, Improved performance of DataFrame.to_snowflake and pd.to_snowflake(dataframe) for large data by uploading data via a parquet file. You can control the dataset size at which Snowpark pandas switches to parquet with the variable modin.config.PandasToSnowflakeParquetThresholdBytes)
  • Snowpark Connect for Spark 0.32.0 (Support for RepairTable, Make jdk4py an optional dependency of Snowpark Connect for Spark to simplify configuring Java home for end users, Support more interval type cases)
  • Snowpark Connect for Spark 0.31.0 (Add support for expressions in the GROUP BY clause when the clause is explicitly selected, Add error codes to the error messages for better troubleshooting)
  • Snowflake ML 1.16.0 (New modeling features: Support for scikit-learn versions earlier than 1.8, New ML Jobs features: Support for configuring the runtime image via the runtime_environment parameter at submission time. You may specify an image tag or a full image URL, New Model Registry features: Ability to mark model methods as volatile or immutable. Volatile methods may return different results when called multiple times with the same input, while immutable methods always return the same result for the same input. Methods in supported model types are immutable by default, while methods in custom models are volatile by default)

Bug fixes:

  • Snowflake Connector for Google Analytics Aggregate Data 2.1.1 (Fixed a bug where the export process failed because scoped temporary tables could not be created in the destination schema)
  • Snowflake Connector for Google Analytics Aggregate Data 2.1.2 (The IMPORT_STATE procedure now grants SELECT privilege to the application roles ADMIN and DATA_READER)
  • Snowflake Connector for ServiceNow® V2 5.26.0 (The connector now retries curl errors more times, making it more resilient to network issues in Azure deployments)
  • Ingest Java SDK 4.3.1 (Fixed vulnerable dependencies and cleaned up internal dependency workarounds)
  • JDBC Driver 3.27.0 (Fixed permission check of the .toml configuration file, Fixed pattern search for file when QUOTED_IDENTIFIERS_IGNORE_CASE is enabled)
  • JDBC Driver 3.27.1 (Fixed exponential backoff retry time for non-auth requests)
  • Node.js 3.2.1 (Fixed a regression causing PUT operations to encrypt files with the wrong smkId)
  • ODBC 3.7.1 (Fixed a bug with numeric data conversion when using bulk fetching)
  • ODBC 3.8.1 (Fixed a bug with numeric data conversion when using bulk fetching)
  • ODBC 3.9.1 (Fixed a bug with numeric data conversion when using bulk fetching)
  • ODBC 3.10.1 (Fixed a bug with numeric data conversion when using bulk fetching)
  • ODBC 3.11.0 (Fixed an issue with the in-band telemetry event handler to properly reset the events, Fixed the HTTP headers used to authenticate via OKTA, Removed the trailing slash from the default RedirectUri within the OAuth Authorization process)
  • ODBC 3.11.1 (Fixed a bug with numeric data conversion when using bulk fetching)
  • ODBC 3.12.0 (Fixed a bug where, during OIDC usage, the token was not required, causing errors, Fixed the macOS release to include the x86_64 architecture, Fixed a bug with DEFAULT_VARCHAR_SIZE in the configuration of the default varchar length parameter, Fixed a bug with numeric data conversion when using bulk fetching)
  • Snowflake Connector for Python 4.0.0 (Fixed get_results_from_sfqid when using DictCursor and executing multiple statements at once, Fixed retry behavior for ECONNRESET errors, Fixed the return type of SnowflakeConnection.cursor(cursor_class) to match the type of cursor_class, Constrained the types of fetchone, fetchmany, and fetchall, Fixed the "No AWS region was found" error when the AWS region was set in the AWS_DEFAULT_REGION variable instead of AWS_REGION for the WORKLOAD_IDENTITY authenticator)
  • Snowpark Library for Python 1.40.0 (Snowpark pandas API updates: Bug fixes: Fixed a bug that caused DataFrame.limit() to fail if the executed SQL contained parameter binding when used in non-stored-procedure/udxf environments, Added an experimental fix for a bug in schema query generation that could cause invalid SQL to be generated when using nested structured types; Improvements: Improved DataFrameReader.dbapi (Public Preview) so it doesn't retry on non-retryable errors, such as a SQL syntax error on an external data source query, Removed unnecessary warnings about local package version mismatch when using session.read.option('rowTag', ).xml() or xpath functions, Improved DataFrameReader.dbapi (Public Preview) reading performance by setting the default fetch_size parameter value to 100000, Improved error message for XSD validation failure when reading XML files using session.read.option('rowValidationXSDPath', ).xml(), Fixed multiple bugs in DataFrameReader.dbapi (Public Preview): Fixed UDTF ingestion failure with pyodbc driver caused by unprocessed row data, Fixed SQL Server query input failure due to incorrect select query generation, Fixed UDTF ingestion not preserving column nullability in the output schema, Fixed an issue that caused the program to hang during multithreaded Parquet based ingestion when a data fetching error occurred, Fixed a bug in schema parsing when custom schema strings used upper-cased data type names (NUMERIC, NUMBER, DECIMAL, VARCHAR, STRING, TEXT), Fixed a bug in Session.create_dataframe where schema string parsing failed when using upper-cased data type names (e.g., NUMERIC, NUMBER, DECIMAL, VARCHAR, STRING, TEXT))
  • Snowpark Library for Python 1.41.0 (Fixed a bug that DataFrameReader.xml fails to parse XML files with undeclared namespaces when ignoreNamespace is True, Added a fix for floating point precision discrepancies in interval_day_time_from_parts, Fixed a bug where writing Snowpark pandas DataFrames on the pandas backend with a column multiindex to Snowflake with to_snowflake would raise KeyError, Fixed a bug that DataFrameReader.dbapi (Public Preview) is not compatible with oracledb 3.4.0, Fixed a bug where modin would unintentionally be imported during session initialization in some scenarios, Fixed a bug where session.udf|udtf|udaf|sproc.register failed when an extra session argument was passed. These methods do not expect a session argument; please remove it if provided; Snowpark pandas API updates: Fixed a bug where the row count was not cached in the ordered DataFrame each time count_rows() was called)
  • Snowpark Connect for Spark 0.32.0 (Fix Join issues by refactoring qualifiers, Fix percentile_cont to allow filter and sort order expressions, Fix histogram_numeric UDAF, Fix the COUNT function when called with multiple args)
  • Snowpark Connect for Spark 0.31.0 (Fix the window function unsupported cast issue)
  • Snowflake ML 1.16.0 (Model Registry bug fixes: Remove redundant pip dependency warnings when artifact_repository_map is provided for warehouse model deployments)

Conclusion

As we conclude this month’s unofficial Snowflake release notes, it’s evident that October 2025 brought one of the most significant waves of improvements across AI, governance, performance, and the broader Snowflake ecosystem. From powerful Cortex and DocumentAI upgrades to major advances in Apache Iceberg interoperability, Native Apps, and Snowpark, Snowflake continues to develop into a more open, developer-friendly, and AI-integrated data platform. The pace of innovation remains rapid — and the intersection of AI + data governance + secure collaboration is becoming the new core of Snowflake’s strategy.

Thanks for joining me for another deep dive into what’s new. I hope these curated notes saved you time and clarified the most important changes you need to know. If this was helpful, I’d love to hear your feedback—whether it’s suggestions on format, sections you’d like to see expanded, or requests for private previews in future editions. Drop a comment or reach out on LinkedIn, and I’ll keep refining this resource for the community.

I hear that there are a lot of new product announcements coming during BUILD on November 4th to 6th. I had an early peek, and they are very interesting.

Enjoy the read.

I am Augusto Rosa, a Snowflake Data Superhero and Snowflake SME. I am also the Head of Data, Cloud, & Security Architecture at Archetype Consulting. You can follow me on LinkedIn.

Subscribe to my Medium blog https://blog.augustorosa.com for the most interesting Data Engineering and Snowflake news.
