Monthly Snowflake Unofficial Release Notes #New features #Previews #Clients #Behavior Changes
Welcome to the March 2025 Unofficial Release Notes for Snowflake! Here, you’ll find all the latest features, drivers, and more in one handy place.
As an unofficial source, I am excited to share my insights and thoughts. Let’s dive in! You can also find all of Snowflake’s releases here.
This month, we provide coverage up to release 9.7 (General Availability — GA). I hope to extend this eventually to private preview notices as well.
I would appreciate your suggestions on how to keep improving these combined monthly release notes. Feel free to comment below or chat with me on LinkedIn.
Behavior change bundle 2024_08 is now active by default, 2025_01 is enabled by default but can be disabled, and 2025_02 can be enabled. See the 2025_01 changes section, where I discuss its effects.
What’s New in Snowflake
New Features
- Alerts on new data (Preview), use alerts to monitor dynamic table refreshes and task completions. An alert triggers when new rows are added to a table or view, and Snowflake evaluates the alert condition against those new rows. For example, set up an alert to notify you when new error message rows land in the event table for your account (see the sketch after this list)
- Organization profiles (Preview), allow providers to organize their Internal Marketplace listings by department
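Since the release notes don't include syntax, here is a minimal sketch of an alert that fires on newly added error rows, using the long-standing CREATE ALERT syntax and the scheduled-time helper function; the warehouse, event table, and email integration names are hypothetical.

```sql
-- Hypothetical names: monitor_wh, my_db.telemetry.events, my_email_int
CREATE OR REPLACE ALERT new_error_rows_alert
  WAREHOUSE = monitor_wh
  SCHEDULE = '5 MINUTE'
  IF (EXISTS (
    SELECT 1
    FROM my_db.telemetry.events
    WHERE record_type = 'LOG'
      AND record:"severity_text"::string = 'ERROR'
      AND timestamp > COALESCE(SNOWFLAKE.ALERT.LAST_SUCCESSFUL_SCHEDULED_TIME(),
                               DATEADD(minute, -5, CURRENT_TIMESTAMP()))
  ))
  THEN CALL SYSTEM$SEND_EMAIL(
    'my_email_int',                       -- existing notification integration
    'oncall@example.com',
    'New error events detected',
    'At least one new ERROR record landed in the event table.');

ALTER ALERT new_error_rows_alert RESUME;  -- alerts are created in a suspended state
```

The preview feature adds first-class support for this "react to new rows" pattern; check the official docs for the exact new-data alert syntax.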
Snowsight Updates
- Collapsible navigation bar in Snowsight (GA), the left navigation now supports global collapsing and expanding, saving your preferences across pages and refreshes. Enhanced animations ensure smoother transitions for menus, while improved state memory streamlines navigation across projects and data
- Universal Search ML model support (GA), Universal Search now includes ML models in its results, making it easier to discover relevant assets
Snowflake Applications
- Native Apps with Snowpark Container Services — Support for Azure Private Link (Preview), allows connections from the consumer’s Microsoft Azure virtual network to apps with containers deployed in a Snowflake virtual network on Microsoft Azure
- Native Apps with Snowpark Container Services — Support for AWS PrivateLink (GA), allows connections from the consumer’s AWS virtual network to apps with containers deployed in a Snowflake virtual network on AWS
- Snowpark Container Services support for application metrics, supports metrics and traces from your service. Your service containers can generate OTLP or Prometheus metrics, which Snowflake publishes to the event table configured for your account. For additional details, refer to Publishing and accessing application metrics (a query sketch follows this list)
- Snowflake Notebooks on Container Runtime for AWS (GA), with these new features: preconfigured ML environment (a base machine-learning runtime image with the most popular ML development packages pre-installed); scalable compute resources (access to configurable CPU or GPU pools for resource-efficient model development); enhanced package management (support for pip and Conda Python package version upgrades); real-time resource monitoring (hover over the Active button to view detailed CPU, GPU, and memory usage metrics); optimized GPU notebook storage (notebooks running on GPU compute pools now use high-performance NVMe storage as the default boot device); and maximized session uptime (sessions run up to seven days, for example for long-running jobs, without maintenance disruptions)
- Configuring Snowflake support for CORS on HTTP requests to public endpoints (Preview), the corsSettings fields under endpoints in a service specification let you configure Snowflake support for CORS on HTTP requests to public endpoints
- Executing a Snowpark Container Services job service asynchronously (Preview), run a job service asynchronously by specifying the optional ASYNC parameter (see the sketch after this list)
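For the application metrics item above, a hedged sketch of reading service metrics back out of an event table; the event table name is hypothetical, and the RECORD/RESOURCE_ATTRIBUTES paths assume the documented event table layout for metric records.

```sql
-- Hypothetical event table; SPCS metrics arrive with RECORD_TYPE = 'METRIC'
SELECT
  timestamp,
  resource_attributes:"snow.service.name"::string AS service_name,
  record:"metric"."name"::string                  AS metric_name,
  value
FROM my_db.telemetry.events
WHERE record_type = 'METRIC'
ORDER BY timestamp DESC
LIMIT 100;
```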
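And for the asynchronous job service item, a minimal sketch of EXECUTE JOB SERVICE with the new optional ASYNC parameter; the compute pool, stage, and specification file names are hypothetical.

```sql
-- Hypothetical compute pool, stage, and spec file
EXECUTE JOB SERVICE
  IN COMPUTE POOL my_compute_pool
  NAME = my_db.my_schema.nightly_batch_job
  FROM @my_db.my_schema.specs
  SPECIFICATION_FILE = 'batch_job_spec.yaml'
  ASYNC = TRUE;             -- the new optional parameter; the statement returns immediately

-- One way to check on the job afterward
SELECT SYSTEM$GET_SERVICE_STATUS('my_db.my_schema.nightly_batch_job');
```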
Data Lake Updates
- Adaptively optimizes compute and I/O resources for queries executed against Apache Iceberg™ tables (Preview), improves Apache Iceberg™ query performance and memory efficiency in high-concurrency scenarios
Streamlit Updates
- Multi-file editing in Streamlit in Snowflake (Preview), add files to your Streamlit app from your local computer
- Git integration for Streamlit in Snowflake (Preview), sync Streamlit in Snowflake apps with a Git repository
- Support for st.file_uploader (GA)
- Support for st.experimental_audio_input and st.camera_input (GA)
SQL Updates
- Snowflake Scripting: Asynchronous child jobs (GA), support for asynchronous child jobs in stored procedures is generally available. Stored procedures can run asynchronous child jobs concurrently, and a child job can be any valid SQL statement, including SELECT statements and DML statements such as INSERT or UPDATE (see the sketch after this list)
- Snowflake Scripting: Improved error messages, improved to provide more accurate information about the error and about the line number in the code that caused the error
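Here is a minimal sketch of asynchronous child jobs in a Snowflake Scripting procedure, assuming the ASYNC/AWAIT keywords described in the Snowflake Scripting documentation; the table names are hypothetical.

```sql
CREATE OR REPLACE PROCEDURE refresh_two_summaries()
  RETURNS VARCHAR
  LANGUAGE SQL
AS
$$
BEGIN
  -- Start both DML statements as asynchronous child jobs (they run concurrently)
  ASYNC (INSERT INTO sales_summary  SELECT * FROM raw_sales  WHERE load_date = CURRENT_DATE());
  ASYNC (INSERT INTO orders_summary SELECT * FROM raw_orders WHERE load_date = CURRENT_DATE());
  -- Block until both child jobs finish before returning
  AWAIT ALL;
  RETURN 'Both summaries refreshed';
END;
$$;
```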
AI Updates (Cortex, ML, DocumentAI)
- Snowflake Cortex Document Processing Usage History (GA), support for accessing the history of query usage related to document processing functions. Use the ACCOUNT_USAGE.CORTEX_DOCUMENT_PROCESSING_USAGE_HISTORY view to examine which document processing features, including Document AI and PARSE_DOCUMENT, were executed, along with the number of credits they consumed (a query example follows this list)
- Cortex AI PARSE_DOCUMENT function for OCR (GA), this SQL function allows customers to precisely extract text and data from millions of document pages. It is fully managed and provides OCR quality comparable to other cloud providers while benefiting from the scalability, performance, and user-friendliness of Snowflake. PARSE_DOCUMENT OCR retrieves text from PDF, DOCX, and PPTX files saved in a Snowflake or external stage using SQL, eliminating the need for intricate cloud architecture (see the sketch after this list)
- Document AI, you can now highlight the answer within a document by selecting the locate answer icon
- Additional file format support for Cortex AI Parse Document, supports an expanded range of file formats to deliver more comprehensive document analysis. The new file formats are image formats and include TIFF, TIF, JPEG, JPG, and PNG. PARSE_DOCUMENT already supported PDF, DOCX, and PPTX file formats
- Cortex COMPLETE Structured Outputs (GA), the Snowflake Cortex COMPLETE function now supports structured outputs that conform to a user-defined JSON schema. This simplifies prompting, reduces post-processing needs, and allows integration with systems requiring deterministic responses. COMPLETE Structured Outputs is available in SQL, Python, and REST APIs (see the sketch after this list)
- Support for multiple semantic models in Cortex Analyst queries (GA), you can now include multiple semantic models in your query instead of just one, allowing Cortex Analyst to select the most suitable model for your needs. This makes it easier to build a unified search UI that queries various data sources, simplifying client programming
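The document processing usage-history view called out above can be queried directly; a trivial example (consult the docs for the view's columns):

```sql
-- Most recent document processing usage records for the account
SELECT *
FROM SNOWFLAKE.ACCOUNT_USAGE.CORTEX_DOCUMENT_PROCESSING_USAGE_HISTORY
LIMIT 100;
```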
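For PARSE_DOCUMENT, a minimal sketch; the stage and file names are hypothetical, and OCR is one of the documented modes.

```sql
-- Hypothetical stage @doc_stage containing report.pdf
SELECT SNOWFLAKE.CORTEX.PARSE_DOCUMENT(
         @doc_stage,
         'report.pdf',
         {'mode': 'OCR'}          -- 'LAYOUT' is the other documented mode
       ) AS parsed_document;
```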
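And for structured outputs, a hedged sketch of the SQL form, assuming the response_format option described in the COMPLETE documentation; the model name and schema are examples, not prescriptions.

```sql
SELECT SNOWFLAKE.CORTEX.COMPLETE(
  'mistral-large2',                                      -- example model
  [{'role': 'user',
    'content': 'Extract the product and sentiment from: "The new blender is fantastic."'}],
  {'response_format': {
     'type': 'json',
     'schema': {
       'type': 'object',
       'properties': {
         'product':   {'type': 'string'},
         'sentiment': {'type': 'string'}},
       'required': ['product', 'sentiment']}}}
) AS structured_answer;
```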
Data Clean Rooms Updates
- Simplified onboarding, installation now happens through the Snowflake Marketplace, and the installation flows for the APIs and the UI are now separate
- Analysis Error Messaging in the clean rooms UI, users running queries in the clean rooms UI can now view any encountered query errors. This feature enables users to troubleshoot errors independently or provides details they can share with the Snowflake support team for further assistance
- Obfuscated provider templates, providers can choose to hide their template logic from collaborators in order to protect their template intellectual property. To hide your template body from consumers, set the is_obfuscated argument to TRUE in provider.add_custom_sql_template.
- Cross-cloud collaboration support for multiple accounts, enable cross-cloud collaboration for Data Clean Rooms across multiple accounts under the same organization
- Update to default caching behavior in consumer run analysis, to improve template testing and ensure the most recent results are generated for users, the default cache behavior for the consumer.run_analysis API is now FALSE
- New limited API access role for developers, grant a limited access role to consumers to enable limited API access to specified clean rooms. The role grants permission to run a subset of consumer clean room procedures against specified clean rooms. See the consumer.grant_run_on_cleanrooms_to_role documentation for more information
- LiveRamp Identity & Translation integration update, LiveRamp's Embedded Identity resolves personally-identifiable information (PII) or device identifiers into a durable, pseudonymous RampID. LiveRamp's RampID Translation capability allows for the transcoding of a RampID from one partner domain encoding to another, enabling you to match persistent pseudonymous identifiers to one another without sharing the sensitive underlying identifiers. This functionality is available through the LiveRamp native app in the Snowflake Marketplace
Data Pipelines/Data Loading/Unloading Updates
- Dynamic tables: The maximum number of dynamic tables in an account increased from 10,000 to 50,000
Security & Governance Updates
- Cortex Powered Object Descriptions: Support for additional table types, you can now use the Snowflake Cortex COMPLETE function to generate descriptions for the following table types: dynamic tables, hybrid tables, Apache Iceberg™ tables, and external tables
- Automatic sensitive data classification (GA), user-defined tags and masking policies can now be applied to columns automatically when sensitive data is detected
Performance Updates
- Search optimization improves the performance of queries containing scalar subqueries, a scalar subquery yields a single value (one column, one row). To benefit, ensure that search optimization is enabled for the column matched against the subquery's result (see the sketch after this list)
- RESOURCE_CONSTRAINT clause for Snowpark-optimized warehouses (GA), define the memory and CPU architecture for Snowpark-optimized warehouses. Use the RESOURCE_CONSTRAINT clause with the CREATE WAREHOUSE and ALTER WAREHOUSE commands (see the sketch after this list)
- Search optimization: Support for column collations, improve the performance of queries on columns defined with a COLLATE clause
- Improves performance for dynamic tables with incremental refresh mode using left outer joins, provides faster incremental refresh performance for dynamic tables that contain one or more left outer joins. Performance gains can be substantial depending on the workload
- Adaptively optimizes compute and I/O resources for queries executed against Apache Iceberg™ tables (Preview), improves Apache Iceberg™ query performance and memory efficiency in high-concurrency scenarios
- Storing larger database objects (Preview), the size limits for many objects have been raised from 16 MB to 128 MB and from 8 MB to 64 MB, depending on the data type
- Improves the batching of files during replication refresh operations (GA), replication refresh jobs that replicate up to 8 GB of data will have less variance and more predictability
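For the scalar-subquery item above, a small sketch of enabling search optimization on the column that the subquery's single value is compared against; the table and column names are hypothetical.

```sql
-- Hypothetical tables: orders (large) and config (the subquery yields one value)
ALTER TABLE orders ADD SEARCH OPTIMIZATION ON EQUALITY(customer_id);

-- The scalar subquery returns a single value; search optimization can now
-- accelerate the equality lookup on orders.customer_id
SELECT *
FROM orders
WHERE customer_id = (SELECT vip_customer_id FROM config LIMIT 1);
```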
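And for the RESOURCE_CONSTRAINT item, a hedged sketch; the warehouse name is hypothetical, and MEMORY_16X_X86 / MEMORY_64X_X86 are examples of the documented constraint values.

```sql
-- Snowpark-optimized warehouse with an explicit memory/CPU architecture constraint
CREATE OR REPLACE WAREHOUSE snowpark_ml_wh
  WAREHOUSE_SIZE = 'MEDIUM'
  WAREHOUSE_TYPE = 'SNOWPARK-OPTIMIZED'
  RESOURCE_CONSTRAINT = 'MEMORY_16X_X86';

-- The constraint can also be changed later
ALTER WAREHOUSE snowpark_ml_wh SET RESOURCE_CONSTRAINT = 'MEMORY_64X_X86';
```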
Hybrid Tables
- Cloning databases that contain hybrid tables (Preview), for the purposes of running point-in-time restore operations and hydrating other environments from a source environment (a minimal example follows)
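A minimal sketch of a point-in-time clone of a database containing hybrid tables, using the existing CLONE ... AT (Time Travel) syntax; the database names and one-hour offset are examples.

```sql
-- Clone the database as it existed one hour ago (point-in-time restore / env hydration)
CREATE DATABASE prod_db_restore
  CLONE prod_db
  AT (OFFSET => -3600);   -- seconds in the past
```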
Open-Source Updates
- terraform-snowflake-provider 1.0.5 (Document GODEBUG flag usage, Remove driver instrumentation, Remove SF_TF_ADDITIONAL_DEBUG_LOGGING, Cleanup and update GitHub actions, Adjust docs, Correct a typo in a tag resource example, Bug fixes: Verify TOML file permissions, Limit TOML file size, Add boolean env validations and unit tests for TOML config validation, Apply new assertions setup, Fix acceptance tests)
- Streamlit 1.43.2 (Major Theming Enhancements: Significant additions and refinements to theme customization options (e.g., headingFont, codeBackgroundColor, showSidebarSeparator, theme.sidebar), New Features & Commands: Introduction of st.badge, the streamlit init command, and adding Google/ChatGPT links to exception details (these were highlights of the 1.44.0 release), UI Improvements: Revamped dataframe search bar, support for Pandas Styler tooltips in st.table, file size limit errors for chat input, and updated emoji support, Logging & Internals: Better exception logging when rich is installed and an internal update to utilize the React 18 createRoot API)
- Streamlit 1.44.0 (Advanced Theming: Introduced extensive new configuration options to customize the visual appearance (fonts, colors, element roundness) of Streamlit apps directly, without needing custom CSS, st.badge: Added a new element st.badge to display colored badges. This functionality is also available within Markdown using a special directive, streamlit init Command: A new terminal command streamlit init was added to quickly create the necessary local file structure for a new Streamlit application, Notable Changes: Error details shown via st.exception (and for uncaught exceptions) now include helpful links to automatically search Google or query ChatGPT with the error information, User locale information can now be accessed programmatically using st.context, st.slider and st.number_input now raise errors if set to a value outside their specified min_value or max_value, Streamlit provides more detailed logging automatically if the rich library is installed, Script compilation errors are now logged at the error level (previously only debug) to improve compatibility with AI tools)
- Streamlit 1.44.1 (Bug Fixes: resolved a TypeError that could occur in st.dataframe if the underlying data was modified, fixed incorrect resize handling for the st.html component, corrected the streamlit config show command to display the accurate value for showErrorDetails, ensured sidebar page navigation links are disabled when the application is disconnected, UI/Theming Enhancements: added an internal marker to identify when an icon provided to set_page_config is an emoji, fine-tuned element spacing based on the baseRadius theme configuration, adjusted theme hover colors to use a dark mix instead of the secondary background color, updated the sidebar to use the specific theme.sidebar.borderColor for its border, Developer Experience & Internal: restricted the display of helper links (like Google/ChatGPT search) to only appear when running on localhost, included necessary version bumps (to 1.44.1), snapshot updates, and test adjustments for the release)
Client, Drivers, Libraries and Connectors Updates
New features:
- Go Snowflake Driver 1.13.2 (Bumped the JWT library version from 5.2.1 to 5.2.2, implemented an improved file-based credentials cache for Linux)
- Go Snowflake Driver 1.13.1 (Added support for PAT (programmatic access token) authentication in Private Preview by adding the PROGRAMMATIC_ACCESS_TOKEN value for the authenticator parameter, dropped support for Go 1.21 and added support for Go 1.24, upgraded Arrow to v18, added a log for JWT claims)
- ODBC 3.6.0 (Added support for regional Google Cloud Storage endpoints)
- Snowflake CLI 3.5.0 (Extended project definition (snowflake.yml) support for the following SPCS (Snowpark Container Services) entities: Compute pool, Image repository, Service, added the snow spcs compute pool deploy command that reads a snowflake.yml project definition file, added the snow spcs image repository deploy command that reads a snowflake.yml project definition file, added the snow spcs service deploy command that reads a snowflake.yml project definition file)
- Snowflake Connector for Python 3.14.0 (Bumped the pyOpenSSL dependency upper boundary from <25.0.0 to <26.0.0, optimized distribution package lookup to improve import speed, added support for iceberg tables to write_pandas, added support for File types)
- Snowflake Python API 1.2.0 (Added support for asynchronous requests across all of the existing endpoints, asynchronous methods are denoted by the _async suffix in their names and use polling to determine whether an operation was completed, the number of calls that can execute in parallel depends on the number of CPUs. To change the size of the thread pool, use the _SNOWFLAKE_MAX_THREADS environment variable; for example usage, see the snowflake.core.PollingOperation class documentation, added support for creating serverless tasks using the StoredProcedureCall definition, added support for the SERVERLESS_TASK_MIN_STATEMENT_SIZE and SERVERLESS_TASK_MAX_STATEMENT_SIZE serverless attributes to the Database and Schema resources (dependent on Snowflake version 9.8), added support for setting the SUSPEND_TASK_AFTER_NUM_FAILURES, USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE, and USER_TASK_TIMEOUT_MS attributes on cloned databases and schemas (dependent on Snowflake version 9.8), deprecated CortexAgentService.Run in favor of CortexAgentService.run, added new optional attributes to various models within the Cortex Search service API: text_boosts and vector_boosts to the Function model, weights to the ScoringConfig model)
- Snowflake Python API 1.1.0 (Added support for the TARGET_COMPLETION_INTERVAL, SERVERLESS_TASK_MIN_STATEMENT_SIZE, and SERVERLESS_TASK_MAX_STATEMENT_SIZE serverless attributes to the Task resource, added support for the following new resources: API integrations, Iceberg tables (dependent on Snowflake version 9.6))
- Snowpark Library for Python 1.29.0 (Added support for the following AI-powered functions in functions.py (Private Preview): ai_filter, ai_agg, summarize_agg, added support for the new FILE SQL type, with the following related functions in functions.py (Private Preview): fl_get_content_type, fl_get_etag, fl_get_file_type, fl_get_last_modified, fl_get_relative_path, fl_get_scoped_file_url, fl_get_size, fl_get_stage, fl_get_stage_file_url, fl_is_audio, fl_is_compressed, fl_is_document, fl_is_image, fl_is_video, added support for importing third-party packages from PyPI using Artifact Repository (Private Preview): use the keyword arguments artifact_repository and artifact_repository_packages to specify your artifact repository and packages respectively when registering stored procedures or user-defined functions, supported APIs are: Session.sproc.register, Session.udf.register, Session.udaf.register, Session.udtf.register, functions.sproc, functions.udf, functions.udaf, functions.udtf, functions.pandas_udf, functions.pandas_udtf, improved version validation warnings for snowflake-snowpark-python package compatibility when registering stored procedures. Now, warnings are only triggered if the major or minor version does not match, while bugfix version differences no longer generate warnings, bumped the cloudpickle dependency to also support cloudpickle==3.0.0 in addition to previous versions, Snowpark pandas API updates: improved the error message for pd.to_snowflake, DataFrame.to_snowflake, and Series.to_snowflake when the table does not exist, improved the readability of the docstring for the if_exists parameter in pd.to_snowflake, DataFrame.to_snowflake, and Series.to_snowflake, improved the error message for all pandas functions that use UDFs with Snowpark objects, Snowpark local testing updates, New features: added support for literal values in the range_between window function)
- Snowpark Library for Python 1.30.0 (Added support for relaxed consistency and ordering guarantees in DataFrame.to_snowpark_pandas by introducing the new parameter relaxed_ordering, DataFrameReader.dbapi (Preview) now accepts a list of strings for the session_init_statement parameter, allowing multiple SQL statements to be executed during session initialization, improved query generation for DataFrame.stat.sample_by to generate a single flat query that scales well with a large fractions dictionary compared to the older method of creating a UNION ALL subquery for each key in fractions. To enable this feature, set session.conf.set("use_simplified_query_generation", True), improved the performance of DataFrameReader.dbapi by enabling the vectorized option when copying a parquet file into a table, improved query generation for DataFrame.random_split in the following ways. They can be enabled by setting session.conf.set("use_simplified_query_generation", True): removed the need to cache_result in the internal implementation of the input dataframe, resulting in a pure lazy dataframe operation, the seed argument now behaves as expected with repeatable results across multiple calls and sessions, DataFrame.fillna and DataFrame.replace now both support fitting int and float into Decimal columns if include_decimal is set to True, added documentation for the following UDF and stored procedure functions in files.py as a result of their General Availability: SnowflakeFile.write, SnowflakeFile.writelines, SnowflakeFile.writeable; minor documentation changes for SnowflakeFile and SnowflakeFile.open(), Snowpark pandas API updates: added support for list values in Series.str.__getitem__ (Series.str[...]), added support for pd.Grouper objects in GROUP BY operations. When freq is specified, the default values of the sort, closed, label, and convention arguments are supported; origin is supported when it is start or start_day, added support for relaxed consistency and ordering guarantees in pd.read_snowflake for both named data sources (for example, tables and views) and query data sources by introducing the new parameter relaxed_ordering, raised a warning whenever QUOTED_IDENTIFIERS_IGNORE_CASE is found to be set, asking the user to unset it, improved how a missing index_label in DataFrame.to_snowflake and Series.to_snowflake is handled when index=True. Instead of raising a ValueError, system-defined labels are used for the index columns, improved the error message for groupby, DataFrame, or Series.agg when the function name is not supported, Snowpark local testing updates: raised a warning whenever QUOTED_IDENTIFIERS_IGNORE_CASE is found to be set, asking the user to unset it, improved how a missing index_label in DataFrame.to_snowflake and Series.to_snowflake is handled when index=True. Instead of raising a ValueError, system-defined labels are used for the index columns, improved the error message for groupby or DataFrame or Series.agg when the function name is not supported)
- Snowpark ML 1.8.0 (Behavior changes: Model Registry behavior changes: automatically-inferred signatures in transformers.Pipeline have been changed to use the FeatureGroupSpec task class and several others, PyTorch and TensorFlow models now expect a single tensor input and output by default when they are logged to the Model Registry. To use multiple tensors (previous behavior), set options={"multiple_inputs": True}, enable_explainability now defaults to False when the model can be deployed to Snowpark Container Services, New Model Registry features: support for using a single torch.Tensor, tensorflow.Tensor and tensorflow.Variable as input or output data, support for the xgboost.DMatrix datatype for XGBoost models)
- Snowpark ML 1.7.5 (New Model Registry features: Support for Hugging Face model configurations with auto-mapping functionality, Support for keras 3.x models with tensorflow and pytorch backends, New Model Explainability features: Support for native and snowflake-ml-python sklearn pipelines)
- Snowflake Connector for ServiceNow® V2 5.19.0 (The DELETE_TABLE procedure now accepts an optional drop_related_objects boolean parameter. When this parameter is set to true, the procedure drops all the objects related to the table, such as the flattened views, the event log table, and the sink table, the filtered reload feature now supports detection of deletes and can filter out these records when using the sys_ids parameter in the RELOAD_TABLE procedure. Prior to this release, the filtered reload feature only detected data updates and insertions)
Bug fixes:
- Go Snowflake Driver 1.13.2 (Fixed PUT/GET handling when the query begins with a newline, added more logging to certificate chain verification, falling back to OCSP GET request only if the response for POST request was malformed, fixed a memory leak related to not clearing OCSP cache)
- Go Snowflake Driver 1.13.1 (Fixed error messages for HTTP retries)
- Ingest Java SDK 3.1.2 (Fixed issues with the filename mismatch for Iceberg ingestion)
- JDBC Driver 3.23.1 (Fixed a missing dependency version declaration for the nimbusds library, fixed an issue with creating the file used for caching in Windows environments, fixed an issue with logging on the debug level where the client-side encryption master key of the target stage was logged locally during the execution of GET/PUT commands. The key by itself does not grant access to any sensitive data. For more information, see CVE-2025-27496, fixed an issue with prioritizing GCS credentials over the Snowflake credentials during communication with storage. Changed the default value of the disableGcsDefaultCredentials parameter to true, fixed the retry mechanism used in the authentication process using OKTA)
- Node.js 2.0.3 (Fixed an issue with promise rejection for file upload errors)
- ODBC 3.6.0 (Fixed an issue with the driver crashing when basic_string::_M_construct is null or not valid, or with a segmentation fault when the HOME environment variable is unset, fixed an issue with the macOS Secure Storage helper, fixed issues with lowercasing the URL when using OKTA authentication, fixed a logging issue with the test button, fixed an issue when a query response omits its length, fixed an issue with the HTTP Date header format depending on the locale)
- Snowflake CLI 3.5.0 (Fixed an issue with data type handling in the snow sql command when using JSON for the output format)
- Snowflake Connector for Python 3.14.0 (Added a <19.0.0 pin to pyarrow as a workaround to a bug affecting Azure Batch, fixed a bug where the privatelink OCSP Cache url could not be determined if the privatelink account name was specified in uppercase, fixed base64 encoded private key tests, fixed a bug with file permission checks on Windows, added the unsafe_file_write connection parameter that restores the previous behavior of saving files downloaded with GET with 644 permissions)
- Snowflake Python API 1.2.0 (You can now call create_or_alter with a task object returned from the iter method, Snowpark local testing updates: Fixed a bug in aggregation that caused empty groups to still produce rows, fixed a bug in Dataframe.except_ that would cause rows to be incorrectly dropped, fixed a bug that caused to_timestamp to fail when casting filtered columns)
- Snowpark Library for Python 1.29.0 (Fixed a bug where creating a DataFrame with a large number of values raised an Unsupported feature 'SCOPED_TEMPORARY' error if the thread-safe session was disabled, fixed a bug where df.describe raised an internal SQL execution error when the DataFrame was created from reading a stage file and CTE optimization was enabled, fixed a bug where df.order_by(A).select(B).distinct() would generate invalid SQL when simplified query generation was enabled using session.conf.set("use_simplified_query_generation", True), disabled simplified query generation by default, Snowpark pandas API updates: fixed a bug in Series.rename_axis where an AttributeError was being raised, fixed a bug where pd.get_dummies didn't ignore NULL/NaN values by default, fixed a bug where repeated calls to pd.get_dummies resulted in a 'Duplicated column name' error, fixed a bug in pd.get_dummies where passing a list of columns generated incorrect column labels in the output DataFrame, updated pd.get_dummies to return bool values instead of int)
- Snowpark Library for Python 1.29.1 (Fixed a bug in DataFrameReader.dbapi (private preview) that prevents usage in stored procedures and Snowbooks)
- Snowpark Library for Python 1.30.0 (Fixed a bug in the following functions that raised errors when .cast() is applied to their output: from_json, size)
- Snowpark ML 1.8.0 (Modeling bug fixes: fixed a bug in some metrics that allowed an unsupported version of numpy to be installed automatically in the stored procedure, resulting in a numpy error on execution, Model Registry bug fixes: fixed a bug that led to an incorrect "Model does not have _is_inference_api" error message when assigning a supported model as a property of a CustomModel, fixed a bug where inference did not work when models with more than 500 input features were deployed to SPCS)
- Snowpark ML 1.7.5 (Model Registry bug fixes: fixed a compatibility issue where, when using snowflake-ml-python 1.7.0 or later to save a tensorflow.keras model with keras 2.x, the model could not be run in Snowflake. This issue occurred when relax_version is set to True (or default) and a new version of snowflake-ml-python is available. If you have logged an affected model, you can recover it by loading it using ModelVersion.load and logging it again with the latest version of snowflake-ml-python, removed the validation that prevents data that does not have non-null values from being passed to ModelVersion.run)
- Snowflake Connector for Google Analytics Raw Data 2.11.1 (Fixed an issue where the connector was unable to ingest data from Google Analytics with the QUOTED_IDENTIFIERS_IGNORE_CASE account parameter set to true)
- Snowflake Connector for Google Analytics Raw Data 1.7.2
- Snowflake Connector for ServiceNow® V2 5.19.1 (Fixed a bug that caused the parsing process of the API response from ServiceNow® to fail when a header name in the response didn’t match the expected format, fixed a bug that caused the export of the connector state and configuration to fail, when a filtered reload was run on a table)
- Snowflake Connector for ServiceNow® V2 5.19.0 (Corrected an error in CONNECTOR_STATS when viewing ingested row statistics while running a filtered reload)
- Snowflake Connector for ServiceNow® V2 5.18.1 (Reverted a performance optimization that could cause increased warehouse consumption)
Conclusion
March 2025 demonstrated Snowflake’s continued commitment to broadening its platform capabilities while maturing existing features into General Availability. The push towards powerful, integrated AI tooling within Cortex is undeniable, making sophisticated analysis more accessible. Simultaneously, enhancing the developer toolkit with improved Notebooks, Streamlit features, Snowpark updates, and Container Services integrations remains a high priority. Performance and governance, crucial for enterprise adoption, also received significant attention with key GA milestones like automatic classification and resource constraints.
As your unofficial guide, I hope this compilation provides a valuable overview of the landscape. Please continue to share your feedback on how to make these notes even more useful, perhaps as we look towards incorporating preview features more formally. For official details and specific syntax, always consult the Snowflake documentation. Thanks for reading, and see you next month!
Enjoy the read.
I am Augusto Rosa, a Snowflake Data Superhero and Snowflake SME. I am also the Head of Data, Cloud, & Security Architecture at Archetype Consulting. You can follow me on LinkedIn.
Subscribe to my Medium blog https://blog.augustorosa.com for the most interesting Data Engineering and Snowflake news.
Sources:
- https://docs.snowflake.com/en/release-notes/preview-features
- https://docs.snowflake.com/en/release-notes/new-features
- https://docs.snowflake.com/en/release-notes/sql-improvements
- https://docs.snowflake.com/en/release-notes/performance-improvements-2024
- https://docs.snowflake.com/en/release-notes/clients-drivers/monthly-releases
- https://docs.snowflake.com/en/release-notes/connectors/
- https://marketplace.visualstudio.com/items?itemName=snowflake.snowflake-vsc