Monthly Snowflake Unofficial Release Notes #New features #Previews #Clients #Behavior Changes
Welcome to the Unofficial Release Notes for Snowflake for July 2025! You’ll find all the latest features, drivers, and more in one convenient place.
As an unofficial source, I am excited to share my insights and thoughts. Let’s dive in! You can also find all of Snowflake’s releases here.
This month, we provide coverage up to release 9.21 (General Availability — GA). I hope to extend this eventually to private preview notices as well.
I would appreciate your suggestions on how to keep improving these monthly release notes. Feel free to comment below or chat with me on LinkedIn.
Behavior change bundle 2025_02 is now generally enabled for all customers. Bundle 2025_03 is enabled by default but can be opted out of until the next BCR deployment. Bundle 2025_04 is disabled by default but can be opted in to. Bundle 2025_05 is planned to arrive with release 9.22.
What’s New in Snowflake
New Features
- Alerts on new data (GA): triggered when new rows are added to a specified table or view, with Snowflake evaluating the alert condition against only those new rows. You can set alerts on new data, such as error messages, dynamic table refreshes, or task executions logged to the event table, to stay notified
Snowsight Updates
- Billing contact updates (GA) for on-demand, self-service customers. Trial account holders adding a payment method can now update their billing contact info
AI Updates (Cortex, ML, DocumentAI)
- AI Observability in Snowflake Cortex (GA) enables you to evaluate and trace your generative AI apps, making them more trustworthy and transparent. Systematically measure performance by running evaluations, logging traces for debugging, and benchmarking deployments. Key features include evaluations, comparisons, and tracing
- Cortex Agents integration for Microsoft Teams and Copilot (Preview) allows natural language queries over structured and unstructured data, now with support for Microsoft Teams and Microsoft 365 Copilot. Available in preview in Azure US East 2 (Virginia). Users can interact with a Cortex Agent in Teams or Copilot, making Snowflake data more accessible where they work
- Snowflake AISQL AI_SENTIMENT (GA) offers advanced sentiment classification across various content types and languages. It helps organizations understand customer feelings and the specific aspects influencing satisfaction or concern (see the AISQL sketch after this list)
- Snowflake AISQL AI_EMBED multimodal embeddings (Preview) enable customers to generate high-quality image and text embedding vectors directly within Snowflake using simple SQL. Embedding vectors allow text and images to be compared and searched based on their features. AI_EMBED allows organizations to:
  - Develop advanced image search and similarity tools: find similar products, medical images, or design assets across large datasets
  - Convert complex visuals into searchable vectors: transform unstructured content into data
  - Improve content moderation: detect and flag inappropriate visual media
  - Optimize digital asset management: organize and retrieve marketing, brand, and creative assets via semantic image search
  - Support manufacturing quality control: detect defects by comparing product images to standards
  - Enable intelligent document processing: extract insights from invoices, contracts, and forms by embedding text and layout
- ML Explainability visualizations (GA): these visualizations help reveal how features affect a model's behavior and predictions
- Snowflake Multi-Node ML Jobs (Preview) enable running distributed ML workflows on multiple compute nodes within Snowflake ML container runtimes. Multi-node ML jobs distribute work across nodes, handling large datasets and complex models with better performance and scalability
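To make the AISQL additions concrete, here is a minimal Snowpark Python sketch that calls AI_SENTIMENT and AI_EMBED through session.sql(). The reviews table, its columns, the embedding model name, and the exact argument shapes of both functions are assumptions for illustration only; check the AISQL documentation for the authoritative signatures.

```python
# Hypothetical sketch: calling the new AISQL functions from Snowpark Python.
# Table, column, and model names are placeholders, not documented values.
from snowflake.snowpark import Session

session = Session.builder.configs({
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}).create()

# Sentiment classification over a text column (AI_SENTIMENT, GA).
sentiment_df = session.sql("""
    SELECT review_id,
           AI_SENTIMENT(review_text) AS sentiment
    FROM reviews
""")
sentiment_df.show()

# Text embeddings with AI_EMBED (Preview); the model name is a placeholder.
embed_df = session.sql("""
    SELECT review_id,
           AI_EMBED('<embedding_model>', review_text) AS embedding
    FROM reviews
""")
embed_df.show()

session.close()
```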
Snowflake Applications (Container Services, Notebooks and Applications, Snowconvert)
- Snowflake Native App Framework support for Snowflake machine learning models (GA) enables the use of Snowflake ML models in a Snowflake Native App
- Snowflake Native App with Snowpark Container Services support for Google Cloud (Preview): apps with containers can now be deployed on Google Cloud
- Streamlit 1.45.1 (GA) is now supported in Streamlit in Snowflake
- Snowconvert 1.11.1: support for the new Snowflake output (OUT) arguments syntax within Snowflake Scripting for Teradata, Oracle, SQL Server, and Redshift migrations. Fixes: enhanced Teradata data type handling (JSON to VARIANT migration) and improved recovery for Redshift procedures written with Python
- Snowconvert 1.11.0: new Data Validation framework integration for the SQL Server end-to-end experience. Users can now validate their data after migrating it. The Data Validation framework offers schema validation (validate the table structure to confirm correct mappings among data types) and metrics validation (generate metrics on the data stored in a table to ensure the consistency of your data post-migration)
Data Lake Updates
- Partitioned writes for Apache Iceberg™ tables (Preview) improve compatibility with the Iceberg ecosystem and speed up read queries from external Iceberg tools. Snowflake now allows creating and writing to both Snowflake-managed and external Iceberg tables with partitioning
- Write support for externally managed Apache Iceberg™ tables and catalog-linked databases (Preview), these features enhance data workflows between Snowflake and the Iceberg ecosystem. Key features include creating Iceberg tables in remote catalogs via Snowflake, performing full DML on externally managed tables, linking a Snowflake database to remote Iceberg catalogs (e.g., AWS Glue, Snowflake Open Catalog), and discovering multiple remote tables without individual definitions
Data Pipelines/Data Loading/Unloading Updates
- Dynamic tables: support for externally managed Apache Iceberg™ tables (GA). Create dynamic tables that read from Iceberg tables managed by external catalogs, enabling data processing from external data lakes without duplicating or ingesting data into Snowflake (see the sketch after this list)
- Run Spark workloads on Snowflake (Preview) with Snowpark Connect for Spark
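As a sketch of the dynamic-tables-on-Iceberg feature above, the statement below creates a dynamic table over a table that lives in a catalog-linked database. The connection values and all object names (glue_linked_db, sales.orders, my_wh) are hypothetical placeholders, and the statement only uses the standard CREATE DYNAMIC TABLE options.

```python
# Minimal sketch: a dynamic table reading from an externally managed Iceberg
# table exposed through a catalog-linked database. All names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="my_wh",
    database="analytics",
    schema="public",
)
cur = conn.cursor()

cur.execute("""
    CREATE OR REPLACE DYNAMIC TABLE daily_orders
      TARGET_LAG = '1 hour'
      WAREHOUSE = my_wh
    AS
      SELECT order_date, COUNT(*) AS order_count
      FROM glue_linked_db.sales.orders   -- externally managed Iceberg table
      GROUP BY order_date
""")

conn.close()
```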
Security, Privacy & Governance Updates
- Workload Identity Federation for AWS, GCP, and Azure (Private Preview) enables secure, credential-less access to Snowflake resources for workloads (applications, services, or scripts) running outside of Snowflake
- Enforce privatelink-only access (Preview), Disable public access to privatelink-only accounts
- Data Quality: ACCEPTED_VALUES DMF (GA). The new ACCEPTED_VALUES data metric function checks whether column values match a Boolean expression and returns the count of non-matching records, indicating data quality issues
- External network access with private connectivity: Google Cloud (GA), Create and manage Google Cloud Private Service Connect endpoints to enable access from external networks
- Cortex Powered Object Descriptions (GA): you can now generate descriptions for Snowflake objects using Snowflake Cortex with only the SELECT privilege; the OWNERSHIP privilege is needed only when saving the descriptions
- Single-use refresh tokens for Snowflake OAuth (GA), Use single-use refresh tokens to boost your Snowflake OAuth security integrations
- Automatic classification of a database (Preview), set a classification profile on a database rather than a schema so that all tables and views within the database are automatically classified for sensitive data
- Determine which databases and schemas are monitored by automatic sensitive data classification (Preview): call a system function to identify tables and views that are automatically classified. SYSTEM$SHOW_SENSITIVE_DATA_MONITORED_ENTITIES returns the databases and schemas with classification profiles, indicating that their objects are classified at the profile's specified interval (see the sketch after this list)
- Automatic tag propagation: Event table to monitor conflicts (GA), Use an event table to collect telemetry data on automatic tag propagation, including conflicts and resolutions. After Snowflake begins data collection, query the table, create a stream for changes, or set alerts for specific events
- Account Usage: new CREDENTIALS view. View details about programmatic access tokens, passkeys, and time-based one-time passcodes (TOTPs) in the new CREDENTIALS view within the ACCOUNT_USAGE schema (see the sketch after this list)
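A short sketch of two of the governance additions flagged above: querying the new CREDENTIALS view and calling SYSTEM$SHOW_SENSITIVE_DATA_MONITORED_ENTITIES. I'm assuming the view lives under SNOWFLAKE.ACCOUNT_USAGE like the other Account Usage views and that the system function takes no arguments; both calls require a role with access to those objects.

```python
# Minimal sketch: inspecting the new CREDENTIALS view and the sensitive data
# classification monitoring function. SELECT * is used because the exact
# column list is not covered in this summary.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="<warehouse>",
)
cur = conn.cursor()

# Programmatic access tokens, passkeys, and TOTPs registered in the account.
cur.execute("SELECT * FROM SNOWFLAKE.ACCOUNT_USAGE.CREDENTIALS LIMIT 10")
for row in cur.fetchall():
    print(row)

# Databases and schemas covered by automatic sensitive data classification.
cur.execute("SELECT SYSTEM$SHOW_SENSITIVE_DATA_MONITORED_ENTITIES()")
print(cur.fetchone()[0])

conn.close()
```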
SQL, Extensibility & Performance Updates
- Snowflake Scripting output (OUT) arguments (GA), When an output argument is defined in a Snowflake Scripting stored procedure, it can return the current value to a calling program, like an anonymous block or another stored procedure
- Query insights (GA) are messages that detail how query performance could be impacted and offer general recommendations for follow-up actions. These insights can be accessed through the QUERY_INSIGHTS view
- You can use the ORDER BY ALL clause to sort results by all columns in the SELECT list, eliminating the need to specify each column individually (see the sketch after this list)
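Here is a minimal sketch of ORDER BY ALL from Python; the sales table, its columns, and the connection values are hypothetical placeholders.

```python
# Minimal sketch: sort by every column in the SELECT list with ORDER BY ALL
# instead of listing "ORDER BY region, product, total_sales".
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="<warehouse>",
    database="<database>",
    schema="<schema>",
)
cur = conn.cursor()

cur.execute("""
    SELECT region, product, SUM(amount) AS total_sales
    FROM sales
    GROUP BY region, product
    ORDER BY ALL
""")
for row in cur.fetchall():
    print(row)

conn.close()
```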
Data Clean Rooms Updates
- Check your account’s provider activation history for clean room calls via dcr_health.provider_run_provider_activation_history
- See clean room tasks running or recently stopped in your account. The new procedure dcr_health.dcr_tasks_health_check provides information about these tasks
- Analysis reports are now retained for Audience Overlap, SQL, and custom templates when editing or deleting a clean room. Previously, such actions would delete those reports
- Cross-Cloud Auto-Fulfillment must now be enabled before installing a clean room, simplifying the sharing flow and enhancing the experience
Open-Source Updates
- terraform-snowflake-provider 2.4.0 (What's new: Implement User Programmatic Access Tokens, Add PAT integration tests, Add a user_programmatic_access_token resource, Add a user_programmatic_access_tokens data source, Implement rotating PATs, Implement Current Organization Account; Misc: Add an example of recreating the pipe on changing stage attributes, Fix tests and update documentation after reverting changes from BCR 2025_03, Add basic listings to the SDK, Add builders for legacy config DTO, Address pre-push errors, Adjust sweepers and test workflows, Mark DecodeSnowflakeID function as legacy, Add tests for the Plugin Framework (test custom type with metadata, test enum handling (suppression and validation), test optional computed in plugin framework, test optional with backing field, test parameters handling (multiple variants), test zero values in plugin framework), Use existing schema descriptions instead of string equivalents, Update labels in the repository)
- terraform-snowflake-provider 1.2.3 (Misc: Add Snowflake BCR migration guide, Set version to v1.2.3; Bug fixes: Fix data types parsing for functions and procedures with 2025_03 Bundle, Introduce a new function and procedure parsing function)
- terraform-snowflake-provider 2.3.0 (What's new: Add programmatic access token support to SDK, Add support for PROGRAMMATIC_ACCESS_TOKEN authenticator; Misc: Account modification test assertion, Add Snowflake BCR migration guide, Configure plugin framework in functional tests, Do not build the whole project after the changelog entry, Enable testifylint and fix reported issues, Set up muxing in tests, Small account adjustments; Bug fixes: Fix data types parsing for functions and procedures with 2025_03 Bundle, Introduce a new function and procedure parsing function, Remove unused conversion functions interfering with other tests)
- Modin 0.34.0 (Stability and bug fixes: Preserve dtypes when inserting column to empty frame, Fix name ambiguity for value_counts() on Pandas backend, Add copy parameter to array methods, Log backend switching information with the modin logger, Display 'modin.pandas' instead of 'None' in backend switching information, Implement array_function stub, Update testing suite (use HTTPS for modin-datasets.intel.com, stop calling np.array(copy=None) for numpy<2, allow xgboost to log to root, fix test_pickle by correctly using fixtures, cap mpi4py<4.1 in CI); New features: Consider self_cost in hybrid casting calculator, Support pinning groupby objects in place, Support set_backend() for groupby objects, Support pin_backend(inplace=False) for groupby objects)
- Snowflake VS Code Extension 1.17.0 (Features: Enhanced Cortex AI integration with streaming support for real-time response generation; Bug Fixes: Fixed Snowpark: Debug execution for China deployments, Fixed Snowpark checkpoints loading into Jupyter Notebook files)
- Streamlit 1.47.1 (small bug fix)
- Streamlit 1.47.0 (Enhanced Theming and Customization: Font Weight Control, Categorical Colors for Charts, Dataframe and Heading Customization, Pandas Styler Support; expanded Widget Functionality: Width and Height Parameters, Chat Input Enhancements, Improved Column Configuration, Enhanced LinkColumn, Markdown in Dialog Titles, Spinner with Elapsed Time, Bytes Format for Columns, Proxy Support for Custom Components; Bug Fixes and Other Improvements)
Client, Drivers, Libraries and Connectors Updates
New features:
- Go Snowflake Driver 1.15.0 (Private Preview (PrPr) features: Added support for Workload Identity Federation in the AWS, Azure, GCP, and Kubernetes platforms. This feature can only be accessed by setting the SF_ENABLE_EXPERIMENTAL_AUTHENTICATION environment variable to true, New features and updates: Added support for snake-case connection parameters, Optimized memory consumption during execution of PUT commands)
- JDBC Driver 3.25.1 (Added the ENABLE_WILDCARDS_IN_SHOW_METADATA_COMMANDS parameter to enable using patterns in DatabaseMetaData SHOW … IN … commands, added the OWNER_ONLY_STAGE_FILE_PERMISSIONS_ENABLED parameter which forces the directory that contains the stage files to have owner only permissions (0600))
- JDBC Driver 3.25.0 (Added support for sovereign clouds and removed obsolete issuer checks for Workload Identity Federation)
- Node.js 2.1.1 (Private Preview (PrPr) features: Added support for Workload Identity Federation in the AWS, Azure, GCP, and Kubernetes platforms. This feature can only be accessed by setting the SF_ENABLE_EXPERIMENTAL_AUTHENTICATION environment variable to true, Removed token caching for Client Credentials authentication, This release introduces TypeScript for development: The npm package contains compiled JavaScript code that contains no anticipated breaking changes for driver users)
- ODBC Driver 3.10.0 (Private Preview (PrPr) features: Added support for Workload Identity Federation in the AWS, Azure, GCP, and Kubernetes platforms. This feature can only be accessed by setting the SF_ENABLE_EXPERIMENTAL_AUTHENTICATION environment variable to true, Added support for configuring connection parameters in TOML files)
- Snowflake CLI 3.10.0 (Deprecations: Snowpark processor in the Snowflake Native App Framework, New features and updates: Added support for passing an OAuth token with the --token option, added the ability to suppress new Snowflake CLI version messages, added the following new --format options for outputting data:CSV, which formats query output as CSV, JSON_EXT, which outputs JSON as JSON objects instead of strings, added the --enabled_templating option for the snow sql command that lets you specify which of the following templates to use when resolving variables: Standard (<% ... %>), enabled by default, Legacy (&{ ... }), enabled by default, Jinja ({{ ... }}), disabled by default, added a packages alias for artifact_repository_packages in the snowflake.yml schema, added the snow stage copy @src_stage @dst_stage command for copying files directly between two named stages, added support for the DBT deploy, execute, and list commands)
- Snowflake Connector for Python 3.16.0 (Added the client_fetch_use_mp connection parameter that enables multi-processed fetching of result batches, which usually reduces fetching time, added support for the new Personal Access Token (PAT) authentication mechanism with external session ID, added the bulk_upload_chunks parameter to the write_pandas function. Setting this parameter to True changes the behavior of write_pandas to first write all the data chunks to the local disk and then perform a wildcard upload of the chunks folder to the stage; when set to False (default), the chunks are saved, uploaded, and deleted one by one, added Windows support for Python 3.13, added basic Arrow support for Interval types, added support for Snowflake OAuth for local applications; see the connector sketch after this list)
- Snowflake Python APIs 1.7.0 (Added support to the following methods for specifying the point-of-time reference when you use Time Travel to create streams:PointOfTimeStatement, PointOfTimeStream, PointOfTimeTimestamp)
- Snowpark Library for Python 1.35.0 (New features: Added support for the ai_embed and try_parse_json functions in functions.py; Improvements: Improved the query parameter in DataFrameReader.dbapi (Private Preview) so that parentheses aren't needed around the query, improved the error experience in DataFrameReader.dbapi (Private Preview) for exceptions raised when inferring the schema of the target data source; Snowpark local testing updates: added local testing support for reading files with SnowflakeFile. The testing support uses local file paths, the Snow URL semantic (snow://…), local testing framework stages, and Snowflake stages (@stage/file_path); see the Snowpark sketch after this list)
- Snowpark Library for Python 1.34.0 (Added a new option TRY_CAST to DataFrameReader. When TRY_CAST is True, columns are wrapped in a TRY_CAST statement instead of a hard cast when loading data (see the Snowpark sketch after this list), added a new option USE_RELAXED_TYPES to the INFER_SCHEMA_OPTIONS of DataFrameReader. When set to True, this option casts all strings to max length strings and all numeric types to DoubleType, added debuggability improvements to eagerly validate dataframe schema metadata. Enable it using snowflake.snowpark.context.configure_development_features(), added a new function snowflake.snowpark.dataframe.map_in_pandas that allows users to map a function across a dataframe. The mapping function takes an iterator of pandas DataFrames as input and provides one as output, added a ttl cache to describe queries. Repeated queries in a 15-second interval use the cached value rather than requery Snowflake, added a parameter fetch_with_process to DataFrameReader.dbapi (PrPr) to enable multiprocessing for parallel data fetching in local ingestion. By default, local ingestion uses multithreading. Multiprocessing can improve performance for CPU-bound tasks like Parquet file generation, added a new function snowflake.snowpark.functions.model that allows users to call methods of a model; Improvements: Added support for row validation using an XSD schema via the rowValidationXSDPath option when reading XML files with a row tag using the rowTag option, improved SQL generation for session.table().sample() to generate a flat SQL statement, added support for complex column expressions as input for functions.explode, added debuggability improvements to show which Python lines a SQL compilation error corresponds to. Enable it using snowflake.snowpark.context.configure_development_features(). This feature also depends on AST collection being enabled in the session, which can be done using session.ast_enabled = True, set enforce_ordering=True when calling to_snowpark_pandas() from a Snowpark DataFrame containing DML/DDL queries instead of throwing a NotImplementedError; Snowpark local testing updates: Fixed a bug when processing windowed functions that led to incorrect indexing in results, when a scalar numeric is passed to fillna, Snowflake will ignore non-numeric columns instead of producing an error; Snowpark pandas API updates: Added support for DataFrame.to_excel and Series.to_excel, added support for pd.read_feather, pd.read_orc, and pd.read_stata, added support for pd.explain_switch() to return debugging information on hybrid execution decisions, support pd.read_snowflake when the global modin backend is Pandas, added support for pd.to_dynamic_table, pd.to_iceberg, and pd.to_view; Improvements: added modin telemetry on API calls and hybrid engine switches, show more helpful error messages to Snowflake Notebook users when the modin or pandas version does not match our requirements, added a data type guard to the cost functions for hybrid execution mode (Private Preview) that checks for data type compatibility, added automatic switching to the pandas backend in hybrid execution mode (Private Preview) for many methods that are not directly implemented in pandas on Snowflake, set the type and other standard fields for pandas on Snowflake telemetry; Dependency updates: added tqdm and ipywidgets as dependencies so that progress bars appear when the user switches between modin backends, updated the supported modin versions to >=0.33.0 and <0.35.0 (previously >=0.32.0 and <0.34.0))
- Snowpark ML 1.9.1 (New DataConnector features: DataConnector objects can now be pickled, New Dataset features: Dataset objects can now be pickled, New Model Registry features: Models hosted on Snowpark Container Services now support wide input (500+ features))
- SnowSQL 1.4.4 (Updated openssl to version 3 for Windows)
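To illustrate two of the Python connector 3.16.0 additions, the connector sketch below passes client_fetch_use_mp as a connection parameter and calls write_pandas with bulk_upload_chunks=True. The table name and connection values are placeholders, and passing client_fetch_use_mp directly to connect() is my assumption about how the new parameter is surfaced.

```python
# Minimal sketch: client_fetch_use_mp and bulk_upload_chunks from
# snowflake-connector-python 3.16.0. Names and credentials are placeholders.
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="<warehouse>",
    database="<database>",
    schema="<schema>",
    client_fetch_use_mp=True,  # multi-processed fetching of result batches
)

df = pd.DataFrame({"ID": [1, 2, 3], "NAME": ["a", "b", "c"]})

# Write all chunks to local disk first, then upload the chunks folder to the
# stage with a single wildcard upload instead of one chunk at a time.
write_pandas(
    conn,
    df,
    table_name="DEMO_TABLE",
    auto_create_table=True,
    bulk_upload_chunks=True,
)

conn.close()
```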
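And a companion Snowpark sketch covering try_parse_json (added in 1.35.0) and the TRY_CAST reader option (added in 1.34.0). The raw_events table, the stage path, the column names, and the connection values are hypothetical; the option name is taken verbatim from the release note.

```python
# Minimal sketch: try_parse_json and the TRY_CAST DataFrameReader option.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, try_parse_json
from snowflake.snowpark.types import StructType, StructField, StringType, IntegerType

session = Session.builder.configs({
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}).create()

# try_parse_json returns NULL instead of failing on malformed JSON strings.
events = session.table("raw_events")
parsed = events.select(
    col("event_id"),
    try_parse_json(col("payload")).alias("payload_json"),
)
parsed.show()

# TRY_CAST=True wraps columns in TRY_CAST instead of a hard cast while loading.
schema = StructType([
    StructField("id", IntegerType()),
    StructField("name", StringType()),
])
loaded = session.read.option("TRY_CAST", True).schema(schema).csv("@my_stage/data.csv")
loaded.show()

session.close()
```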
Bug fixes:
- .NET Driver 4.7.0 (Set ConfigureAwait(false) for asynchronous Programmatic Access Token authentications, fixed an issue with the missing OAuthClientSecret parameter provided externally to a connection string when creating sessions that use the MinPoolSize feature)
- Go Snowflake Driver 1.15.0 (Fixed an issue with permission handling for the configuration.toml file)
- JDBC Driver 3.25.1 (Fixed unnecessary exception wrapping during network retries, added retries for protocol_version error during TLS negotiation, fixed an issue with the default trust manager not extending X509ExtendedTrustManager, added a missing log parameter to the Session logs)
- JDBC Driver 3.25.0 (Fixed a bug that prevented TelemetryThreadPool from scaling based on the workload, fixed access token expiration handling for the legacy OAuth flow, removed an obsolete error log on HTTP response checks)
- Node.js 2.1.3 (Fixed an issue with using the Google Cloud Platform (GCP) XML API when useVirtualUrl=true, fixed a permission check for .toml configuration files, fixed unhandled resources after creating a connection to prevent the process from terminating when using external browser authentication, fixed an issue with oauthEnableSingleUseRefreshTokens in the authorization code flow)
- Node.js 2.1.2 (Fixed a TypeScript error that was introduced in version 2.1.1)
- Node.js 2.1.1 (Corrected an issue where Util.getProxyFromEnv incorrectly assumed HTTPS, causing HTTP_PROXY values to be ignored for HTTP traffic (port 80), improved extractQueryStatus to handle cases where getQueryResponse returns a null response, preventing occasional breaks, added ErrorCode to the core instance)
- ODBC Driver 3.10.0 (Fixed an issue with supporting virtual-style domains, fixed an issue that could potentially cause a buffer overflow)
- Snowflake CLI 3.10.0 (Fixed an issue where the snow sql command would fail when snowflake.yml is invalid and the query has no templating, fixed an issue with JSON serialization for the Decimal, time, and binary data types)
- Snowflake Connector for Python 3.16.0 (Fixed write_pandas special characters usage in the location name, fixed the usage of use_virtual_url when building the location for a Google Cloud Storage (GCS) client)
- Snowflake Python APIs 1.7.0 (Fixed a warning: 'allow_population_by_field_name' has been renamed to 'validate_by_name'; restored the behavior of the drop method of DAGOperation such that drop_finalizer must be set to True before the finalizer task is dropped. As a result of changes in the 9.20 Snowflake release, fetch_task_dependents started returning the finalizer task alongside the other tasks that belong to the Directed Acyclic Graph (DAG), which caused the drop method to always drop the finalizer)
- Snowpark Library for Python 1.35.0 (Fixed a bug in DataFrameReader.dbapi (Private Preview) that fails dbapi with process exit code 1 in a Python stored procedure, fixed a bug in DataFrameReader.dbapi (Private Preview) where custom_schema accepts an illegal schema, fixed a bug in DataFrameReader.dbapi (Private Preview) where custom_schema doesn’t work when connecting to Postgres and MySQL, fixed a bug in schema inference that causes it to fail for external stages)
- Snowpark Library for Python 1.34.0 (Fixed a bug caused by redundant validation when creating an iceberg table, fixed a bug in DataFrameReader.dbapi (Private Preview) where closing the cursor or connection could unexpectedly raise an error and terminate the program, fixed ambiguous column errors when using table functions in DataFrame.select() that have output columns matching the input DataFrame’s columns. This improvement works when DataFrame columns are provided as Column objects, fixed a bug where having a NULL in a column with DecimalTypes would cast the column to FloatTypes instead and lead to precision loss)
- Snowpark ML 1.9.1 (Model Registry bug fixes: fix a bug with setting the PAD token when the HuggingFace text-generation model had multiple EOS tokens. The handler now picks the first EOS token as PAD token)
- SnowSQL 1.4.3 (Updated !system command library cleanup. Removed deprecation warning for setuptools)
- SQLAlchemy 1.7.6 (Fixed an issue with get_multi_indexes that assigned the wrong returned indexes when processing multiple indexes in a table)
- Native SDK for Connectors Java 2.2.0 (replaces the SnowSQL tool with the new Snowflake CLI tool, which streamlines development workflows, and updates Java dependencies. For developers, this release introduces several new test builders for handlers, allowing more comprehensive and customizable testing of connector components, including builders for resetting configuration, creating resources, and enabling/disabling resources. The SDK now also provides new in-memory implementations for various services, such as the scheduler, connector configuration, and task management; these in-memory versions are invaluable for unit testing, as they let developers test connector logic without connecting to a live Snowflake instance. Additional new features include: new assertion classes for ingestion configuration and UUIDs, which simplify writing assertions in tests; new classes for integration testing, such as SharedObjects, PathResolver, and ProcedureDescriptor, which provide helpful utilities for managing test objects and procedures; and an implemented endProcess method in InMemoryIngestionProcessRepository, which previously threw an UnsupportedOperationException)
- Native SDK for Connectors Java 2.1.0 (significant new features and behavior changes aimed at improving connector management, configuration, and security. A major enhancement is the introduction of new procedures for managing the connector lifecycle: PUBLIC.RESET_CONFIGURATION() to reset the configuration wizard and PUBLIC.RECOVER_CONNECTOR_STATE(STRING) to reset the connector's state, giving developers more control over the connector's state and configuration. Additionally, the TASK_REACTOR.REMOVE_INSTANCE(STRING) procedure has been added to allow removal of a Task Reactor instance. This release also improves resource management with new callbacks for the PUBLIC.CREATE_RESOURCE() procedure and the introduction of ENABLE_RESOURCE(), DISABLE_RESOURCE(), and UPDATE_RESOURCE() procedures, allowing more dynamic and programmatic management of resources within the connector. From a security and authentication perspective, this version introduces OAuth as an authentication mechanism in the Connection Configuration step of the Wizard, a significant improvement because it removes the need for users to create EXTERNAL ACCESS INTEGRATION and SECRET objects with credentials. Other notable changes in version 2.1.0 include: the adoption of a new approach to identifiers, updates to the example connector to align with the latest SDK release, the explicit specification of Java 11 as the target build version, the addition of a missing grant for the VIEWER and DATA_READER app roles on the Streamlit UI, and corrections to the setup.sql script to prevent failures during application version upgrades or downgrades)
Conclusion
Snowflake’s July 2025 releases mark a deliberate evolution of the platform’s core identity. Beyond its origins as a cloud data warehouse, Snowflake is establishing itself as a comprehensive, intelligent, and secure data cloud — an essential operating system for modern data-driven enterprises.
Three dominant themes emerge from this wave of innovation:
AI, and more AI: the infusion of intelligence is now at the heart of the platform. With the general availability of AI Observability in Cortex and new multimodal embedding capabilities, Snowflake is bringing advanced AI and machine learning directly to where the data lives. This strategy democratizes AI, allowing organizations to perform complex tasks like sentiment analysis and image similarity searches with simple SQL. The integration of Cortex Agents with tools like Microsoft Teams and Copilot is a clear signal that Snowflake aims to push data insights directly into the daily workflows of business users, not just analysts.
Security and governance: there is an unwavering focus on building a bedrock of trust through proactive security and governance. Features like Workload Identity Federation, privatelink-only enforcement, and the automatic classification of sensitive data are critical for enterprises navigating complex regulatory landscapes. These are not simply security add-ons; they are foundational enhancements designed to automate governance and embed zero-trust principles deep within the platform, making it possible to secure data at scale.
An open platform: Snowflake is doubling down on its commitment to openness and developer empowerment. The deep and continued investment in Apache Iceberg™, the ability to run Spark workloads, and the constant stream of updates to the Native App Framework, Snowpark, and various drivers demonstrate a clear vision. Snowflake is not trying to create a walled garden. Instead, it is building a powerful, interoperable ecosystem where developers can use the tools they know to build the next generation of data-intensive applications directly on the platform.
For organizations building a modern data strategy, these updates offer a compelling roadmap. They tackle key challenges: responsible AI use, data trust and security, and empowering developers to innovate freely. These enhancements keep Snowflake a key part of the modern enterprise data stack.
Enjoy the read.
I am Augusto Rosa, a Snowflake Data Superhero and Snowflake SME. I am also the Head of Data, Cloud, & Security Architecture at Archetype Consulting. You can follow me on LinkedIn.
Subscribe to my Medium blog https://blog.augustorosa.com for the most interesting Data Engineering and Snowflake news.
Sources:
- https://docs.snowflake.com/en/release-notes/preview-features
- https://docs.snowflake.com/en/release-notes/new-features
- https://docs.snowflake.com/en/release-notes/sql-improvements
- https://docs.snowflake.com/en/release-notes/performance-improvements-2024
- https://docs.snowflake.com/en/release-notes/clients-drivers/monthly-releases
- https://docs.snowflake.com/en/release-notes/connectors/
- https://marketplace.visualstudio.com/items?itemName=snowflake.snowflake-vsc
- https://docs.snowconvert.com/sc/general/release-notes/release-notes
