🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - Agentic AI Meets Cybersecurity: eSentire’s Atlas AI Powered by Snowflake & AWS
In this video, Matt Marzillo from Snowflake and Dustin Hillard, CTO of eSentire, discuss integrating Snowflake's AI capabilities with AWS for cybersecurity. Matt introduces Snowflake Cortex, which offers batch processing of unstructured data and agentic systems through Cortex Search, Cortex Analyst, and Cortex Agents. Dustin explains how eSentire uses Snowflake to consolidate security telemetry, ingesting around 20TB daily from network, endpoint, and log sources. Their agentic system performs up to 30 tool calls per investigation, achieving 95% alignment with senior analyst decisions while going far beyond the 2-3 tool calls an analyst could make in a typical 10-15 minute investigation. This enables eSentire to expand into new markets such as India and Saudi Arabia through platform licensing, and to leverage Snowflake Intelligence for internal analytics across Salesforce, ServiceNow, and Gong data.
This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Snowflake's AI Capabilities and AWS Integration: From Cortex to Agentic Systems
Hello, everyone. My name is Matt Marzillo. I am a Principal Partner Engineer here at Snowflake. I'm joined by Dustin from eSentire, a customer of both Snowflake and AWS. I will give a fairly brief introduction to what Snowflake offers for AI and how we are starting to see customers integrate Snowflake's AI services with AWS. Dustin will provide a more detailed introduction when it is his turn to present.
Let me start with a quick introduction to what Snowflake is. We have been around for over a decade now. When we first came to market, we grew significant market share by being a really great enterprise data warehouse that was purpose-built for the cloud. Customers who were used to using SQL Server or Oracle on-premises wanted all the benefits of the cloud, the scalability and the low administrative overhead, but with that same experience. Since then, we have grown well beyond the enterprise data warehouse architecture. We support the data lakehouse, and we are doing a lot of work with Iceberg. We also support data mesh architectures, and we have strong capabilities around application development, data sharing, collaboration, and traditional machine learning. The important thing here is our AI story.
It is worth noting that we have been deployed on AWS since the start of Snowflake, and we integrate with dozens of native AWS AI services. The story of how Snowflake integrates with AWS is twofold. First, you deploy Snowflake on top of AWS: we are a SaaS service with S3 and EC2 running under the hood. Second, we integrate natively with many AWS services: S3, Kinesis, Data Firehose, SageMaker, Bedrock, Glue, and the Glue Catalog. The partnership between AWS and Snowflake is strong.
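As a rough illustration of the S3 side of that integration, the sketch below uses snowflake-connector-python to create an external stage over an S3 bucket and bulk-load JSON files from it. The account, bucket, and table names are placeholders (not from the talk), and in practice an IAM-based storage integration would replace inline credentials:

```python
# Minimal sketch: load JSON files from S3 into a Snowflake table.
# All identifiers (account, bucket, table names) are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="my_wh", database="my_db", schema="public",
)
cur = conn.cursor()

# External stage pointing at an S3 bucket (an IAM storage integration
# is the recommended alternative to inline credentials).
cur.execute("""
    CREATE OR REPLACE STAGE telemetry_stage
      URL = 's3://my-bucket/telemetry/'
      CREDENTIALS = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...')
""")

# Bulk-load the staged JSON files into a table.
cur.execute("""
    COPY INTO raw_events
    FROM @telemetry_stage
    FILE_FORMAT = (TYPE = JSON)
""")
conn.close()
```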
Now, diving into Snowflake's AI functionality means looking at Cortex. Snowflake is a true SaaS service. We do not have individual services that you spin up, but rather different functionality, and all of the functionality around AI is branded as Cortex. It is important for customers to remember two things that Snowflake is not: we are not a hyperscaler building data centers, and we are not a lab building foundation models, though we do have some purpose-built custom models. What we are focused on is what I think is the best experience in the market for building data-centric AI systems with your data.
This is accomplished with six different capabilities that generally fall into two buckets. The first is batch processing of unstructured data with large language models, using our AI SQL functionality and document processing: extracting information from scanned documents, labeling images, transcribing audio files, or translating transcripts. Customers often first go to production with AI on those batch processing use cases. The second is building agentic systems centered on their enterprise data. That starts with Cortex Search, our hybrid search service over unstructured data, and Cortex Analyst, which is a differentiator for us: it lets you build context on top of your structured data so you get highly accurate results when you make a request of Cortex Analyst.
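To make the batch-processing bucket concrete, here is a minimal sketch against a hypothetical support_tickets table. The SNOWFLAKE.CORTEX.TRANSLATE and SNOWFLAKE.CORTEX.COMPLETE SQL functions are used as documented at the time of writing; verify names and signatures against current Snowflake docs:

```python
# Minimal sketch: batch-process unstructured text with Cortex AI SQL
# functions. The support_tickets table and its columns are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
)
cur = conn.cursor()

# Translate each ticket to English and attach a short LLM summary, in bulk.
cur.execute("""
    SELECT ticket_id,
           SNOWFLAKE.CORTEX.TRANSLATE(body, 'de', 'en') AS body_en,
           SNOWFLAKE.CORTEX.COMPLETE('mistral-large',
               'Summarize in one sentence: ' || body)   AS summary
    FROM support_tickets
""")
for ticket_id, body_en, summary in cur:
    print(ticket_id, body_en[:40], summary)
conn.close()
```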
You then wrap that together inside a Cortex Agent, which is embedded inside Snowflake. You make a request to that agent, and it orchestrates across many different tools, analysts, and search services, generating a response that can then be materialized anywhere. We have a native UI experience called Snowflake Intelligence, which allows you to interact with all of those Cortex agents inside Snowflake. This is roughly what the agent experience looks like, and it is the visionary direction Dustin is going to speak to: their journey using Snowflake and AWS together for AI. This is where we are seeing a lot of customers move, and it is as simple as taking structured and unstructured data, using our easy-to-use Cortex Analyst and Cortex Search services, wrapping an agent around them, and then using that directly from an application, whether that is Snowflake Intelligence, Slack, Streamlit, or something else.
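The talk does not show API details, but as a hedged sketch of what calling a Cortex agent from an outside application (say, a Slack bot backend) might look like, the snippet below models the request on Snowflake's Cortex Agents REST API. The endpoint path, auth scheme, payload shape, and model name are assumptions to verify against current documentation:

```python
# Hypothetical sketch of calling a Cortex Agent from an application.
# Endpoint path, auth scheme, and payload shape are assumptions modeled
# on Snowflake's Cortex Agents REST API; check current docs before use.
import requests

ACCOUNT_URL = "https://my_account.snowflakecomputing.com"  # placeholder
TOKEN = "..."  # e.g., a programmatic access token

resp = requests.post(
    f"{ACCOUNT_URL}/api/v2/cortex/agent:run",
    headers={"Authorization": f"Bearer {TOKEN}",
             "Content-Type": "application/json"},
    json={
        "model": "claude-3-5-sonnet",  # placeholder model name
        "messages": [
            {"role": "user",
             "content": [{"type": "text",
                          "text": "Which alerts fired most last week?"}]},
        ],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.text)  # the agent's orchestrated answer (streamed in practice)
```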
Now that said, we are starting to see customers say, "We built an agent inside of Cortex. That is great. That is our data agent. But we also want to use that agent as part of a broader agentic system built inside Amazon Bedrock AgentCore." This is what we have been seeing over the last four weeks, with a lot of customers asking about it. We have guides and quickstarts that show you how to set this up. What you are doing is building all of these different AI systems inside Snowflake and then connecting to them from AgentCore through our managed MCP server.
With one orchestrator inside AgentCore, you can bring together all the different AI systems you have inside Cortex and Snowflake, along with anything else you have inside AWS. This gives you a unified experience, with a single orchestrator working across agents and services, whether they are in Snowflake or AWS.
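Purely as a conceptual sketch (none of this code is from the talk), the single-orchestrator idea can be pictured as one tool registry where some entries are backed by a Cortex agent reached through the managed MCP server and others by native AWS services:

```python
# Conceptual sketch only: a single orchestrator dispatching across tools,
# some backed by Snowflake Cortex (via its MCP server) and some by AWS.
# Every function body here is a hypothetical stand-in.
from typing import Callable

def ask_cortex_agent(question: str) -> str:
    """Stand-in for a call to a Cortex agent through the managed MCP server."""
    return f"[cortex-agent answer to: {question}]"

def invoke_aws_service(task: str) -> str:
    """Stand-in for a native AWS action (e.g., a Lambda or Bedrock call)."""
    return f"[aws result for: {task}]"

TOOLS: dict[str, Callable[[str], str]] = {
    "enterprise_data": ask_cortex_agent,   # Snowflake-side data agent
    "aws_action": invoke_aws_service,      # anything else in AWS
}

def orchestrate(plan: list[tuple[str, str]]) -> list[str]:
    """Run a planned sequence of (tool, input) steps through one registry."""
    return [TOOLS[tool](arg) for tool, arg in plan]

print(orchestrate([("enterprise_data", "top customer churn drivers"),
                   ("aws_action", "notify the on-call channel")]))
```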
What we see is that a lot of customers start with batch processing, using AI to supplement unstructured data, and then move toward agentic systems. They then integrate those with broader agentic systems built inside AgentCore. With that, I'll turn it over to Dustin.
eSentire's Security Challenge and the Journey to Snowflake
I'm Dustin Hillard, the CTO, and I lead our technology and product teams at eSentire. The problem we're solving for customers is that everybody needs security, but a full SOC operating 24/7, like Fortune 500 companies have, is very difficult to attain for most companies outside the Fortune 500. It's hard to get the talent and hard to put the technology in place.
Generative AI and agentic AI are obviously adding to this problem. A lot of new threat surfaces are emerging, and the existing tools aren't handling them very well. The attacks are moving faster as well. Anthropic just published a report about how Chinese state actors are able to use these tools to automate a large portion of how attacks are run today. Shadow IT has always been a problem, and with Gen AI it's even larger now, growing the challenges at an even more rapid pace for the security and IT teams that have to protect these environments.
On the positive side, these tools are also enabling us, and that's what I'm going to talk about next: how we're able to use these tools to strengthen the security posture of our customers. eSentire has been around for over 20 years and has around 2,000 customers. We're a leader in delivering managed detection and response, which is basically the capability of our SOC analysts plus our platform looking at security incidents, understanding if they're real, and then stopping them before they cause business damage.
This is about how we're using Snowflake to achieve that. One of the challenges we had before we adopted the Snowflake platform was that we had data from all kinds of different security telemetry sources, our own threat intelligence data, and customer history data. Each of those was siloed and difficult to manage. The first step of our journey with Snowflake was getting all that data into one place where we could easily process it and run our analytics.
That also led us towards building a bunch of tools that would allow us to process that data and lead to better outcomes for our customers via our human SOC analysts. This was the first stage of the process, and we were able to actually reduce costs by moving to Snowflake compared to some of the legacy systems we were using. We also opened up a bunch of new capabilities by having all the data in one place.
Building an Agentic Security Investigation System at Scale
Our architecture has the edge, which is basically what our customers have in their own environments. The core pieces we think about there are network visibility, the agent on endpoints, and log data. Those are all the sources of data we have coming in that help us understand what's going on in our customer environments. We support a lot of the market leaders in terms of how we get that data in. Internal to our platform, we want to take that data, process it, understand and enrich it, and then come to a decision about whether a potential security event is a real security incident.
There are a couple of main components to that. We have a low-code orchestration platform and then the agentic workflow that I'll describe more in a second. The scale that we're operating at is large. On the network side, we see over 2 petabytes worth of raw network data per day. On the agent side, about 2 terabytes of metadata coming off of those endpoint systems, and on the log side, 20 terabytes per day of data.
This is all obviously a large amount of telemetry that we use to understand and process these events. We're ingesting it into Snowflake at a rate of around 20 terabytes per day, with about 10 petabytes of data in total. The key piece now is that we have all that data in one normalized place, plus the ability to build an agentic system that executes on top of it.
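As a simplified illustration of what "one normalized place" means, the sketch below maps source-specific records into a common event shape before loading. The schema and field names are invented for the example and are not eSentire's actual model:

```python
# Illustrative only: map source-specific telemetry (network, endpoint, log)
# into one common event schema before loading into Snowflake.
# Field names are invented for the example, not eSentire's actual model.
from dataclasses import dataclass

@dataclass
class SecurityEvent:
    source: str      # "network" | "endpoint" | "log"
    timestamp: str   # ISO-8601
    host: str
    detail: str

def from_network(rec: dict) -> SecurityEvent:
    return SecurityEvent("network", rec["ts"], rec["src_ip"], rec["signature"])

def from_endpoint(rec: dict) -> SecurityEvent:
    return SecurityEvent("endpoint", rec["event_time"], rec["hostname"], rec["process"])

def from_log(rec: dict) -> SecurityEvent:
    return SecurityEvent("log", rec["@timestamp"], rec["host"], rec["message"])

events = [
    from_network({"ts": "2025-12-01T10:02:11Z", "src_ip": "10.0.0.7",
                  "signature": "beaconing"}),
    from_endpoint({"event_time": "2025-12-01T10:02:14Z", "hostname": "wks-42",
                   "process": "powershell.exe"}),
]
print(events)
```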
A key piece of a successful agentic system is the expertise behind it, and how you encode and automate that expertise so it can be replicated. Our human analysis has been successful for two decades, and what we are finding now is that agentic systems can match the quality of our human analysts. We have been able to build this system with our human security experts designing the prompts and workflows that drive the agentic system on our low-code orchestration platform.
For a typical security investigation, our analysts previously spent 10 to 15 minutes and made perhaps 2 or 3 tool calls in the course of that investigation. Now, with the automated workflows we have in place, we can make up to 30 tool calls and produce a much more comprehensive investigation and report of what was going on in that particular scenario. This gives us a very detailed engagement with our customers, and we are also adding the ability for customers to follow up with their own queries, so they get not only the initial agentic investigation but also the follow-on experience you would expect from consumer AI tools.
All of that sits on top of, and is enabled by, the normalized data in Snowflake. Whichever vendor a customer uses, the data all arrives in a normalized form. Our threat intelligence data, together with all the backend ticket information and customer context, allows these agents to perform very powerful actions at a scale and speed our human analysts cannot match within our time constraints. To protect our customers, we typically try to make a decision in that first 10 or 15 minutes so we can contain the threat and keep it at the first host. Once it spreads beyond that, it becomes more difficult to manage and the business damage goes up. The capability to do these deep investigations in a very short time frame is a key part of what makes this successful.
This is one more click down on what the agent looks like. If you think about the workflow it goes through, it creates an initial hypothesis given a potential security incident coming in from a vendor technology or one of our own detections. It then goes through a loop of evidence collection and hypothesis refinement. It can make a call to CrowdStrike to look at a process tree, query log data for more telemetry, or pull threat intelligence context from previous events in the customer environment. All of these tools are ways to refine that hypothesis, and we keep iterating until we reach a level of confidence sufficient to determine whether this is a true security incident that needs remediation or something that is a false positive or benign.
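A toy rendering of that loop might look like the following. The tools, confidence scores, and threshold are invented for illustration, while the 30-call budget echoes the figure cited earlier:

```python
# Toy rendering of the investigation loop: form a hypothesis, gather
# evidence through tools, refine confidence, and stop at a threshold.
# Tool bodies, scores, and the threshold are invented for illustration.

def crowdstrike_process_tree(alert): return 0.2   # stand-in evidence score
def query_log_telemetry(alert):      return 0.3
def threat_intel_context(alert):     return 0.25

TOOLS = [crowdstrike_process_tree, query_log_telemetry, threat_intel_context]
CONFIDENCE_THRESHOLD = 0.7
MAX_TOOL_CALLS = 30  # the talk cites up to 30 tool calls per investigation

def investigate(alert: dict) -> str:
    confidence, calls = 0.0, 0
    for tool in TOOLS * 10:                 # cycle through available tools
        if calls >= MAX_TOOL_CALLS or confidence >= CONFIDENCE_THRESHOLD:
            break
        confidence += tool(alert)           # each call refines the hypothesis
        calls += 1
    return ("true_positive: remediate" if confidence >= CONFIDENCE_THRESHOLD
            else "false_positive_or_benign")

print(investigate({"rule": "suspicious_powershell", "host": "wks-42"}))
```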
Even after years of tuning with traditional machine learning and filtering approaches, there is still on the order of a 5-to-1 or 10-to-1 false positive to true positive ratio in the incidents we look at; in other words, only about one in every six to eleven surfaced incidents turns out to be real. This automation does a lot of the work down that path and ultimately delivers better outcomes with less effort. One thing I forgot to mention on the previous slide: all that work is only valuable if it is of reasonable quality. You can do a bunch of work, but if the quality does not match what our experts would have done on their own, then it is pretty much worthless.
We have done large-scale internal studies, and we run this on every single investigation in our SOC today. When our most senior analysts judge the output of the agent's investigations, we see 95% alignment with the outcome they would have chosen in a security investigation. This gives us real confidence that it is of value, and that all the additional work and automation driving further output and stronger conclusions aligns with what our experts would have done if they had the time to do it themselves.
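For context, an alignment figure like that can be computed as a simple agreement rate between the agent's verdicts and the verdicts senior analysts assign on review. The labels below are made up:

```python
# Simple agreement-rate computation between agent verdicts and the
# verdicts senior analysts would have chosen. Labels are made up.
agent_verdicts   = ["true_positive", "benign", "benign", "true_positive"]
analyst_verdicts = ["true_positive", "benign", "true_positive", "true_positive"]

matches = sum(a == b for a, b in zip(agent_verdicts, analyst_verdicts))
alignment = matches / len(analyst_verdicts)
print(f"alignment: {alignment:.0%}")  # 75% in this toy; the talk reports 95%
```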
Delivering Enhanced Outcomes: New Markets and Internal Analytics with Snowflake Intelligence
The main impact is that these agentic systems let us deliver much more to customers than ever before. A lot of the talk about agentic systems is about efficiency, cutting down on what humans are doing or needing fewer people to do it. We have a very different perspective: it is not just about reducing how many people you need to do something, but about being able to deliver much more. That is what we are really excited about: the better outcomes we are bringing to customers by doing that.
The last piece is that it has also opened up new licensing models for us. We now have a platform that can deliver agentic-quality investigations, and it does not necessarily need our SOC to do that. So we are going to new markets: we have a partner in India to whom we have licensed the platform. They are the service provider, so it is all hosted in India and compliant from a regional perspective. The data is there, and they provide the service with people there, so it meets the needs of that market.
The next place we're headed with that type of solution is Saudi Arabia, which has similar constraints. Data residency plus the agentic capabilities is allowing us to bring new offerings to service providers that wouldn't have been able to do this by themselves.
I'm seeing that this slide didn't get updated with the content we put in it, so you can imagine what I'm about to tell you, which is that Snowflake Intelligence is awesome. Our teams have been using the platform for the product use cases that serve our end customers. But as our Snowflake footprint has grown and we've become comfortable with the outcomes we can get with it, we've also started bringing a bunch of our internal data into Snowflake itself.
Things like Salesforce data, ServiceNow, finance data, and customer call data from Gong are all now in Snowflake. The Snowflake Intelligence layer that Matt described allows those teams to work directly on that data. They're running analytics use cases and important customer analysis that my R&D teams didn't have time to help them execute. With Snowflake Intelligence, they're able to make a lot of progress on those customer analytics use cases once the data is in.
It's been a good journey with Snowflake Intelligence for us, because it has really made those workflows accessible to non-technical teams working with business-critical data. It has expanded the set of problems we're able to solve. With that, thank you; I appreciate you all coming.
This article is entirely auto-generated using Amazon Bedrock.