🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - Build a Future-Ready SOC: Transform security logging with OCSF and generative AI
In this video, AWS experts and Merck's Allan Umandap demonstrate how the Open Cybersecurity Schema Framework (OCSF) transforms security operations by creating a universal data format for diverse log sources. Merck achieved a 48% reduction in operational overhead and 47% infrastructure cost savings by implementing AWS Security Lake with OCSF. The presentation showcases how Merck reduced incident response time from a full day to just 30 minutes using agentic AI powered by OCSF-normalized data. A live demo illustrates an AI orchestrator analyzing 10 million AWS WAF logs within minutes, automatically correlating business context and generating actionable insights. The session emphasizes OCSF's role in eliminating custom parsers, enabling AI-ready data lakes, and revolutionizing Security Operations Centers through standardized schemas that work seamlessly with advanced AI agents for threat detection and response.
; This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
The Data Challenge: Why Security Teams Are Drowning in Logs
Good afternoon everyone. I'm excited to be here with you today to talk about how you can build a future-ready Security Operations Center with AI by transforming your security logs into OCSF format. Before we begin, please welcome my co-presenters: Allan Umandap, Director of Public Cloud at Merck; Pratima Singh, Senior Solutions Architect for Financial Services and Insurance at AWS; and I am Sushovan Basak, Senior Technical Account Manager for Healthcare and Life Sciences at AWS. Allan and Pratima will be joining us later on stage to discuss Merck's story and a cool agentic AI demo. Now, before we get into the agenda, let me ask you something interesting.
What if I told you that you could reduce your incident response from a full day analysis to just 30 minutes or even less? Please raise your hand if that sounds too good to be true. I can see a couple of hands. Well, let me tell you, Merck did exactly that, and today I'm going to show you how you can achieve the same outcome at your organization using OCSF. We will first discuss the common data challenges you face with different log sources and different log formats. Then I'll introduce the Open Cybersecurity Schema Framework, or OCSF, as a solution.
We will hear Merck's real-world story about how they transformed their central logging using OCSF and how they leveraged OCSF together with AI to reduce their incident response from a full day analysis to just 30 minutes. Finally, you will witness the power of OCSF with a cool agentic AI demo, seeing firsthand how you can revolutionize your Security Operations Center with OCSF and AI. Now, before we begin, let's establish a fundamental truth: security is fundamentally a data problem. The challenge is not about a lack of data; it's about making sense of the data you have.
Let's look at the modern enterprise security landscape. You're no longer managing just one environment. You have your on-premises data center, workloads distributed across multiple cloud providers, and a fast-growing ecosystem of SaaS environments. Now imagine your security team is trying to investigate a potential breach. Your firewall logs sit on-premises, your endpoint data is in a different application, and your cloud data is scattered across multiple cloud providers. To investigate, they need to navigate three or four different systems, each with three or four different log formats. Remember, this is all just for one investigation.
Your analysts become digital archaeologists, spending hours just trying to piece together what happened. Meanwhile, real incidents slip through the cracks because you can't see the full picture. But here's the reality: your best security engineers, the ones who should be spending time building advanced threat detection algorithms, are instead writing endless data parsers. That's because every system speaks a different language with different field names, different structures, and different timestamps. They're mapping Field A in System X to Field B in System Y over and over again. Every security tool you introduce means another complex data pipeline to be managed and another custom parser to be built. It's like having a Formula One pit crew spending their whole day organizing tools instead of changing tires.
What if we could change this? Let's have a system that does all this undifferentiated heavy lifting, performing data transformation to create one single universal data format so that your security engineers can focus on what matters most: threat detection and response. The next challenge is the data tsunami, and it's real. Every application, every network device, and every endpoint is generating more logs than ever before. Sounds familiar, doesn't it? But here's the kicker: it's not just more data, it's more different data. You're drowning in information yet starving for insights. Your detection systems struggle to keep up, as does your budget for storage and compute power to process this huge volume of logs.
Introducing OCSF: A Universal Framework for Security Data
So what's the solution to all this data chaos we've talked about? Meet OCSF, the Open Cybersecurity Schema Framework. It's open source, which takes away all the complications of a proprietary schema. And it's not just a schema; it's a framework for rendering and extending that schema. It isn't generic, either; it's purpose-built for your security team.
Irrespective of the source event, OCSF always maintains the same data structure. You can customize and extend it based on your own needs while keeping the core framework intact. And this is my favorite part: OCSF is source agnostic as well. Whether your logs come from a third-party vendor or from AWS platform-level sources, OCSF always produces one single universal data format. And because it's a universal data format, OCSF always provides the same structure, the same field names, and the same query paths for your security tools to consume.
Now let's see OCSF in action. This is where the magic begins. On your left-hand side, you have your event producers: your security tools, your applications, your network devices. Each one traditionally speaks its own language, creating the data chaos we talked about. OCSF sits in the middle, acting as a universal data translator. But if you're wondering whether you still need to do the mapping between your event producers on the left and the event consumers on the right, the answer is no. Not for AWS platform-level logs. Amazon Security Lake does that for you.
Remember the system we wished for before, the one that does all this undifferentiated heavy lifting of data transformation to create one single universal data format? Here you go: Amazon Security Lake. So now, on your right-hand side, your event consumers—your SIEM, your data lake, your analytical applications, your AI/ML workloads—all receive the data in the same consistent format. Not only that, you pay less for your storage as well, because Amazon Security Lake stores the data in Parquet format, which further compresses your OCSF-formatted log data.
You pay less for your compute as well. You know why? Because Parquet, being a columnar format, inherently provides better query performance, so your Athena queries run much faster than before. Less time to execute a query means less compute, which translates to lower compute cost. Now, before I explain the key benefits, let's recap the data challenges we talked about and see whether these key benefits address all of them.
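To make that concrete, here is a minimal sketch of running an Athena query over an OCSF-normalized Security Lake table from Python with boto3. The database name, table name, and results bucket are placeholders; Security Lake generates region- and source-version-specific names in your account, so substitute your own.

```python
# A minimal sketch (not the presenters' code): querying OCSF-normalized
# VPC flow logs in Security Lake with Athena via boto3. All names below
# are placeholders for the resources Security Lake creates in your account.
import time

import boto3

athena = boto3.client("athena")

QUERY = """
SELECT dst_endpoint.port   AS dst_port,
       COUNT(*)            AS flows,
       SUM(traffic.bytes)  AS total_bytes
FROM amazon_security_lake_table_us_east_1_vpc_flow_2_0          -- placeholder table name
WHERE time > (to_unixtime(current_timestamp) - 3600) * 1000     -- OCSF time is epoch milliseconds
GROUP BY dst_endpoint.port
ORDER BY flows DESC
LIMIT 20
"""

qid = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "amazon_security_lake_glue_db_us_east_1"},  # placeholder
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},              # placeholder
)["QueryExecutionId"]

# Poll until the query finishes, then print the rows (first row is the header).
while athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"] in ("QUEUED", "RUNNING"):
    time.sleep(2)

for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"][1:]:
    print([col.get("VarCharValue") for col in row["Data"]])
```

Because the table is Parquet-backed and columnar, a query like this only scans the few columns it references, which is where the compute savings described above come from.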
First, as I said, OCSF creates one single universal data format that makes you ready to operate in a hybrid cloud and multi-vendor environment, so there is no more visibility gap across different log sources and log formats. It's a one-time mapping at the producer level. That means you set it up once at the producer, not for every security tool that consumes it. No more custom parsing is required at the consumer level because you already did it at the producer level, which eliminates the need for the complex data pipelines we talked about. So now you can focus your security talent on much more strategic work rather than on data transformation.
And finally, and most importantly, it's AI/ML ready. So an AI model can work on clean, consistent data, turning the increasing log volume into valuable insight. This is how you break down your data silos and supercharge your security operation. Now let's hear from Allan how Merck has transformed their central logging using OCSF and AI. Allan, over to you.
Merck's Journey: From Traditional Central Logging to Modern Architecture
Thank you, Sushovan, for providing the background and introducing the challenges that we can solve with OCSF. Good afternoon everyone. I'm Allan Umandap. I lead the public cloud platform team at Merck, and I have been with the team for more than ten years. It has always been our mission to provide an environment for our customers by delivering and enabling critical technologies that support the business needs. Through this process, we have been able to provide secure cloud environments for our users, capable of supporting the whole technology lifecycle. Our team believes that the path to production should be a happy path. It's our goal and mission to make that path the easiest one.
Maybe you're wondering how we are able to do this. It's through our self-service offering, which provides access to the data in a central logging environment, and through clear guidance to our customers. Now let me tell you about our company, Merck & Co., Inc., also known as MSD outside of the United States and Canada. We are a global healthcare company that delivers innovative health solutions through our prescription medicines, vaccines, biologic therapies, animal health products, and technology solutions.
Our heritage speaks to our commitment. For more than 130 years, we have been able to provide hope to humanity through the development of important medicines and vaccines. This is not just our history; it is our ongoing mission.
Sushovan Basak gave us a very nice background about OCSF. I hope that from this presentation you will gather enough information to evaluate and use OCSF for your own purposes. With this diagram, I want to explain how we were able to undertake our transformation journey to OCSF. Is this diagram very familiar to you? For the architects and engineers in this room, if you have deployed and designed an AWS multi-account strategy, this diagram should be very familiar to you.
Every successful landing zone that is deployed, in my experience, would benefit from having a central logging environment. A central logging environment is a setup that helps you collect logs from multiple accounts and multiple regions into a central account, which helps you for the purposes of monitoring, analysis, and auditing. From our AWS managed account, we have CloudTrail set up to send logs into our central logging account in an S3 bucket. From our application accounts, we have AWS WAF logs and flow logs also configured to send logs into our central logging account. Our customers would then be able to access this data so that they can generate their own analysis and insights.
This setup actually performs very well and meets our key compliance requirements. However, we also recognized that there was room for improvement in simplifying this process. Opportunities to simplify it basically involve reducing cost and reducing time spent on operational overhead, so that we can shift our focus more toward strategic platform initiatives.
Transformation Strategy: Four Key Business Drivers and Implementation Timeline
So how are we able to achieve this? There are four key factors from the business perspective. First, we look at opportunities to accelerate rapid incident detection. Rapid incident detection and improving our response to activities related to security and operations would help us react much faster and protect our critical infrastructure quickly. We also wanted to improve our customer experience by enabling AI and ML capability. Modernizing and building our infrastructure to be ready for AI and ML integration helps us position ourselves to leverage advanced analytics to derive business value quickly.
From the IT perspective, we wanted to optimize the cost of running our central logging environment, so we looked at opportunities to reduce our storage costs while also streamlining our processes. From the operational efficiency standpoint, we wanted to shift more to using managed services. As you have seen in the previous diagram, to continue using our central logging environment, we have to set up and use many AWS services. This actually creates a lot of operational overhead for my team. We wanted to shift that overhead back to AWS.
Also, as you increase your AWS usage, the amount of data you are generating increases as well. That growth creates a lot of inefficiency when using the data for query and processing. Another factor is that all of this data is in different log formats. So how did we achieve our transformation? Back in Q3 of 2024, we conducted a comprehensive assessment, looking at the capability and value of the central logging environment.
We worked closely with our AWS partners by asking for and receiving strategic input and key best practices. Back in Q1 of 2025, we started our planning, and our team led the deployment planning. Together, we created a plan that we could execute for optimal central logging capability. In Q1 of 2025, implementation started. We selected and implemented Amazon Security Lake and immediately gained the benefit of it automatically transforming our data into OCSF. We were also able to measure real-world performance against expectations.
In Q4 of 2024, we wanted to shift our focus to finding more opportunities to leverage our modernized central logging environment with the OCSF-normalized data. We looked for use cases that could unlock AI capability so that we could use it for other purposes within our team. This is the modernized central logging environment with OCSF that we have built. As you can see, the application accounts still generate AWS CloudTrail, VPC flow logs, and WAF logs, but the key difference now is that we use Amazon Security Lake to automatically transform our data into OCSF format. From our central logging environment, our users are able to access the data and use it to generate insights and run their analytics. The shift to a managed service approach not only converts logs into OCSF but also created significant infrastructure cost savings and operational efficiency. Moreover, we now have a modernized central logging account with AI-ready data.
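As an illustration only (not Merck's actual deployment), enabling these natively supported sources in an existing Security Lake looks roughly like the following with boto3; Security Lake then handles the OCSF normalization and Parquet storage automatically. The region, source names, and source versions here are assumptions, so check what your Security Lake version supports.

```python
# A minimal sketch: enabling the AWS-native sources mentioned above
# (CloudTrail management events, VPC Flow Logs, WAF logs) in a Security
# Lake that has already been created in the delegated administrator account.
import boto3

securitylake = boto3.client("securitylake")

securitylake.create_aws_log_source(
    sources=[
        # Source names/versions are assumptions -- verify against your region
        # and Security Lake version before running.
        {"regions": ["us-east-1"], "sourceName": "CLOUD_TRAIL_MGMT", "sourceVersion": "2.0"},
        {"regions": ["us-east-1"], "sourceName": "VPC_FLOW", "sourceVersion": "2.0"},
        {"regions": ["us-east-1"], "sourceName": "WAF", "sourceVersion": "2.0"},
    ]
)
```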
Measurable Impact: 48% Operational Efficiency and 47% Cost Reduction
So how did we achieve the operational efficiency gain? With all these changes, we have seen a 48% reduction in the time we spend managing our central logging environment. Nearly half of the operational overhead has been eliminated. Now imagine what this means for our team: we can now focus more on strategic initiatives.
So how did we achieve the 48% efficiency gain? There are actually three driving factors. First, rapid enablement. By shifting to a managed service, evaluating and testing built-in features rather than building new ones, we removed most of the time previously spent enabling new features and improving our central logging solution; that time went from weeks to just hours. Second, reduced operational overhead. With the heavy lifting of log transformation done by Security Lake, we were able to eliminate the custom code used for log processing, and access to the data for our key stakeholders is now available directly from Security Lake. Lastly, ready-to-use access. With the data converted automatically to OCSF in Security Lake, access to data is self-service, and our users can immediately use it to generate analysis and create insights.
Now, the benefits extended beyond operational efficiency to significant cost savings: an infrastructure cost reduction of about 47%. OCSF normalization within Security Lake reduces infrastructure cost by improving data storage, processing, and analysis. With the data coming from the application teams automatically normalized to OCSF and stored in Parquet format, we gain several cost-saving benefits.
Now, what are the driving factors in our 47% cost savings? First, there is the managed service. By moving to Amazon Security Lake, we benefited from economies of scale, and we no longer need to build and maintain custom infrastructure.
We were able to decommission our custom central logging infrastructure and reduce our total cost of ownership. OCSF normalization uses a standard schema that eliminates redundant fields before storage. Within Security Lake, we store the data once, eliminating the need to keep it multiple times in different log formats and reducing our storage footprint.
With the scale at which we operate and the amount of data we generate, storage would normally be a significant cost; however, the exceptional compression that Parquet provides for our OCSF-normalized data helps offset this. Additionally, with the data partitioned and stored in columnar format, our Athena queries run more efficiently. With these four key factors—managed service, OCSF normalization, compressed storage, and reduced compute through query optimization—we were able to generate that 47% savings, which we can now invest in other innovative solutions.
AI-Powered Security Operations: From Days to Minutes with Agentic AI
Now let me shift focus from infrastructure savings to generating operational insights through a more interesting activity we have done recently. After we deployed Security Lake and gained access to our data, we worked with our AI experts to explore a proof of concept that used our logs for security operations. Our goal was to test whether AI could help us investigate security incidents more efficiently. With an agentic AI prototype, we only needed to provide the WAF record ID, and it generated a comprehensive report in less than five minutes. It analyzed the traffic and request patterns, looked at the sources including IP addresses, examined behavioral conditions, and created a detailed report with actionable recommendations.
The AI findings we generated were only possible because of the OCSF standardized schema. The AI was able to correlate security context, threat intelligence, and infrastructure context, generating this report in under five minutes. From a security and operations perspective, this represents a fundamental enhancement in AI-driven threat analysis, empowering teams to investigate faster and more effectively than any manual approach.
Let me look deeper into this process. In the past, investigation was very manual. My team would derive context from historical records, learn and translate different log formats, develop complex queries, and run those queries against partitioned raw data, which usually resulted in long execution times. When the output came back, we had to translate it into meaningful information. With the agentic AI prototype, the orchestrator coordinates child agents that perform the same steps we were doing manually. It automatically gets context from historical records, quickly analyzes logs because they are in OCSF format, develops optimized queries, and runs them against OCSF-normalized data stored in Parquet format, producing very readable, human-friendly insights.
I've shown you how we can generate security insights using AI; now let me show that we can do the same with operational insights.
Right after going to production, one of our customers reached out and reported a spike in their cost usage. Our engineer was able to use AWS MCP servers to query our data. And remember, because OCSF provides a standardized schema that applies across several AWS services, we were able to generate that information very quickly. Without OCSF's universal format, we would have spent hours stitching together all these logs and finding the specific relationship needed to determine the root cause.
The root cause was determined in less than five minutes: our team identified that it was due to a spike in KMS Decrypt data events, because data events had been enabled. With all the changes we have made, modernizing our central logging environment and normalizing our data into OCSF, we were able to generate infrastructure savings and operational efficiency, and to explore more AI capabilities.
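A root-cause check of that shape is straightforward against OCSF API Activity data. The sketch below is an assumption of how such a query might look using the AWS SDK for pandas (awswrangler); the table and database names are placeholders, and the exact source table holding your CloudTrail data events may differ in your setup.

```python
# A minimal sketch (assumed table/field names, not the team's actual MCP
# query): group OCSF API Activity by service and operation to surface the
# kind of spike described above.
import awswrangler as wr

df = wr.athena.read_sql_query(
    """
    SELECT api.service.name AS service,
           api.operation    AS operation,
           COUNT(*)         AS calls
    FROM amazon_security_lake_table_us_east_1_cloud_trail_mgmt_2_0  -- placeholder
    WHERE time > (to_unixtime(current_timestamp) - 86400) * 1000    -- last 24h, epoch ms
    GROUP BY api.service.name, api.operation
    ORDER BY calls DESC
    LIMIT 25
    """,
    database="amazon_security_lake_glue_db_us_east_1",  # placeholder
)
# A sudden jump in kms.amazonaws.com / Decrypt at the top of this list is
# the kind of spike that explained the cost increase described above.
print(df.head(10))
```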
This has generated a lot of interest within my team. As our next step, we are going to put more time into finding opportunities to increase our efficiency, reduce cost, and explore more AI capability. To show you what OCSF can do when it meets advanced AI, I'll pass it over to Pratima.
Understanding OCSF Architecture: Attributes, Objects, Classes, and Categories
Thanks, Allan. You heard from Allan how a single standardized framework has been pivotal in accelerating incident response, essentially reducing the time to detect and respond. When we look at OCSF together with an agentic AI solution, what matters most is the context window of your agent. When you're dealing with different types of log events, diverse schemas, and different attributes that need to be correlated, the agent's context can grow and shrink depending on the information it has to process, and that can result in differing responses depending on how much information it has to work through.
When we look at an unambiguous and standardized framework like OCSF, it becomes easy to load that open source schema onto the agent as a knowledge base and guide the agent to learn from it. So OCSF provides distinct value in the age of agentic AI. Now, how many of us have heard about OCSF before coming to this session? There are a few hands. I'm going to give you a very quick introduction to OCSF, and if you want to ask more questions, we can talk after the session.
At the core of OCSF is an attribute. An attribute is a key-value pair: it has a key that identifies the information the log event is presenting and a value that captures the information the log event is sending through. A collection of related attributes forms an object. Think of an object the way a programmer would: it's the atomic unit that groups a collection of attributes describing a single thing.
Now, a collection of objects becomes a class. A class represents a single log event. Now, you could have multiple different types of sources emitting different types of log events. A single source could emit various types of log events that match up to different classes. For example, if you look at AWS CloudTrail, it emits authentication activity, it emits API activity, and it also emits audit activity as a different class.
But if you look at, say, SSH activity coming through an EC2 instance when you're SSHing into it, it's a single standard log: SSH activity, which falls under a category called Network Activity. A collection of multiple related classes is what we call a category, so you would never see an authentication activity falling under the Network Activity category. There you will find SSH, TCP, HTTP, and FTP activity, everything related to what network sources could be generating.
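To make the hierarchy tangible, here is a hand-written, simplified OCSF-style event expressed as a Python dictionary. The UIDs and values are illustrative only; refer to the OCSF schema browser for the authoritative definitions.

```python
# An illustrative, simplified OCSF event showing the hierarchy described
# above: attributes are key-value pairs, related attributes form objects
# (src_endpoint, dst_endpoint), the whole record is a class (SSH Activity),
# and the class belongs to a category (Network Activity).
ssh_event = {
    "category_name": "Network Activity",
    "category_uid": 4,            # illustrative
    "class_name": "SSH Activity",
    "class_uid": 4007,            # illustrative
    "activity_name": "Open",
    "time": 1733430000000,        # epoch milliseconds
    "src_endpoint": {             # object: a collection of related attributes
        "ip": "203.0.113.10",     # attribute: a key-value pair
        "port": 54312,
    },
    "dst_endpoint": {
        "ip": "10.0.1.25",
        "port": 22,
    },
    "severity": "Informational",
}

# Because every class shares the same structure, the "query path" for a
# source IP is the same no matter which source produced the event.
print(ssh_event["src_endpoint"]["ip"])
```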
OCSF is designed with the goal to be an open standard that is adaptable to any environment, any solution, or any application, while still complementing the existing security controls at hand, so you're not diminishing any value that you already have in your environment. Now, there are two sources of logs there. With a show of hands, tell me if you're able to identify where each one is coming from. Well, if you look at it, these are sources of logs coming from AWS WAF and VPC flow logs.
Now imagine you are in a scenario where you have to find an indicator of compromise, for example an IP address traversing your network stack, coming through layer 7 all the way down to layer 4. The first thing you have to do is make sense of the single flat line that the VPC flow log emits. You would make it JSON parsable, add attributes to capture the values coming through the flow log, and only then would you be able to correlate them and generate some sort of business value or accomplish what you're trying to do, something like the sketch below.
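For illustration, here is the kind of hand-rolled parsing a raw, default-format VPC flow log record forces on you before any correlation can happen. The sample line uses documentation IP ranges, and the helper names exist only for this sketch.

```python
# A small illustration of the manual wrangling a raw, default-format VPC
# Flow Log line requires: split the flat record, name each field yourself,
# and coerce types by hand.
RAW = "2 123456789012 eni-0a1b2c3d 203.0.113.10 10.0.1.25 54312 443 6 10 840 1733430000 1733430060 ACCEPT OK"

FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_log(line: str) -> dict:
    record = dict(zip(FIELDS, line.split()))
    # Hand-rolled type coercion -- exactly the kind of per-source work
    # that OCSF normalization makes unnecessary.
    for key in ("srcport", "dstport", "protocol", "packets", "bytes", "start", "end"):
        record[key] = int(record[key])
    return record

print(parse_flow_log(RAW)["srcaddr"])
```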
That is the kind of data wrangling that you have to do today with raw log sources, which reminds us of what was discussed earlier when it came to different data types and data volumes. But now if I translate this all and map it to an OCSF class, what immediately grabs our attention is how consistent the log schema is. You will find a source IP address in exactly the same location in a WAF activity log and in a VPC flow log. All you need to remember is which table that information is collected in if you have a data lake where all the log sources are going.
You can see the source endpoint IP is exactly where the IP address will be. What's more important is that OCSF is quite opinionated about data types. An IP address will be of the data type IP address. An AWS account ID will be of type string, not a big integer in one log source and a string in another, forcing you to do data type transformations. Another value add is that OCSF is customizable, so you can use the construct of observables to surface the common values you correlate on and pull information from most often.
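Here is a small sketch of what that consistency buys you in practice: the same predicate on src_endpoint.ip works against both the WAF and VPC flow tables, with no per-source parsing. The table names are placeholders for the Security Lake tables in your account, and the sketch uses the AWS SDK for pandas (awswrangler).

```python
# A minimal sketch of hunting one indicator-of-compromise IP across
# layer 7 (WAF) and layer 4 (VPC Flow) tables using the identical OCSF
# field, src_endpoint.ip. Table names are placeholders.
import awswrangler as wr

DB = "amazon_security_lake_glue_db_us_east_1"              # placeholder
TABLES = [
    "amazon_security_lake_table_us_east_1_waf_2_0",        # layer 7
    "amazon_security_lake_table_us_east_1_vpc_flow_2_0",   # layer 4
]

def hunt_ip(ioc_ip: str) -> None:
    # The predicate is identical for both tables -- no per-source field
    # mapping needed, which is the point of OCSF normalization.
    for table in TABLES:
        df = wr.athena.read_sql_query(
            f"""SELECT time, dst_endpoint.ip AS dst_ip
                FROM {table}
                WHERE src_endpoint.ip = '{ioc_ip}'
                LIMIT 100""",
            database=DB,
        )
        print(table, len(df), "matching events")

hunt_ip("203.0.113.10")  # example IoC
```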
Live Demo: Building an Agentic AI Solution for Automated Incident Response
That means you become completely immune to the log sources under the hood or the entity that's emitting the logs. All you care about is what activity you are trying to track or what values you are trying to correlate. Now we are going to pivot into a typical triage scenario. How many security operators here get paged when there's a security event? You have to wake up at 3 a.m., right? So what do you do the first time you get an alert? The first thing you do is try to figure out what you are supposed to do with this alert. You have runbooks, playbooks, or nothing, and you try to figure your way out.
The next thing you do is establish why you should care about that alert. Is it worth waking up at 3 a.m. at night? Maybe it's the production environment and you have to respond, but if it's a sandbox environment and some developers decided to test after a big game night, then you probably don't need to respond just yet because you know it's a quarantined environment and it's not going to allow for lateral movement, so you can pass on it. That's the business context that drives the rate or the speed of response that it needs.
Once you've established the business context, you want to understand who did what, where, when, and why, and that's the meat of everything: the log analysis of what is happening in this environment and what you are supposed to do once you've established it. Finally, has this happened before? It could have. We have historical evidence stored in our ticketing and backup systems, and we want to establish, if this were to happen again, what steps we need to take.
When we put all of that into an agentic AI scenario, we can split all of these up into different agents. This gives you a customizable, unambiguous, and consistent way of operating on these logs and enhancing that log analytics workflow. We'll break it up because there's a demo coming up after this that will explain this particular flow. Say the orchestrator is the security operator, right? The first thing that hits the orchestrator is the alert. It needs to figure out what you are supposed to do in this case. Where is your runbook? Where is your playbook?
Imagine an agent that sprawls out into various sources of data in your environment and figures out where the runbook is, maybe something relevant to an application that's in the picture, maybe something relevant to a resource that's in the picture. Once it has the runbook, it wants to establish what is the business context for this particular alert. The resources in these alerts—why are they important or what is important around them? They could have permissions to confidential information, and you need to trim it right away, or they could be a sandbox environment with dummy data, and maybe this can wait for later.
So the orchestrator now triggers agents to retrieve business context. Once it has the business context, it's going to look at that contextual information and trigger log analysis. Now, log analytics is one of the most human-intensive tasks in an incident response scenario.
Automating human-intensive tasks in an incident response scenario can significantly impact your response time. The amount of time you spend analyzing logs either adds to or reduces your overall response time. These human-intensive, highly repeatable manual tasks are prime candidates to be handed over to an agent, freeing up your security operators to innovate, build better agents, and think of better playbooks that are more interpretable by your orchestrator. Once you have the log information, the orchestrator can interpret it and correlate it back with the business context it received.
Imagine having to manually go through lines of logs and then trying to figure out the context you saw somewhere, or trying to determine where an account came from or where an IP address originates from, all while being pushed to respond faster and faster. Of course, you also need to save the investigation history and figure out how to make this information readily available the next time something similar happens. This seems simple enough, and it's essentially what you would do in a security operations scenario.
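As a toy illustration of that flow (not the presenters' Bedrock-based implementation), the orchestration order can be sketched in plain Python, with each agent reduced to a stub function; in a real build each stub would be an LLM-backed agent with its own tools and data sources.

```python
# A toy sketch of the orchestration flow described above. Each "agent" is
# a plain function so the delegation order is easy to see; names and
# return values are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class Investigation:
    alert: dict
    playbook: str = ""
    business_context: dict = field(default_factory=dict)
    log_findings: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

def playbook_agent(alert):            # "what am I supposed to do with this alert?"
    return f"runbook for {alert['source']} alerts"

def business_context_agent(alert):    # "why should I care?" (criticality, owner, ...)
    return {"account": alert["account_id"], "criticality": "high", "owner": "app-team@example.com"}

def log_analytics_agent(alert):       # "who did what, where, when?" over OCSF data
    return {"blocked_requests": 10_000_000, "verdict": "rate-limited DDoS-like burst"}

def historical_evidence_agent(alert): # "has this happened before?"
    return []  # no prior investigations in this toy example

def orchestrator(alert: dict) -> Investigation:
    inv = Investigation(alert=alert)
    inv.playbook = playbook_agent(alert)
    inv.business_context = business_context_agent(alert)
    inv.log_findings = log_analytics_agent(alert)
    inv.history = historical_evidence_agent(alert)
    # A real orchestrator would now correlate findings with business context,
    # draft recommendations, and persist the investigation history.
    return inv

print(orchestrator({"source": "AWS WAF", "account_id": "111122223333"}))
```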
When we put this into perspective with Merck, we built a proof of concept with them to dive deeper into AWS WAF logs. As a demo, I deployed a demo application and loaded it with a distributed load test that had about 10 million requests blocked by the WAF. Think about having to look through 10 million WAF logs, checking whether each request was blocked or allowed, putting filters in place, running Athena queries, and doing all of that while operating on raw logs. The amount of time that would take is substantial.
Here I have one orchestrator with no investigations at the moment, so I don't have any investigation history yet. The orchestrator has three agents available as tools: the log analytics agent, the business metadata agent, and the historical evidence agent. The first thing I do is tell it that AWS WAF has detected suspicious activity happening in a specific AWS account. I give it no further context. I don't know what this account means in the bigger scenario of my AWS accounts, and I also don't know what activity the WAF has detected. I just know something's happening.
The first thing the orchestrator does is trigger the playbook agent because it needs to work out what it's supposed to do. It finds out there is an account ID in play. I'm looking at WAF and trying to do something, so it then subsequently triggers two more agents. It says, I need to find the business context of that account ID because that's the only resource I have through this alert. Is it important or not? I also need to find out what happened, where, when, and why.
Slowly but surely, it looks up the backend. In my case, I've loaded up values in a DynamoDB table, but in your case, this could be an MCP server in front of your CMDB that you connect to your agent. Now it completes its analysis and tells me what's happening: this is the account you asked me to look up. It gives me all the information about the AWS organization, including the business criticality, data classification, and who I should be talking to if something happened in this account. All of this information comes from data you already have. Then it looks up everything that happened in the WAF. It tells me it found 10 million log lines, and they were all blocked. There is suspicious activity happening: this is a DDoS-style event that has been rate limited, and it presents recommended actions.
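The business-metadata lookup in this demo can be sketched as a simple DynamoDB read. The table name and attribute names below are hypothetical, and in your environment this could just as easily be an MCP server in front of your CMDB.

```python
# A minimal sketch of the business-metadata lookup described above, using
# a hypothetical DynamoDB table and hypothetical attribute names.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("account-business-metadata")  # hypothetical table name

def get_business_context(account_id: str) -> dict:
    item = table.get_item(Key={"account_id": account_id}).get("Item", {})
    # Hypothetical attributes the agent would surface to the orchestrator.
    return {
        "organization": item.get("organization"),
        "business_criticality": item.get("business_criticality"),
        "data_classification": item.get("data_classification"),
        "owner_contact": item.get("owner_contact"),
    }

print(get_business_context("111122223333"))
```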
Mind you, I haven't had to train my agents to do any of that. I've simply put in a prompt that says: look up the WAF log activity and tell me what I can do here, looking at about three hours of data, because you don't want to overload the context window. It summarizes everything into a small investigation history that then goes back to my data store. The next time I trigger this agent and give it the same account ID, it doesn't even need to consult the playbook agent, because it knows through historical evidence that this account ID carries the same metadata that was looked up in the last event. We also did something interesting to capture the cost of that particular investigation: how many agents were triggered and how many tokens were exchanged.
All of this happened in a matter of a couple of minutes. Imagine yourself on the other side of the computer trying to do this. You would take, if not minutes, at least a couple of hours to figure out what's happening and draft out that investigation history. Now all of this is powered by OCSF under the hood. My agents were able to interpret OCSF straight out of the box. They did not hallucinate on the different types of log schemas coming through because they knew exactly which log schema they were working with, or the framework rather, and they understood that it was coming from the WAF log. They understood what it means when we talk about WAF log activity.
Building this proof of concept was straightforward. The OCSF community, recognizing the value of versatility and adaptability, has been growing since Black Hat 2022, when it was first introduced. We have over 1,100 partners supporting it and more than 200 organizations using it. The QR code at the top leads to the open source Slack channel for OCSF; if you have any questions about OCSF, you can ask the contributors directly, and there are community calls you can join if you want to listen in. The second QR code is for Amazon's OCSF Ready specialization partners, which we recently released. These are partners who have established their technical ability to consume OCSF logs produced by AWS services and build solutions on top of them. If you're constrained in the number of people in your business who can do this for you, you can leverage these partners.
As with everything in technology, and accelerated even more by generative AI, everything changes all the time. When we built this solution, Amazon Bedrock Agents was our backend; now it's Amazon Bedrock AgentCore. Think about how you can innovate using AgentCore, which gives you a completely secure, fully managed environment to build generative AI solutions and uses constructs like Identity and Gateway to build isolation between agents. Think about how you can add additional agents: what would benefit your business? What are the bespoke items in the incident response lifecycle that you would need agents for? They could be different from what I've shown you; this is a generalized use case.
Think about MCP integration as well. There is a reason why you may not have everything stored in your AWS accounts: you could be using third-party accounts or services, certain identity services, or certain response services. You can use MCP to integrate all of those solutions into this environment so that it feeds off that information. If you use Amazon's OCSF Ready partners, you're able to consume OCSF out of the box. When we built this, Amazon QuickSight integration wasn't available. Amazon QuickSight gives you a completely managed interface to interact with your generative AI solution, so you don't have to build a front end anymore; you can just hook it in through your MCP integrations with Amazon QuickSight.
The key takeaways from this session are to keep learning about OCSF and to see how it can add value to your log analytics and security operations use cases. Learn more about agentic AI solutions; there's an agentic AI learning resource linked there. We recently released, in fact just yesterday, the AWS MCP server, a managed interface into the AWS API MCP and the AWS knowledge bases. It can help you build agents that directly remediate resources within your environment. You can use that to remediate lower-order environments automatically and then put human supervision or an approval workflow in place to remediate production-grade environments as well. I hope you learned something new today. Thank you so much for attending our session, and please leave us feedback in the session survey in the app. Thank you.
; This article is entirely auto-generated using Amazon Bedrock.