🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - From Code to Cloud: Building AppSec Programs with AWS (SEC222)
In this video, Daniel Begimher and Patrick Gaw from AWS present a comprehensive framework for building Application Security programs. They outline four key phases: planning (stakeholder analysis and goal setting), preparation (code scanning and communicating expectations), execution (threat modeling using the Four Question Stack framework, leveraging Kiro and Amazon Inspector for security scanning), and scale (empowering developers through the Guardians Program and reusable security artifacts). The session emphasizes the "easy button" principle: meeting developers where they are by integrating security tools directly into IDEs and workflows. Live demonstrations showcase Kiro's agentic capabilities with customization documents for automated security scanning, and Amazon Inspector's integration with GitHub repositories. The core message centers on distributed ownership, shift-left security practices, and making security seamless rather than burdensome for development teams.
This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Introduction: Building AppSec Programs with AWS
Hello everyone. Thank you for joining us for this 4 p.m. Thursday session, and welcome to SEC222, From Code to Cloud: Building AppSec Programs with AWS. My name is Daniel Begimher. I'm a Senior Security Engineer at AWS. I've been here for over five years now and have been doing security for the past 13 years across different domains, including incident response and application security. Today I'm here with Patrick Gaw.
Hello everyone, Patrick Gaw here. I'm a Principal Security Engineer on AWS's Global Services Security team, and I've been at AWS about three and a half years. Prior to that, I was VP of Security at a late-stage startup, where I helped build their greenfield security program, and before that I was an AppSec engineer who built and ran security engineering teams and architectures. It's a pleasure to meet you all today.
Cool. So this is a level 200 breakout session. We have 60 minutes to dive deep into AppSec and how you can build AppSec programs in your organizations. Hopefully, by the end of this talk, you'll have some practical tools and frameworks you can go and implement in your environments, regardless of where you are with your AppSec program: whether you already have one or plan to implement a new one.
We're going to start with a quick overview of what application security is. What exactly does AppSec mean? What is an AppSec program? Then we'll discuss the principles of AppSec. From there, we'll dive deep into the AppSec program roadmap, which we've categorized into four phases. We'll start with planning and preparation: which metrics you need to define and collect, and how to communicate with different stakeholders in your organization whenever you implement new AppSec controls. Then we'll move to execution, where we'll show you how to really run threat modeling, code scanning, and other solutions leveraging AWS services, along with some demos. Finally, we'll touch on scale: how you can do all of this at scale. I really hope this session will be valuable and helpful for you. Patrick.
Defining Application Security: Distributed Ownership and Shift Left
Thanks, Daniel. Before we get to the main part of the session, I'd like for us to align on a common understanding of what application security is. Application security is a set of people, processes, and technologies used to evaluate the security properties of software during all phases of the software development life cycle. I'll call special attention to the latter part of that definition, because it gives us some unique insight into how we at AWS view application security. It's really along two dimensions.
The first dimension is this notion of distributed ownership. If you talk to anybody at AWS, from Matt Garman at the very top to individual contributors, they'll tell you security is the top priority. Implied in that is distributed ownership: everybody at AWS who builds and operates services has a sense of responsibility and accountability for securing those services. In particular, when our builder teams build and operate a service, they're responsible for the security of that service, and they're also empowered to make security decisions on behalf of AWS. So that's the first thing: distributed ownership.
The second dimension is the notion of shift left, or integrating good security practices across the entire SDLC. If you had asked me about AppSec 10 years ago, we'd have done bolt-on security after the fact, after we pushed code to production. Today we think about integrating good security practices as early as possible into the SDLC; you've all probably heard the term shift left. So the first dimension is distributed ownership to scale security, and the second is integrating good security practices as early in the SDLC as possible. Daniel is going to touch on a core mechanism we call threat modeling; we'll get into the details and give you a high-level framework and approach for it, but it's a core part of what we do here at AWS.
Now, why is this important? Done right, it helps our builders build software faster and get features and capabilities into your hands as quickly as possible, while still maintaining a high bar for security.
If you integrate best practices as early in the SDLC as possible, you also reduce cost and risk for your organization. Let's face it: it's a lot easier to make changes on a simple architectural diagram than to roll back code that was insecure to start with, and then potentially deal with the fallout of those security issues being exploited. With those time and cost savings, we have more time and money to reinvest into the program for other improvements.
Five Core Principles of AppSec at AWS: From Clear Expectations to the Easy Button
Now, how do we do it? There are five core principles that we at AWS abide by. The first is setting clear expectations. From an organizational perspective, it's understanding what your risk tolerance and risk appetite are, and defining what your security bar looks like. Once you've defined that, you clearly define the security requirements, policies, and standards you're going to drive and enforce across your AppSec program and your portfolio of applications.
Earlier I mentioned distributed ownership: service teams at AWS that build and operate their services are also responsible for the security of those services. We empower them with knowledge through a strong training mechanism. That includes teaching things like threat modeling, so many of our builders, as part of our standard development processes, look at architecture diagrams, identify risks early in design, and build security in rather than waiting until after code is pushed to production. Daniel will get into more detail on that.
So the second core principle is having robust training and communities that empower our developer teams to build securely. The third is automation. We use automation to eliminate undifferentiated manual processes, and once we've defined security requirements, standards, and policies, we use automation to apply and enforce them consistently across our SDLC.
The fourth is metrics. At AWS we incessantly measure everything. Think of metrics like the number of security flaws by severity level, business unit, or business line: they give us a current view of our security risk posture, and they give us a basis for continuous improvement.
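To make the metrics idea concrete, here is a minimal sketch (not an AWS tool) of rolling up scanner findings by severity and by business unit using only the Python standard library. The finding records and field names below are invented for illustration; in practice the data would come from your scanner's API or report exports.

```python
from collections import Counter

# Hypothetical findings export; the field names are illustrative only.
findings = [
    {"severity": "HIGH", "business_unit": "payments"},
    {"severity": "HIGH", "business_unit": "payments"},
    {"severity": "MEDIUM", "business_unit": "storefront"},
    {"severity": "LOW", "business_unit": "storefront"},
]

# Roll up counts along the two dimensions mentioned in the talk.
by_severity = Counter(f["severity"] for f in findings)
by_unit = Counter(f["business_unit"] for f in findings)

print(dict(by_severity))  # counts per severity level
print(dict(by_unit))      # counts per business unit
```

Even a simple roll-up like this gives you a trendable baseline: run it on every scan and the deltas become your continuous-improvement signal.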
The fifth, which you also heard a little about earlier, is organization; by that I mean organizational buy-in across the entire company. As I mentioned, for everyone from Matt down to individual contributors like myself, security is the top priority, and that's key. Beyond these five principles that drive our program, there's one more underlying principle I'll show you next, and it's based on my own past experience.
A number of years ago, when I took on a new tech team and had to reboot its AppSec program, we needed visibility into where the security risks in our application portfolio were. We evaluated a number of application security platforms for things like Static Application Security Testing (SAST) and Software Composition Analysis (SCA). One of our core requirements was how effective the SAST and SCA engines would be. What I didn't consider (hindsight is 20/20) was whether the tool was something the developers would actually use. It was a hard lesson for me.
When we selected and deployed the platform, it took probably three times as long as it should have. The problem was that I was asking our developers to do the build and compilation themselves, then log into a separate platform and upload the artifacts to get the scans done. That was a struggle. I had planned for one year, but it took three years to get 100% coverage and 100% visibility into security risk across the application portfolio.
That lesson translates into this last principle, and you'll see the theme throughout: we want to give developers the easy button. As we think about our AppSec programs, whether we're building a greenfield one or rebooting one we already have, we really want to meet developers where they are. That means not adding extraneous process to what they're doing, whether in tool selection or in how we communicate. We want to give them the easy button.
You'll see that across a variety of dimensions, whether it's technology or simple communication. A lot of this is based on my own personal experience, but there's also a branch of science called behavioral economics. The TL;DR of behavioral studies is that humans are inherently lazy. I am inherently lazy: given the choice between taking action and taking no action, I'd rather take none. That's just how it is. So as we think about AppSec, we really need to drive this home and make it easy for developers.
Planning Phase: Stakeholder Analysis, Goals, and Program Charter
In terms of a practical roadmap, there's nothing fancy here. AppSec, and getting behavioral change to occur, isn't rocket science; it's not so much a technology problem as a human, organizational change-management problem. In the spirit of the easy button, we want to keep it simple for you. There are four phases, and you can use them whether you're starting a greenfield program or rebooting a brownfield one: planning, preparation, execution, and scale. We'll dive into each of these throughout the remainder of the session.
The planning phase is really about stakeholder analysis: identifying and engaging your stakeholders, and understanding who they are and what their businesses are. Keeping business objectives in mind is crucial, because we as a security team want to be a business-enabling function rather than a traffic cop. It's about identifying and engaging stakeholders, understanding the high-level risks, and understanding what keeps them awake at night. From there, it's working with them to establish clear goals and metrics that we can measure and achieve.
From there, it's about preparation: getting code scanning across your environment and your application portfolio to understand what security risks exist in your source code and in third-party packages. We also talked about communicating expectations: defining your risk tolerance, then defining standards, requirements, and policies, and communicating those expectations to your stakeholders.
The third piece is training builders to threat model. There will never be enough security engineers to support the number of developers; having been one, I can tell you it's impossible, even working 24 hours a day, 365 days a year. We want to empower our builders and train them on security so they identify risks in their designs and build securely, rather than waiting until after code is pushed to production.
The execution phase is the steady state. Threat modeling is a standard part of our design processes. We're finding and fixing security issues in the IDE before a line of code ever gets committed to our source code repositories. We're also deploying security testing in pipelines, because we want a defense-in-depth approach to applying controls.
Lastly, scale. Once we have visibility and an ongoing process for seeing what the security risks in our applications are, we can start identifying systemic issues. Then we can build secure design patterns and enable our developers with what we call golden-path design patterns and packages. For example, at Amazon we want to give our customers a consistent authorization experience, so we have a golden-path pattern for that, along with the libraries and packages we all use as part of it. That's how we start thinking about scale.
Lastly, it's about seeking feedback from your program stakeholders, really understanding who they are, and having a good relationship with them. Instead of saying no immediately, it's about how to get to a secure yes. So let's dive into the plan phase. As I mentioned briefly earlier, it's about identifying and engaging your stakeholders. Take inventory of your applications and understand what you're working with to start with: who the owners are, who the businesses are, and the operating context.
The biggest pro tip I'll give you: if you're rebooting a program or building something greenfield, getting leadership buy-in is the most important thing. The second is to find AppSec-friendly or security-friendly organizations and leaders to help you create some quick wins. Engage with those organizations and find applications that are high risk, high value, and high impact if you help there. Get some quick wins and some momentum, and that really starts to accelerate your program. Most of all, it's about reframing your thinking. We'll see some examples and subtle communication differences, but again: meet developers where they are, not the other way around.
Understanding High-Level Risks
When we do the stakeholder analysis, we need to understand the applications we're working with, the business context, and the operating context to get a complete view. Some business units will have different objectives than others you support, so understanding those subtleties helps you build rapport, trust, and relationships with your constituents. You're going to need that, because as an AppSec professional you'll never be able to scale yourself as a human, so you need their help. Understand the parties, and ask them what keeps them up at night and what some of their challenges are. Then think like an adversary at a high level. Look at different business lines, potentially in different industries depending on the size of your company, and understand what avenues a threat actor might use to compromise the confidentiality, integrity, or availability of your systems. Lastly, this is not rocket science: it's doing basic things like establishing clear goals and metrics and getting everybody aligned.
So out of this plan phase, you want to develop a program charter and get buy-in from leaders at the highest level. Whether you're rebooting a program or building a greenfield one, you want to bring your stakeholders along on that journey: include them in the conversation and involve them in defining the program vision, mission, and charter, so they have a sense of ownership and accountability. Once you do that, get some quick wins, build momentum, and then you really start to take off. The key is distributing ownership. It comes back to the initial concept that everybody at AWS owns security: our service teams are responsible and accountable not only for building and operating their services, but for their security as well.
Preparation Phase: Communication Strategy Using the EAST Model
Out of the planning phase, you have a charter, you've brought your stakeholders along, you have a clear vision, mission, and goals, and you have leadership buy-in. Now it's about preparation and getting visibility into your portfolio. Get testing out there. Understand what kinds of security issues are in your source code. Which third-party packages are the most vulnerable? Try to identify systemic issues, then develop mitigations for high-risk, high-impact applications.
So this is kind of interesting. Now we switch to the communication aspect, and remember this notion of the easy button. I like to use a framework called the EAST model. When we engage our stakeholders and communicate with them, think of four letters: EAST. Make it Easy, make it Attractive (gamify it, make it exciting), make it Social, and make it Timely. On the social piece, there's a psychological phenomenon called social proof: when you're in an unfamiliar situation and don't know what to do, the human tendency is to look to your peers, see what they're doing and thinking, and then do the same. You'll see in the upcoming example how reframing our communication drives a lot of behavior change, and part of that is the social proof concept; you'll see it in practice in the next few slides. And make it timely: read the room as you communicate, and understand when the best time is to talk to somebody. Try to identify barriers to action and remove them. It all goes back to the easy button. You can use the EAST model in your communications as you engage with stakeholders.
Going back to the story of my lesson learned: when I was deploying this AppSec platform (we'll call it AppSec Widget), this email looks like probably 80% of the corporate communications that go out. It's not quite as long, but take a moment to read it: it's kind of boring. The thing I'll call attention to is that bold line: action required, enable scanning on your respective source code repositories by a certain date. We're deploying an AppSec platform, and we want developers to start scanning their applications so we get some visibility. But the email is boring, it asks developers to take a lot of actions, and it makes things hard. To me, that's a barrier to action. That's probably what I would have written 10 years ago when I was trying to reboot that AppSec program.
Now let's think about the EAST model in the context of this next email. I'll give you a moment to read it, but there are a couple of things to call out. The first is the notion of making it really easy. If you look at the call to action, we're using defaults: "If you need to opt out of scanning for specific repositories, follow the exception process by January 31st, 2026." By default, we're enabling scanning on repositories so we get visibility, and we're not asking much of our developers; they have to opt out instead of opting in, which makes it easier.
Subtle reframing techniques make a huge amount of difference in terms of the way you communicate and collaborate with your developers. You also see things like the first bold line: "In fact, 90% of builders who tested it thought it was easy." That's the power of making it attractive. You want to gamify it, you want to make it exciting, you want to show that most developers are actually doing this and thought it was a cool and easy-to-use platform. Subtle reframing, as you can tell in the way you communicate, makes a world of difference.
Execution Phase: Threat Modeling with the Four Question Stack Framework
We are here. Now that we've talked about the communication piece, we're going to move to the execution phase. Daniel? Thank you, Pat. We started with the preparation and planning phases, and now we'd like to discuss how you actually execute: how you can leverage frameworks and some of our services to run the execution phase of your AppSec program in your organization.
Let's start with threat modeling. You want to threat model everything. Whether it's a new feature or a new product, start by trying to understand what can go wrong and what the worst case is. When you start a threat model, there are several phases and frameworks you can follow, and I'm going to share some shortly. But first, your goal is to identify risks as early as possible in your software development lifecycle. If you need to make a code change, change the architecture, or include a different service or tool, you want to make those changes on the whiteboard, not after the application is already developed and deployed.
Now, this is an example of a framework: the Four Question Stack. Whenever you run a threat model, you can leverage this or other frameworks out there, but the principles are pretty simple. You start by asking: what are we working on? Whether it's a new service, app, or feature, or an existing service: who are our customers? What is the business logic? What exactly is the expected outcome? Next: what can go wrong? What's the worst that can happen if there's an unauthorized user? Can they exfiltrate sensitive information? Can they impact the integrity of my data? What's the risk to availability if my app is down for several hours, and what's the business impact of that?
Then: what are we going to do about it? Which tools and services can we implement to mitigate the risk? Maybe we can use a different software package, or push a code change. What exactly can we do to mitigate or minimize the risk? Finally, we review: did we do a good job? Did we mitigate everything we wanted to? Did we accept some risks, and can we accept the current state of the application? This process is iterative: you'll learn from other applications and projects, and from your own security issues. Take those learnings and feed them back into the threat modeling process to make sure you're asking the right questions.
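As a rough sketch, the four questions can be captured in a lightweight record that a team fills in during a design review. The field names and example content below are our own invention for illustration, not an AWS-defined schema.

```python
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    # 1. What are we working on?
    working_on: str
    # 2. What can go wrong?
    what_can_go_wrong: list = field(default_factory=list)
    # 3. What are we going to do about it?
    mitigations: list = field(default_factory=list)
    # 4. Did we do a good job? (review notes, accepted risks)
    review_notes: list = field(default_factory=list)

# Hypothetical session for an online ordering app.
tm = ThreatModel(working_on="Online ordering API for a fry shop")
tm.what_can_go_wrong.append("SQL injection exposes past orders")
tm.what_can_go_wrong.append("Hardcoded credentials leak via source control")
tm.mitigations.append("Use parameterized queries")
tm.mitigations.append("Move secrets to a secrets manager")
tm.review_notes.append("Re-review after the next feature iteration")

print(f"{len(tm.what_can_go_wrong)} risks, {len(tm.mitigations)} mitigations")
```

Keeping the record structured like this makes the iteration the speakers describe cheap: each review appends to the same record, and unanswered questions stand out as empty fields.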
Now, this is also how we work at AWS. Whenever we ship a new product, feature, or service, threat modeling is an essential part of the process. To go a step further on what you need for your threat model: first, make sure you have diagrams and documentation, and that you know what the data flows and architecture are. If you don't know how the system looks, how can you threat model it? Next, get your security teams, your developers, and your business stakeholders into the same room, or virtual room, to work together on identifying risks and mitigations.
Then we need to document. As mentioned before, we want to document everything, and there are tools that can help. One example is Threat Composer, an open-source tool available in our GitHub repository; I'll share a link later on. It can help you conduct this process and document the information and decisions you make during your threat model.
Code Scanning in Action: Demonstrations with Kiro and Amazon Inspector
Next, I'd like to discuss another topic: code scanning. You want to scan your code and repositories with static analysis and software composition analysis. I assume most of your teams are already leveraging tools like these in development. We're going to discuss Kiro and Amazon Inspector, and how those tools can help you run some of these AppSec processes.
Let's start with Kiro, our agentic IDE. This is an example of the easy button that Pat mentioned earlier. Kiro can help your builders and developers write code and be more productive, and we know that by now. But it can also help you run code scans and perform security testing and analysis on your code. It really meets developers where they are. We talked before about the easy button: developers use IDEs to write code, so why not include these tools in the IDE, making it easy for them instead of adding another process, screen, or window they need to open to follow an AppSec process?
Next, I'd like to show you Kiro in action with a short demo of how you can leverage it for AppSec. But just before jumping into the demo, a quick question: who likes french fries? Raise your hand. All right, I like french fries. I probably need to eat fewer of them, but I'm a big fan. If I weren't a security engineer, I would probably open my own online french fry shop. So that's what I did for this demo.
You can see here Fry Factory, my online shop for selling french fries, and I have different types of fries: classic fries, cheese fries, and truffle fries, which are actually my favorite. This is an AppSec talk, though, so if you go to the My Orders section, you can see that with a simple SQL injection query I can potentially extract sensitive information about previous orders: the amount of salt, the level of crispiness, how much ketchup each order had. That's something I'd like to investigate and protect.
Now, let's jump quickly into the source code. This is Kiro, the IDE I mentioned. In the source code of my Python app, I can see that I'm using Flask, and I have several dictionaries that represent my database. If you look down there, there are some hardcoded credentials as well. I think by now we all agree this app is a bit vulnerable. I could use Kiro to point out those issues or ask it to run a security scan, and that's fine, but I actually want to show you something from a different angle, a bit more advanced.
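The demo app's source isn't reproduced here, but the class of bug it illustrates can be sketched with an in-memory SQLite database. The table, rows, and input below are invented for illustration; the point is the difference between string-building a SQL query and parameterizing it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, salt TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "alice", "extra"), (2, "bob", "light")])

# Malicious input of the kind used on the demo's "My Orders" page.
user_input = "alice' OR '1'='1"

# Vulnerable pattern: user input concatenated straight into the SQL string,
# so the OR clause becomes part of the query and every row comes back.
vulnerable = ("SELECT customer, salt FROM orders "
              f"WHERE customer = '{user_input}'")
print(conn.execute(vulnerable).fetchall())   # leaks all orders

# Fix: a parameterized query treats the input strictly as data.
safe = "SELECT customer, salt FROM orders WHERE customer = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # no rows match
```

This is exactly the kind of finding the injection-prevention section of a security standards document can target, and it's a one-line fix when caught in the IDE.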
I'm going to use Kiro customization documents. They let me define prompts that can include standards, security requirements, or templates, and I can reuse them across projects or across my organization. In the Kiro folder, under customizations, I can see several files. First, there's a file defining my documentation standards: how documentation should look and how I want every project's documentation to be structured. Another example covers threat modeling, which we just discussed, and how to automate it: here I can specify STRIDE, along with these risk-scoring metrics, these considerations, and this template.
Another example I want to show is the security standards document, which may be the most relevant here. It defines my security standards, and if you notice, sections two and three relate directly to the issues we saw earlier: secrets management (we saw hardcoded credentials rather than secrets stored in a secure location) and injection prevention (we saw SQL injection run against the sample app). This customization document can be ingested into Kiro, and it will help me run the fixes.
Great. I actually want to take it a step further and use Kiro agents. Kiro agents let me define different agents with different roles, and those agents can take the customization documents as input. Here we have a threat modeler agent whose prompt reads: conduct threat modeling using STRIDE; your role is to analyze the application and identify security issues.
The agent definition specifies which actions this agent is allowed to take, and I'm also ingesting the same steering documents I showed you earlier as a resource. This makes it easy to run threat modeling.
Let's jump to our security scanner. I have another agent specialized in running security scans. Its prompt says: you run SAST scans, and you need to run security scans against the project. I'm ingesting the same resources as before, but this time I'm also including a tool: ASH, the Automated Security Helper. You can integrate your own tools with the agents to run your security operations. ASH is an open-source code security scanning tool available in our GitHub, and again I'll share the link for it as well. The idea is to show you that with the power of Kiro agents and MCPs, you can create an end-to-end integration that combines your business context, security standards and requirements, your project, and the role and prompt you give the agent.
Let's see how it looks in action. I'm going to call our agent to run a security scan. Since it already has the context, I don't need to provide any additional guidance. I call the security agent I've defined here, the MCP loads, and I run a security scan with the simple prompt "run a security scan."
This takes a couple of moments. Eventually the scan runs, triggers my tool, follows my security standards and the agent's guidance, and produces a contextual security report based on the findings in the repository, what my tool found, my prompt, and the security standards and instructions. We can see the secrets management and SQL injection issues again.
Next, I want to take it to the next level. I can open the report, review it, and share it with my developers so they can review it, but we can take it one step further and use Kiro again to take this report and give us specific, actionable steps to run against our code repository. Running that will take a couple of moments, but in the end I can see a report with exactly what the issues are, what needs to be fixed, and how I can implement the fixes.
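The report-to-actions step can be sketched as a simple mapping from finding types to remediation guidance. The rule names and hints below are illustrative examples, not output from the demo.

```python
# A minimal sketch of turning scan findings into an actionable fix list,
# in the spirit of asking the agent "what to fix and how". The rule IDs
# and remediation hints are hypothetical examples.
REMEDIATION_HINTS = {
    "hardcoded-secret": "Move the secret to a secrets manager and rotate it.",
    "sql-injection": "Use parameterized queries instead of string concatenation.",
}

def action_items(findings: list[dict]) -> list[str]:
    """Render one actionable line per finding."""
    items = []
    for f in findings:
        hint = REMEDIATION_HINTS.get(
            f["rule"], "Review against your security standards."
        )
        items.append(f"[{f['severity']}] {f['file']}: {hint}")
    return items
```

In the real workflow the agent generates this guidance from the report and the steering documents, but the shape of the output, issue plus concrete fix, is the same.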
This really connects to the same easy button that Pat mentioned before, and we're going to repeat this concept: make sure you're making it easy for the developers. I want to show you another example of an easy button, another tool that can help you run security in your organization. This time it's Amazon Inspector, our vulnerability management tool. Amazon Inspector can integrate with your different AWS accounts and run scans against your containers, your compute resources, and your code repositories. It's really an easy button because again you're not asking anything specifically from your developers. You can integrate Amazon Inspector with GitHub and your GitHub repositories and have centralized visibility over security issues across the organization.
Let's see it in action. Here I'm in the Amazon Inspector console, and I can go to Code Security. I've already integrated Amazon Inspector with my GitHub repository, really with a couple of clicks, and I'm going to create a new scan configuration. There are some parameters that I can change here, such as how often I want the scan to run. I'm leaving it at the default: weekly and on every commit push to main. I can also pick the different security scans, so I'm going to leave Infrastructure as Code scanning, Static Code Analysis, and Software Composition Analysis enabled.
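The console choices just described can be captured as a plain data structure, which is a handy way to review or codify a configuration before rolling it out. This mirrors the options from the demo (frequency, trigger branch, scan types); it is not the actual Amazon Inspector API shape.

```python
# The demo's console choices as a plain data structure. This is an
# illustrative shape for review purposes, not the Amazon Inspector
# API request format.
scan_config = {
    "frequency": "WEEKLY",            # also runs on every push to main
    "trigger_branch": "main",
    "scan_types": [
        "INFRASTRUCTURE_AS_CODE",         # IaC scanning
        "STATIC_CODE_ANALYSIS",           # SAST
        "SOFTWARE_COMPOSITION_ANALYSIS",  # SCA / package vulnerabilities
    ],
    "repositories": "ALL",            # or an explicit repository filter
}

def validate(config: dict) -> bool:
    """Basic sanity check: at least one scan type, all of them known."""
    allowed = {"INFRASTRUCTURE_AS_CODE", "STATIC_CODE_ANALYSIS",
               "SOFTWARE_COMPOSITION_ANALYSIS"}
    return bool(config["scan_types"]) and set(config["scan_types"]) <= allowed
```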
I can also filter which repositories I want to run the scan against. I'm just going to run it against all of them.
Now, in this case, it will just run on the next weekly period. I think I picked Monday here, so instead of waiting until next Monday, we can run an on-demand scan. A couple of minutes later, I can go into my scan and see a full detailed report of my security issues. Since it scanned both the code and the packages, I will find different types of findings.
Here we see an example of hardcoded credentials, what exactly the issue is, and how it can be fixed. Then we have another example, this time a package vulnerability, and which version fixes it in this specific case. So Amazon Inspector is a very easy-to-use tool that hopefully can be beneficial for you as well.
Scaling AppSec: Empowering Developers Through the Guardians Program and Key Takeaways
Now that we've discussed some execution steps, shown you how you can use Kiro and Amazon Inspector, and covered threat modeling, let's talk about how we can scale. We can of course scale with Amazon Inspector and Kiro as well, but I want to dive deep into the scaling mechanisms. First, and I think Pat mentioned this a bit before, we want our developers to think about security. Here at AWS, developers are focused on shipping and delivering new, amazing features and products, but they all think about security. They all have security in mind, and they can measure that security. We have different programs around secure development, which I will share in a bit.
Make sure that your developers are aware of the importance of writing more secure code, and make it easy for them with your different tools and automations. Make sure you're not adding another step and making it hard to write more secure code. One example: if you have a solution or a process that developers need to follow and it's reusable, create a reusable artifact that they can consume. For example, if you have standards for how authentication should work in your organization, instead of reinventing the wheel each time, make sure you have an authentication package that they can just import into their project, one that already carries the security requirements and standards you need them to follow.
If you know what a three-tier web app should look like, give them a reference architecture. Give them the infrastructure as code templates. Give them an environment with the security controls already built in, so it will be easy for them to start building. Another point is to make sure they have the autonomy to make decisions. Make sure your developer teams don't always rely on the security teams, which are usually centralized and can be a bottleneck. We really want to let our development teams, the product teams, the builders, make security decisions, and I'm going to give an example here as well.
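The reusable-artifact idea above can be sketched as a tiny "paved road" module: product teams import it instead of reimplementing authentication primitives, and the organization's standards come along for free. The policy values and function names here are hypothetical examples, not an actual AWS package.

```python
# Illustrative sketch of a shared "paved road" artifact: a small module
# that bakes in hypothetical org-wide authentication standards so teams
# consume it rather than rolling their own.
import hmac
import secrets

MIN_TOKEN_BYTES = 32       # example org standard: at least 256-bit tokens
HASH_ALGORITHM = "sha256"  # example org standard: approved hash only

def issue_token() -> str:
    """Issue a session token that meets the entropy requirement."""
    return secrets.token_hex(MIN_TOKEN_BYTES)

def sign(payload: bytes, key: bytes) -> str:
    """Sign a payload with the approved algorithm."""
    return hmac.new(key, payload, HASH_ALGORITHM).hexdigest()

def verify(payload: bytes, key: bytes, signature: str) -> bool:
    """Constant-time comparison, so teams don't write their own."""
    return hmac.compare_digest(sign(payload, key), signature)
```

The design choice is the point: a developer who imports this module gets the token length, the approved hash, and the constant-time comparison without having to know or remember any of those requirements.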
In AWS we have the Guardians Program. Has anyone heard about the Guardians Program before? Okay, we have one hand. The Guardians Program is a program we have internally, and we already have some public resources about it as well. We take developers and builders on the different product teams and train them on security. They act as the security guardians of their team. Those team members know their teammates best, know the product best, and understand the business goals of their team, but they're also trained on how to do security, how to conduct threat modeling, and how to work with the centralized security teams on the security requirements.
That really helps to expedite security and AppSec processes, because those security guardians help the product team run the threat model, guide the documentation, and implement security controls. We still have a centralized security team. We still need to run penetration testing and get sign-off from the centralized security team. But when you have a security guardian on your team, it really helps you make sure you have everything necessary and makes the process much smoother.
We touched on the different phases of building an AppSec program in your organization. We started with planning, where Patrick showed you some examples of how to define goals and metrics, and why it's important to define what success looks like before you even start to run the program. Then the preparation phase covered how you communicate with stakeholders and how you train the teams. We saw the example of the email where changing a couple of words and sentences made a huge difference.
Then we discussed execution: threat modeling, which questions to ask, what to collect, how to use Kiro steering documents, and how to use the Amazon Inspector integration with your code repositories. Then we discussed scaling: making reusable artifacts where possible, the Guardians Program, and distributing ownership to your product teams.
Now, if there is one big takeaway that I want you to take from this talk, it's this: make it easy. Don't make AppSec hard for your teams. Integrate it with their processes. Make it easy for your teams to run the code scans, get you the results, and follow the security requirements. Work backwards from your development teams and integrate your security requirements into their process.
I promised you some resources, so here are the ones we discussed today. You can check out the threat modeling workshop if you want to dive deeper into how to conduct threat modeling. We have Threat Composer, the open source tool I mentioned before, which you can use to run threat modeling against your applications. ASH, the Automated Security Helper, is an open source tool for code security scans, and I showed an example with the MCP agent. And there's the Security Guardians blog if you want to learn more about the Guardians Program.
I really hope this session was helpful for you. We're going to stick around if you have any questions, and please fill out the survey. It's super important for us. Thank you so much.
; This article is entirely auto-generated using Amazon Bedrock.