🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - Robust network security with perimeter protection and zero trust (NET326)
In this video, AWS experts demonstrate building comprehensive perimeter protection for applications using CloudFront, AWS WAF, Shield, Network Firewall, and Verified Access. The session features a fictional customer, SecureShop, protecting both public retail websites and private internal applications across four traffic directions: internet ingress, internet egress, east-west traffic between VPCs, and employee access. Key highlights include deploying security using Kiro CLI (AI-powered command line tool), implementing managed rules for DDoS protection and bot control, configuring private VPC origins in CloudFront, using Network Firewall with Suricata rules and AWS managed threat intelligence from honeypots analyzing 100 million signature patterns daily, and establishing zero-trust access with AWS Verified Access using identity and device posture checks. Live demonstrations show WAF rule deployment, policy testing, and traffic inspection with detailed CloudWatch logging for audit and troubleshooting purposes.
This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Introduction: Building Secure Perimeter Protection on AWS for SecureShop
Today and in the next few days, you'll learn about new capabilities that allow you to build and scale applications in AWS. But remember, that's the first part of your job. The second part of the job is to secure these applications and protect them. Today we're going to talk about how to build secure perimeter protection on AWS. I'm Shovan Das, and I'm a product manager in the AWS networking team. I lead the area of zero-touch networking and WAN, and with me I have Megha Kande. She leads the CloudFront team, and she will talk about security. With me, I also have Sohaib Tahir, who is a principal solutions architect, and he will give a cool demo of how to build end-to-end perimeter protection using GenAI and the Kiro CLI. I'm pretty excited about that demo.
Today we'll talk broadly about what security means for an application on AWS versus the perimeter protection needs. Then we'll go into perimeter protection at the edge. Then we'll talk about traffic inspection within the network. And finally, we'll talk about giving end users access to applications and resources hosted on AWS, and then we'll wrap up with a cool demo.
Most of our customers have two kinds of applications or resources hosted on AWS. The first is public applications. Think of your retail website where any user can access it, or your API gateway where anybody can use those APIs and connect to your applications. You need to protect those. But you also have private applications, for example, your HR system running in AWS, your financial dashboards running in AWS, which your employees, your workforce, and your contractors access. These are private applications. Not only that, you have all these applications running on EC2, containers, and RDS instances. Your engineers, developers, and analysts access those instances and perform SSH for their daily jobs. At the end of the day, you need to protect the applications, the data in those applications, and provide robust perimeter protection.
When I think of building a perimeter, I think of four traffic directions. First is the internet ingress. That's where your users connect via the internet, but malicious actors, DDoS attacks, and bad actors can sneak in. So you need to protect that angle. Then your applications are also connecting to the internet, like downloading from GitHub or uploading code and patching. You need to ensure they are accessing authorized websites and that any data they send out is compliant with organizational requirements. The third direction is your applications talking to other applications. You need to ensure that only the right applications are talking to the right applications. That's to prevent lateral movement, where somebody sneaks into one of your applications and then jumps from it into another application.
The final direction is your employees and your workforce accessing applications to get their jobs done. You want to ensure that only the right users, the users with the right privileges, are accessing the applications on a need-to-know basis. Once you have done those, you have built perimeter protection. So what we'll do today in this session, we'll talk about a fictitious customer, SecureShop. They have two kinds of applications. One is the retail website SecureShop.com, and then they have a private application, FinanceCore.com. These applications are hosted in AWS, so they are hosted in VPCs. The applications are spread across multiple VPCs, so they're connected via Transit Gateway. Our customer SecureShop.com, to provide better performance, content acceleration, and caching, is also using CloudFront for that public-facing website. They also have applications and employees on-premises. Those employees and applications also access the VPCs through the Transit Gateway.
First, they will do the perimeter protection against DDoS and other attacks from the internet ingress. Then they will protect all the traffic directions coming from the internet to their application. Then they will protect the east-west traffic between VPCs. And finally, they will protect the traffic and inspect the traffic coming from on-premises and give end users access. Once they complete all these things, they will have complete perimeter protection. The rest of the session will focus on how to protect all these traffic directions and all these different ingress and egress points. Now I'll hand it over to Megha, who'll talk more about CloudFront and security at the edge.
CloudFront and AWS Edge Services: Scale and Infrastructure for DDoS Protection
Thank you, Shovan. Hi everyone, just checking if you can hear me all right. Thank you. How many of you are familiar with CloudFront? That's a fair bit. As you know, CloudFront is the content delivery network for AWS. It is a reverse proxy, and when you think about a CDN or content delivery network, you often think about performance, acceleration, and caching at the edge. However, CloudFront also plays an important role in protecting your applications from large-scale attacks, DDoS attacks, and that is what I want to talk about in this session. My name is Megha Kande. I lead the CloudFront product management team, and let's jump into this.
Let's think about the customer that Shovan talked about, SecureShop. They have an application with some public-facing resources that take in a bunch of traffic from the Internet. Now let's look at the anatomy of this traffic that may be coming to the SecureShop application. Over half of this traffic is bots today, and we know that this bot volume is growing every day with more sophisticated bots coming forth. This is top of mind for SecureShop because not all bot traffic drives business growth, but all bot traffic drives infrastructure cost. So this is something that SecureShop wants to monitor and control pretty closely.
The second thing that is top of mind for SecureShop is the growing volume of DDoS attacks. We know that every year the volume of DDoS grows pretty significantly. In the last two years, DDoS attack volumes have grown 400 percent. What does this mean for an application? When you think about a 20+ Tbps attack that is hitting a particular cloud provider, you have to be concerned about the availability and resiliency of your own application, so that's top of mind.
Finally, SecureShop is an e-commerce application. There are some common threats they have to worry about all the time. Is somebody creating a fraudulent account? Is a legitimate customer account being taken over? Is there some malicious script, cross-site script, that is coming to their application that may expose customer data and make them lose customer trust? All of these things are top of mind for SecureShop, and they want an application security tool that has the scale to prevent attacks, has the coverage to prevent different types of attacks, and is easy to use. Not every application builder is a security expert, and there is no need for them to be.
So how do we solve for this? This is where the AWS edge and CloudFront help. It starts with scale. CloudFront is deployed at the edge of the AWS network. We have 750 points of presence, racks of servers deployed in 100+ cities across 50+ countries. This is a pretty widespread surface that can absorb incoming requests and diffuse them. Essentially, when there is a large volume of traffic that comes into CloudFront, one of the basic things that CloudFront does is spread the traffic across its edge points of presence.
On each of those points of presence, on the same server as CloudFront, you have the AWS Web Application Firewall running. The Web Application Firewall can then take that diffused traffic and figure out whether this is legitimate traffic that should access the application or should be blocked. Along with this, at the edge you also have the Layer 3 and Layer 4 Shield that prevents Layer 3 and Layer 4 DDoS attacks. This is the scale, and when you think about the edge services for AWS, you have CloudFront, you have AWS WAF, you have Shield that are running on this scaled-out infrastructure. That is sitting in front of your applications. This application could be an Application Load Balancer point in the region. It can be an on-premises application that is running in your own data centers. The edge services provide that layer of protection to your services.
Multi-Layer Security at the Edge: From Network Connection to Application Protection
Let's take a deeper look at what functionally goes on here. Edge services take in all the traffic coming from the Internet to your applications. This could be legitimate customer traffic or malicious attack traffic. The first thing that happens is a network connection is established, which is monitored by AWS Shield. When this network connection is being established, Shield examines the IP addresses that a certain request is coming from. AWS constantly maintains a list of malicious IP addresses by looking at all the traffic that comes to AWS. This is a list of known offenders which, according to our databases and data, have launched an attack.
In addition to this, there are also honeypots that AWS has. Sohaib will talk about that a little bit later in this session. If a request is seen coming from this set of known offender IPs, it is blocked at the edge by Shield. That's the first layer. At this stage at the network layer, you're setting up a TCP connection, which is subject to SYN flood attacks. Shield protects against that and examines every packet going through the network layer to make sure that it is not corrupt and is configured correctly using technologies such as checksum. These simple things all come together to make sure that your network layer connection is protected.
Once the network connection is established, CloudFront then starts to terminate the TLS connection at the edge and sets up an HTTP connection. For setting up TLS, CloudFront, working with our security team within AWS, provides a set of advanced ciphers that you can configure for your TLS handshakes. Recently we also launched support for post-quantum key exchange. This makes sure that the key material used in your TLS handshake cannot be harvested now and decrypted later.
TLS handshake is one thing, and another capability that we launched recently is Mutual TLS with client applications. Mutual TLS is a technology that has been around for some time. With CloudFront, you can now have a completely managed version of it where you can control and create a trust store on CloudFront, issue certificates to your clients, and then make sure that the TLS handshake establishes a two-way authentication between CloudFront and your client applications. If SecureShop, for example, has a point of sale somewhere, that machine can authenticate itself to CloudFront before all the data is then sent over the machine and a transaction is then enabled.
Once this HTTP session is set up, your application is open to talk to a client. At that point, there is a new attack surface where incoming requests could be malicious. This is where AWS WAF steps in, which gives you a set of configurable rules to make sure that the requests coming to your application are legitimate and are the requests that you want your application to take. This could include things like rate limiting, how many requests can come to your application. You can look at bot prevention and DDoS management at layer 7.
This is about how you establish connections and how you examine requests coming into the AWS Edge. But there is also a connection established between CloudFront and your application that is running in an AWS data center, and we pay a lot of attention to that connection. Ultimately, if your application endpoint in the region is exposed to the Internet, that could be another attack surface. Of course, you can configure WAF and Shield on the application in the region, but CloudFront also allows you to restrict the requests your application serves so that only requests coming from CloudFront are accepted. This authentication mechanism is one that we built way back when we launched CloudFront.
One such mechanism, developed about 17 years ago, was Origin Access Identity, a signature that CloudFront sends to S3 buckets. But recently, in the last year, we've also developed a way to connect to an ALB, a network load balancer, or EC2 instances that are in private VPCs. If you have a way to shield your application from the Internet and expose it only to connections from CloudFront, you are removing another attack surface from your applications. This represents all the different layers of security that services at the edge provide for your application protection.
AWS WAF Managed Rules: Simplifying Security with Bot Control and Layer 7 DDoS Protection
However, this is a lot to manage. One of the things that's top of mind for us is making sure that these configurations are easy to use and easy to apply. AWS offers AWS WAF with a set of managed rules. These managed rules are constantly curated so you don't have to worry about different types of attacks. You get a packaged rule that you can apply to your application pretty quickly.
Let me walk through a few examples. There are many more, but to secure SecureShop, which is an e-commerce application, there are a few that are pertinent. The first one is a set of baseline rules. For example, there are top 10 security risks identified by the OWASP standard, and we have a managed rule that constantly provides protection against all of those risks. Another example of a basic rule would be checking whether your incoming request is well-formed and whether your URI is correct. All of those things can be applied through a simple basic rule.
Second is a set of managed bot control rules that CloudFront offers. Here you can configure a common bot rule which checks that an incoming request is actually from either a legitimate user or from a set of verified bots. Alternatively, you can have a very targeted bot rule set for your application. For example, in this case for an e-commerce application, there can be cart stuffing bots which can be prevented from accessing your application using a targeted bot rule set.
Other rule sets that are available include DDoS protection rules, which I'll talk about in more detail because we made some interesting changes here. There are also rules that prevent account fraud and account takeover. With these sets of managed rules, AWS WAF makes sure that you have basic security configurations that are available and easily applied.
Let me discuss the DDoS managed rule in more detail because it has evolved as DDoS attacks have evolved over the last few years. We recently launched a Layer 7 DDoS protection managed rule in AWS WAF. This rule is faster, acting within a few seconds versus taking a few minutes in the past. It's more accurate with fewer false positives compared to the previous rule set because it integrates many signals from AWS MadPot.
The new rule also allows you more granular control over when the rule is applied. When we say granular control, what does that mean? First, it lets you set configurations like whether you want to examine every request or only challenge requests that have suspicious DDoS indicators as predicted by the rule. This is a trade-off between efficiency and coverage. It also allows you to configure certain parameters like excluding certain IP addresses because they are trusted or including certain more vulnerable URI patterns.
You can then configure your action, so you can say that if this is a DDoS request, you want to challenge it versus block it. You can have those settings and pick and choose. Anything that is blocked or examined is logged using CloudWatch. This new DDoS Layer 7 rule is available with Shield Advanced.
Private VPC Origins: Securing Applications with CloudFront VPC-to-VPC NAT
Shield Advanced is a bundled, subscription-based product that we have. With Shield Advanced, you have 50 billion requests that can be examined using layer 7 DDoS protection. This is one of the things that we introduced this year. The other factor is security for your origins that are running in private VPCs. I talked about this briefly, but I wanted to double-click into this.
Imagine there is an application, so in the case of SecureShop, they have their home page along with the login and sign-in page that they are running on EC2 instances in a region, and it is fronted by an application load balancer. Now they want to make sure that this home page is not exposed on the Internet. It doesn't have a public IP, but at the same time, they want to make sure that this is fast and can reach global customers for SecureShop with as low latency as possible.
So now with the support for private origins, VPC origins within CloudFront, SecureShop can go to the CloudFront console and quickly configure a VPC Origin connection using CloudFront. What happens behind the scenes is CloudFront sets up a parallel VPC and connects the CloudFront VPC with the customer's VPC using a VPC-to-VPC NAT. All of this is done behind the scenes, completely managed by CloudFront with no additional costs. Then the CloudFront distribution is configured on top of this, so your application is not exposed on the Internet. It can only be accessed by CloudFront over a private ENI connection, and you have removed another attack surface from your applications.
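Configured programmatically rather than through the console, the VPC origin setup described above might look roughly like the following. This is a minimal sketch in boto3-style dicts; the origin name and ALB ARN are placeholders, and the exact call and field names are assumptions based on the CloudFront VPC origins feature rather than taken from this session.

```python
# Sketch: the endpoint configuration a CloudFront VPC origin might use,
# in the dict shape a boto3 cloudfront client call (create_vpc_origin)
# would accept. The name and ALB ARN below are placeholders.

vpc_origin_config = {
    "Name": "secureshop-private-alb",              # illustrative name
    "Arn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
           "loadbalancer/app/secureshop/abc123",   # placeholder ALB ARN
    "HTTPPort": 80,
    "HTTPSPort": 443,
    "OriginProtocolPolicy": "https-only",          # CloudFront -> origin over TLS only
}

# Usage (requires AWS credentials; shown for shape only):
# import boto3
# cloudfront = boto3.client("cloudfront")
# cloudfront.create_vpc_origin(VpcOriginEndpointConfig=vpc_origin_config)
```

The key point is the last field: with the protocol policy set to HTTPS-only and the origin living in a private VPC, the ALB never needs a public IP at all.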
To recap, using the edge services, using CloudFront, using AWS WAF, and using Shield, the AWS services are able to provide a suite of tools to customers that are scalable, have complete coverage to protect against different forms of attacks, and are easy to configure. I would now hand it over to Sohaib, who is going to show us a cool demo of how easily you can set this up.
Demo: Deploying SecureShop Retail Website Security with Kiro CLI and AI
Thanks, Megha. Good evening, everyone. Megha talked about a lot of cool security features. In this demo, we are going to deploy some of those security features using our AI service called Kiro CLI. Kiro CLI is basically what used to be called the Amazon Q CLI. It's an AI agent that resides in your command line terminal, and you can use it with natural language prompts to interact with your AWS resources.
In this demo, we are deploying a retail application for SecureShop from scratch, but you can use Kiro to deploy these edge security features on your existing applications as well. Let's look into our demo architecture. We have a fairly simple architecture here. We have a static retail SecureShop website hosted on Amazon S3, called the Amazon S3 Origin. Our goal here is to implement caching and some of the security protections such as protection against DDoS attacks, SQL injection, cross-site scripting, and so on.
We also want to enforce only HTTPS connections to our application. This is internet-facing, so we only want internet users to be able to use encrypted connections. We also want to enable Origin Access Control to make sure that only authenticated requests are coming from CloudFront to the Amazon S3 origin. Finally, we also want to implement some WAF rules.
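The Origin Access Control requirement above ultimately comes down to a bucket policy: the S3 origin trusts only the CloudFront service principal, and only on behalf of one specific distribution. Here is a hedged sketch of what such a policy looks like; the bucket name, account ID, and distribution ID are placeholders.

```python
import json

# Sketch: the S3 bucket policy that Origin Access Control relies on.
# Only the CloudFront service principal, scoped to one distribution via
# the AWS:SourceArn condition, may read objects. IDs are placeholders.

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontServicePrincipalReadOnly",
        "Effect": "Allow",
        "Principal": {"Service": "cloudfront.amazonaws.com"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::secureshop-site-bucket/*",
        "Condition": {
            "StringEquals": {
                "AWS:SourceArn":
                    "arn:aws:cloudfront::123456789012:distribution/EDFDVBD6EXAMPLE"
            }
        },
    }],
}

print(json.dumps(bucket_policy, indent=2))
```

With this in place, a request that bypasses CloudFront and hits the bucket directly is denied, which is exactly the "only authenticated requests from CloudFront" behavior the demo asks for.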
What I did was start with a very simple prompt with Kiro. I basically told Kiro to deploy this retail web application in my AWS account. Kiro knows the best practices, security best practices, and AWS best practices, so you can actually interact with Kiro and ask, "This is my type of application and I want to deploy security best practices." It will deploy all those rules. I gave it a prompt, and my initial prompt is available if you scan the QR code. Kiro came up with a plan, deployed the application on an S3 bucket, and then configured the CloudFront distribution along with the WAF rules to protect the application against these threats.
I had to do some back-and-forth prompts. I will say that you have to sometimes break larger prompts into smaller prompts so that the AI is more narrow in terms of the API calls it's making and the resources it's looking into.
I did some of that and once I did that, Kiro was able to come up with this final output where I have my application hosted on CloudFront. I'm using the CloudFront-provided domain name here. You can also use your own DNS using Route 53. This is a demo, so the CloudFront domain name is enough. You can see that there's an S3 bucket where my retail static page was hosted, and then Kiro has enabled a bunch of web application firewall rules out of the box.
For example, SQL injection managed rule, cross-site scripting, and then it also deployed a rate limiting rule that can help you absorb a lot of traffic in DDoS attacks. So any time there are more than 200 requests per 5 minutes, it will block those additional requests. And then finally it also deployed a geo-blocking rule to only allow requests from the US in this scenario. It's also looking for bad requests in my HTTP headers. I'm also shipping the logs into CloudWatch, which is one of the best practices. So Kiro implemented that as well so that I can look into the log data for troubleshooting purposes and also for understanding my traffic patterns to adapt my WAF rules.
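The rules the demo describes might look like the following in the dict shape used by the boto3 wafv2 `create_web_acl` call. This is an illustrative sketch, not the exact configuration Kiro generated: rule names, priorities, and metric names are assumptions, and only the 200-requests-per-5-minutes limit and US-only geo filter come from the demo.

```python
# Sketch: AWS WAF rules combining a managed rule group with the custom
# rate-limit and geo-blocking rules described in the demo.

sqli_managed_rule = {
    "Name": "AWSManagedRulesSQLiRuleSet",          # managed SQL-injection protections
    "Priority": 0,
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesSQLiRuleSet",
        }
    },
    "OverrideAction": {"None": {}},                # keep the group's own actions
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "SQLiRuleSet",
    },
}

rate_limit_rule = {
    "Name": "RateLimitPerIp",
    "Priority": 1,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 200,                          # block IPs above 200 requests / 5 min
            "AggregateKeyType": "IP",
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "RateLimitPerIp",
    },
}

geo_allow_rule = {
    "Name": "AllowOnlyUS",
    "Priority": 2,
    "Statement": {
        "NotStatement": {                          # block anything NOT from the US
            "Statement": {"GeoMatchStatement": {"CountryCodes": ["US"]}}
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "AllowOnlyUS",
    },
}

web_acl_rules = [sqli_managed_rule, rate_limit_rule, geo_allow_rule]
```

Note the geo rule is written as "block if not US" via a `NotStatement`, which is how an allow-list of countries is typically expressed in WAF when the web ACL's default action is to allow.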
This is what our final retail website looks like for SecureShop with some product catalog where users can log into it. I also asked Kiro to add some demo traffic or give me the option to generate some attack simulations, so it added these buttons. Once I did that, I was able to generate and simulate some attacks on my publicly available website. Looking at some of the metrics, you can see that I was able to generate around 1,400 requests, and 51 of those were blocked. You can also see some of the top attacks. These metrics are available in the CloudFront console. So if you're using CloudFront, you can just quickly go and look into these. And then you can also see finally that I'm only receiving requests from the US because I have a geo-blocking filter going on.
This is the list of managed rules and custom rules that Kiro deployed behind the scenes to achieve these protections. I have some managed rules, for example, the SQL injection rule, the known bad input rule, and others. And then I also have custom rules which include the rate limit rule and the geo-block rule. By using Kiro and some of the security protection and features that Megha talked about, we are able to secure this SecureShop retail website. As you can see on the screen, it's a secure website only supporting HTTPS connections. We have a WAF policy which has all the different rules that we talked about. We configured a protocol policy that is redirecting all the HTTP traffic into HTTPS, a very simple configuration that Kiro did. And then finally we have Origin Access Control enabled on Amazon S3 Origin to only allow authenticated requests from CloudFront to our S3 bucket.
This brings us towards the end of the demo. You can deploy all of this through the console as well, but now you have options to use AI services. If you want to learn more about how you can use some of our other AI services for deploying secure applications or any of our other networking and content delivery services, you can scan the QR code. There's a blog on this page or this slide, and there's also a Twitch video that you can watch. And today we also launched an AI agent for security in preview. So if you are a security expert and you want to use AI for that purpose, you can also look into that.
AWS Network Firewall: Protecting East-West and North-South Traffic with Managed Rules
With that, I will hand it over to Shovan for walking us through network traffic inspection. Great demo. You touched upon two of my favorite things for simplifying customers' lives: first, using GenAI with Kiro to build and test the environment, and second, managed rules. I will talk more about managed rules in the context of network firewall. In the previous section, you saw how we can use CloudFront and edge security products to protect your internet ingress. Now customers have traffic within the network. For example, they have east-west traffic, traffic coming from one VPC going to another VPC. They have north-south traffic, traffic going from a VPC to the internet. So how can we use network firewall to protect both internet egress as well as the traffic between applications, the east-west traffic?
That's where the network firewall comes into the picture. SecureShop is getting popular. It has more users around the world. It has more traffic now. It has more employees, so there is also east-west traffic. From a network firewall perspective, it's a managed service, so no matter how much your traffic grows, you don't have to do any infrastructure management. Network firewall will scale behind the scenes and handle all your traffic patterns. A network firewall gives essentially three things to you. First, it is based on Suricata, so you can write Suricata rules to inspect your traffic. You can write stateful rules. For example, you can say that you just want encrypted traffic, HTTPS traffic, from your egress VPCs.
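The HTTPS-only egress example above can be sketched as a pair of Suricata-style rules wrapped in the payload shape that AWS Network Firewall's `create_rule_group` API accepts. The SIDs, messages, and the rule group name in the usage note are illustrative, not taken from the session.

```python
# Sketch: Suricata-style stateful rules that allow only outbound TLS on
# port 443 and drop other outbound TCP, in the RulesString form used by
# AWS Network Firewall. SIDs and messages are illustrative.

suricata_rules = "\n".join([
    'pass tls $HOME_NET any -> $EXTERNAL_NET 443 '
    '(msg:"Allow outbound HTTPS"; sid:1000001; rev:1;)',
    'drop tcp $HOME_NET any -> $EXTERNAL_NET any '
    '(msg:"Drop other outbound TCP"; sid:1000002; rev:1;)',
])

rule_group = {
    "RulesSource": {"RulesString": suricata_rules},
}

# Usage (requires AWS credentials; shown for shape only):
# import boto3
# nfw = boto3.client("network-firewall")
# nfw.create_rule_group(RuleGroupName="egress-https-only", Type="STATEFUL",
#                       Capacity=10, RuleGroup=rule_group)
```

Because the first rule matches on the `tls` protocol keyword rather than just the port, traffic that merely runs on 443 without being TLS does not get a free pass.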
Then you can use it to do protocol checking and all those layer 3 inspections. It essentially behaves like an IDS and IPS. It also provides you logs that you can use to monitor your traffic patterns and how your network firewall is behaving. You can use it for audit purposes, troubleshooting purposes, or for compliance checks.
From SecureShop's perspective, they have traffic between VPCs, traffic coming from the internet, going to the internet, and traffic from on-premises. All this traffic comes to the VPC through gateways. A network firewall can act as a bump in the wire. It is integrated with all the AWS gateways natively. For example, it is integrated with Transit Gateway, and you can use it as a bump in the wire for all traffic going in and out of the Internet Gateway or traffic coming from your on-premises through Direct Connect or through VPNs.
Not only that, you can also drop a firewall endpoint in your subnet, and traffic going from the subnet will go to the network firewall endpoint. It will get inspected and then it will go to other subnets or to the internet gateway. When it comes to manageability, network firewalls are integrated with AWS Firewall Manager and your AWS Organization. We already saw that SecureShop is using east-west and north-south traffic patterns, so they have multiple firewall endpoints.
Not only that, they have different environments: staging, production, development, and QA. At the end of the day, these are all different AWS accounts. If you're a security admin managing multiple endpoints across multiple accounts, you want a single pane of glass where you can manage all your rules and policies for all your network firewalls. Firewall Manager provides you that through AWS Organization integration. When it comes to logging, Network Firewall delivers logs to S3, Kinesis Data Firehose, and CloudWatch Logs. You can export these logs and send them to your analytics provider. They will analyze the traffic patterns and give you analytics.
For example, you can see which rule was enacted, where traffic was coming from, and what traffic is getting blocked. You can use all this insight to write more fine-grained policies or rules. Many times, customers tell me that if they know a threat vector, they can write a Suricata rule, but what about when they don't know a threat vector? What about emerging threat vectors? That's when managed rules come into the picture. We have a network of honeypots we call MadPot. They continuously scan the internet, analyze all the threats, and create managed rules for you. They constantly update the managed rules, and you can use those managed rules on your network firewall.
We use this honeypot network to protect Amazon.com, and we provide the same infrastructure to our customers. Through that, you can use active threat defense to protect your applications so that you are protected against all emerging threats. What's unique about our honeypot network is its scale. We have a global footprint, and just like CloudFront and WAF, it's deployed across our global footprint. We have tens of thousands of sensors. These sensors mimic your TCP applications like Telnet and SSH, and your HTTP applications. They behave like human clients and act like decoys in the network.
AWS Client VPN: Providing Secure Remote Access with Authentication and Device Posture
The honeypot network understands signatures and signature patterns, then updates the rules. Through that, you get the protection. At our scale, in a given day we analyze more than 100 million signature patterns and constantly update these firewall rules, so you get the same scale and benefit out of it. In the beginning, we talked about four directions you should protect, and so far we have covered three. Now the fourth direction is human access, your employee access to these applications. When we talk about human access, there are two different use cases. The first is that most of our employees interact with the application through a browser. These are your contractors and your HR people, and they don't do SSH. For them, a browser is fine.
Then you have your engineers who actually want to telnet or SSH into your instances and do maintenance and patching. These are two different use cases, and we'll talk about both.
Most of our customers access AWS network resources through three different patterns. The first is VPN into your on-premises network. Through the on-premises network, they will connect through Direct Connect or Site-to-Site VPN into AWS resources. In this case, access management is provided by your on-premises provider, and we provide the connection back to AWS. However, you can use network firewall to inspect the traffic coming into AWS through Direct Connect or VPN.
Many customers ask why they should go through their on-premises infrastructure at all. Why should they manage that infrastructure? Can they directly VPN into their AWS network instead? For that use case, we provide AWS Client VPN. Then there are customers who say that most of their employees use browsers, so they want to provide them with ease of use through a standard browser and internet-based access, just like any public application. From a security perspective, they want a better security posture with more granular controls. They want to use identity and device posture to provide per-application-based access. For that, we have built AWS Verified Access. Essentially, they are asking for zero trust, and we are providing them zero-trust access.
Let's first jump into AWS Client VPN. In Client VPN, the user needs a client, which we provide. They deploy it on their laptops, and from those laptops they connect to the Client VPN endpoint running in an AWS region. Once the user is authenticated, they can access all the resources in your network in AWS. Let's talk about how we authenticate the user in Client VPN. We have three authentication patterns, and based on your flexibility and needs, you can choose one of them. The first is certificate-based authentication. You can put a certificate on a user's device, and using that certificate they are authenticated to Client VPN and then access your network.
The second is Active Directory. Many of our customers use Active Directory, so we support Active Directory-based integration. The third is SAML: if you're using an identity provider like Okta or Ping that supports SAML, you can use SAML-based authentication as well. Based on this authentication, the user gets access to the network, and you can write granular policies. You can restrict user access to a subset of your network or to a segment of your network. In this case, I'm restricting the user to a /32, and the user belongs to the engineering department.
Many customers ask if they can use device posture. We provide a Lambda Connection Handler, and you can write custom rules in the connection handler. We will run both the identity-based authentication and device posture check through this connection handler. If both checks pass, then the user gets access to the network. In this case, SecureShop is using OS versions and client version checks. They're checking the device posture, and based on that, the user gets access to the network.
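To make the connection-handler idea concrete, here is a minimal sketch of a Client VPN client connect handler Lambda that gates access on device posture, similar to the OS and client version checks SecureShop runs. The event and response field names follow AWS's documented handler schema (schema-version "v1"), but the exact keys, the minimum versions, and the allowed-platform set here are assumptions to verify against current AWS Client VPN documentation (note AWS also requires the function name to start with `AWSClientVPN-`).

```python
# Sketch of an AWS Client VPN "client connect handler" Lambda that checks
# device posture before allowing a connection. Field names follow AWS's
# documented schema-version "v1" handler contract; the version thresholds
# and platform identifiers below are illustrative assumptions.

MIN_CLIENT_VERSION = (3, 9, 0)       # assumed minimum AWS-provided client version
ALLOWED_PLATFORMS = {"win", "mac"}   # assumed allowed OS identifiers


def parse_version(version_string):
    """Turn a dotted version like '3.10.1' into (3, 10, 1) for comparison."""
    return tuple(int(part) for part in version_string.split("."))


def lambda_handler(event, context):
    platform = event.get("platform", "")
    client_version = event.get("client-openvpn-version", "0.0.0")

    # Both the platform check and the client-version check must pass.
    posture_ok = (
        platform in ALLOWED_PLATFORMS
        and parse_version(client_version) >= MIN_CLIENT_VERSION
    )

    return {
        "allow": posture_ok,
        "error-msg-on-failed-posture-compliance": (
            "" if posture_ok else "Device failed OS/client-version posture checks."
        ),
        "posture-compliance-statuses": ["compliant"] if posture_ok else [],
        "schema-version": "v1",
    }
```

If the handler returns `"allow": false`, Client VPN rejects the connection and surfaces the failure message to the user.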
Many customers also ask if they can use Client VPN to provide access to their on-premises resources. They don't want to maintain two VPN solutions, or they want to provide access to other clouds. What happens is that your users get authenticated at the VPN endpoint, and then through Direct Connect or Site-to-Site VPN, that request goes back to your on-premises infrastructure. Based on that, you can have on-premises-based access. We just launched Direct Connect multi-cloud connector, so with that you can also provide access to other clouds. In a nutshell, Client VPN can give you access not only to AWS but to other resources outside of AWS.
AWS Verified Access: Implementing Zero Trust with Identity and Device Signals
Now let's talk about AWS Verified Access, or AVA. AVA is built on zero-trust principles. Let's talk about that because zero trust can mean different things to different people. It's important to understand our perspective and how we define zero trust. From our perspective, zero trust is essentially two things. First, apart from network location, you should use non-network-based signals to provide access to a resource or to a network. In the case of SecureShop, they can use identity-based signals as a non-network signal. They can also use device posture. The more signals you use, the better your authentication mechanism becomes, and the better your security posture becomes.
The second thing is continuous verification. Let's take an example. If a user wants to access application one, the zero-trust principle says to check device posture, check identity, run your policy, and then give access to application one. Moments later, if the same user wants to access application two, then pull in the same signals—identity and posture—because something might have changed in that brief moment. Application two might have a different set of policies, so run those policies and then give the user access to application two. These are the two pillars of our zero-trust principles, and we built AWS Verified Access on these foundations.
AWS Verified Access is a reverse proxy. All HTTP requests come to AWS Verified Access, which terminates the request. Then it pulls down the claims from your identity provider and your device management provider. We have standards-based integration, which means you shouldn't need to change your identity provider or device management (MDM) providers just to move to a zero-trust architecture. You can continue with your existing IT investments and use AWS Verified Access to provide zero-trust access.
We pull the signals, run the policies, and if the policy allows, we connect the user's request back to your VPC. A customer might have hundreds of VPCs and thousands of applications. From an administrative perspective, you only need one single AWS Verified Access instance, and we will connect it on the back end to your application, VPC, or ALB wherever the application is running. We do the back-end stitching and provide point-to-point connectivity, unlike some other solutions that require you to use connectors or publishers. From a security perspective, you simply write policies and grant access, and we take care of the plumbing.
We have partners across the identity space, MDM space, and partners who consume our logs and provide analytics. If your identity provider or MDM is not represented, let us know. We will work with them to get them onboarded. We have standards-based integration and can work with partners to get them integrated. Let's talk about a few key AVA capabilities. First, AWS Verified Access provides an internet endpoint to your private application. In that sense, it is exposed to internet-based threats. The first thing you need to build is a security perimeter. We provide WAF integrations and all the things discussed earlier, such as DDoS protection, SQL injection prevention, cross-site scripting protection, and bot control. All of these are provided on AVA. You first filter the traffic to provide a broad network perimeter, then use AVA's identity and device management-based zero-trust access, and finally provide user access to your applications.
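As a concrete sketch of what that WAF layer looks like: attaching AWS managed rule groups to a web ACL (whether it fronts CloudFront or a Verified Access endpoint) comes down to rule entries like the ones below. The rule group names come from AWS's managed rule catalog; the priorities, metric names, and the idea of passing these to `create_web_acl`/`update_web_acl` are illustrative, not a definitive deployment.

```python
# Sketch of the wafv2 "Rule" structure for attaching AWS managed rule
# groups to a web ACL. Rule group names are real entries from AWS's
# managed rule catalog; priorities and metric names are placeholders.

def managed_rule(name, priority):
    """Build one wafv2 Rule entry referencing an AWS managed rule group."""
    return {
        "Name": name,
        "Priority": priority,
        "Statement": {
            "ManagedRuleGroupStatement": {"VendorName": "AWS", "Name": name}
        },
        # Managed rule groups take OverrideAction instead of Action.
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": name,
        },
    }


rules = [
    managed_rule("AWSManagedRulesCommonRuleSet", 0),           # common threats, incl. XSS
    managed_rule("AWSManagedRulesSQLiRuleSet", 1),             # SQL injection
    managed_rule("AWSManagedRulesAmazonIpReputationList", 2),  # known-bad source IPs
]

# These entries would then be passed as the Rules parameter of boto3's
# wafv2 create_web_acl(...) or update_web_acl(...) calls.
```

The point is that the same rule structure protects both the public retail site (via CloudFront) and the internal applications (via Verified Access).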
The second thing I want to touch on is that AVA also provides you with rich logs. In enhanced logging mode, you can dump all your claims. Later, you can use those claims to write policies. You can write policies, tweak policies, simulate the policies, and see the effect of those policies before checking them in. Many of my customers have said that using this enhanced logging and the policy editor, they have cut down the policy writing time from days to a few hours. It is quite powerful and quite useful.
One other common question is when should you use Client VPN and when should you use Verified Access. The answer is that based on your use case, you will use both to provide network access. Use Client VPN to give SSH, Telnet, and other access to your engineers. For your employees who need HTTP-based access, use AWS Verified Access. It will give you granular access based on device posture and identity.
Demo: Securing FinanceCore Application with Verified Access, WAF, and Network Firewall
Shovan talked about a lot of cool features of AWS Verified Access. Let's apply some of them to a real-world use case. Going back to our SecureShop customer, they also have an internal application, a financial application we are calling FinanceCore. In this demo, we are going to apply some of the AVA security features to this FinanceCore application to provide secure remote access for internal users. In our demo architecture, we have a user that browses a domain hosted on Route 53, financecore.secureshop.com, and you can see it points to our Verified Access endpoint. On the Verified Access endpoint, we have configured a few things. We have WAF rules, specifically AWS managed WAF rules, which provide bot control and protection against common threats such as SQL injection. Verified Access also requires the user to authenticate against AWS IAM Identity Center, which is our trust provider. You can also use a device trust provider in addition to this, as discussed earlier, but for demo purposes we are only using IAM Identity Center as the trust provider.
Once the user is authenticated successfully, they can navigate to the Finance Core application running behind a private Application Load Balancer. The Finance Core application also needs to access the internet to download security patches and communicate with third-party APIs, so it requires outbound internet egress. We deploy AWS Network Firewall to protect that egress traffic as well.
We ship logs from AWS WAF, Verified Access, and Network Firewall into CloudWatch so we can use them for troubleshooting, analyzing access patterns, and for auditability purposes. With this, I will switch to a live demo. Let me log into my machine. Hopefully the demo gods are kind today. I use Terraform to deploy all of this architecture that I showed you. Let's first navigate to our Finance Core application. When I try to navigate to the Finance Core application, it redirects me to log into my account using IAM Identity Center like I showed in the architecture. I finally remember my password.
Now I'm browsing to the Application Load Balancer that is behind Verified Access. This is my Finance Core dashboard. You can see this is a financial application for internal employees looking at revenue numbers and similar metrics. I'm running this in US East One. This is my EC2 instance for the Finance Core application, and this is all behind Verified Access like I showed you previously.
Let's look into what is happening behind the scenes. I want to look into Verified Access. Verified Access is in the VPC console, so I will navigate to our VPC console and then go to Verified Access. Let me make this a little bigger. If I select this, you can see this is the general information about my Verified Access instance. I have a Finance App Group so I can apply group policy to only allow access for users from that group.
This is my trust provider. I'm using IAM Identity Center as a trust provider. You can also attach a device trust provider, but I'm not going to do that in this demo. I also have logging going to CloudWatch, and I'm passing the detailed log. I'm also passing the trust context into CloudWatch that I can use as part of policy tests. Finally, I have a Verified Access policy or Web ACL assigned to Verified Access. I'll come back to it later to look into the rules. Let's first look into a feature within Verified Access called Launch Policy Assistant.
In order to fine-tune your policies and make sure they are working the way they should, you can use this feature. I can use my email address in this scenario. I select my endpoint. In an actual production environment, you may have different endpoints—you may have an endpoint for production, non-production, and so on. I only have one because I have a demo environment. Then I'm going to use my last authorization result to apply this policy. It takes a little while.
You can see that Verified Access was able to extract trust context from my trust provider, in this case IAM Identity Center. This is all the information that was passed by IAM Identity Center to Verified Access. You can see my email address, my hostname, my IP address. The one I'm interested in is the group, which is this. Right now I have a pretty open group policy which is allowing access to any authenticated user. I want to narrow it down to the Finance group. I only want people who are part of the Finance group to be able to access this application.
I can change the policy. I actually pre-created the policy. I use Terraform for generating this policy as well. You can see I'm using the group ID, which is this, which matches this, to narrow down this policy.
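For reference, Verified Access policies of this shape are written in the Cedar policy language. The sketch below is illustrative only: the `idc` context key and the shape of the groups claim depend on how your trust provider is named and what claims it passes, so copy the real key names and group ID out of the enhanced logs rather than trusting this example.

```cedar
// Illustrative AWS Verified Access policy (Cedar). Assumptions: the IAM
// Identity Center trust provider is referenced as "idc" and passes a
// "groups" set claim; "finance-group-id" stands in for the real group ID
// extracted from the enhanced Verified Access logs.
permit(principal, action, resource)
when {
    context.idc.groups.contains("finance-group-id")
};
```

With a policy like this attached to the Verified Access group, any authenticated user outside the Finance group is denied before reaching the application.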
This matches the policy, and I want to test this policy to make sure that it works. It seems like it works, so I'll hit next and then I can simply apply this policy to my AWS Verified Access instance. That was the Policy Assistant; you can use it for troubleshooting and for fine-tuning your policies. The other thing I want to talk about is Kiro: I also deployed Verified Access through Kiro, so I'm going to jump into Kiro. I preloaded the context for my FinanceCore application into Kiro so that we can interact with this architecture.
I'll say: show me the WAF rules on FinanceCore. I want to mention that you have to name your resources well, because this is a large language model; the better your naming convention, the better it will perform in response to your prompts. This is something to keep in mind. You can see that I ran this prompt and it returned some rules to me. Let me show you the console first: I have the same rules over here. When Kiro responded to the same request, it ran some Describe calls and showed me the rules, and it also tells me what each rule is doing with a very brief summary, so it has that knowledge.
If you're not ready to use Kiro for deploying production applications or making changes to your resources, you can still use it for network discovery like this, which is a very safe use case. I will ask what other WAF rules I should deploy for this financial application. Kiro knows AWS security best practices as well as the Well-Architected Framework, so it suggests some options. You can see it gave me a lot of different rules. I'm not going to go through all of them, but I'm going to pick the first one and say: add this first managed rule. Now I'm actually making changes to resources, so Kiro takes a little more time.
While this is going on, I'm going to jump to my console again, and you can see it was able to do it. If I refresh my page, I should see that rule. You can see this is the rule I added, the SQL injection rule set, which matches exactly. So with a few prompts I'm able to make changes to my resources and apply additional security protections for this internal financial application. The other thing I deployed was Network Firewall, and I want to show you how it is working in this scenario. I want to see the route table, because I'm passing all of my traffic through the Network Firewall endpoint.
I'll say: show me the FinanceCore VPC route tables, ignore unused route tables, and show them in Cisco format, just for fun. Again, this is a discovery call, so I'm learning about the routes in my VPC for the FinanceCore application. It's running a bunch of Describe calls behind the scenes, and you can see it started to print the route tables I have in place. I have a route table associated with my FinanceCore application subnets that sends all traffic to the Network Firewall VPC endpoint. Then I have firewall subnets where the Network Firewall is actually deployed; that subnet has a route to the internet gateway, rightly so. I also have a route table assigned to the internet gateway so that return traffic goes back through the Network Firewall as well.
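The routing pattern just described is the standard insertion model for AWS Network Firewall. Sketched out with illustrative CIDRs and placeholder resource IDs (none of these values come from the demo), it looks roughly like this:

```
App (FinanceCore) subnet route table:
  10.0.0.0/16  -> local
  0.0.0.0/0    -> vpce-... (firewall endpoint)   # all egress goes through the firewall

Firewall subnet route table:
  10.0.0.0/16  -> local
  0.0.0.0/0    -> igw-...                        # inspected traffic exits to the internet

Internet gateway ingress route table:
  10.0.1.0/24  -> vpce-... (firewall endpoint)   # return traffic re-enters via the firewall
```

The ingress route table on the internet gateway is the piece people most often forget; without it, return traffic bypasses inspection.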
If I say show me the traffic flow on both sides, Kiro will find that for me in a moment. You can see that it's showing me how traffic from internet users interacts with this FinanceCore application. I'm not going to go through everything, but you can see it identifies everything (the WAF rules, the Network Firewall rules, the routing) and shows me both inbound and outbound. This is a very quick way to verify your setup. Now I want to go to Network Firewall to show you some of the rules in place in the console. Network Firewall is also part of the VPC console, so I'll quickly jump into that. This is my FinanceCore network firewall, and I have a bunch of different things configured here. The one we are going to look into is the policy.
I have configured a bunch of custom rules and AWS managed rules that I showed and talked about. The AWS managed rules I've enabled here are protecting me against known malware domains. The custom rules that I've created using Suricata are making sure that I can only send encrypted traffic out to the internet. I'm blocking any traffic going on port 80, which is HTTP traffic. I'm also blocking any traffic going out on port 22 because this is a financial application. It should not be making any SSH connections out to the internet, so it's blocking all of these traffic patterns.
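The custom egress rules described above can be expressed in a few lines of Suricata-compatible syntax, which is what Network Firewall stateful rule groups accept. This is a hedged sketch: the SIDs are arbitrary placeholders, and `$HOME_NET`/`$EXTERNAL_NET` assume the default rule variables for the VPC.

```
# Illustrative Suricata-compatible rules for the egress policy above.
# SIDs are placeholders; $HOME_NET/$EXTERNAL_NET assume default variables.

# Block outbound cleartext HTTP (port 80)
drop tcp $HOME_NET any -> $EXTERNAL_NET 80 (msg:"Block outbound HTTP"; sid:1000001; rev:1;)

# Block outbound SSH (port 22) -- the app has no reason to SSH out
drop tcp $HOME_NET any -> $EXTERNAL_NET 22 (msg:"Block outbound SSH"; sid:1000002; rev:1;)

# Allow outbound TLS (port 443)
pass tcp $HOME_NET any -> $EXTERNAL_NET 443 (msg:"Allow outbound HTTPS"; sid:1000003; rev:1;)
```

These sit alongside the AWS managed rule group for known malware domains, which handles the threat-intelligence side.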
Let's test these things. I'm going to go to my EC2 console. I have my FinanceCore application running in the subnet, so if I connect to it using the private IP, I've copied some commands so I can run them. The first thing I'm going to do is call Amazon.com over HTTP. This should be blocked. As you can see, this was blocked by the network firewall. Then if I try to curl HTTPS, it should work. You can see this worked. Finally, if I try to connect to a known malware domain, it should also be blocked based on my network firewall rule. You can see that also happened here.
With that, this brings us toward the end of the demo. You can also use other queries to show the blocked traffic in the logs. Kiro can go and make those Describe calls against my CloudWatch log group and show me the entries in a summarized, easily consumable fashion. It takes a little while, but you can see it now shows some of that data: which IP addresses were allowed and which attempts were blocked. There's a lot of data I'm not going to go through, but this shows that it can consume the log data and give you summarized output.
Conclusion: Complete Perimeter Protection Architecture for SecureShop Enterprise
So with that, this was the console demo. Coming back to our slide deck, we covered a lot of ground in this session. We talked about different services that you can use for building robust perimeter protection and zero trust. Let's apply all of those things that we learned along this session to our SecureShop enterprise architecture. The first thing I want to do is protect our internal applications, so we deploy Amazon Verified Access with identity and device trust provider integrations. Then we enable web application firewall on both CloudFront and Verified Access to protect both our public facing applications like the retail application we showed and our internal applications like the FinanceCore application against common threats.
We then deploy Network Firewall to inspect north-south and east-west traffic. North-south traffic is the traffic coming from the internet toward our application and also from on-premises over Direct Connect to our application. East-west traffic flows between, for example, production and non-production, or from one application to another residing in different VPCs. We use Network Firewall to inspect and protect all of those traffic patterns. Finally, we filter and inspect internet-bound outbound traffic using AWS Network Firewall, and optionally we can also enable Route 53 DNS Firewall for DNS-layer egress protection. All of this helps us create a secure network perimeter for the SecureShop enterprise customer.
This brings us towards the end of our session. Thanks for joining us. Please take a moment to take the session survey in the mobile app and help us improve our future sessions. Enjoy the rest of re:Invent and keep reinventing. If you have any questions, we are going to be in the back, so please come chat with us. Thank you.
; This article is entirely auto-generated using Amazon Bedrock.