Kazuya

AWS re:Invent 2025 - Advanced VPC design and new capabilities (NET340)

🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.

Overview

📖 AWS re:Invent 2025 - Advanced VPC design and new capabilities (NET340)

In this video, AWS Principal Networking Specialist Solutions Architects Alexandra Huides and Andrew Gray present over 150 networking features and launches from 2025. Key announcements include NAT Gateway regional availability mode, Network Firewall Proxy with explicit proxy capabilities, VPC encryption controls with monitor and enforcement modes, and Transit Gateway native Network Firewall attachment. They cover VPC Lattice enhancements like custom DNS for resources and configurable IP addresses for resource gateways, Cloud WAN advanced routing controls with BGP manipulation, and AWS Interconnect Multi-Cloud for private connectivity between AWS and Google Cloud. Additional highlights include Route 53 Global Resolver with anycast IPs, security group referencing across Cloud WAN, VPC Route Server for BGP-based routing updates, cross-region PrivateLink for AWS managed services, and comprehensive IPv6 support across 75% of AWS services. The session demonstrates reference architectures for multi-region connectivity, on-premises integration, and SaaS provider access centralization.


This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.

Main Part

Thumbnail 0

Introduction: AWS Networking Innovation Overview at re:Invent

Thank you so much for joining us. Thank you for giving up your happy hour today on day one plus two of re:Invent. I'm Alexandra Huides, a Principal Networking Specialist Solutions Architect with AWS Networking Services, and I'm accompanied today by Andrew Gray, also a Principal Networking Specialist Solutions Architect. We are going to be talking to you about all the new feature capabilities and launches that happened across AWS networking throughout the past year and all the cool new architectures that you can build with them.

Thumbnail 60

Thumbnail 70

Before we begin, this is a 300-level session, so we will go fast and we will go deep in some areas. We will also go wide because we need to talk about all networking services. We will let you know when animations end so you can take a photo of the slides. You will see that icon on slides indicating when to take photos. Q&A will be after the session, so keep that in mind, not during our presentation.

Thumbnail 90

Thumbnail 100

As you are accustomed to, we are going to talk a bit about foundations so that everyone has the concepts and knowledge about all the things that make up AWS networking. We are going to start with the AWS backbone, the AWS global backbone made up of regions and availability zones. These numbers I am hoping are still up to date since this morning. The coolest number there is more than 9 million kilometers of fiber. That is quite impressive, and this is what powers and helps from an infrastructure perspective to host all your workloads in AWS.

Thumbnail 110

Thumbnail 130

Thumbnail 140

Foundational AWS Networking Components: VPC, Connectivity, and Security

From a hosting perspective, you have Amazon VPC, which hosts your compute and your workloads and your instances. Amazon VPC is a regional construct. Within the Amazon VPC, you can spin up subnets that are containers for your workloads. Subnets can be IPv4, can be IPv6, can be dual stack, and they are availability zone level constructs, so per availability zone.
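
To make the constructs concrete, here is a minimal boto3 sketch of a VPC with an Amazon-provided IPv6 block and one dual-stack subnet; the region, CIDR ranges, and availability zone are placeholder assumptions, not values from the session.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Regional construct: the VPC, with an Amazon-provided IPv6 CIDR
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16", AmazonProvidedIpv6CidrBlock=True)["Vpc"]

# The IPv6 block is assigned asynchronously; read it back before carving out a /64
# (in practice, wait briefly for the association to complete)
vpc = ec2.describe_vpcs(VpcIds=[vpc["VpcId"]])["Vpcs"][0]
ipv6_block = vpc["Ipv6CidrBlockAssociationSet"][0]["Ipv6CidrBlock"]

# AZ-level construct: a dual-stack subnet (IPv4 plus the first /64 of the VPC's IPv6 block)
subnet = ec2.create_subnet(
    VpcId=vpc["VpcId"],
    AvailabilityZone="us-east-1a",                        # assumption
    CidrBlock="10.0.1.0/24",
    Ipv6CidrBlock=ipv6_block.replace("::/56", "::/64"),   # naive carve, for illustration only
)["Subnet"]
```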

Thumbnail 150

Thumbnail 160

From a security perspective at the VPC level, you have network access control lists, which are subnet-level access controls and stateless firewall filtering. You also have security groups, which are your stateful firewall filtering. Last but not least, you can insert AWS Network Firewall into the picture to get you traffic inspection, decryption, and advanced filtering controls for your traffic.

Thumbnail 170

Thumbnail 190

From a connectivity perspective, if we look at VPC connectivity, you have internet connectivity with the help of the Internet Gateway and the Egress-only Internet Gateway. Of course, you have IPv4 and IPv6 flows and constructs that help serve each and every one of them. From an IPv4 perspective, you have the NAT Gateway, which is a regionally available construct with availability zone level deployment. For IPv6, to accommodate the same type of egress-only flows that do not accept ingress, you have the Egress-only Internet Gateway.
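
As a rough illustration of those three gateways, the boto3 sketch below creates an Internet Gateway, a classic zonal NAT Gateway backed by an Elastic IP, and an Egress-only Internet Gateway; all IDs and the region are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")                       # placeholders throughout

vpc_id, public_subnet_id = "vpc-0123456789abcdef0", "subnet-0aaaaaaaaaaaaaaaa"  # assumptions

# Internet Gateway for bidirectional IPv4/IPv6 internet access
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc_id)

# Zonal (classic) NAT Gateway for IPv4 egress from private subnets
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId=public_subnet_id,
    AllocationId=eip["AllocationId"],
    ConnectivityType="public",
)["NatGateway"]

# Egress-only Internet Gateway for IPv6 outbound-only flows
eigw = ec2.create_egress_only_internet_gateway(VpcId=vpc_id)["EgressOnlyInternetGateway"]
```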

Thumbnail 210

Thumbnail 230

Now, if we talk about application scaling, because it is not just about application connectivity but application scaling as well, our Elastic Load Balancer suite encompasses three load balancers: Application Load Balancer, Network Load Balancer, and Gateway Load Balancer. We are going to see those deployment models in more depth.

Thumbnail 250

If we talk about connectivity to AWS services, we have AWS PrivateLink, which gives you two types of endpoints in your VPC. Gateway endpoints do not have interfaces in your VPC. They connect you to Amazon S3 and Amazon DynamoDB, while interface endpoints have interfaces in your VPC. They are deployed in a subnet or a set of subnets in multiple availability zones, and they require you to update DNS. There are no routing updates as in the case of gateway endpoints.
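
A hedged sketch of the two endpoint types, assuming S3 for the gateway endpoint and SQS for the interface endpoint; all IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint (S3): no ENIs in the VPC, only route table entries, no DNS change
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                       # placeholder
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],             # placeholder
)

# Interface endpoint (e.g. SQS): ENIs in your subnets, private DNS updated in the VPC
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.sqs",
    SubnetIds=["subnet-0aaaaaaaaaaaaaaaa", "subnet-0bbbbbbbbbbbbbbbb"],  # one per AZ
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
```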

Thumbnail 270

Thumbnail 290

From an extended application connectivity perspective, it is not just about connectivity to AWS managed services but also connectivity to your own applications, so app-to-app connectivity. This is where AWS PrivateLink started, with the capability of supporting customer-managed services. You can have this capability both within region and across regions, which we launched last year. If we go towards application connectivity at scale, this is where Amazon VPC Lattice comes into the picture with a scalable, highly available, managed service for that, which allows you to connect to your service network, services, and resources.

Thumbnail 320

Thumbnail 350

Thumbnail 360

In a secure manner using our policies, we're going to dive a bit deeper into VPC Lattice and into reference architectures. Now, if we step down from the application level and move towards connectivity at the network level, we have various options for you to connect your VPCs on AWS. The first one is VPC peering, and you can have this intra-region and cross-region as well. VPC peering is not extremely scalable, so from a routing perspective, you can scale your VPC connectivity in a region using AWS Transit Gateway, which is a regional routing hub that allows you to connect many VPCs in a region, up to 5,000 with the public-facing quota, and also allows you to peer multiple transit gateways together intra-region and cross-region. Just keep in mind that Transit Gateway relies a lot on static routing, so it's not an intelligent, dynamic-driven router or routing hub in AWS.
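
For reference, a minimal boto3 sketch of standing up a Transit Gateway, attaching a VPC, and adding the kind of static route just described; IDs and CIDRs are placeholders, and in practice you would wait for the attachment to become available before creating the route.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Regional routing hub
tgw = ec2.create_transit_gateway(
    Description="regional hub",
    Options={
        "AmazonSideAsn": 64512,
        "DefaultRouteTableAssociation": "enable",
        "DefaultRouteTablePropagation": "enable",
    },
)["TransitGateway"]

# Attach a VPC (one subnet per AZ the Transit Gateway should reach into)
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw["TransitGatewayId"],
    VpcId="vpc-0123456789abcdef0",                                        # placeholder
    SubnetIds=["subnet-0aaaaaaaaaaaaaaaa", "subnet-0bbbbbbbbbbbbbbbb"],
)

# Static route in the VPC route table pointing a remote range at the Transit Gateway
# (wait for the attachment to become available before this call)
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="10.1.0.0/16",
    TransitGatewayId=tgw["TransitGatewayId"],
)
```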

Thumbnail 380

Now, if we want that dynamic global connectivity and we want to expand across multiple regions, AWS Cloud WAN is here. We actually got a question last year around this: it's still not clear what's the difference between Cloud WAN and Transit Gateway. The difference really comes down to the management model. Cloud WAN comes as a fully managed service that extends globally and is globally available. It allows you to have dynamic connectivity across segments that is fully managed. You don't have to manage static routes; the intelligence under the hood is attached to core network edges, which are like transit gateways. So you can choose: Transit Gateway is a do-it-yourself type of service where you configure routing and static routes, whereas Cloud WAN is your global, fully managed service, and we're going to dive a bit deeper into it today.

Thumbnail 440

Thumbnail 460

Not all workloads live in AWS, and if you're familiar with some of the recent launches, you will see that not all workloads just live on-premises, but they also live in other clouds, and we're going to talk about that from a hybrid connectivity perspective. We have Site-to-Site VPN, which is our IPsec-based VPN connectivity method, and Direct Connect which allows you three flavors of virtual interfaces: private virtual interface, transit virtual interface, and public virtual interface.

Thumbnail 500

From a remote access perspective to give remote access to your end clients without a VPN—that's the whole point—you can use AWS Verified Access and integrate this with device management, device posture, and identity for your end workers and end users to access your public-facing resources on AWS, including that authentication and authorization piece. Now, all of this is foundational, and we've talked through it. There's been a lot of innovation in 2025. Andrew, take us away.

Thumbnail 520

Thumbnail 530

Thumbnail 540

Thumbnail 570

Amazon VPC Innovations: NAT Gateway Regional Mode and Network Firewall Enhancements

All right, thank you. So we're going to break off these innovations into a few different sections. The first one is going to be Amazon VPC. Obviously, we do a lot of development work across many different parts of it. Let's start with internet connectivity. We talked a little bit at the start about this, and we have our standard VPC here with some EC2 instances and our NAT gateways. This is how you've been configuring NAT gateways for quite some time inside AWS. We are now launching NAT Gateway in regional availability mode. What does this do for us? It makes configuration, maintenance, and support much simpler. Instead of having one NAT gateway per availability zone, which we call zonal NAT, you now have this regional construct. It doesn't need to live in its own subnet and doesn't need to do any of the additional configuration, and it updates itself automatically. If you add an additional availability zone, the regional gateway expands for you. It also has some additional benefits in terms of scalability with public IPs. This is the construct that we see people going forward with, especially as you're going through new deployments.

Thumbnail 600

On the security front, we have had for quite some time the AWS Network Firewall. So we have again our typical architecture here with some EC2 instances, and we're pointing them up to the Network Firewall endpoint. If you haven't worked with Network Firewall before, it's based on gateway load balancer. So we have the endpoints that go over to a Network Firewall.

Thumbnail 610

Thumbnail 620

They go over to a Network Firewall object that AWS manages, and then you can send the traffic on out. Here we're showing it with a regional gateway. That's fine, but now we are supporting multiple VPC endpoints. So in the previous diagram we had everything inside one VPC, but now we can have multiple VPCs all come back to the same Network Firewall.

Thumbnail 630

Thumbnail 640

Why would you do this? It makes it so you have one single Network Firewall policy that controls a number of VPCs, and we recommend customers doing that. But still consider whether you want to split it up by prod, by dev, or by whatever security segmentation that you want to do. And if you don't want to separate out the Network Firewalls, which you might not want to do if you're in a test environment and you've got thousands of these, you can still reference the individual endpoint inside your policy, so you can still get that segmentation that you may need for smaller-grained usage.
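
To illustrate the single-shared-policy model, here is a minimal boto3 sketch of one firewall policy backing one firewall; names and IDs are placeholders, stateful rule groups are omitted, and the newer multi-VPC endpoint association API is not shown here.

```python
import boto3

nfw = boto3.client("network-firewall", region_name="us-east-1")

# One shared policy that can back endpoints serving many VPCs
policy = nfw.create_firewall_policy(
    FirewallPolicyName="shared-egress-policy",            # placeholder name
    FirewallPolicy={
        "StatelessDefaultActions": ["aws:forward_to_sfe"],
        "StatelessFragmentDefaultActions": ["aws:forward_to_sfe"],
        # StatefulRuleGroupReferences would list your managed or custom rule groups
    },
)["FirewallPolicyResponse"]

# The firewall itself, with its endpoint placed in an inspection subnet
nfw.create_firewall(
    FirewallName="central-inspection",
    FirewallPolicyArn=policy["FirewallPolicyArn"],
    VpcId="vpc-0123456789abcdef0",                         # placeholder
    SubnetMappings=[{"SubnetId": "subnet-0aaaaaaaaaaaaaaaa"}],
)
```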

Thumbnail 670

Network Firewall itself has had a number of upgrades and improvements launched in the past year. The active threat defense is pretty interesting. This is where we are enabling some of the AWS intelligence to be brought into your Network Firewall, so you get to see all that. But you can also pull in partner managed rules. So this is where you can reach out to another security partner and have those rules get automatically updated into your firewall as well. There are a number of console and monitoring and all sorts of additional features there, and feel free to go read the many blog posts that we've launched about Network Firewall on this one.

Thumbnail 700

Thumbnail 710

Thumbnail 730

Thumbnail 740

The big one though is proxy. Network Firewall Proxy is exactly what it sounds like. For the longest time, if customers wanted fairly controlled access to things like external websites, they were deploying their own proxies, whether Squid or something else. Network Firewall Proxy now provides that as a managed service for you. So how does this all come together? It looks very similar to a Network Firewall deployment. We now have a proxy endpoint. It goes to a proxy instance that gets tied in with a NAT gateway, and this is all an explicit proxy.

Thumbnail 750

Thumbnail 760

Thumbnail 770

Thumbnail 780

Why do we care that it's an explicit proxy? It means you don't need to change the route tables. You're configuring the clients to use this proxy explicitly, so this makes it a much simpler configuration deployment. Of course this all works with various levels of inspection. So for those who may or may not be familiar with proxy, the client, like I said, will connect up. We can do pre-DNS inspection, which is where we process the rules against something before any kind of connection goes out. Pre-request, we can do post-request. And we can also tie this in with PCA, and this allows us to have it handle TLS traffic as well. So this way your clients trust the root certificate over on PCA and your clients can work with us as well.
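
Because it is an explicit proxy, the change is on the client rather than in route tables. A minimal client-side sketch using the Python requests library is below; the proxy endpoint name, port, target URL, and CA bundle path are all placeholder assumptions.

```python
import requests

# The proxy endpoint DNS name and port below are placeholders; use the values
# your Network Firewall Proxy endpoint exposes in your VPC.
PROXY = "http://proxy-endpoint.example.internal:3128"

proxies = {"http": PROXY, "https": PROXY}

# If TLS inspection is enabled via AWS Private CA, clients must trust that root CA;
# point 'verify' at a bundle containing it (path is an assumption).
resp = requests.get(
    "https://repo.example.com/packages/index.json",
    proxies=proxies,
    verify="/etc/pki/tls/certs/private-ca-bundle.pem",
    timeout=10,
)
print(resp.status_code)
```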

Thumbnail 800

Thumbnail 810

Thumbnail 820

Thumbnail 830

Thumbnail 840

Thumbnail 850

The overall packet flow on this one, like I mentioned: the client does its proxy connection setup to the gateway and sends the HTTP CONNECT that is part of the proxy protocol, which carries the host you're connecting to, and this is where we do that first step of inspection. Assuming that passes, the proxy makes the outbound connection for you, and then your client can make the individual request, and this is where the request policy comes in. So you can do things like say, hey, you're allowed to GET here but maybe not POST, or you can fetch / but not a subdirectory, whatever you want to do. Assuming that passes, the proxy sends the request out. We get the response back and we can inspect it again, so you can do things like say, hey, I don't want requests or responses with the word 'the' in them, or whatever you want to do, and then assuming that passes, it all comes back to the client.

Thumbnail 870

Thumbnail 880

Thumbnail 900

This diagram does assume you've configured the TLS inspection component that we mentioned earlier using PCA. Otherwise, the traffic is encrypted and you lose a couple of those capabilities because we can't see inside the encryption. So, as before, we mentioned each VPC can have its own configuration and endpoints in a distributed environment, but much like what we just did with firewall, you can have distributed endpoints that all come back to the same proxy. So this is very useful if you have a use case where you want to keep your VPCs completely isolated. You don't want them to have any kind of attachments or internet gateways or anything else, but you still want to have just a modicum of external access to get to things like package repositories or update repositories or things along those lines.

Thumbnail 930

You can of course make this part of a hybrid environment where you have Cloud WAN or Transit Gateway providing your interconnectivity services, and then you put the proxy firewall in a centralized VPC. We see customers using this if you're in a position where you already have a central VPC where you're putting all your networking resources, or you just want to keep the client VPCs as clean and simple as possible and let all the networking complexity happen elsewhere.

Thumbnail 940

Thumbnail 950

Thumbnail 970

VPC Route Server and Transit Gateway Improvements

Next is VPC Route Server. This is an interesting launch, and what it gives is the ability for instances inside your VPC to make BGP-based routing updates to your VPC routing table. Why would you do that? You can do a number of different things with this. We've seen customers doing any kind of anycasting or making their own failovers or anything along those lines. This plays in with a couple of the other features that we'll talk about here in a bit, but it's an interesting service that fits a nice niche. One of the use cases that I mentioned is the floating IP.

Thumbnail 980

Thumbnail 990

Thumbnail 1000

So in this particular case, we have two EC2 instances in an active-standby configuration. We have an IP address that is not part of our VPC, but it is still being routed to our active instance. Our active instance goes away, and we can through Route Server push out that update and send all the traffic over to our other instance.
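
Route Server drives this failover through BGP advertisements from the instances themselves. Purely to show what that routing update amounts to, here is the manual equivalent using replace_route; this is not the Route Server API, and all IDs and the floating IP are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

FLOATING_IP = "198.51.100.10/32"          # an address outside the VPC CIDR (assumption)
ROUTE_TABLE = "rtb-0123456789abcdef0"     # placeholder
STANDBY_ENI = "eni-0bbbbbbbbbbbbbbbb"     # ENI of the standby instance (placeholder)

# What the BGP-driven update effectively does: repoint the floating /32 at the standby ENI
ec2.replace_route(
    RouteTableId=ROUTE_TABLE,
    DestinationCidrBlock=FLOATING_IP,
    NetworkInterfaceId=STANDBY_ENI,
)
```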

Thumbnail 1020

Thumbnail 1040

Thumbnail 1050

We've talked a lot about some of these interconnects, and just to come back to this Transit Gateway IPv4 and IPv6, this has been our deployment model for quite some time. You have a central Transit Gateway route table, and you're sending all your traffic to the various attachments. That's just been the way we've done things. But it does make things a little complicated when you want to start adding firewalls to it. So here we have two VPCs that are connected together by Transit Gateway, and we want to add in inspection. Well, the way we've had to do that in the past is we create a new VPC, a security VPC. We attach that to the Transit Gateway. We set up some route tables, we set up some endpoints, we set up some ENIs, we set up some more route tables, all that good stuff. You don't need to do that anymore. Now you can do Network Firewall as a native attachment to Transit Gateway. This spares you the time and effort to maintain, build, and deploy that separate inspection VPC. You can just tie to Network Firewall directly, and we handle all the pieces inside for you.

Thumbnail 1080

Thumbnail 1090

Next up is on Transit Gateway. One of the problems that we frequently hear from customers is that Transit Gateways frequently operate on a sender-pays model that works for a lot of cases but not in all of them. So now with Transit Gateway you can do flexible cost allocation. What does that mean? You can do things where the traffic flows that Transit Gateway is doing the data processing for can be billed either to the source, the destination, or the Transit Gateway owner. This can potentially help settle things like if you're having to do back-charging between different departments or anything along those lines where you're trying to understand the true cost of a workload. You can then put all the data processing charges assigned to that workload to make that math easier.

Thumbnail 1130

Thumbnail 1140

Enhanced VPC Security: Security Groups and Encryption Controls

VPC security. We're back here with our private subnet and public subnet. We have NACLs. We've had NACLs for a very long time, and we have security groups. NACLs, or Network Access Control Lists, are primitive. They're on the subnet boundary, but they're rather limited in terms of rules—you can only have 40 rules. That's where security groups come in. Security groups work on an individual ENI basis and allow you up to 1,000 rules. So customers use this for application or micro-segmentation, any number of reasons.

Thumbnail 1160

Thumbnail 1170

Thumbnail 1190

One thing that we talk about is security group referencing. Security group referencing, if you haven't used this before, is a feature where you can say instances that are assigned to one security group get referenced in another one. We recommend customers do this, for example, if you have your database servers, you can say, "Hey, everything that's in the security group that's my web front end is allowed in on this port." That way you don't need to go through and update additional security lists. It's all propagated automatically through AWS.
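
A small boto3 sketch of that referencing pattern, allowing the database tier to accept MySQL only from members of the web tier's security group; group IDs and the port are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

WEB_SG = "sg-0aaaaaaaaaaaaaaaa"   # security group on the web front end (placeholder)
DB_SG = "sg-0bbbbbbbbbbbbbbbb"    # security group on the database tier (placeholder)

# Allow MySQL from anything that carries the web front-end security group,
# instead of maintaining CIDR-based allow lists.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": WEB_SG, "Description": "web tier"}],
    }],
)
```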

Thumbnail 1200

The other question that we get a lot is encryption in transit. A lot of customers are worried about what is going on actually inside AWS. Is my data in the clear? Is it not? And we're very open about it.

Thumbnail 1230

You can visit our website and read all the policy documents and standards. For this discussion, we will simplify and focus purely on instance-to-instance encryption. We have AWS Nitro, which handles encryption between Nitro instances. This works in the same region and across VPC peering.

Thumbnail 1240

Thumbnail 1250

Thumbnail 1260

Thumbnail 1270

However, problems arise when we consider other scenarios. What happens with non-Nitro instances? What happens with transit gateway? What about everything else and load balancers? We are giving you a control called VPC encryption controls, which is really valuable. With VPC encryption controls, you can put your VPC into two different modes. There is a monitor mode and an enforcement mode. In monitor mode, you can turn this on whenever you like. Inside your VPC, if you look at the console, it will tell you everything that is nonconformant, and you can mitigate. You can say, for example, I am going to make an exception because this is an internet gateway and it does not make sense for it to have transit encryption. Monitor mode gives you a chance to do a dry run first.

Thumbnail 1310

Thumbnail 1330

In addition, inside your VPC flow logs, we now add an additional field. We tell you if it is Nitro encrypted, if it is application encrypted, or neither, or both. Monitor mode is great, but as you go through and mitigate everything, you can turn on enforcement mode. Enforcement mode ensures that you cannot launch anything inside your VPC that does not support encryption. This is fantastic because it means we do not have to worry about these types of questions anymore, such as whether this path is encrypted or not. It is all encrypted at that point.

Thumbnail 1350

Thumbnail 1360

Thumbnail 1380

Application Networking with AWS PrivateLink and Amazon VPC Lattice

I have talked a lot about networking, so let us discuss applications. With application networking on AWS, application owners, network owners, and security owners have started to come closer together. Part of the application networking suite on AWS includes a few services. We will start with AWS PrivateLink and connectivity to AWS managed services. We covered this somewhat at the beginning, where you have PrivateLink endpoints in the form of gateway endpoints or interface endpoints. Connectivity to AWS services translates into a DNS change in your VPC on the Route 53 Resolver. Gateway endpoints, by contrast, do not require any DNS changes; it is only a route in the route table.

Thumbnail 1390

Thumbnail 1400

Thumbnail 1410

We have an exciting launch. We worked with many of you based on feedback from last year regarding support for cross-region private connectivity for custom services, and now we support cross-region private connectivity for AWS managed services. How does this work? It is very simple and straightforward, exactly as you would expect. You can now create interface endpoints in your VPC in a different region for AWS services that are across regions. You have full control over which services you allow to be created across regions, and you can control that at the AWS organization level. This is very important.

Thumbnail 1430

Thumbnail 1440

Thumbnail 1450

The other thing that I am very excited about, and some of you have already mentioned, is comprehensive IPv6 support. We have launched IPv6 support for gateway endpoints for S3 and DynamoDB as well as for interface endpoints. There is no more excuse to not adopt IPv6 now. The second service in our application networking suite is Amazon VPC Lattice, which is purpose-built for internal application-to-application connectivity on AWS. I want to go through a bit of history here.

Thumbnail 1470

Thumbnail 1480

Thumbnail 1500

You as developers and application owners have your services and applications that are deployed on certain compute types. You can have various services in your environment. In addition to services, you can also have data sources, resources, and TCP applications. The idea is that from your perspective as a developer, you want certain things to talk to each other and certain things not to talk to each other. Mostly, security wants certain things not to talk to each other.

Thumbnail 1510

Thumbnail 1520

Thumbnail 1530

Thumbnail 1540

Thumbnail 1550

These applications that you own are hosted either in VPCs or, if you run Lambda, in Lambda itself, not necessarily in a VPC. VPCs are your network-level boundaries, but they don't help you express the intent that A needs to talk to B and C needs to talk to D. This is where Amazon VPC Lattice comes in. In VPC Lattice, you have a couple of constructs that I'm hoping everyone is already familiar with by now. You have VPC Lattice services, which are your HTTP and HTTPS type applications at layer seven, the application layer. You also have VPC Lattice resources, which are your TCP applications. These TCP applications could be databases, for example, but could also be applications that listen on a TCP port that are not HTTP-level applications.

Thumbnail 1560

I talked about intent and how we bring these things together to talk to each other. This is what the VPC Lattice service network comes to offer. It's a logical boundary that helps you bring together things that need to talk to each other. A super common question I'm getting is whether you should have just one Lattice service network. The answer is no. You can have as many as you want. Just design them in a way that makes sense for you, keeping in mind the principle that things that need to talk to each other should be grouped together.

Thumbnail 1590

Thumbnail 1600

Thumbnail 1610

How do you express that intent in Lattice? Through associations. Everything in Lattice is an association from this perspective. You can associate services to a service network, so you have service associations. You can associate resources to a service network, so you have resource associations. You can associate VPCs to a service network, and you can associate service network endpoints to a service network. If you go into the console, you'll actually see all of these names as associations. These two types of associations—for VPCs or service network endpoints, and for services and resources—give you the path to either be consumed if you're a service or a resource, or to consume things if you are a client in a VPC.
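
A minimal boto3 sketch of those associations using the vpc-lattice API: a service network, a service association, and a VPC association for the client side; names and IDs are placeholders.

```python
import boto3

lattice = boto3.client("vpc-lattice", region_name="us-east-1")

# Logical grouping of things that need to talk to each other
sn = lattice.create_service_network(name="payments-domain")           # placeholder name

# A layer 7 service and its association to the service network
svc = lattice.create_service(name="orders-api")                       # placeholder name
lattice.create_service_network_service_association(
    serviceNetworkIdentifier=sn["id"],
    serviceIdentifier=svc["id"],
)

# Client-side path: associate a consumer VPC with the service network
lattice.create_service_network_vpc_association(
    serviceNetworkIdentifier=sn["id"],
    vpcIdentifier="vpc-0123456789abcdef0",                             # placeholder
    securityGroupIds=["sg-0123456789abcdef0"],                         # placeholder
)
```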

Thumbnail 1650

Thumbnail 1670

Thumbnail 1680

Thumbnail 1690

Now, really interesting and important are the auth policies in VPC Lattice, which are IAM-based and allow you to control granularly exactly that intent. You can say A should talk to B but B should not talk to C. These are baselines, and at the end of the day, from a developer perspective, this is what you need: the ability to have your applications talk to each other. If we dive a bit deeper and look at some of our services from a client-service perspective, the whole communication path starts with a DNS request. The client service tries to access a service by doing a DNS query, and the answer to that query is a set of VPC Lattice IPs. That's what draws traffic to the service network instead of routing it over your VPC peering, transit gateway, or Cloud WAN.

Thumbnail 1710

Thumbnail 1720

Thumbnail 1730

Thumbnail 1740

In order for this to happen, you have to manage DNS. You have private hosted zones associated with your VPC that tell you those custom names and how they're mapped to the VPC Lattice-generated FQDNs. But we have a very exciting launch to simplify how DNS management works. We now support custom DNS for resources, which means that when you create a resource, you can now define a custom DNS for that resource. This means that you don't have to manage private DNS anymore or private hosted zones associated with client VPCs. Let's take, for example, my resource here—new features update for re:Invent 2025. I've defined this as a custom domain name, and now I have the level of control both at the resource association to the service network, at the service network level, and at the client level to propagate that DNS private hosted zone into the client VPC.

Thumbnail 1770

Thumbnail 1780

Thumbnail 1800

From the client perspective, there's nothing that you need to configure anymore on the VPC for DNS management. It all works. Keep in mind for those of you who are security conscious and need to control how these private hosted zones are propagated that the service network-level DNS manipulation and the VPC client VPC are always under your control as the owner of those resources. A second very important improvement in VPC Lattice is around configurable IP addresses for the resource gateways.

Thumbnail 1810

Advanced VPC Lattice Architectures: Cross-Region, On-Premises, and SaaS Integration

By default, your resource gateways would take a /28 per availability zone to use for traffic to your resources. Now you can specify how many IP addresses those resource gateways have and what those IP addresses are. Let's move toward advanced architectures because these are constructs that all of you have been familiar with for quite some time, and here are some of the most interesting ones.

Thumbnail 1840

First, when do you use SNA versus when do you use SNE? SNA stands for Service Network Association, and SNE stands for Service Network Endpoint. The difference between them and when you would use them is exactly like the difference between an S3 interface endpoint and an S3 gateway endpoint. The gateway endpoint cannot be accessed from outside of the VPC, same as the Service Network Association. The Service Network Endpoint can be accessed outside of the VPC, so that's when you would make the decision of using that.

Thumbnail 1870

Thumbnail 1880

Now the second one is providing connectivity to applications that are on premises. Your applications are on AWS and you want to bring in some applications from on premises. How do you do that? A super common scenario is to have a transit VPC. For those of you who have been around before 2018 when Transit Gateway was created and launched, this is not that transit VPC. There is no VGW here that you need to manage. It is a transit VPC in the sense that it has IP addresses that are routed toward on premises, so they are non-overlapping. Everything to the left of that VPC can be overlapping because Lattice accommodates overlapping IP addresses.

Thumbnail 1940

You have to have that transit VPC with a resource gateway that allows you to target these resources on premises. How do you bring them into the service network? By means of resource configurations. And on resource configurations we support custom DNS names. Hence you can give them your custom names that are used by your developers for your on-premises resources. Now if we dive a bit deeper into the other flow, you may ask well how do I do the other way around? I have a client on premises. How do I get it to be able to talk to these resources or services that I have on AWS?
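
A sketch of that pattern with the vpc-lattice API, creating a resource gateway in the transit VPC and a DNS-based resource configuration for an on-premises database; the parameter names reflect my reading of the API and, along with all names and IDs, should be treated as assumptions to verify.

```python
import boto3

lattice = boto3.client("vpc-lattice", region_name="us-east-1")

# Resource gateway in the transit VPC (the one with non-overlapping, on-prem-routable IPs)
rgw = lattice.create_resource_gateway(
    name="onprem-egress",
    vpcIdentifier="vpc-0123456789abcdef0",                 # transit VPC, placeholder
    subnetIds=["subnet-0aaaaaaaaaaaaaaaa"],
)

# Resource configuration targeting an on-premises database by DNS name; a custom
# domain can then be presented to developers through the service network.
lattice.create_resource_configuration(
    name="onprem-orders-db",
    type="SINGLE",
    resourceGatewayIdentifier=rgw["id"],
    resourceConfigurationDefinition={
        "dnsResource": {"domainName": "orders-db.corp.example.com", "ipAddressType": "IPV4"}
    },
    portRanges=["5432"],
)
```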

Thumbnail 1980

Well, the Service Network Endpoint provides you with an FQDN for every single resource or service that you have associated with your service network. If you take that FQDN and you map it into your DNS configuration on premises, you could have an inbound resolver endpoint if you wanted to in that VPC or you could have hosted zone delegation to your on-premises DNS servers. You can absolutely make sure that DNS name is used by your clients on premises and everyone talks to that Service Network Endpoint. Also keep in mind that both on Service Network Associations and Service Network Endpoints you can configure security groups, so you can filter what traffic can get to the Service Network Endpoint.

Another interesting one is that the Service Network Endpoint is a layer 3 construct. It has ENIs in your VPC. A super common question we get is, can I put a firewall in front of that? Yes, you can absolutely do that if you want to, but keep in mind that you have the auth policies in Lattice. Make use of those. There is no data processing cost for auth policies. There is no hourly cost for auth policies, so make use of them. And most importantly, do you need to use SigV4? Do you need to use IAM roles inside your traffic to be able to use auth policies? The answer is no.

Thumbnail 2060

Thumbnail 2070

Thumbnail 2080

You can use auth policies even if you are not signing your traffic. You will not have the principal ID as a condition key in the auth policy, but you can still have condition keys like source IP or source VPC or source account. So keep that in mind. Now another interesting architecture that we have been hearing about a lot and customers have been trying to build is cross-region service-to-service communication. Lattice does not currently support native cross-region service, VPC, or resource associations, so you do need to have this transit VPC that brings you layer 3 connectivity between the regions. These are the only two VPCs, or if you have many regions, one VPC per region, that have to have non-overlapping IP space because these are the ones that connect to each other in your layer 3 domain.
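
As a hedged example of a condition-key-based auth policy that does not rely on SigV4 principals, the sketch below allows invocations only from a given source VPC; the service network ID and VPC ID are placeholders, and the policy shape should be checked against the current VPC Lattice documentation.

```python
import json
import boto3

lattice = boto3.client("vpc-lattice", region_name="us-east-1")

# Allow only clients coming from a specific VPC, without requiring SigV4 signing
auth_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "vpc-lattice-svcs:Invoke",
        "Resource": "*",
        "Condition": {
            "StringEquals": {"vpc-lattice-svcs:SourceVpc": "vpc-0123456789abcdef0"}
        },
    }],
}

lattice.put_auth_policy(
    resourceIdentifier="sn-0123456789abcdef0",   # service network (or service) ID, placeholder
    policy=json.dumps(auth_policy),
)
```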

Now in each region you have the Service Network Endpoint which gives you layer 3 ingress into the service network.

Thumbnail 2110

Thumbnail 2120

Thumbnail 2130

In the opposite region, you have resource configurations that are targeting the services or resources in that remote service network that you want to bring into your region. You can use custom DNS to have everything managed from a DNS perspective. Centralizing private link endpoints—how many of you are doing that today with Transit Gateway? Probably a lot. You can absolutely do that with VPC Lattice, and you can actually have DNS managed for that for all your client VPCs because of custom DNS on resources. So every endpoint is actually a resource in VPC Lattice.

Thumbnail 2150

Thumbnail 2160

Thumbnail 2170

Thumbnail 2180

Thumbnail 2200

Thumbnail 2210

How many of you folks are consuming SaaS provider endpoints? Probably a lot of you. Can you centralize SaaS provider access? You can absolutely do that. You can have resources that point to VPC endpoints for SaaS providers that you want to offer to your clients. Very importantly, can you inspect that traffic? Can you inspect the traffic that's going towards your SaaS providers? And can you do traffic filtering? The answer is absolutely yes. The resource gateway is a layer 3 construct in your VPC that has elastic network interfaces. If you put a network firewall in there and you configure routing appropriately, your traffic will be inspected through the firewall from the resource gateway towards the VPC endpoints for your SaaS. That's a pretty cool architecture, and it's actually used by many of you.

Thumbnail 2220

Thumbnail 2230

Thumbnail 2250

Elastic Load Balancing and API Gateway Updates

From a seamless integration perspective, Oracle Database@AWS relies on VPC Lattice for exposing services and resources to your Oracle database deployments. That's a cool one, but not everything is about service-to-service connectivity. So Andrew, elastic load balancing—sure. As mentioned earlier, we have different load balancers. The Application Load Balancer is the one that handles all of our layer 7 work, so HTTP and HTTPS, and it has had a number of new launches in the past year.

Out of all of these, the one I like the most is target optimizer, because this has been a frequent request from customers. They want their ALB to actually load balance based on the number of concurrent requests. You may have some requests that are short and some that are long. ALB will now manage between those for you and help keep the load more evenly distributed in workloads where you have different response times. The other one is health check logs, which has been a very frequent ask: to see what the ALB is seeing in terms of all your clients. Now you can store that in an S3 bucket, and that is actually free of charge besides the S3 bucket fees. You can turn it on and keep an audit of what happened to each of your targets, what the response code was, and all of that, so you can go back and see if everything is doing well or if you had problems.

Thumbnail 2330

ALB has also introduced post-quantum key management, so you will see that fairly frequently now across a lot of different AWS services, but it's worth mentioning here. We always say that we will be ready for post-quantum before you folks need to think about it. The other load balancer that we tend to talk about in the same breath is the Network Load Balancer. This is our TCP and UDP layer 4 load balancer, and you will notice there are some similarities here. Access logs can now be vended out. The one I like here though is the weighted target groups. The use case for this one is you can do things like having a bunch of fairly powerful instances that cover your base load, but then you want to come up with very small ones in your auto scaling groups. You can assign the weights on those lower and have NLB properly manage that. Again, we have some post-quantum stuff here that everybody likes to see, but the bottom one is really interesting as well. NLB now supports QUIC, which is sometimes called HTTP/3, in pass-through mode. What this does is leverage an IETF draft where you encode the server ID inside the QUIC connection ID, and NLB will use that to guide its sticky session decision.
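
For the weighted target group idea, here is a hedged boto3 sketch using the forward action's ForwardConfig weights; this shape is how ALB weighted forwarding works today, and I am assuming the new NLB support uses the same API, with all ARNs being placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Weighted forward: ~90% of new flows to the big baseline fleet, ~10% to the small
# auto-scaled fleet. ARNs are placeholders; NLB parity with this shape is an assumption.
elbv2.modify_listener(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/net/example/abc/def",
    DefaultActions=[{
        "Type": "forward",
        "ForwardConfig": {
            "TargetGroups": [
                {"TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/baseline/aaa", "Weight": 90},
                {"TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/burst/bbb", "Weight": 10},
            ]
        },
    }],
)
```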

Thumbnail 2400

Amazon API Gateway has also gained a very interesting feature called the developer portal. The developer portal basically does automated work to pull in all your various APIs and services and put them together in a portal for you.

Previously, you would have had to do this by hand or create your own automation, but now this is a fully managed solution. It pulls in your APIs and is very customizable. You can go in and adjust the colors, names, certificates, or whatever you want to do. You can also put in the access control that you've come to expect from most AWS services, backed by our normal identity services.

Thumbnail 2440

From API Gateway, we have a couple of other bits and pieces here. Response streaming has been implemented to improve the time to first byte, which is what TTFB stands for. We've also extended the response timeouts to 15 minutes and support payloads larger than 10 megabytes. People seem to love using API Gateway for very large things, so this got implemented. We have more TLS security policies, and you can now integrate API Gateway with private ALBs, which helps a lot from a security perspective.

Thumbnail 2490

AWS Cloud WAN: Global Connectivity and Advanced Routing Controls

You promised no AI mentions, and I said we're not going to talk about AI, but we do have to mention AgentCore: there's MCP support in API Gateway now. Let's get out of that and talk about global and hybrid connectivity. As we've talked about a couple of times, the first one is going to be Cloud WAN.

Thumbnail 2500

Thumbnail 2510

We covered the Cloud WAN foundations earlier. You start with the core network, which is the container for everything, and then you expand that into each of the regions that you're interested in, where Cloud WAN deploys a core network edge. Once you've done that, you can see this all in a JSON policy file, and we're going to keep coming back to this concept because everything in Cloud WAN is defined in a JSON policy file. This is great from an auditing perspective for seeing what the differences are. You can do policy rollbacks inside the UI. What we found is that especially security folks love having a single document that says these behaviors are what's actually being implemented. You can put human names in places, and your entire policy is in one place, very easy to audit, very easy to control, and very easy to understand.

Thumbnail 2550

As part of that, the next item inside Cloud WAN is network segments. Network segments you can define to be anything. We've got customers that do it by security zones, by regions, and some customers are doing it for where they have different policy domains or different legislation that they have to obey. It's completely up to you. Cloud WAN supports a number of these segments, so feel free. We've got customers that are doing every combination you can imagine.

Thumbnail 2580

What connects into a network segment? Well, you can connect lots of things into a segment. The first one, of course, is going to be VPCs. VPCs attach into a segment, and that is controlled by a tag that you put on the attachment. Again, easy to audit and control. You can be assured as to where that is going.

Thumbnail 2600

So now if you start doing this multi-region, this is where Cloud WAN's other big feature comes in. It is global, so the same policy applies worldwide. You don't have to worry about any additional replication or anything like that. It's all global. So your VPCs here, you've got the same policy here. We're saying some of these VPCs attach to segment A, some to segment B, and they get the same behavior worldwide without you doing anything additional.
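
A minimal sketch of that policy model: a Python dict for a two-segment core network policy whose attachment policy maps attachments into a segment by tag, pushed through the networkmanager API; the exact schema fields and values shown are illustrative and worth checking against the Cloud WAN documentation.

```python
import json
import boto3

# NetworkManager is a global service; the region below is just where the API is called
nm = boto3.client("networkmanager", region_name="us-west-2")

# Two segments, and attachments routed into a segment purely by the tag on the attachment
policy = {
    "version": "2021.12",
    "core-network-configuration": {
        "asn-ranges": ["64512-65534"],
        "edge-locations": [{"location": "us-east-1"}, {"location": "eu-west-1"}],
    },
    "segments": [
        {"name": "production", "require-attachment-acceptance": False},
        {"name": "development", "require-attachment-acceptance": False},
    ],
    "attachment-policies": [{
        "rule-number": 100,
        "conditions": [{"type": "tag-value", "key": "segment", "operator": "equals", "value": "production"}],
        "action": {"association-method": "constant", "segment": "production"},
    }],
}

global_network = nm.create_global_network(Description="example")["GlobalNetwork"]
nm.create_core_network(
    GlobalNetworkId=global_network["GlobalNetworkId"],
    PolicyDocument=json.dumps(policy),
)
```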

Thumbnail 2630

Thumbnail 2660

You can also connect in what we call tunnel-less connect. We've had a number of features in the past that were tunneled in one way or another, whether it's IPsec or GRE tunnels, in order to do dynamic routing. You no longer need to do that with Cloud WAN. You can create instances that are speaking BGP to the Cloud WAN and control the route tables that way. Again, you can use this for things like different services. I've seen customers using this for logging and other kinds of failover. The sky's kind of the limit here, but it's been a lot of interesting use cases that have come up from this.

Thumbnail 2670

Thumbnail 2700

Then the next thing that you can add into Cloud WAN is Transit Gateway. One of the biggest questions I usually get is whether Transit Gateway has to go into one segment. No, it is determined by the Transit Gateway route table, not the Transit Gateway as a whole. You put in an entry in the route table and say go to the segment. This means that you can still maintain some control at your Transit Gateways, whether you're migrating to Cloud WAN or you're adding it in. You're using Cloud WAN for global and Transit Gateway for local. Use whatever makes sense for you. This capability makes it so you don't have to worry about having to dump your entire Transit Gateway into one segment.

Thumbnail 2710

Thumbnail 2740

Thumbnail 2750

Another thing you can add to a segment is a Direct Connect Gateway. This is where your DX links come in for external connectivity, and this comes in as another attachment. I do see customers frequently creating their own segment for Direct Connect. They call it external, on-prem, or whatever works for them. That is very common. You gain BGP routing capabilities, so BGP routes get advertised via your Direct Connect links, and you can work with them inside the cloud. You can connect multiple Direct Connect Gateways, and customers do this if, for example, they want to advertise the same prefixes, or for other advanced architectures where this may apply. A more common scenario is where you have multiple Direct Connect Gateways tied into multiple segments. This happens if you have, for example, production and development on-premises environments and you want to tie them into their respective segments inside Cloud WAN.

Thumbnail 2780

Cloud WAN now supports security group referencing, which we mentioned earlier. It now works inside Cloud WAN as well. Inside Cloud WAN, you just toggle a switch that says you are allowed to do these references. You can have a security group that controls your instances in VPC A and you can reference it in what we are calling Security Group B here. My use case is microsegmentation, but it is also very helpful if you have security folks who say they do not want you opening 10/8 or giant blocks. Now you can say only the instances assigned to the security group can access this. That is very helpful from a security perspective.

Thumbnail 2810

Thumbnail 2820

Thumbnail 2850

This is the feature I am personally most excited about in terms of Cloud WAN. I came from native routing and worked with routers for a very long time, so advanced routing controls really address some of the issues customers have been raising. They say this is all great, but I need more control. I need more capabilities. I need to be able to do really fine-grained controls with this. Advanced routing controls give you a lot of features. You can do route filtering now. You can say this VPC does not need these prefixes, or you do not even want this space routed to this VPC. You can configure summarization between segments and between this and on-premises. You can configure path manipulation, and we will go through an example of that here in a bit. If you are used to your normal routers and physical devices, this is very similar to route policies or route maps or whatever your vendor calls them. You now gain a lot of those same capabilities inside Cloud WAN.

Thumbnail 2890

Thumbnail 2900

Thumbnail 2910

Thumbnail 2920

Thumbnail 2930

Thumbnail 2940

Thumbnail 2950

What does that look like? Inside the console here, and we are going through the console, but remember you can do it in JSON as well. You create a routing policy, and it will ask you a few straightforward things. Assign it a number, assign it a name, and determine whether this is going to be inbound or outbound. All fairly straightforward questions. Here we are just defining it as 100, and then once that is done it will show up inside the console and now you will be able to add rules. Inside the rules you can click create a policy rule, and this is where it gets pretty interesting with the various rule components. If you are used to route policies, you will see your normal condition and what you are doing with it. That is fair enough. So what else can we do here? There are a lot of actions you can take. We can manipulate, we can block, we can allow, we can manipulate AS paths, and we can do all the things you can see in the entire list up there. It is your normal set of things you are used to seeing from traditional routers, but now it is all being brought into the cloud. What are the conditions? Again, it is the same thing you are used to. You can match prefixes, you can match AS paths, you can match MEDs, and you can do all this sort of manipulation here. Now you have the capabilities to do so many things to determine where your primary and backup paths are going for remote regions, for example. You can say I do not want paths from this internal AS to come into Cloud WAN, or I only want these, or whatever you want to do. This feature has a lot of interest from networking engineers who have been used to that and have been asking us for more controls. Well, here you go.

Thumbnail 2990

And there is the real example. Like I mentioned, it is all in JSON. The UI will help you create it, but once it is in JSON, you can copy and paste it to make it a lot easier to make multiples of these. But remember what I said earlier. Cloud WAN gives you the ability to roll back policy versions.
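
A short boto3 sketch of that rollback flow against the networkmanager API; the core network ID and version number are placeholders, and depending on state you may still need to execute the resulting change set.

```python
import boto3

nm = boto3.client("networkmanager", region_name="us-west-2")
CORE_NETWORK_ID = "core-network-0123456789abcdef0"   # placeholder

# See which policy versions exist, then restore the one you want to go back to
versions = nm.list_core_network_policy_versions(CoreNetworkId=CORE_NETWORK_ID)
for v in versions["CoreNetworkPolicyVersions"]:
    print(v["PolicyVersionId"], v["ChangeSetState"], v.get("Description"))

nm.restore_core_network_policy_version(
    CoreNetworkId=CORE_NETWORK_ID,
    PolicyVersionId=3,   # the known-good version, placeholder
)
```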

Thumbnail 3020

Thumbnail 3040

So if you make a change you don't like, you just tell it to roll back to the other one and it changes everything back for you. You don't have to worry about trying to undo things. You just go back to where you were and everything's good, and you can try again. The other part of this is where the routing policies actually attach. Inside each attachment, you define policy tags. You also select what region it is available in, and you configure the routing label here. This label is just another kind of internal name that you're going to be associating with the network policy that you create there.

Thumbnail 3060

Thumbnail 3070

Now you can go through and apply the routing policy label onto the attachments. This is that final interconnect piece where you're tying in the policies to specific attachments. Once you apply that label, of course, we will tell you that you applied this label and here's the policy that's getting applied, just giving you that positive confirmation that it's doing what you expect it to do. So that's great, but how do I know what all Cloud WAN is seeing and getting?

Thumbnail 3090

Thumbnail 3110

By very frequent and vocal request, we are now giving you the RIB view. So you can now see all the routes that Cloud WAN is learning, where it's learning them from, all the various information, the AS path, all that other good stuff that you can now see inside Cloud WAN, including all your backups. A couple of reminders about the advanced routing controls: there are a few things where it doesn't make sense for some of these things to work on others, but generally speaking, this works pretty broadly and it gives you the capabilities that you've been looking for for some of these really fine-grained routing decisions.

Thumbnail 3150

Hybrid Connectivity: Site-to-Site VPN and AWS Interconnect Multi-Cloud

I've talked a lot about internal routing. Now let's go for a hybrid approach. We have Site-to-Site VPN and a number of very cool launches around it. The first one is high-throughput tunnels: you can now have tunnels of up to 5 gigabits per second, and you can do ECMP on Transit Gateway and Cloud WAN. The Eero partnership was also just launched, where you can use Eero devices on your premises as a VPN concentrator for a large number of small-throughput VPN tunnels. This is really important for folks using Site-to-Site VPN.
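
To ground the ECMP point, here is a hedged boto3 sketch creating a customer gateway and two BGP-based VPN connections to a Transit Gateway that has ECMP enabled; IDs, the public IP, and the ASN are placeholders, and the new high-throughput tunnel option itself is not shown because I am not certain of its API shape.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Customer gateway representing the on-premises device (public IP and ASN are placeholders)
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.12",
    BgpAsn=65010,
)["CustomerGateway"]

# Two BGP-based VPN connections to a Transit Gateway with VpnEcmpSupport enabled,
# so learned routes can be ECMP-ed across tunnels for more aggregate throughput
for _ in range(2):
    ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId=cgw["CustomerGatewayId"],
        TransitGatewayId="tgw-0123456789abcdef0",      # placeholder; ECMP enabled on the TGW
        Options={"StaticRoutesOnly": False},
    )
```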

Thumbnail 3160

Thumbnail 3170

Thumbnail 3180

Thumbnail 3190

For Direct Connect, we talked a bit at the beginning about use cases and how customers are using it to connect to their hybrid workloads. A very cool launch that happened on Sunday is AWS Interconnect Multi-Cloud. There's also another flavor of AWS Interconnect called Last Mile; it's a gated preview with Lumen, so we're really excited about that as well. AWS Interconnect Multi-Cloud launched in partnership with Google, and we will have other cloud providers coming in 2026.

Thumbnail 3210

Thumbnail 3220

The idea is to provide private connectivity between AWS and your environments in other clouds. Before, you had to go cable all those things up. You had to build your high availability and resilience using either your own router or a customer router. Now you just go through the console, and it's a matter of clicks or automation. Everything is taken care of for you under the hood. AWS and Google Cloud have cabled large amounts of capacity, and what you get is essentially an interconnect that you attach to your Google Cloud router on the Google Cloud side and to the Direct Connect gateway on the AWS side.

Thumbnail 3230

Thumbnail 3240

Thumbnail 3250

Thumbnail 3260

Thumbnail 3270

There's a lot of deep dives and reference architectures that we've gone through in terms of regional versus global level constructs with Cloud WAN. There's a dedicated session for this re:Invent event that happened this morning, so you're going to find it on YouTube in about 48 hours. The most important thing here is the collaboration between the clouds. You interact with each of the clouds. You create an interconnect. AWS and Google Cloud provision that capacity, and everything is fully managed. Then you go and you accept the interconnect using the activation key that you're provided, and that's all you need to do under the hood.

It is highly available, highly scalable, and encrypted connectivity between the two cloud providers. There's no third party in between. We're diving quite deep into this in the other session that I mentioned. Now, keep in mind that during the preview you get a 1 Gbps connection for free.

Thumbnail 3310

Thumbnail 3320

Thumbnail 3330

When it goes to general availability, you will be able to scale capacity up and down to whatever numbers you want. Other cloud providers are going to be coming in 2026, and we are working closely with Azure on that. From an advanced architecture perspective, you remember this, right? Cloud WAN, or some sort of layer 3 connectivity, with VPC Lattice, with your resources living on-premises or in another cloud, and now through Interconnect you have layer 3 connectivity.

Thumbnail 3350

Thumbnail 3370

Additional Launches and 2025 AWS Networking Feature Recap

A couple of super cool launches that we are going to breeze through: first, Global Resolver from Amazon Route 53. This is a launch that just happened, I think on Tuesday. It means you no longer have to manage inbound resolver endpoints for your workloads and DNS delegation. It gives you anycast IPs in IPv4 and IPv6 that you can configure on your workforce fleet and on your on-premises devices, and they are fully controlled, fully managed, and secure. So give it a try.

Thumbnail 3390

Thumbnail 3410

From a content delivery perspective, Amazon CloudFront now supports mutual TLS for clients, which was a highly requested feature from our customers. We have also released flat-rate pricing without overages for a set of services that customers use for content delivery, websites, and security. We have created bundles that you can find on the website, and the services in each bundle are fully managed. You get different tiers, from basic to pro, and you pay a monthly fee without overages, as I mentioned.

Thumbnail 3430

From an IP address management perspective, IPAM had a lot of cool launches this year. One of the ones that I am most excited about is public IPv4 allocation policies. It is not IPv6, but still: you can integrate with prefix lists, and you have the ability to specify which IPAM pools certain use cases should allocate their IP addresses from. You can automatically populate prefix lists, for example, to get you started with IP allow listing for your partners, and we have a blog published describing that solution, so I would highly recommend you go through it.

Thumbnail 3470

Last but not least, IPv6 adoption. I wanted to call out these super cool stats. Over 75 percent of AWS services now support IPv6, and we are on track to get to a higher percentage by the end of this year, so stay tuned. Over 100 services launched IPv6 support just this year. So if your excuse was that AWS did not support IPv6, now is the time to adopt it. Really exciting.

Thumbnail 3510

Thumbnail 3530

Thumbnail 3540

Thumbnail 3550

For those of you who know how we end this presentation, it is usually with recapping all the things that we have launched throughout the year in AWS networking. I do not know if you have looked at the session last year, but this is how we ended 2024. These were the features and launches all on a big slide that were there for you to take away. Now in 2025, we have launched over 150 features and integrations in the AWS networking space. Are you all curious how this diagram looks in December 2025? Well, I am too. This is how it looks. You have a task to spot the differences and let us know.

Just to call out a few of the ones that we have gone through today: Network Firewall Proxy, VPC encryption controls, Cloud WAN advanced routing policies, security group referencing on Cloud WAN, VPC Lattice support for custom DNS names and configurable IP addresses on the resource gateways, Route 53 Global Resolver, and bundled pricing for web and content delivery. I do not even know if I captured everything. I said 150 earlier, so there are 150 plus. Well, that was it. Thank you so much for sticking around with us. Please fill in the session survey and please meet us outside for stickers.


This article is entirely auto-generated using Amazon Bedrock.
