Cristhian Becerra

How to Implement AWS Network Firewall in a Multi-Account Architecture Using Transit Gateway

TL;DR: This article will guide you through the implementation of AWS Network Firewall in a multi-account, highly available architecture (spanning 3 Availability Zones) in conjunction with a Transit Gateway to connect VPCs across different accounts within an AWS Organizations structure. The architecture follows a Hub & Spoke pattern, where the Hub Account is the Networking account, and the Spoke accounts include Development, QA, Production, and DevOps. 🛡️


Table of Contents:

  • Introduction
  • Why Implement AWS Network Firewall
  • What You’re Going to Build
  • Prerequisites
  • Implementation Steps
  • Bonus: Network Firewall Transit Gateway Attachment
  • Cost Structure
  • Lessons Learned
  • Conclusion
  • What’s Next
  • Official Resources


Introduction

When I had to implement AWS Network Firewall for the first time at work, I’ll be honest — I was a bit nervous. I thought it would be more complex. However, the implementation turned out to be relatively straightforward if you’re comfortable with networking and routing.

Even so, my initial network configurations for deploying Network Firewall were neither the best nor the most organized, but eventually, I got there — a proper and well-structured implementation of AWS Network Firewall, aligned with AWS’s recommended best practices. 🥳

And honestly, I’d like to help others avoid the same mistakes when setting up AWS Network Firewall for the first time — especially around routing, which was the most challenging part for me — and to lose any fear of the service and of networking in general. That’s why I’m writing this article: to share how to properly configure the network and routing, or at least how I did it myself.

To give a bit of context, in a multi-account environment, AWS Network Firewall plays an important role by providing centralized network protection, enabling consistent security controls and traffic inspection across all accounts.

This implementation follows a model of centralized ingress, centralized egress, and centralized inspection, ensuring that inbound, outbound, and inter-VPC traffic is securely routed through a single inspection layer managed by the Networking account.

Why Implement AWS Network Firewall

You probably already know about several types of firewalls in AWS, such as Security Groups (SGs), Network Access Control Lists (NACLs), and the Web Application Firewall (WAF) — and you might be wondering: why do I need yet another firewall service?

Well, AWS Network Firewall is a more robust service designed for a different use case. Let’s start by clarifying that AWS WAF can only be attached to specific AWS services such as ALB, API Gateway, and CloudFront, not to arbitrary Elastic Network Interfaces (ENIs).

On the other hand, Security Groups and Network ACLs exist independently within each VPC, each with its own decentralized security rules, which can become increasingly difficult to manage over time.

Imagine you have multiple VPCs, or even several AWS accounts, and you need to apply the same stateless and stateful firewall rules across all of them. It wouldn’t make much sense to repeatedly deploy the same SGs, NACLs, and WAFs in every VPC or account, right? And even if you did, maintaining and managing those configurations would quickly become chaotic. 😵

Moreover, both SGs and NACLs have a relatively small limit on the number of rules they support, and as mentioned earlier, WAF only works with specific services. So, how do we solve this?

That’s where AWS Network Firewall comes in. This service allows you to deploy a firewall in the form of a virtual service, with its own ENIs, where you can route all inbound and outbound traffic to be inspected before reaching its final destination. This gives you all the benefits of stateless and stateful firewalls, from Layer 3 to Layer 7. 🤩

And while, as an alternative, you could deploy your own security appliance —or a third-party one— behind a Gateway Load Balancer for inspection, the difference is that, in that case, you would be fully responsible for managing it, whereas AWS Network Firewall is a fully managed AWS service that offers many valuable features.

In summary, Security Groups and NACLs may be sufficient at a small scale, with just a few VPCs, but as your environment grows and you start managing multiple VPCs and AWS accounts, you’ll want —or even need— to centralize your network inspection. And that’s exactly why you should learn how to implement AWS Network Firewall.

Here’s a comparison table between the different types of firewalls in AWS, to help you better understand the differences among them.

|  | Security Group | Network ACL | WAF | AWS Network Firewall |
| --- | --- | --- | --- | --- |
| Protection at | EC2 instance level | Subnet level | Endpoint level (ALB, CloudFront, etc.) | VPC level based on routes |
| Stateful or Stateless | Stateful | Stateless | Stateless | Both |
| OSI layer | Layer 3/4 | Layer 3/4 | Layer 7 | Layer 3–7 |
| Features | IP, Port, Protocol filtering | IP, Port, Protocol filtering | Application layer filtering | Stateless/ACL L3 rules, stateful/L4 rules, IPS–IDS/L7 rules, FQDN filtering, Protocol detection, Large IP lists |
| Flows | All ingress/egress flows at instance level | All ingress/egress flows at subnet level | Ingress only from internet to API Gateway, ALB, CloudFront | All ingress/egress flows at perimeter of VPC (e.g., IGW, VGW, DX, VPN, VPC–VPC) |

What You’re Going to Build

As I mentioned earlier, the implementation is not particularly complex, but it does require a solid foundation in networking and routing. When this guide says “Create a Transit Gateway,” it assumes you already know how to do that. And while this guide does not include ready-to-deploy Infrastructure as Code, it does provide a detailed step-by-step walkthrough. Most importantly, it covers the routing configuration, which, in my experience, is the most challenging part. 😉

This implementation consists of five components you’ll be building:

  • AWS Transit Gateway shared through AWS RAM and connected to all VPCs.
  • Ingress VPC with an Internet Gateway for centralized inbound connectivity.
  • Inspection VPC with AWS Network Firewall deployed in high availability, using the VPC or TGW attachment type for centralized inspection.
  • Egress VPC with NAT Gateways in high availability for centralized outbound connectivity.
  • Workload VPCs for Development, QA, Production, and DevOps environments.

Don’t worry if you can’t fully visualize the solution yet. In the implementation steps section, you’ll find architecture diagrams for each of these five components, as well as the final diagram of the complete solution.

Prerequisites

For this implementation, it’s recommended to have multiple AWS accounts and the appropriate permissions in each of them.

If you don’t yet have multiple accounts, you can create them using AWS Organizations and access them through IAM Identity Center instead of individual IAM users, if you prefer.

If you don’t have access to multiple AWS accounts, you can deploy all the VPCs within a single account; in that case, you should simply skip the step of sharing the Transit Gateway through AWS RAM.

You’ll need permissions for Amazon VPC, AWS Transit Gateway, AWS RAM, AWS Network Firewall, and optionally CloudWatch Logs, EC2, and Session Manager, among others, to perform the actions described in this guide.

I’ll be deploying in the N. Virginia (us-east-1) region, and the following diagram shows what my multi-account structure looks like at the beginning: one account dedicated to network resources (Hub) and four accounts for workloads (Spoke).

AWS Organization Accounts


Implementation Steps

The following sections provide a step-by-step walkthrough of the network setup process, showing how each component, including the Transit Gateway, shared VPCs, workload VPCs, and AWS Network Firewall, integrates to create a secure and scalable foundation for inter-account connectivity and inspection.

Set Up the Transit Gateway

  1. Create a Transit Gateway with default settings.

  2. Create two transit gateway route tables:

    • tgw-default-association-rtb
    • tgw-default-propagation-rtb
  3. Modify the Transit Gateway and assign:

    • tgw-default-association-rtb as the Default Association Route Table
    • tgw-default-propagation-rtb as the Default Propagation Route Table
  4. In AWS RAM, create a Resource Share with:

    • The newly created Transit Gateway as the Shared resource
    • The Spoke accounts as the Shared principals
  5. (Optional) Create a TGW Flow Log for the Transit Gateway to monitor and log the traffic flowing through it.

Transit Gateway

Notes for Subnet Creation and Availability Zone Selection:

  • Connectivity Subnet Creation: When creating a VPC that will attach to the Transit Gateway (TGW), it is recommended to create a dedicated /28 subnet per AZ exclusively to host the Elastic Network Interfaces (ENIs) that the TGW deploys. This is the AWS-recommended practice: it keeps the TGW ENIs isolated from your workload subnets, so their route tables can be managed independently.

  • AZ Selection: Choose Availability Zones by AZ ID rather than by AZ name to avoid charges for inter-zone traffic. In this case, we are using use1-az1, use1-az2, and use1-az4. Keep in mind that names like us-east-1a and us-east-1b are account-specific aliases: the same name can map to a different physical AZ in each AWS account. To ensure that resources in different accounts land in the same physical AZ, always use AZ IDs instead of names.

  • Network Firewall Support: AWS Network Firewall is not supported in use1-az3, which is why we chose use1-az4 instead.

  • AZ Identification: It is recommended to add the AZ ID in the name tag of the subnet and associated route table, and any associated network appliances for easy identification and management.
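To make the subnet layouts in the following sections concrete, here is a small sketch using Python’s `ipaddress` module that carves a VPC CIDR into the large per-AZ subnets and the /28 TGW connectivity subnets. The subnet sizes are the example values used in this guide, not a requirement:

```python
import ipaddress

def plan_subnets(vpc_cidr, big_prefix=22, az_count=3):
    """Carve az_count large subnets from the start of the VPC range and
    az_count /28 TGW-connectivity subnets from the end of the range."""
    vpc = ipaddress.ip_network(vpc_cidr)
    big = [str(s) for s in list(vpc.subnets(new_prefix=big_prefix))[:az_count]]
    tgw = [str(s) for s in list(vpc.subnets(new_prefix=28))[-az_count:]]
    return big, tgw

public, tgw = plan_subnets("10.20.0.0/20")  # the Ingress VPC from this guide
print(public)  # ['10.20.0.0/22', '10.20.4.0/22', '10.20.8.0/22']
print(tgw)     # ['10.20.15.208/28', '10.20.15.224/28', '10.20.15.240/28']
```

Note that the three /28s are taken from the tail of the range, which here falls inside the fourth (unused) /22, so they never overlap the three large subnets actually created.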


Build the Ingress VPC

  1. Create a VPC (e.g., named ingress VPC, CIDR 10.20.0.0/20) with:

    • 3 public subnets (e.g., /22)
    • 3 private subnets for TGW connectivity (/28)
    • 1 Internet Gateway
  2. Create a Transit Gateway attachment between the Transit Gateway and the Ingress VPC, selecting the Subnet IDs of the 3 TGW connectivity subnets.

  3. Configure the TGW Connectivity Subnet Route Table (this route table will only have a local route):

    • ingress-vpc-tgw-connectivity-rtb

      | Destination | Target | Description |
      | --- | --- | --- |
      | 10.20.0.0/20 | local | Ingress VPC |
  4. Configure the Public Subnet Route Table (this route table needs a 0.0.0.0/0 route to the Internet Gateway and routes to the CIDRs of the Spoke VPCs to go through the TGW attachment):

    • ingress-vpc-public-rtb

      | Destination | Target | Description |
      | --- | --- | --- |
      | 10.20.0.0/20 | local | Ingress VPC |
      | 10.21.0.0/16 | tgw | Development VPC |
      | 10.22.0.0/16 | tgw | QA VPC |
      | 10.23.0.0/16 | tgw | Production VPC |
      | 10.24.0.0/16 | tgw | DevOps VPC |
      | 0.0.0.0/0 | igw | Internet access |
  5. Optional: Create a VPC Flow Log for the VPC to monitor and log the traffic flowing through the Ingress VPC.

Ingress VPC


Build the Inspection VPC

  1. Create a VPC (e.g., named inspection VPC, CIDR 10.20.16.0/20) with:

    • 3 private subnets for Network Firewall (e.g., /22)
    • 3 private subnets for TGW connectivity (/28)
  2. Create a Transit Gateway attachment between the Transit Gateway and the Inspection VPC, selecting the Subnet IDs of the 3 TGW connectivity subnets. Enable Appliance Mode Support.

    • Note: Enabling Appliance Mode on the Transit Gateway attachment ensures that both directions of a flow are routed through the same firewall endpoint in the same Availability Zone. Without it, return traffic can arrive in a different AZ (asymmetric routing) and bypass the firewall’s stateful inspection. This setting is important for consistent traffic inspection in multi-AZ architectures.
  3. Manually edit the Transit Gateway Route Tables:

    • Remove the propagation of the Inspection VPC attachment in the TGW Default Propagation Route Table.
    • Remove the association of the Inspection VPC attachment in the TGW Default Association Route Table.
    • Associate the TGW Default Propagation Route Table with the Inspection VPC Attachment.
    • In the TGW Default Association Route Table, create a static route to 0.0.0.0/0 towards the Inspection VPC Attachment.
  4. In the AWS Network Firewall console:

    • Choose Create firewall.
    • In Describe firewall: Add a name and (optionally) a description.
    • In Attachment Type: Choose VPC and select the Inspection VPC.
    • In Firewall subnets: Select the 3 private subnets (not the connectivity subnets).
    • In Associate Firewall Policy: Choose Create and associate an empty firewall policy and specify a new firewall policy name.
    • Create the firewall; this will take a few minutes.
  5. Configure the TGW Connectivity Subnet Route Tables (one per AZ). In each route table, add a route to its corresponding AWS Network Firewall VPC endpoint (of type Gateway Load Balancer Endpoint) per AZ:

    • inspection-vpc-tgw-connectivity-rtb-use1-az1

      | Destination | Target | Description |
      | --- | --- | --- |
      | 10.20.16.0/20 | local | Inspection VPC |
      | 0.0.0.0/0 | nfw-use1-az1 | Firewall endpoint for use1-az1 |
    • inspection-vpc-tgw-connectivity-rtb-use1-az2

      | Destination | Target | Description |
      | --- | --- | --- |
      | 10.20.16.0/20 | local | Inspection VPC |
      | 0.0.0.0/0 | nfw-use1-az2 | Firewall endpoint for use1-az2 |
    • inspection-vpc-tgw-connectivity-rtb-use1-az4

      | Destination | Target | Description |
      | --- | --- | --- |
      | 10.20.16.0/20 | local | Inspection VPC |
      | 0.0.0.0/0 | nfw-use1-az4 | Firewall endpoint for use1-az4 |
  6. Configure the Private Subnet Route Table (this route table needs a 0.0.0.0/0 route to the Transit Gateway attachment):

    • inspection-vpc-private-rtb

      | Destination | Target | Description |
      | --- | --- | --- |
      | 10.20.16.0/20 | local | Inspection VPC |
      | 0.0.0.0/0 | tgw | Transit Gateway |
  7. Optional: Create a VPC Flow Log for the VPC to monitor and log the traffic flowing through the Inspection VPC.
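The per-AZ routing in step 5 is the part that most often trips people up, so here is a small sketch that generates those three route tables from an AZ-to-endpoint map. The `vpce-…` IDs are hypothetical placeholders; in a real deployment you would look up the firewall endpoint that Network Firewall created in each AZ:

```python
def inspection_route_tables(vpc_cidr, firewall_endpoints):
    """Build one TGW-connectivity route table per AZ: the local VPC route
    plus a default route to that AZ's firewall endpoint, so traffic
    entering the VPC in an AZ is inspected by the firewall in that AZ."""
    return {
        f"inspection-vpc-tgw-connectivity-rtb-{az}": [
            (vpc_cidr, "local"),
            ("0.0.0.0/0", endpoint),
        ]
        for az, endpoint in firewall_endpoints.items()
    }

# Hypothetical endpoint IDs, for illustration only.
tables = inspection_route_tables("10.20.16.0/20", {
    "use1-az1": "vpce-0aaa",
    "use1-az2": "vpce-0bbb",
    "use1-az4": "vpce-0ccc",
})
```

The point of the per-AZ split is that a single shared route table could only hold one 0.0.0.0/0 route, which would funnel all AZs through one firewall endpoint and generate cross-AZ traffic.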

Inspection VPC

After having implemented the Inspection VPC multiple times, AWS finally released a new feature for AWS Network Firewall: the Transit Gateway Firewall Attachment. With this feature, the firewall connects directly to the Transit Gateway, removing the need for an Inspection VPC. Interested? Check out the Bonus section below. 😀


Build the Egress VPC

  1. Create a VPC (e.g., named egress VPC, CIDR 10.20.32.0/20) with:

    • 3 public subnets (e.g., /22)
    • 3 private subnets for TGW connectivity (/28)
    • 1 Internet Gateway
    • 1 NAT Gateway per AZ for high availability
  2. Create a Transit Gateway attachment between the Transit Gateway and the Egress VPC, selecting the Subnet IDs of the 3 TGW connectivity subnets.

  3. Manually edit the Transit Gateway Route Tables:

    • In the TGW Default Propagation Route Table, create a static route to 0.0.0.0/0 towards the Egress VPC Attachment.
  4. Configure the TGW Connectivity Subnet Route Tables (create one route table per AZ, and in each table add a 0.0.0.0/0 route to its corresponding NAT Gateway):

    • egress-vpc-tgw-connectivity-rtb-use1-az1

      | Destination | Target | Description |
      | --- | --- | --- |
      | 10.20.32.0/20 | local | Egress VPC |
      | 0.0.0.0/0 | natgw-use1-az1 | NAT for use1-az1 |
    • egress-vpc-tgw-connectivity-rtb-use1-az2

      | Destination | Target | Description |
      | --- | --- | --- |
      | 10.20.32.0/20 | local | Egress VPC |
      | 0.0.0.0/0 | natgw-use1-az2 | NAT for use1-az2 |
    • egress-vpc-tgw-connectivity-rtb-use1-az4

      | Destination | Target | Description |
      | --- | --- | --- |
      | 10.20.32.0/20 | local | Egress VPC |
      | 0.0.0.0/0 | natgw-use1-az4 | NAT for use1-az4 |
  5. Configure the Public Subnet Route Table (this route table needs a 0.0.0.0/0 route to the Internet Gateway and routes to the CIDRs of the Spoke VPCs to go through the TGW attachment):

    • egress-vpc-public-rtb

      | Destination | Target | Description |
      | --- | --- | --- |
      | 10.20.32.0/20 | local | Egress VPC |
      | 10.21.0.0/16 | tgw | Development VPC |
      | 10.22.0.0/16 | tgw | QA VPC |
      | 10.23.0.0/16 | tgw | Production VPC |
      | 10.24.0.0/16 | tgw | DevOps VPC |
      | 0.0.0.0/0 | igw | Internet access |
  6. Optional: Create a VPC Flow Log for the VPC to monitor and log the traffic flowing through the Egress VPC.

Egress VPC


Build the Workload VPCs

  1. Create a VPC (e.g., named development VPC, CIDR 10.21.0.0/16) with:

    • 3 private subnets for workloads (e.g., /20)
    • 3 private subnets for TGW connectivity (/28)
  2. Create a Transit Gateway attachment between the Transit Gateway and the VPC, selecting the Subnet IDs of the 3 TGW connectivity subnets.

  3. Configure the TGW Connectivity Subnet Route Table — this route table will only have a local route:

    • development-vpc-tgw-connectivity-rtb

      | Destination | Target | Description |
      | --- | --- | --- |
      | 10.21.0.0/16 | local | Development VPC |
  4. Configure the Private Subnet Route Table — add a 0.0.0.0/0 route to the Transit Gateway attachment:

    • development-vpc-private-rtb

      | Destination | Target | Description |
      | --- | --- | --- |
      | 10.21.0.0/16 | local | Development VPC |
      | 0.0.0.0/0 | tgw | Transit Gateway |
  5. Optional: Create a VPC Flow Log for the VPC to monitor and log the traffic flowing through the Workload VPC.

Workload VPC

This is the diagram corresponding to the Development VPC, but as I mentioned earlier, I have four accounts dedicated to workloads. Therefore, I repeated this same step for the QA VPC (CIDR 10.22.0.0/16), the Production VPC (CIDR 10.23.0.0/16), and the DevOps VPC (CIDR 10.24.0.0/16).
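With seven VPCs spread across five accounts, it is easy to introduce an overlapping CIDR, which would silently break TGW routing. A quick sanity check over the address plan used in this guide:

```python
import ipaddress
from itertools import combinations

VPCS = {
    "ingress":     "10.20.0.0/20",
    "inspection":  "10.20.16.0/20",
    "egress":      "10.20.32.0/20",
    "development": "10.21.0.0/16",
    "qa":          "10.22.0.0/16",
    "production":  "10.23.0.0/16",
    "devops":      "10.24.0.0/16",
}

def overlapping_pairs(vpcs):
    """Return every pair of VPCs whose CIDR blocks overlap."""
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in vpcs.items()}
    return [(a, b) for a, b in combinations(nets, 2)
            if nets[a].overlaps(nets[b])]

print(overlapping_pairs(VPCS))  # [] -> all seven CIDRs are disjoint
```

Running this whenever you add a VPC to the plan is a cheap guard against propagating a conflicting route into the TGW route tables.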


Configure the AWS Network Firewall

This article primarily focuses on configuring the base network architecture and the routing required to use Network Firewall. But at a high level, AWS Network Firewall is a managed service that allows you to protect your network by filtering traffic to and from your VPCs. It provides a stateful firewall and intrusion detection and prevention system that can be deployed inline with your network traffic, inspecting and filtering packets in real time.

  1. In the Firewall, enable Alert and Flow logging, and configure CloudWatch as the log destination by creating the necessary log groups.
  2. Optional: Edit the previously created empty Firewall Policy to:
    • Add AWS Managed rule groups
    • Create and add Stateless rule groups
    • Create and add Stateful rule groups

Review the Final Architecture

The deployed architecture follows a centralized inspection model using AWS Transit Gateway and AWS Network Firewall to control and monitor traffic between shared and workload VPCs.

  1. The Transit Gateway Default Association Route Table should look like this:

    • tgw-default-association-rtb

      | CIDR | Attachment | Resource Type | Route Type |
      | --- | --- | --- | --- |
      | 0.0.0.0/0 | Inspection VPC | VPC | Static |
  2. The Transit Gateway Default Propagation Route Table should look like this:

    • tgw-default-propagation-rtb

      | CIDR | Attachment | Resource Type | Route Type |
      | --- | --- | --- | --- |
      | 10.21.0.0/16 | Development VPC | VPC | Propagated |
      | 10.22.0.0/16 | QA VPC | VPC | Propagated |
      | 10.23.0.0/16 | Production VPC | VPC | Propagated |
      | 10.24.0.0/16 | DevOps VPC | VPC | Propagated |
      | 10.20.0.0/20 | Ingress VPC | VPC | Propagated |
      | 10.20.32.0/20 | Egress VPC | VPC | Propagated |
      | 0.0.0.0/0 | Egress VPC | VPC | Static |
  3. If the route tables do not appear as expected, detach and remove any incorrect associations or propagations, then re-associate and re-propagate the attachments with the appropriate route tables until the tables reflect the correct configuration.
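To convince yourself that these two tables produce the intended inspection flow, you can simulate the Transit Gateway’s longest-prefix-match lookups. Spoke attachments consult the default association table (everything goes to the firewall first), while the Inspection VPC attachment consults the propagation table (the post-inspection hop). The attachment names below are illustrative labels, not real attachment IDs:

```python
import ipaddress

def lookup(route_table, dst_ip):
    """Longest-prefix match over a list of (cidr, target) routes."""
    ip = ipaddress.ip_address(dst_ip)
    matches = [(ipaddress.ip_network(cidr), target)
               for cidr, target in route_table
               if ip in ipaddress.ip_network(cidr)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# Spoke attachments use the default association table:
# everything is sent to the Inspection VPC first.
association = [("0.0.0.0/0", "inspection-vpc-attachment")]

# The Inspection VPC attachment uses the propagation table
# for the post-inspection hop.
propagation = [
    ("10.21.0.0/16", "development-vpc-attachment"),
    ("10.22.0.0/16", "qa-vpc-attachment"),
    ("10.23.0.0/16", "production-vpc-attachment"),
    ("10.24.0.0/16", "devops-vpc-attachment"),
    ("10.20.0.0/20", "ingress-vpc-attachment"),
    ("10.20.32.0/20", "egress-vpc-attachment"),
    ("0.0.0.0/0", "egress-vpc-attachment"),  # static default to the NAT layer
]

# Development -> QA: inspected first, then delivered to QA.
print(lookup(association, "10.22.0.5"))  # inspection-vpc-attachment
print(lookup(propagation, "10.22.0.5"))  # qa-vpc-attachment
# Development -> internet: inspected, then out via the Egress VPC.
print(lookup(propagation, "8.8.8.8"))    # egress-vpc-attachment
```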

The following architecture diagram illustrates the final multi-account setup, where traffic from the spoke accounts (Development, QA, Production, and DevOps) is routed through the centralized inspection layer in the Networking account.

Deployed Architecture

Is the diagram not displaying in high resolution? No worries — you can download the architecture diagram in PDF format to view it in more detail. 🔍


Validate Connectivity and Traffic Inspection

  1. Optionally, deploy EC2 instances in the Workload VPCs in private workload subnets across different accounts.

    • Connect using Session Manager and try to ping between the private IPs of the instances.
    • Ensure the security groups allow inbound ICMP and outbound HTTPS (443) to the internet (for Session Manager).
    • The pings should be successful.
  2. Optionally, deploy a client VPN such as OpenVPN or WireGuard on an EC2 instance in a public subnet in the Ingress VPC.

    • Connect through the VPN and try to ping the private IPs of the EC2 instances in the Workload VPCs.
    • Ensure the security groups allow inbound ICMP.
    • The pings should be successful.
  3. Check the CloudWatch Log Groups to verify the Flow type Firewall logs. You should see that all test traffic is successfully passing through the Network Firewall.

    • The Alert type Log Group should be empty for now, since the Firewall Policy has not been configured yet or the test traffic does not match any alert rules.

Bonus: Network Firewall Transit Gateway Attachment

During re:Inforce 2025, AWS announced the new Network Firewall Transit Gateway Attachment feature, which replaces the traditional “Inspection VPC” pattern, simplifying deployment and improving traffic visibility.

Instead of the “Build the Inspection VPC” step above, you would create a Network Firewall Transit Gateway Attachment, as follows.

  1. In the AWS Network Firewall console:

    1. Select Create firewall.
    2. In Describe firewall: Add a Firewall name and description (optional).
    3. In Attachment Type: Choose Transit Gateway and select the Transit Gateway.
    4. In Availability Zones: Select the 3 Availability Zones you have been working with.
    5. In Associate Firewall Policy: Select Create and associate an empty firewall policy and specify a new firewall policy name.
    6. Create the Firewall; it will take a few minutes.
  2. Manually edit the Transit Gateway Route Tables:

    1. Remove the propagation of the Network Firewall TGW attachment in the TGW Default Propagation Route Table.
    2. Remove the association of the Network Firewall TGW attachment in the TGW Default Association Route Table.
    3. Associate the TGW Default Propagation Route Table with the Network Firewall TGW Attachment.
    4. In the TGW Default Association Route Table, create a static route to 0.0.0.0/0 towards the new Network Firewall TGW Attachment.
  3. The Transit Gateway Default Association Route Table will look as follows:

    • tgw-default-association-rtb

      | CIDR | Attachment | Resource Type | Route Type |
      | --- | --- | --- | --- |
      | 0.0.0.0/0 | AWS Network Firewall | Firewall | Static |
  4. The Transit Gateway Default Propagation Route Table will remain unchanged in this scenario.

Network Firewall Transit Gateway Attachment


Cost Structure

The costs to consider in this implementation are as follows:

  • AWS Transit Gateway (TGW)
    • Hourly cost per VPC attachment: ~ USD 0.05 per hour for each attachment.
    • Data processing cost: ~ USD 0.02 per GB processed.
  • AWS Network Firewall (NFW)
    • Hourly cost per endpoint (active): ~ USD 0.395 per hour in the U.S.
    • Data processing cost: ~ USD 0.065 per GB of inspected traffic.
  • NAT Gateway (NATGW)
    • Hourly cost: ~ USD 0.045 per NAT Gateway
    • Data processing cost: ~ USD 0.045 per GB of data passing through the NAT Gateway
  • Data Transfer (egress / inter-AZ / inter-region, etc.)
    • Internet egress: standard egress rates (~ USD 0.09/GB for the first volumes, etc.)
    • Cross-Availability Zone transfer (cross-AZ): ~ USD 0.01/GB
    • Other transfer charges apply depending on source and destination.

It’s worth mentioning that the prices listed correspond to October 2025. 💲

Monthly Cost Estimate

Based on my implementation and assuming minimal traffic, the cost structure would include the following components:

  • 7 Transit Gateway attachments
  • 3 NAT Gateways for high availability
  • 3 Network Firewall endpoints for high availability
  • Modest inspection and outbound traffic, say 500 GB/month (as a starting point)

Calculations:

  • Transit Gateway — attachments
    • 7 attachments × USD 0.05/hour = USD 0.35/hour
    • Monthly hours (~ 24 × 30 = 720 h) → 0.35 × 720 = USD 252/month
  • Transit Gateway — data processing
    • Assuming all 500 GB go through TGW: 500 × 0.02 = USD 10/month
  • Network Firewall — endpoints
    • 3 endpoints × USD 0.395/hour = USD 1.185/hour
    • 1.185 × 720 hours = USD 853.20/month
  • Network Firewall — data processing
    • 500 GB × USD 0.065 = USD 32.50/month
  • NAT Gateways — hourly
    • 3 NATGWs × USD 0.045/hour = USD 0.135/hour
    • 0.135 × 720 = USD 97.20/month
  • NAT Gateways — data processing
    • Assuming 200 GB out of the 500 GB go to the Internet
    • 200 GB × USD 0.045 = USD 9.00/month
  • Data transfer (egress, etc.)
    • Assuming 200 GB go to the Internet
    • 200 × USD 0.09 = USD 18.00/month
    • Also, assume 100 GB cross-AZ → 100 × USD 0.01 = USD 1.00/month

Estimated Total:

| Component | Estimated Monthly Cost (USD) |
| --- | --- |
| TGW — attachments | 252.00 |
| TGW — data processing | 10.00 |
| NFW — endpoints | 853.20 |
| NFW — data processing | 32.50 |
| NAT — hourly | 97.20 |
| NAT — data processing | 9.00 |
| Data transfer (egress + cross-AZ) | 19.00 |
| Estimated Total | ~ USD 1,272.90 |
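The estimate above can be reproduced with a few lines of Python, which also makes it easy to plug in your own traffic volumes or AZ count. Prices are the October 2025 figures quoted earlier:

```python
HOURS = 24 * 30  # ~720 billable hours per month

def monthly_cost(attachments=7, nat_gws=3, nfw_endpoints=3,
                 tgw_gb=500, inspected_gb=500, internet_gb=200, cross_az_gb=100):
    """Estimate the monthly cost (USD) of this architecture using
    October 2025 us-east-1 list prices."""
    cost = {
        "tgw_attachments": attachments * 0.05 * HOURS,
        "tgw_data":        tgw_gb * 0.02,
        "nfw_endpoints":   nfw_endpoints * 0.395 * HOURS,
        "nfw_data":        inspected_gb * 0.065,
        "nat_hourly":      nat_gws * 0.045 * HOURS,
        "nat_data":        internet_gb * 0.045,
        "data_transfer":   internet_gb * 0.09 + cross_az_gb * 0.01,
    }
    cost["total"] = sum(cost.values())
    return cost

estimate = monthly_cost()
print(round(estimate["total"], 2))  # 1272.9
```

For example, dropping from 3 AZs to 2 (`nfw_endpoints=2, nat_gws=2`) cuts the two largest line items, the firewall endpoints and NAT Gateways, by a third.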

Lessons Learned

You probably just saw the cost estimate and got a little scared, right? Don’t worry: a large share of it is the price of high availability. In my example, I used 3 AZs, but your use case might not require that much availability for AWS Network Firewall, and you could deploy it in just 1 or 2 AZs, since an AZ failure is quite unlikely anyway. 😎

If you decide to reduce the number of AZs, you should keep two important considerations in mind. The first is the Appliance Mode, mentioned earlier, which becomes especially relevant to avoid asymmetric traffic if resources are not evenly distributed across all AZs. Additionally, you should clearly understand the concepts of Fail Open and Fail Closed, which basically define the default action if all AZs where Network Firewall is deployed were to fail (which is unlikely, but could happen if you use only one AZ). Fail Open means that if Network Firewall fails, all uninspected traffic will pass freely, whereas Fail Closed means that if it fails, no traffic will pass. It’s important to evaluate the trade-off between security and availability in these scenarios and make the right decision.

On the other hand, another way to control Network Firewall costs is to enable only the necessary rules. It might be tempting to add all AWS Managed Rule Groups, but remember that each one has a cost, consumes capacity units, and not all may actually apply to your use case.

Keep in mind that the most important part of implementing AWS Network Firewall — besides blocking traffic — is that it provides actionable information and insights. In other words, the findings, metrics, and logs shouldn’t remain unused. Integrate Network Firewall with AWS Security Hub to gain high-value insights and perform remediations, or even automate them through an event-driven approach using Amazon EventBridge, AWS Lambda, or other services.

Conclusion

In conclusion, implementing AWS Network Firewall in a multi-account architecture with Transit Gateway provides a robust, scalable, and secure network infrastructure. By following the steps outlined in this article, you can effectively deploy and configure Network Firewall to protect your VPCs across different accounts within an AWS Organizations structure. The Hub & Spoke pattern, combined with the high availability of 3 Availability Zones, ensures that your network is resilient and capable of handling various traffic patterns and security requirements. Additionally, the use of VPC Flow Logs and Network Firewall logging allows for detailed monitoring and analysis of network traffic, enabling you to identify and respond to potential security threats promptly.

What’s Next

In the next section, you’ll find official resources and documentation for the services mentioned, along with some interesting points related to the topics covered — in case you want to keep learning or dive deeper to evaluate whether they truly apply to your use case.

I also invite you to try this implementation in your own AWS account. Remember that if you don’t have multiple accounts, you can deploy all the VPCs within a single account and still experiment with AWS Network Firewall. Let me know in the comments what you thought of this guide or if you discovered something interesting during your implementation. ✍🏻

Although I hadn’t mentioned it before, I implemented this entire multi-account architecture using Terraform. However, the infrastructure-as-code is currently highly customized to the specific needs of the client for whom it was deployed. Later on, I’ll be modularizing and generalizing this code for different scenarios, with the goal of being able to share it with the community.

Before I go, I want to highlight the great potential of this centralized architecture based on AWS Transit Gateway. Building upon this design, we could extend it by adding a VPN connection to an on-premises corporate data center; deploying an application in a workload VPC on an EC2 instance internally exposed through a Network Load Balancer (NLB); and placing a public Application Load Balancer (ALB) in the Ingress VPC, which points to the private IP addresses of the NLB, allowing clients to access the service through the ALB’s public DNS. The following diagram illustrates, in a simple and summarized way, this possible design extension.

Transit Gateway design extension

Official Resources
