<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: friday963</title>
    <description>The latest articles on DEV Community by friday963 (@friday963).</description>
    <link>https://dev.to/friday963</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1078574%2F0c2ad06e-9dbd-419e-9e1f-e97818d88434.jpeg</url>
      <title>DEV Community: friday963</title>
      <link>https://dev.to/friday963</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/friday963"/>
    <language>en</language>
    <item>
      <title>AWS Security Groups for Network Engineers</title>
      <dc:creator>friday963</dc:creator>
      <pubDate>Fri, 31 May 2024 01:41:24 +0000</pubDate>
      <link>https://dev.to/friday963/aws-security-groups-for-network-engineers-ja7</link>
      <guid>https://dev.to/friday963/aws-security-groups-for-network-engineers-ja7</guid>
      <description>&lt;h2&gt;
  
  
  Hello and welcome, Network Engineers!
&lt;/h2&gt;

&lt;p&gt;In this blog post, I hope to explain the basic functionality of AWS's security groups, how they are applied in production, and draw some comparisons to networking features that we, as network engineers, already understand.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Security Groups (SGs) in AWS?
&lt;/h2&gt;

&lt;p&gt;SGs are like a firewall or an ACL applied directly to a network interface. In AWS, these network interfaces are called "ENIs," so I'll refer to them by that name going forward. Like most network filtering constructs, SGs let you define which traffic is allowed on ingress and egress, with an implicit deny for anything not explicitly allowed. They are also stateful, which frees us from having to explicitly allow return traffic on ephemeral ports when a client initiates a conversation on a port we've allowed on ingress. In simpler terms, if we've allowed a client to initiate a conversation on port 443, the server's responses back to the client's ephemeral port are permitted automatically; we do not need to configure that behavior.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where can we use SGs?
&lt;/h2&gt;

&lt;p&gt;In AWS, many services place an ENI directly in our VPC (Virtual Private Cloud). To name just a few whose purpose you can likely infer from their names:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon EC2 (Elastic Compute Cloud) Instances&lt;/li&gt;
&lt;li&gt;Amazon RDS (Relational Database Service) Instances&lt;/li&gt;
&lt;li&gt;Amazon Elastic Load Balancers (ELB)&lt;/li&gt;
&lt;li&gt;Amazon Elastic File System (EFS)&lt;/li&gt;
&lt;li&gt;Amazon EKS (Elastic Kubernetes Service)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are many more, but these are just a few services that ultimately end up creating a network interface in a VPC that you, as a consumer, will directly interact with.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do SGs work?
&lt;/h2&gt;

&lt;p&gt;SGs look at the source, destination, protocol, and port of traffic to determine whether to allow it. One point worth noting: unlike a numbered ACL, SG rules have no ordering. Every rule in every SG attached to an ENI is evaluated, SGs contain only allow rules, and anything that doesn't match an allow rule is implicitly denied. Beyond that, SGs operate like most rules-based traffic filtering mechanisms.&lt;/p&gt;

&lt;p&gt;One caveat worth mentioning, because it's unique to SGs, is that an SG can act as the source or destination in another SG's rules. This cool feature means you can do more "intent"-based traffic filtering and stop filtering solely on IP. Take, for example, a fleet of EC2 (compute) instances acting as web servers, all using the same SG with a rule that allows ingress on port 443 (let's call the security group "webServerSecurityGroup-123"). Let's also say there is a group of database servers that should only ever allow the web servers to make SQL queries against them (let's call the database security group "databaseSecurityGroup-456"). Instead of creating rules with explicit IPs defined, we can simply reference the SGs themselves in our source and destination declarations.&lt;/p&gt;

&lt;p&gt;In the ASCII table below, notice the web server security group allows inbound traffic on port 443. On the outbound side, the only conversation the web server can INITIATE is to hosts with the database security group applied, and only on port 3306. The database security group allows traffic from a single source on port 3306: any host with the web server security group applied to it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+----------------------------------------------------------------------------+
| webServerSecurityGroup-123                                                 |
|----------------------------------------------------------------------------|
| Inbound Rules                                                              |
|----------------------------------------------------------------------------|
| Type          | Protocol | Port Range | Source                             |
|----------------------------------------------------------------------------|
| HTTPS (443)   | TCP      | 443        | 0.0.0.0/0                          |
|----------------------------------------------------------------------------|
| Outbound Rules                                                             |
|----------------------------------------------------------------------------|
| Type          | Protocol | Port Range | Destination                        |
|----------------------------------------------------------------------------|
| MySQL/Aurora  | TCP      | 3306       | databaseSecurityGroup-456          |
+----------------------------------------------------------------------------+

+----------------------------------------------------------------------------+
| databaseSecurityGroup-456                                                  |
|----------------------------------------------------------------------------|
| Inbound Rules                                                              |
|----------------------------------------------------------------------------|
| Type          | Protocol | Port Range | Source                             |
|----------------------------------------------------------------------------|
| MySQL/Aurora  | TCP      | 3306       | webServerSecurityGroup-123         |
+----------------------------------------------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
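&lt;p&gt;As a rough illustration, the table above could be expressed in Terraform using separate rule resources, with one SG referenced as the source or destination of another. This is a hypothetical sketch, not code from my repo; the resource names and the referenced VPC are made up.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative sketch only; names and the referenced VPC are hypothetical.
resource "aws_security_group" "web" {
  name   = "webServerSecurityGroup-123"
  vpc_id = aws_vpc.main.id
}

resource "aws_security_group" "database" {
  name   = "databaseSecurityGroup-456"
  vpc_id = aws_vpc.main.id
}

# Inbound 443 to the web tier from anywhere.
resource "aws_security_group_rule" "web_https_in" {
  type              = "ingress"
  security_group_id = aws_security_group.web.id
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Outbound MySQL from the web tier, destined only to members of the database SG.
resource "aws_security_group_rule" "web_mysql_out" {
  type                     = "egress"
  security_group_id        = aws_security_group.web.id
  from_port                = 3306
  to_port                  = 3306
  protocol                 = "tcp"
  source_security_group_id = aws_security_group.database.id
}

# Inbound MySQL to the database tier, only from members of the web SG.
resource "aws_security_group_rule" "db_mysql_in" {
  type                     = "ingress"
  security_group_id        = aws_security_group.database.id
  from_port                = 3306
  to_port                  = 3306
  protocol                 = "tcp"
  source_security_group_id = aws_security_group.web.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Defining the rules as standalone resources (rather than inline on each SG) avoids a circular dependency between the two groups, since each references the other.&lt;/p&gt;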



&lt;h2&gt;
  
  
  Drawing comparisons between networking ACLs and SGs
&lt;/h2&gt;

&lt;p&gt;As network engineers, more likely than not, you've dealt with ACLs. I'm going to draw some comparisons and point out where the two differ, because they do differ; if you still aren't clear on how SGs work, I hope this drives the point home.&lt;/p&gt;

&lt;p&gt;As I stated earlier, SGs are applied to network interfaces; similarly, traditional ACLs can be applied to a physical or virtual interface to protect or filter traffic. In both cases, each entry describes a matching source/destination/port and the action to take, though a traditional ACL is evaluated top-down and stops at the first matching entry, while an SG simply checks whether any allow rule matches. The other major difference is state: a traditional ACL is typically stateless, whereas an SG is stateful by design.&lt;/p&gt;

&lt;p&gt;If you're curious about how to further explore their functionality, please check out my GitHub repo for a few examples. You can pull the code down and deploy these examples in your environment to get hands-on with them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/friday963/networklabs/tree/main/security_groups"&gt;https://github.com/friday963/networklabs/tree/main/security_groups&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>networking</category>
      <category>cloud</category>
    </item>
    <item>
      <title>AWS Transfer Family for Network Engineers</title>
      <dc:creator>friday963</dc:creator>
      <pubDate>Sun, 11 Feb 2024 22:41:32 +0000</pubDate>
      <link>https://dev.to/friday963/aws-transfer-family-for-network-engineers-4fmf</link>
      <guid>https://dev.to/friday963/aws-transfer-family-for-network-engineers-4fmf</guid>
      <description>&lt;p&gt;In this article I hope to demonstrate how a Network Engineer could leverage the AWS product line to securely transfer files to and from the cloud from their on-prem infrastructure using traditional transfer protocols like SFTP, FTP, FTPS.  AWS Transfer Family is a robust solution providing an efficient and secure means of transferring files to and from any host capable of being a client of one of the protocols above, in this example routers &amp;amp; switches. This allows for easy retrieval of configuration files, logs, or any other data stored you may need to push or pull from your physical infrastructure, streamlining network management tasks. In the simplest terms possible, this is an (FTP,SFTP,FTPS) server in the cloud.&lt;/p&gt;

&lt;p&gt;In this demo I'll be using Containerlab to deploy a containerized version of Arista EOS (simulating my on-prem router) and Terraform to deploy the required AWS infrastructure.  If you want the code and a breakdown of what each piece of Terraform is doing, find it below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/friday963/networklabs/tree/main/transfer_family"&gt;https://github.com/friday963/networklabs/tree/main/transfer_family&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy AWS Infrastructure
&lt;/h2&gt;

&lt;p&gt;Run your &lt;code&gt;init&lt;/code&gt;, &lt;code&gt;plan&lt;/code&gt;, and &lt;code&gt;apply&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;friday@ubuntu:~/code/networklabs/transfer_family$ terraform init
friday@ubuntu:~/code/networklabs/transfer_family$ terraform plan
friday@ubuntu:~/code/networklabs/transfer_family$ terraform apply 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
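&lt;p&gt;For context, the Terraform in the repo creates resources along these lines. This is only an illustrative sketch (the IAM role reference and exact arguments are assumptions on my part); see the linked repo for the real code.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Rough sketch of the kind of resources involved; the IAM role and
# exact arguments are illustrative assumptions, not the repo's code.
resource "aws_transfer_server" "sftp" {
  identity_provider_type = "SERVICE_MANAGED"
  protocols              = ["SFTP"]
  endpoint_type          = "PUBLIC"
}

resource "aws_transfer_user" "router" {
  server_id      = aws_transfer_server.sftp.id
  user_name      = "transfer_user"
  role           = aws_iam_role.transfer.arn  # a role granting S3 access
  home_directory = "/network-logging-bucket-2073/router_1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;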



&lt;h2&gt;
  
  
  Deploy Containerlab instance
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;friday@ubuntu:~/code/networklabs/transfer_family/containerlab_configs$ sudo containerlab deploy -t topo.yml 
[sudo] password for friday: 
INFO[0000] Containerlab v0.47.2 started                 
INFO[0000] Parsing &amp;amp; checking topology file: topo.yml   
INFO[0000] Creating docker network: Name="clab", IPv4Subnet="172.20.20.0/24", IPv6Subnet="2001:172:20:20::/64", MTU=1500 
INFO[0000] Creating lab directory: /home/friday/code/networklabs/transfer_family/containerlab_configs/clab-SFTP_Sample_Lab 
INFO[0000] config file '/home/friday/code/networklabs/transfer_family/containerlab_configs/clab-SFTP_Sample_Lab/router/flash/startup-config' for node 'router' already exists and will not be generated/reset 
INFO[0000] Creating container: "router"                 
INFO[0000] Running postdeploy actions for Arista cEOS 'router' node 
INFO[0024] Adding containerlab host entries to /etc/hosts file 
INFO[0024] Adding ssh config for containerlab nodes     
INFO[0024] 🎉 New containerlab version 0.50.0 is available! Release notes: https://containerlab.dev/rn/0.50/
Run 'containerlab version upgrade' to upgrade or go check other installation options at https://containerlab.dev/install/ 
+---+-----------------------------+--------------+--------------+------+---------+----------------+----------------------+
| # |            Name             | Container ID |    Image     | Kind |  State  |  IPv4 Address  |     IPv6 Address     |
+---+-----------------------------+--------------+--------------+------+---------+----------------+----------------------+
| 1 | clab-SFTP_Sample_Lab-router | 202444f34875 | ceos:4.30.3M | ceos | running | 172.20.20.2/24 | 2001:172:20:20::2/64 |
+---+-----------------------------+--------------+--------------+------+---------+----------------+----------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Log into router and generate private/public SSH key
&lt;/h2&gt;

&lt;p&gt;After logging in, I'm dropping into the shell so I can interact with the underlying system to generate that SSH key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;friday@ubuntu:~/code/networklabs/transfer_family/containerlab_configs$ ssh admin@172.20.20.2
Warning: Permanently added '172.20.20.2' (ED25519) to the list of known hosts.
(admin@172.20.20.2) Password: 
router&amp;gt;en
router#bash
Arista Networks EOS shell
[admin@router ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/admin/.ssh/id_rsa): 
Created directory '/home/admin/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/admin/.ssh/id_rsa.
Your public key has been saved in /home/admin/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:Vh2tdtedpI/5qX9g6FolSO70YXukTqqsd7jfHCr5dao admin@router
&amp;lt;TRUNCATED&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Collect the public key
&lt;/h2&gt;

&lt;p&gt;First we need to retrieve the public key from the router as seen below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[admin@router ~]$ cat /home/admin/.ssh/id_rsa.pub 
ssh-rsa Vh2tdtedpI/5qX9g6FolSO70YXukTqqsd7jfHCr5dao admin@router
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Proceed to the AWS console to configure the SFTP user
&lt;/h2&gt;

&lt;p&gt;Search &lt;code&gt;transfer family&lt;/code&gt; in the console and click into your instance.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8zuuwoltjobimu1ytqo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8zuuwoltjobimu1ytqo.png" alt="transfer family instance" width="800" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From here, find your user.  Notice &lt;code&gt;transfer_user&lt;/code&gt; at the bottom of the screen; click into it.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5kirwzoub22a1h0apxyr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5kirwzoub22a1h0apxyr.png" alt="Image description" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that you're in the user console, find the &lt;code&gt;Add key&lt;/code&gt; button to add your public key.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxnlagm17zh01zca3b9m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxnlagm17zh01zca3b9m.png" alt="Image description" width="800" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now paste the key and click &lt;code&gt;Add key&lt;/code&gt;.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9vxie0fue4xxb2heza0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9vxie0fue4xxb2heza0.png" alt="Image description" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Move files between router &amp;amp; SFTP server
&lt;/h2&gt;

&lt;p&gt;At this point we are ready to start transferring files.  Here I'm changing into the flash directory to get to some interesting files for transfer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[admin@router ~]$ cd /mnt/flash/
[admin@router flash]$ ls
AsuFastPktTransmit.log  SsuRestore.log        aboot        debug             if-wait.sh        persist   startup-config
Fossil                  SsuRestoreLegacy.log  boot-config  fastpkttx.backup  kickstart-config  schedule  system_mac_address
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next you'll notice I'm running &lt;code&gt;sftp -i /home/admin/.ssh/id_rsa  transfer_user@34.225.236.228&lt;/code&gt;. Since I have no DNS in my lab, I cannot actually SFTP to the FQDN that Amazon created for me; in any other situation I would use the FQDN provided.  If you're following along, you probably lack a DNS server as well.&lt;br&gt;
&lt;strong&gt;DON'T FORGET TO INCLUDE THE KEY LOCATION IN YOUR SFTP CALL&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[admin@router flash]$ sftp -i /home/admin/.ssh/id_rsa  transfer_user@34.225.236.228
Warning: Permanently added '34.225.236.228' (RSA) to the list of known hosts.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is how I got an IP for the endpoint that was created for me.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;friday@ubuntu:~/code/networklabs/transfer_family$ nslookup
&amp;gt; s-0a4da29.server.transfer.us-east-1.amazonaws.com
Server:         127.0.0.53
Address:        127.0.0.53#53

Non-authoritative answer:
Name:   s-0a4da29.server.transfer.us-east-1.amazonaws.com
Address: 34.225.236.228
Name:   s-0a4da29.server.transfer.us-east-1.amazonaws.com
Address: 44.212.239.132
Name:   s-0a4da29.server.transfer.us-east-1.amazonaws.com
Address: 184.73.175.221
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last thing to note in this output is the remote working directory.  This was configured in my Terraform as the directory I wanted to be dropped into upon logging in.  What's occurring here is that I'm interacting with an S3 bucket at the same path seen below: &lt;code&gt;/network-logging-bucket-2073/router_1&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[admin@router flash]$ sftp -i /home/admin/.ssh/id_rsa  transfer_user@34.225.236.228
Warning: Permanently added '34.225.236.228' (RSA) to the list of known hosts.
Connected to transfer_user@34.225.236.228.
sftp&amp;gt; pwd
Remote working directory: /network-logging-bucket-2073/router_1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From there I'm able to put or get files in that home directory.  First I &lt;code&gt;put startup-config&lt;/code&gt;, then I &lt;code&gt;get important_configuration_file.cfg.txt&lt;/code&gt; from the remote server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sftp&amp;gt; put startup-config 
Uploading startup-config to /network-logging-bucket-2073/router_1/startup-config
startup-config                                                                                                                     100%  870    10.0KB/s   00:00    
sftp&amp;gt; ls
important_configuration_file.cfg.txt     startup-config                           
sftp&amp;gt; get important_configuration_file.cfg.txt 
Fetching /network-logging-bucket-2073/router_1/important_configuration_file.cfg.txt to important_configuration_file.cfg.txt
sftp&amp;gt; exit
[admin@router flash]$ ls
AsuFastPktTransmit.log  SsuRestoreLegacy.log  debug             important_configuration_file.cfg.txt  schedule
Fossil                  aboot                 fastpkttx.backup  kickstart-config                      startup-config
SsuRestore.log          boot-config           if-wait.sh        persist                               system_mac_address
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Takeaway
&lt;/h2&gt;

&lt;p&gt;In conclusion, I hope you gained some insight into the Transfer Family product and how you could leverage it to transfer files to and from your on-prem infrastructure.  It really is an easy product to set up, and it provides a slick interface to secure, durable object storage for your networking needs.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>arista</category>
      <category>containerlab</category>
    </item>
    <item>
      <title>AWS Resource Access Manager (RAM) for Network Engineers</title>
      <dc:creator>friday963</dc:creator>
      <pubDate>Fri, 27 Oct 2023 19:46:55 +0000</pubDate>
      <link>https://dev.to/friday963/aws-resource-access-manager-ram-for-network-engineers-1ecg</link>
      <guid>https://dev.to/friday963/aws-resource-access-manager-ram-for-network-engineers-1ecg</guid>
      <description>&lt;p&gt;In this post I want to explain and provided examples of why a network engineer working with AWS may need to understand AWS RAM (Resource Access Manager). I recently was dealing with an issue at work, specifically related to cross account resource sharing.  It was my first encounter with AWS RAM and the need to share access to a TGW (Transit Gateway) across multiple AWS accounts. In the post below I'll be using purely network related examples because my target audience are network engineers, but this applies to anyone who needs to share access to resources between accounts.&lt;/p&gt;

&lt;p&gt;First, for those who are unfamiliar with transit gateways: they basically allow you to create a hub-and-spoke topology in the cloud.  The TGW has its own route table and allows you to connect separate virtual private clouds (VPCs) to it.  Once you've connected a VPC to a TGW, you can facilitate communication inter-VPC, out of AWS, back on-prem, etc.  There are a lot of things you can do, but at a high level it's just a centralized router that bridges a bunch of VPCs together or routes you back on-prem. Further down in the post you'll see me speak about attachments.  Just know that in order to connect a VPC to a TGW you need to "attach" it to the TGW, and that's all that is being referenced when you see the word attachment.&lt;/p&gt;
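&lt;p&gt;For a rough idea of what this looks like in code, a TGW and a VPC attachment can be sketched in Terraform like so. This is an illustrative sketch; all names and the referenced VPC/subnet are hypothetical.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative sketch; names and the referenced VPC/subnet are hypothetical.
resource "aws_ec2_transit_gateway" "hub" {
  description = "Network services hub"
}

# "Attaching" a VPC to the TGW is its own resource.
resource "aws_ec2_transit_gateway_vpc_attachment" "spoke" {
  transit_gateway_id = aws_ec2_transit_gateway.hub.id
  vpc_id             = aws_vpc.spoke.id
  subnet_ids         = [aws_subnet.spoke_a.id]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;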

&lt;p&gt;Next, let's set the context for this illustration: perhaps your organization has multiple accounts, and each account is owned by a particular IT team, whether that's a development team, security, sysadmins, etc.  Let's say the network team owns all the network transit from on-prem into AWS and within AWS itself between VPCs.  It would then make sense for network services to own multiple transit gateways, which could control all your north/south/east/west traffic.  We need to provide access to one of our transit gateways to a core IT service so they can properly route traffic in and out of their VPC.  Without such access they would essentially be cut off from any of the required services offered on-prem or in the cloud by other IT teams.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Order of operations&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create the consumer account (this consumer account will simulate a sister team that needs access to all other company resources).
&lt;/li&gt;
&lt;li&gt;Create a Transit Gateway and a VPC in the network services account. &lt;/li&gt;
&lt;li&gt;Create a VPC in the consumer account.&lt;/li&gt;
&lt;li&gt;Share the resource between accounts.
&lt;/li&gt;
&lt;li&gt;Request an attachment to the network services TGW and accept the incoming connection attempt. &lt;/li&gt;
&lt;/ol&gt;
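&lt;p&gt;The sharing step above (step 4) can be sketched in Terraform roughly as follows. This is an illustrative sketch; the account ID and resource references are placeholders.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative sketch of sharing a TGW via RAM; the account ID and
# resource references are placeholders.
resource "aws_ram_resource_share" "tgw_share" {
  name                      = "network-services-tgw"
  allow_external_principals = false  # stay within the AWS Organization
}

resource "aws_ram_resource_association" "tgw" {
  resource_share_arn = aws_ram_resource_share.tgw_share.arn
  resource_arn       = aws_ec2_transit_gateway.hub.arn
}

resource "aws_ram_principal_association" "consumer" {
  resource_share_arn = aws_ram_resource_share.tgw_share.arn
  principal          = "123456789012"  # the ConsumerAccount ID
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;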

&lt;p&gt;&lt;em&gt;Walk Through&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Consumer account creation.&lt;/strong&gt;&lt;br&gt;
Notice I have two accounts under the root OU: Friday, which is my main account (but will be referred to as &lt;code&gt;NetworkServices&lt;/code&gt; from here on out), and &lt;code&gt;ConsumerAccount&lt;/code&gt;, which is our sister team's account.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EqW5vE1W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9u71isfj4x61oq4sy4dw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EqW5vE1W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9u71isfj4x61oq4sy4dw.png" alt="Accounts Page" width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a transit gateway in the network services account.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AcMnI1cW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/whltuyib62ckqe33md9j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AcMnI1cW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/whltuyib62ckqe33md9j.png" alt="Network Services TGW" width="800" height="353"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Create a VPC in the Network Services account.&lt;/strong&gt;&lt;br&gt;
This image does not depict the VPC, but instead shows the transit gateway route table.  This is enough to illustrate that our VPC has been created, is attached to the TGW, and has had its CIDR learned by the TGW route table.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v7bswj8J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6wo18ois0w1qf579iqys.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v7bswj8J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6wo18ois0w1qf579iqys.png" alt="Network Services TGW Route Table" width="800" height="352"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Create consumer VPC.&lt;/strong&gt;&lt;br&gt;
The image below depicts our &lt;code&gt;ConsumerAccount&lt;/code&gt; VPC being created.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hjJ12TuN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qtbeki6kvlrgeng4unxl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hjJ12TuN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qtbeki6kvlrgeng4unxl.png" alt="Consumer VPC" width="800" height="113"&gt;&lt;/a&gt;&lt;br&gt;
Notice the screenshot below. Without RAM, our &lt;code&gt;ConsumerAccount&lt;/code&gt; cannot see any available transit gateway, even though the resources are in the same region and ultimately under the same root OU.  The administrative segregation between these accounts prevents them from sharing any resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XCWxRuql--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/veqvmhi67nd3q38dkvos.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XCWxRuql--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/veqvmhi67nd3q38dkvos.png" alt="Consumer Account TGW" width="800" height="206"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Resource sharing&lt;/strong&gt;&lt;br&gt;
The first image is just a glimpse of what a shared resource looks like from the perspective of the sharing account; in our example this is the &lt;code&gt;Network Services&lt;/code&gt; account.  We can see it's being shared via the &lt;em&gt;Shared by me: Resource share&lt;/em&gt; heading on the AWS page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4xVZMpTY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/clg3hpk6sd922bcz391h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4xVZMpTY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/clg3hpk6sd922bcz391h.png" alt="Provider Shared Resource" width="800" height="156"&gt;&lt;/a&gt;&lt;br&gt;
And below is an example of what the &lt;code&gt;ConsumerAccount&lt;/code&gt; would see in their account: a pending invite to accept a resource share.  The share must be accepted by the consuming account before the resource can be used.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--n-z1vhXR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/941w3vxxtebhtlp2nwx0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--n-z1vhXR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/941w3vxxtebhtlp2nwx0.png" alt="Consumer Shared Resource" width="800" height="236"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Accept Shared Resource&lt;/strong&gt;&lt;br&gt;
At this point the resource has been shared from the &lt;code&gt;Network Service&lt;/code&gt; account.  The &lt;code&gt;ConsumerAccount&lt;/code&gt; then needed to accept that resource so it can be used in their account.  Now the &lt;code&gt;ConsumerAccount&lt;/code&gt; just needs to request to be attached to the &lt;code&gt;Network Services&lt;/code&gt; TGW and &lt;code&gt;Network Services&lt;/code&gt; needs to accept that TGW attachment invite.&lt;br&gt;&lt;br&gt;
See that our consumer is requesting to attach to the TGW.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S6bGAVfX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hvsjm8a5x7y3b0nmhykw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S6bGAVfX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hvsjm8a5x7y3b0nmhykw.png" alt="Image description" width="800" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Network Services&lt;/code&gt; will see that pending invite and must accept it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RlL_7eEX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/131j64pwm9nopxtadnyp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RlL_7eEX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/131j64pwm9nopxtadnyp.png" alt="Image description" width="800" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Lastly, the part we really care about: it routes!  In this final image we see the route table of our transit gateway.  The TGW route table knows the CIDR block of our (&lt;code&gt;Network Services&lt;/code&gt;) VPC as well as the CIDR of our &lt;code&gt;ConsumerAccount&lt;/code&gt;, and it can route traffic between these accounts, which otherwise would have no routes to each other.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iDZaxUPr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g609zaiouko9srvanwin.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iDZaxUPr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g609zaiouko9srvanwin.png" alt="Image description" width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In conclusion, this is how AWS lets one account "own" a product or service yet share it with other accounts, which can then use the resource as if they owned it themselves.  &lt;/p&gt;

</description>
      <category>aws</category>
      <category>networking</category>
      <category>networkengineering</category>
      <category>cloud</category>
    </item>
    <item>
      <title>AWS Route Tables</title>
      <dc:creator>friday963</dc:creator>
      <pubDate>Wed, 24 May 2023 02:58:57 +0000</pubDate>
      <link>https://dev.to/friday963/aws-route-tables-1dhh</link>
      <guid>https://dev.to/friday963/aws-route-tables-1dhh</guid>
<description>&lt;p&gt;Welcome to this guide on AWS route tables. In this blog post we will delve into their fundamentals. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;This article assumes basic knowledge of VPCs, AZs, subnets, and other basic network constructs.  If you are unfamiliar with these, it may be difficult to follow along with all the content.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Route Table basics&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Route tables have three different types of routes: &lt;em&gt;local&lt;/em&gt;, &lt;em&gt;static&lt;/em&gt;, and &lt;em&gt;propagated&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Local routes are automatically instantiated with a VPC, static routes are configured by humans, and propagated (dynamic) routes are added automatically.  Propagated routes can come from a &lt;code&gt;VGW&lt;/code&gt; (Virtual Private Gateway), for example routes learned from on-prem, or, in the case of a transit gateway, from the &lt;em&gt;attachments&lt;/em&gt; hanging off of it.&lt;/li&gt;
&lt;li&gt;Every VPC created is built with a "&lt;code&gt;Main&lt;/code&gt;" route table.  This is the default route table for a particular VPC and every subnet in the VPC will be associated with it upon initial creation.&lt;/li&gt;
&lt;li&gt;A VPC must have at least one &lt;code&gt;main&lt;/code&gt; route table and can have up to 200 (at the time of writing) custom route tables.&lt;/li&gt;
&lt;li&gt;Cloud route tables can be likened to a traditional route table where all routes live in the default table: one cloud route table may handle routing between every subnet in a VPC.  Alternatively, cloud route tables can be used like VRFs, where each subnet has its own individual route table and therefore its own completely isolated routes, separate from the main route table.&lt;/li&gt;
&lt;li&gt;Route tables and subnets have a one-to-many relationship.  One subnet can be associated with only one route table, but one route table can have many subnets associated with it.&lt;/li&gt;
&lt;li&gt;Every subnet MUST be associated with one route table, either custom or the main RT.
&lt;/li&gt;
&lt;li&gt;If you disassociate a subnet from a custom route table it will automatically be associated with the main RT.  You cannot actually disassociate a subnet from the main route table.  Think of the main RT as the catch-all: if a subnet has nowhere else to go, the main route table will accept it.&lt;/li&gt;
&lt;li&gt;Regardless of whether we're talking about route tables on traditional routers or cloud routers, the concepts remain the same.  A route table always analyzes the destination of the traffic; if the destination is in the route table, it forwards the traffic to the corresponding target.
&lt;/li&gt;
&lt;/ul&gt;
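&lt;p&gt;As a minimal Terraform sketch of the association rules above (resource names and references are hypothetical, not from a real project): creating a custom route table and associating a subnet with it overrides that subnet's implicit association with the main route table.&lt;/p&gt;

```hcl
# Sketch only: resource names and references are hypothetical.

# A custom route table inside an existing VPC.
resource "aws_route_table" "custom" {
  vpc_id = aws_vpc.main.id
}

# Associating a subnet with the custom table overrides the subnet's
# implicit association with the VPC's main route table. One subnet can
# hold only one association, but many subnets may point at one table.
resource "aws_route_table_association" "app" {
  subnet_id      = aws_subnet.app.id
  route_table_id = aws_route_table.custom.id
}
```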

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Cloud Route Table
| Destination | Target |
| ----------  | -------|
| 0.0.0.0/0 | igw-123 |
| 192.168.2.0/24 | 192.168.2.1 |
| 52.95.154.0/23 | vpce-123 |
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;In the table above we see an example of a cloud route table with an array of different target types: an internet gateway as the default route out of the VPC, an actual IP address as the target for traffic to the subnet &lt;code&gt;192.168.2.0/24&lt;/code&gt;, and lastly a VPC endpoint (&lt;code&gt;vpce-123&lt;/code&gt;) as the target for a public S3 prefix.  Notice it differs from a traditional route table in that we can route to logical constructs like an &lt;code&gt;IGW&lt;/code&gt; (Internet Gateway) or a &lt;code&gt;VPC endpoint&lt;/code&gt;, not just physical endpoints and remote IPs.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Traditional Route Table
| Destination | Target |
| ----------  | -------|
| 192.168.0.0/24 | eth1-1 |
| 192.168.1.0/24 | 10.120.10.3/32 |
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;In contrast with the cloud route table example above, here we see an example of a traditional route table for a physical router or switch.  We see a physical interface as the target for outbound traffic to &lt;code&gt;192.168.0.0/24&lt;/code&gt; and an actual IP as the target for the destination &lt;code&gt;192.168.1.0/24&lt;/code&gt;. This is what we are normally used to, but cloud route tables offer a wider range of targets than traditional endpoints.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As mentioned, cloud route tables operate very similarly to traditional route tables; it's really just the types of targets that traffic can be forwarded to that differ. &lt;/p&gt;
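&lt;p&gt;A rough Terraform sketch of routing to those logical targets (all names and IDs are hypothetical): a default route to an IGW, plus a gateway-type S3 endpoint whose prefix-list route AWS injects into the listed route tables automatically.&lt;/p&gt;

```hcl
# Sketch only: IDs and the region in the service name are hypothetical.

# A default route whose target is a logical construct (an internet
# gateway), something a traditional route table cannot express.
resource "aws_route" "default_out" {
  route_table_id         = aws_route_table.custom.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.igw.id
}

# A gateway-type S3 endpoint; AWS injects the S3 prefix-list route
# into the listed route tables automatically.
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.main.id
  service_name      = "com.amazonaws.us-east-1.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.custom.id]
}
```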

&lt;p&gt;&lt;strong&gt;Ingress and Egress Route Tables&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Ingress and egress routing is an important topic and not something to be overlooked.  In traditional routing and switching, you can configure the network and your routing protocol will handle ingress and egress without much intervention (obviously this is a gross oversimplification, but true enough for the example).  However, when you need fine-grained control over how traffic traverses your network, you need to do more specialized routing and switching.  This too is something we need to consider in the cloud.  Left at its defaults, you can probably maintain connectivity between endpoints fairly easily, but when you need to do anything beyond the basics you really need to understand both ingress and egress routing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;With cloud routing we need to be more cognizant of the direction of our traffic and ensure that both directions are accounted for.  If you have specific routing requirements, cloud routing takes more deliberate consideration in determining the flow of traffic. Read on to find out more about ingress and egress routing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Route tables can be associated with more than just subnets.  Route tables can also be associated with other logical constructs like an &lt;code&gt;IGW&lt;/code&gt; or &lt;code&gt;VGW&lt;/code&gt;.  This is great news for engineers, because the route tables associated with subnets can only direct traffic in one direction (egress from the subnet).  Route tables associated with internet gateways and virtual private gateways mean we can also control traffic in the opposite direction (ingress into our VPC).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Let's look at an example&lt;/em&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the first example we have a conundrum. We want ingress and egress traffic from the private subnet &lt;code&gt;10.0.0.0/24&lt;/code&gt; inspected by a firewall. (Ignore the second private subnet for this example.)  The outbound routing has been set up correctly: we see a route table with the &lt;code&gt;local&lt;/code&gt; route and a default route pointing to the FW's &lt;code&gt;ENI&lt;/code&gt; (elastic network interface), so anything destined for an unknown destination will route into that &lt;code&gt;ENI&lt;/code&gt;.  Return traffic, however, will not be forwarded to the FW: it will match the local route and be forwarded directly to the target/host residing in &lt;code&gt;10.0.0.0/24&lt;/code&gt;, skipping the FW altogether.  Ultimately we end up with an asymmetric path to the internet, one that only inspects outbound traffic, not return traffic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mFsnmAv4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/18h4bjvvg65l5n10g5tm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mFsnmAv4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/18h4bjvvg65l5n10g5tm.png" alt="Image description" width="660" height="676"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In this second example we're able to resolve the issue above. The private subnet on the right has a default route to the &lt;code&gt;nat-gateway&lt;/code&gt; in the public subnet.  The private subnet on the left has a default route to the &lt;code&gt;ENI&lt;/code&gt; of a firewall.  Moving to the public subnet, we see it has a default route to the internet through the &lt;code&gt;IGW&lt;/code&gt;.  At this point everything is set up properly from an egress standpoint, but on ingress we would still see an issue: the IGW would only see a single "&lt;code&gt;local&lt;/code&gt;" route and would natively route directly to the end hosts that initiated outbound connections.  What we desire is for a certain subset of hosts to have traffic inspected by the firewall both inbound and outbound.  It's only when we add a more specific route for the &lt;code&gt;10.0.0.0/24&lt;/code&gt; subnet into the IGW route table, pointing to the ENI of the firewall, that we get the desired inbound and outbound traffic flow.  With this end-to-end setup, traffic from the private subnet &lt;code&gt;10.0.0.0/24&lt;/code&gt; bound for its default route is sent to and inspected by the FW; upon returning to the VPC, the IGW sees the ingress traffic destined for somewhere in the &lt;code&gt;10.0.0.0/24&lt;/code&gt; subnet and forwards it to the firewall for inspection.  The other private subnet gets less scrutiny: traffic simply traverses the &lt;code&gt;nat-gateway&lt;/code&gt; and returns to the hosts natively, with no additional route manipulation required.  The point here is that certain situations lend themselves to specific ingress routing; if you require something specific to happen to your traffic upon re-entry into your VPC, you need a &lt;code&gt;VGW&lt;/code&gt; or &lt;code&gt;IGW&lt;/code&gt; set up with its route table configured to handle that routing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pVCyPwKv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oppxxzf4vykiuuhv9pob.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pVCyPwKv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oppxxzf4vykiuuhv9pob.png" alt="Image description" width="797" height="767"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The above examples only cover ingress and egress routing into and out of a VPC, but the same applies to intra-VPC traffic you need to control.  In that case you'll again need route tables to control traffic to and from any destination with specific routing requirements.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Route Table Priority&lt;/strong&gt;&lt;br&gt;
There are three overarching ways that routes are prioritized within an AWS route table.  The concepts are not that different from traditional routing and the administrative distances associated with different route types.  In traditional routing we have static and connected routes with certain administrative distances; then, depending on the routing protocol used, we have different admin distances that contend with each other.  I want to make a special note about the way &lt;code&gt;BGP&lt;/code&gt; works and its &lt;code&gt;path selection&lt;/code&gt; algorithm, because it is similar to another concept that we'll cover below.  When analyzing BGP routes we know there is a hierarchy within the protocol itself for path selection (Weight, Local Pref, locally originated, AS Path, Origin, MED, eBGP &amp;gt; iBGP, etc.). This concept also exists for a subset of routes learned dynamically or via propagation within AWS.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Priorities&lt;/u&gt;:&lt;/p&gt;

&lt;p&gt;Below is the way routes are prioritized in an AWS route table, from most preferred to least preferred. Notice 1-3 are the high-level route priorities.  The table further breaks down prioritization when a route is learned via propagation: 1 is the highest priority among propagated routes, 4 is the lowest.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Longest prefix&lt;/li&gt;
&lt;li&gt;Static routes&lt;/li&gt;
&lt;li&gt;Propagated routes&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Route type&lt;/th&gt;
&lt;th&gt;Route name&lt;/th&gt;
&lt;th&gt;Priority&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Propagated route&lt;/td&gt;
&lt;td&gt;DX gateway&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Propagated route&lt;/td&gt;
&lt;td&gt;VPN static&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Propagated route&lt;/td&gt;
&lt;td&gt;VPN BGP&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Propagated route&lt;/td&gt;
&lt;td&gt;AS_PATH&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;What you should take away from this is that there is a hierarchy: how a route is learned, or whether it is statically defined, determines which route is preferred. &lt;/p&gt;

&lt;p&gt;Let's dive into a few examples.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Two routes: one is a /24, the other a /25.  The longest prefix match wins.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;| Destination | Target |
| ----------  | -------|
| 192.168.0.0/24 | eni-123 |
| 192.168.0.0/25 | eni-123 | &amp;lt;- Wins due to longest prefix match
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;In this example we've got one dynamically learned route and one static route.  Both have the same prefix length, so longest prefix match cannot break the tie; the static route takes priority.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;| Destination | Target |
| ----------  | -------|
| 192.168.0.0/24 | eni-123 | &amp;lt;-- Dynamically learned
| 192.168.0.0/24 | eni-456 | &amp;lt;-- Static route: Higher priority over the route above it.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Finally we'll deal with propagated routes.  This is much more similar to BGP and its path selection algorithm.  In this example we've learned the same route from our on-prem router twice.  All higher-priority criteria match (i.e. both routes have the same prefix length, which otherwise would have made a difference in prioritization), so we're left with two routes learned via propagation and need to figure out how they were learned.  In this case we've got a route learned via a DX connection and a route learned via BGP over a VPN tunnel.  If you look above you'll notice DX trumps routes learned via BGP over a VPN.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;| Destination | Target |
| ----------  | -------|
| 192.168.0.0/24 | vgw-123(to DX connect) |&amp;lt;-Wins due to priority
| 192.168.0.0/24 | vgw-456(to VPN) | &amp;lt;- Lower priority than DX
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
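&lt;p&gt;As a rough Terraform illustration of how propagated and static routes end up in the same table (all names and IDs are hypothetical): enable VGW propagation into a route table, then pin a static route for the same prefix. Per the priority rules above (example 2), the static entry is preferred.&lt;/p&gt;

```hcl
# Sketch only: names and IDs are hypothetical.

# Let the VGW propagate on-prem routes into this route table dynamically.
resource "aws_vpn_gateway_route_propagation" "onprem" {
  vpn_gateway_id = aws_vpn_gateway.vgw.id
  route_table_id = aws_route_table.custom.id
}

# A static route for the same prefix the VGW propagates. Per the
# priority rules above, this static entry is preferred over the
# propagated copy of 192.168.0.0/24.
resource "aws_route" "pin_onprem" {
  route_table_id         = aws_route_table.custom.id
  destination_cidr_block = "192.168.0.0/24"
  network_interface_id   = aws_network_interface.inspection.id
}
```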



&lt;p&gt;In conclusion, I hope you gained some new understanding of AWS route tables, along with the fundamental principles of routing and path selection priorities. Feel free to share your thoughts, questions, and experiences in the comments below. &lt;/p&gt;

&lt;p&gt;Cheers!&lt;/p&gt;

</description>
      <category>networking</category>
      <category>aws</category>
      <category>terraform</category>
      <category>learning</category>
    </item>
    <item>
      <title>AWS Network Load Balancer, Terraform, and Go, as a cloud network engineer.</title>
      <dc:creator>friday963</dc:creator>
      <pubDate>Thu, 11 May 2023 22:20:48 +0000</pubDate>
      <link>https://dev.to/friday963/aws-network-load-balancer-terraform-and-go-as-a-cloud-network-engineer-10ag</link>
      <guid>https://dev.to/friday963/aws-network-load-balancer-terraform-and-go-as-a-cloud-network-engineer-10ag</guid>
<description>&lt;p&gt;As a network engineer, it's important to stay up-to-date with the latest technologies and tools in the industry. NLBs are a critical component in modern cloud networking infrastructures, allowing for efficient distribution of incoming traffic across multiple targets and improving application availability and scalability. As traditional network engineers, it's important that we understand how these logical cloud constructs work and how they are built.&lt;/p&gt;

&lt;h2&gt;
  
  
  TLDR;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;You can pull down this project and build an internet-facing network load balancer that serves content to two instances running a Go server. Once the infrastructure is stood up, start the client application with the DNS FQDN of your load balancer.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;.\windows_client.exe -remote_host golang-nlb-123456.elb.us-east-1.amazonaws.com -go_routines 1000&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;./linux_client -remote_host golang-nlb-123456.elb.us-east-1.amazonaws.com -go_routines 1000&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check out CloudWatch to investigate EC2, ENI flow logs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The desired outcomes from this project:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Utilize Go to create a simple client/server application.&lt;/li&gt;
&lt;li&gt;Use Terraform to stand up the infrastructure.&lt;/li&gt;
&lt;li&gt;Put load on the NLB and servers.&lt;/li&gt;
&lt;li&gt;Use CloudWatch to view incoming traffic.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Creating a Go client server application
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;This was written in Go because of the popularity of the language. It's becoming more and more popular in cloud-native applications in addition to network automation projects (if you are a network engineer looking to get into automation, Python is probably the first language to pick up; if you're still interested in another language after that, Go is probably what you should study).&lt;/li&gt;
&lt;li&gt;The first step was to see if a simple Go server could be created that listens on port 80 and returns a basic message to the caller/consumer.&lt;/li&gt;
&lt;li&gt;After the server had been written, a client needed to be written; in this case goroutines were chosen as a way to do some load testing. (For those unfamiliar, a goroutine is Go's way of doing concurrency, similar to "threads" or "threading", but more lightweight than threads. They can also share state with each other easily, which is not traditionally something threads do well.) By using goroutines you are able to asynchronously hit the load balancer with many requests.&lt;/li&gt;
&lt;li&gt;Find the corresponding code here:
&lt;a href="https://github.com/friday963/aws_nlb_server/blob/main/client/client.go"&gt;Client code&lt;/a&gt;
&lt;a href="https://github.com/friday963/aws_nlb_server/blob/main/server/server.go"&gt;Server code&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Creating the supporting infrastructure with IaC and Terraform
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Once the basic server was working it was time to start working on the actual infrastructure.
The high-level objective was a Network Load Balancer (NLB) that takes requests from the internet and distributes those requests across a few hosts.
In order to accomplish that outcome, a few things needed to be built:&lt;/li&gt;
&lt;li&gt;VPC&lt;/li&gt;
&lt;li&gt;Subnets in two availability zones (to test cross-zone load balancing). In this situation the subnets are public (i.e. the hosts have public and private IP addresses), but they very well could have been private subnets.&lt;/li&gt;
&lt;li&gt;Custom route table.&lt;/li&gt;
&lt;li&gt;Internet gateway (without an IGW and a route to it, you cannot route traffic in or out of your VPC to the internet).&lt;/li&gt;
&lt;li&gt;Network load balancer and three additional pieces, which all need to be tied together correctly: the load balancer itself with a designation of "internet facing"; a listener, which directs requests to a specific "target group"; and lastly the target group itself, which holds the logical grouping of hosts that should be listening on the same ports as the listener.&lt;/li&gt;
&lt;li&gt;Security groups needed to be configured and associated to the correct VPC. In this solution, a generous number of services were allowed for troubleshooting and illustration purposes. Port 80 was exposed to test the actual application, port 22 and ICMP were allowed for troubleshooting purposes.&lt;/li&gt;
&lt;li&gt;Compute instances were deployed, one instance stood up in each subnet in each AZ to simulate a highly available workload.&lt;/li&gt;
&lt;li&gt;One final thing to note: the server code was deployed in a rather crude way. It works, but it was just for testing, so no concern was given beyond getting it to work. In this project the "user data" was updated to pull in the compiled Go server binary and run it as root. This runs the web server as root and starts it when the instance starts up. You can see the user data &lt;a href="https://github.com/friday963/aws_nlb_server/blob/main/iac/userdata.txt"&gt;HERE&lt;/a&gt; and the actual binary file that gets run &lt;a href="https://github.com/friday963/aws_nlb_server/blob/main/server/server"&gt;HERE&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Lastly VPC flow logs were created on the ENI's attached to the EC2 instances and pushed to a CloudWatch log group.
&lt;a href="https://github.com/friday963/aws_nlb_server/tree/main/iac"&gt;IaC Terraform code&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
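&lt;p&gt;The NLB, listener, and target group relationship described above looks roughly like this in Terraform. This is a sketch, not the repo's actual code; the names, subnets, and instance references are hypothetical.&lt;/p&gt;

```hcl
# Sketch only: names, subnets, and instance references are hypothetical.

# Internet-facing NLB spanning both AZs (enables cross-zone testing).
resource "aws_lb" "nlb" {
  name               = "golang-nlb"
  internal           = false
  load_balancer_type = "network"
  subnets            = [aws_subnet.az_a.id, aws_subnet.az_b.id]
}

# The target group holds the logical grouping of hosts listening on 80.
resource "aws_lb_target_group" "go_servers" {
  name     = "go-servers"
  port     = 80
  protocol = "TCP"
  vpc_id   = aws_vpc.main.id
}

# The listener directs incoming TCP/80 requests to the target group.
resource "aws_lb_listener" "tcp80" {
  load_balancer_arn = aws_lb.nlb.arn
  port              = 80
  protocol          = "TCP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.go_servers.arn
  }
}

# Register an EC2 instance with the target group.
resource "aws_lb_target_group_attachment" "server_a" {
  target_group_arn = aws_lb_target_group.go_servers.arn
  target_id        = aws_instance.server_a.id
  port             = 80
}
```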

&lt;h2&gt;
  
  
  Put load on the NLB
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;In this last phase you can test whether the load balancer actually distributes the load across our targets by running the Go client application. It accepts a remote host address and a number of goroutines to instantiate. I wanted to test from my PC, so I incrementally increased the number of goroutines and eventually tried 100,000. I wouldn't recommend 100,000 though; you will DOS yourself 🤭. &lt;/li&gt;
&lt;li&gt;You can run the client like so:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;.\windows_client.exe -remote_host golang-nlb-b21d59f5a66c91f5.elb.us-east-1.amazonaws.com -go_routines 1000&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Evaluate your flow logs stored in CloudWatch
&lt;/h2&gt;

&lt;p&gt;In this last step, the goal was to get flow logs working and put into CloudWatch. If you spin this project up in your environment you'll see two log streams in the log group created. It takes a while to populate after receiving the client traffic, but it works. You'll notice the traffic from the client application is successfully hitting the instances.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cheers!
&lt;/h2&gt;

</description>
      <category>networking</category>
      <category>aws</category>
      <category>terraform</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
