<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Paul Elliott</title>
    <description>The latest articles on DEV Community by Paul Elliott (@omahn).</description>
    <link>https://dev.to/omahn</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1294031%2F0cf17bfb-42d4-4447-b42c-d7c605bcee68.jpeg</url>
      <title>DEV Community: Paul Elliott</title>
      <link>https://dev.to/omahn</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/omahn"/>
    <language>en</language>
    <item>
      <title>AWS Verified Access preview non-review!</title>
      <dc:creator>Paul Elliott</dc:creator>
      <pubDate>Mon, 02 Dec 2024 14:56:55 +0000</pubDate>
      <link>https://dev.to/omahn/aws-verified-access-preview-non-review-4bel</link>
      <guid>https://dev.to/omahn/aws-verified-access-preview-non-review-4bel</guid>
      <description>&lt;p&gt;Yesterday AWS announced &lt;a href="https://aws.amazon.com/about-aws/whats-new/2024/12/aws-verified-access-secure-access-resources-non-https-protocols-preview/" rel="noopener noreferrer"&gt;AWS Verified Access for non-HTTPs&lt;/a&gt; connections.&lt;/p&gt;

&lt;p&gt;This is &lt;em&gt;huge&lt;/em&gt; news as it opens up the possibility of getting direct access to private services &lt;em&gt;without&lt;/em&gt; needing a VPN. This would allow, for the first time, 'direct' access to internal RDS databases without needing a jump box or a proxy. Or at least that's the claim. So I was eager to give it a try. Unfortunately, I didn't get very far.&lt;/p&gt;

&lt;p&gt;The first stumbling block is discovering that a client is required, and clients are only available for Windows and Mac, with nothing for Linux, although the contents of the installation package suggest Linux support might be coming in the future.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx71jr06sgfv6lm3vxjjj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx71jr06sgfv6lm3vxjjj.png" alt="Linux support. maybe." width="783" height="591"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Still wanting to give it a try, I deployed a plain Windows 11 VM for testing. The Windows installer worked fine but, strangely, doesn't add any icons to launch the app, so I had to browse through the filesystem to launch the client. There are also no configuration options whatsoever in the app itself; instead, it's configured by manually deploying a JSON file onto the filesystem, which looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "1.0",
    "VerifiedAccessInstanceId": "vai-2a7bd80dcdc3175c3",
    "Region": "eu-west-1",
    "DeviceTrustProviders": [],
    "UserTrustProvider": {
        "Type": "iam-identity-center",
        "Scopes": "verified_access:application:connect",
        "Issuer": "https://identitycenter.amazonaws.com/ssoins-6834324c3a3214a1",
        "PkceEnabled": true
    },
    "OpenVpnConfigurations": [
        {
            "Config": "Y2xpZW5***REDACTED***hbWU=",
            "Routes": [
                {
                    "Cidr": "2a07:d018:118c:3b00::/57"
                }
            ]
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Cue the soul-crushing realisation that the service is just a wrapper around OpenVPN. The clue is in the &lt;code&gt;OpenVpnConfigurations&lt;/code&gt; block, which is just a base64-encoded OpenVPN configuration. 😭 WireGuard is a much better VPN technology in every way, and it could have been used here: it's faster, lighter, secure by default and much simpler to implement. A &lt;a href="https://medium.com/the-scale-factory/wireguard-vpn-for-remote-working-f43d80f0435a" rel="noopener noreferrer"&gt;blog I wrote a while back&lt;/a&gt; still holds true today.&lt;/p&gt;
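&lt;p&gt;You can confirm this for yourself by decoding the &lt;code&gt;Config&lt;/code&gt; value: notice the redacted value above begins &lt;code&gt;Y2xpZW5&lt;/code&gt;, the start of the base64 encoding of the word &lt;code&gt;client&lt;/code&gt;, the first directive of a typical OpenVPN client profile. A minimal sketch, using a made-up profile in place of the redacted value:&lt;/p&gt;

```python
import base64

# Hypothetical stand-in for the redacted "Config" value in ClientConfig1.json;
# the real one is a base64-encoded OpenVPN client profile.
openvpn_profile = b"client\nproto udp\nremote vpn.example.com 1194\n"
config_b64 = base64.b64encode(openvpn_profile).decode()

# Decoding the value from the JSON file reveals a plain OpenVPN config.
decoded = base64.b64decode(config_b64).decode()
print(decoded.splitlines()[0])  # -> client
```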

&lt;p&gt;But let's carry on, because this could still be a really neat way of getting access to private databases without the overheads of running something like &lt;a href="https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/cvpn-getting-started.html" rel="noopener noreferrer"&gt;Client VPN&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So I copied over the configuration to the location specified on Windows, &lt;code&gt;C:\ProgramData\Connectivity Client\ClientConfig1.json&lt;/code&gt;, and started the client. And got this..&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqx30tzcx99f5bsiku3ih.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqx30tzcx99f5bsiku3ih.png" alt="Loading browser. or not." width="800" height="552"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;..followed by this about a minute later..&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawri6cxyyuk7v0y8b0oh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawri6cxyyuk7v0y8b0oh.png" alt="Failed." width="759" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;..and that's as far as I've managed to get after following the &lt;a href="https://aws.amazon.com/blogs/aws/aws-verified-access-now-supports-secure-access-to-resources-over-non-https-protocols/" rel="noopener noreferrer"&gt;launch blog instructions&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Given this experience, it doesn't feel like the service even warrants the 'preview' label; it's a long way from a state I would consider deploying, even for testing. With the timing of the launch, on the first day of re:Invent, I suspect commercial pressures were at play here. It's a shame, as direct access to private resources without the overhead of managing a VPN would be &lt;em&gt;incredibly&lt;/em&gt; useful. I'll be keeping an eye on how it progresses, and hopefully in the mid-term it will become a viable option.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>openvpn</category>
      <category>wireguard</category>
      <category>vpn</category>
    </item>
    <item>
      <title>Data inconsistency in AWS Amazon Aurora Postgres solved with Local Write Forwarding?</title>
      <dc:creator>Paul Elliott</dc:creator>
      <pubDate>Tue, 26 Nov 2024 21:27:42 +0000</pubDate>
      <link>https://dev.to/omahn/data-inconsistency-in-aws-amazon-aurora-postgres-solved-with-local-write-forwarding-5e5h</link>
      <guid>https://dev.to/omahn/data-inconsistency-in-aws-amazon-aurora-postgres-solved-with-local-write-forwarding-5e5h</guid>
      <description>&lt;p&gt;I'm a big fan of Postgres. I'm also a big fan of AWS Aurora Postgres. While working as a consultant optimising databases for clients, I witnessed first hand the amazing scalability that's possible with these two technologies. But it's not all sun and roses.&lt;/p&gt;

&lt;p&gt;YouTube has many &lt;a href="https://www.youtube.com/results?search_query=aws+aurora+deep+dive" rel="noopener noreferrer"&gt;excellent videos on the architecture behind AWS Aurora&lt;/a&gt;. At a very high level, AWS takes the upstream Postgres, replaces the storage engine and makes some other &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Optimize.overview.html" rel="noopener noreferrer"&gt;modifications to the planning engine&lt;/a&gt;. The &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.StorageReliability.html" rel="noopener noreferrer"&gt;storage engine&lt;/a&gt; is the biggest change though, offloading the data into dedicated storage clusters isolated from the underlying compute. This provides a huge number of benefits, but also causes one major problem, or it did, until recently.&lt;/p&gt;

&lt;p&gt;Aurora clusters are formed of a single writer instance and zero or more reader instances. I'm disregarding Aurora Serverless, as that's a whole other beast and a topic for another day. In the simplest setup, the cluster provides a single writer endpoint and a single reader endpoint. Clients send read-only queries to the reader endpoint, which uses DNS round-robin to (dumbly) route traffic to the reader instances. Clients send &lt;code&gt;INSERT&lt;/code&gt;, &lt;code&gt;UPDATE&lt;/code&gt; and &lt;code&gt;DELETE&lt;/code&gt; traffic, unsurprisingly, to the writer endpoint, which will always route traffic via DNS to the current writer instance.&lt;/p&gt;

&lt;p&gt;The writer instance and the reader instances all point to the exact same storage backend, as it's shared across all instances. This means that when the writer successfully commits a change to storage, the updated change is available through the storage layer to all the reader instances synchronously, with zero lag.&lt;/p&gt;

&lt;p&gt;So if we commit a change via the writer and then perform a query on a reader, after successfully committing the change on the writer, we'll get our new data back, right? It depends.&lt;/p&gt;

&lt;p&gt;Although it's true that the underlying data on disk is always consistent between the writer and reader instances, as it's the exact same blocks of storage referenced by both, there's one area that's &lt;strong&gt;not&lt;/strong&gt; always consistent.&lt;/p&gt;

&lt;p&gt;Aurora uses Linux on EC2 for the underlying compute. The Linux kernel uses a &lt;a href="https://en.wikipedia.org/wiki/Page_cache" rel="noopener noreferrer"&gt;page cache&lt;/a&gt; to retain data from recently accessed blocks in volatile memory (RAM). And this is what can lead to inconsistent query results. Consider this scenario:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The writer instance receives a query updating a row.&lt;/li&gt;
&lt;li&gt;The writer sends the update to the storage layer.&lt;/li&gt;
&lt;li&gt;The storage layer commits the change and returns a success to the writer instance.&lt;/li&gt;
&lt;li&gt;The writer instance returns to the client that the transaction was successful.&lt;/li&gt;
&lt;li&gt;A reader instance receives a read-only query for the same row that was just updated.&lt;/li&gt;
&lt;li&gt;The reader has a large amount of RAM, and that row is already in the page cache, so it skips going to storage and simply returns the row from the page cache.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;See the problem? The reader instance skipped the lookup to the backend storage as it &lt;em&gt;believed&lt;/em&gt; it already had the latest available data to return. Meanwhile, a background process runs between the storage layer and compute layer, invalidating page cache blocks when the underlying block has changed. Unfortunately, there's a small but significant latency to this process, which is what causes the issue. Let's measure that latency.&lt;/p&gt;




&lt;p&gt;I'll be using an Aurora Postgres cluster using engine version 16.4, with a single writer instance and a single reader instance, both on &lt;code&gt;db.t4g.medium&lt;/code&gt; instances. As a test client I'll be using a &lt;code&gt;t3a.micro&lt;/code&gt; EC2 instance. The writer, reader and test client are all in different AZs, in the same &lt;code&gt;eu-west-1&lt;/code&gt; region.&lt;/p&gt;

&lt;p&gt;For testing I'll be using a simple Python script. The script will perform the following actions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open database connections to the writer, reader or both, depending on the test scenario.&lt;/li&gt;
&lt;li&gt;Update a row within a table with a counter starting at 0 and increasing sequentially up to the maximum number of repetitions.&lt;/li&gt;
&lt;li&gt;Wait an increasing amount of time, starting with no wait and going up to 100 milliseconds.&lt;/li&gt;
&lt;li&gt;Read the same row back and check if the counter has the new value (consistent), or still has the old value (inconsistent).&lt;/li&gt;
&lt;li&gt;Repeat steps 2-4 until reaching the maximum number of 10,000 repetitions.&lt;/li&gt;
&lt;/ol&gt;
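&lt;p&gt;The loop at the heart of the script looks roughly like this. It's a self-contained sketch: against the real cluster, &lt;code&gt;write&lt;/code&gt; and &lt;code&gt;read&lt;/code&gt; wrap queries on connections to the writer and reader endpoints, while here dict stand-ins take their place:&lt;/p&gt;

```python
import time

def run_trials(write, read, delay_s, reps):
    """Steps 2-5 of the test: update a counter, optionally wait,
    read it back, and count how often the old value comes back."""
    failures = 0
    for counter in range(1, reps + 1):
        write(counter)         # step 2: UPDATE via the writer connection
        time.sleep(delay_s)    # step 3: artificial delay (0 to 100 ms)
        if read() != counter:  # step 4: stale value => inconsistent read
            failures += 1
    return failures / reps     # failure rate across all repetitions

# Stand-ins so the sketch runs without a cluster: a dict plays the shared
# storage layer, and a never-invalidated dict plays a reader's page cache.
storage = {"counter": 0}
page_cache = {"counter": 0}

write = lambda value: storage.update(counter=value)
fresh_read = lambda: storage["counter"]     # reader that goes to storage
stale_read = lambda: page_cache["counter"]  # reader stuck on its cache

print(run_trials(write, fresh_read, delay_s=0, reps=100))  # 0.0
print(run_trials(write, stale_read, delay_s=0, reps=100))  # 1.0
```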

&lt;p&gt;The time taken between steps 2 and 4 will also include the time it takes the script itself to run, which needs to be taken into consideration. So let's work out how long that is. Running without any delay in step 3, and with both writes and reads going to the writer instance, gives our absolute best case in terms of code latency: the script adds just 0.128 milliseconds (M=0.12795ms, SD=0.000155), measured from the successful commit of the transaction in step 2 to the issuing of the read request in step 4, across 100,000 repetitions. Pretty quick, and more than quick enough for our testing.&lt;/p&gt;

&lt;p&gt;Now that we know how quickly we can read data back, we can start to see if a read immediately after a write will return the data we expect. Here are the results when writing to the writer and reading from the reader.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fve1bvk8x580tsr4w07ps.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fve1bvk8x580tsr4w07ps.png" alt="Writes to writer, reads to reader. No write forwarding." width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's walk through what we can see here, as it's hugely significant.&lt;/p&gt;

&lt;p&gt;In our scenario, when we make an update through the writer instance and then read the data back from a reader instance, with less than 25 milliseconds between the queries, &lt;strong&gt;you'll get the wrong data back&lt;/strong&gt;. When updating and then immediately reading back, we see a failure rate (the red line) of 99.36%, dropping to 59.96% when an artificial delay of 12.5 milliseconds is added. This is fundamentally at odds with what an &lt;a href="https://en.wikipedia.org/wiki/ACID" rel="noopener noreferrer"&gt;ACID compliant database&lt;/a&gt; should be doing. (Although impossible to see on the chart, we still get a failure rate of 0.02% with a 50 millisecond delay added).&lt;/p&gt;

&lt;p&gt;Having previously identified what the issue is likely to be here, the page cache, let's repeat the same test but sending both the updates and the reads to the writer instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdu6p9nvhlt7dfmc7008l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdu6p9nvhlt7dfmc7008l.png" alt="Writes to writer, reads to writer. No write forwarding." width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Problem solved, and theory confirmed! We no longer see any inconsistencies when reading data back immediately after updating it.&lt;/p&gt;

&lt;p&gt;Except, this is &lt;strong&gt;a really bad idea&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In Aurora Postgres, you're &lt;em&gt;typically&lt;/em&gt; limited to a single writer instance and up to 15 reader instances. I say typically, as recently &lt;a href="https://aws.amazon.com/blogs/aws/amazon-aurora-postgresql-limitless-database-is-now-generally-available/" rel="noopener noreferrer"&gt;Aurora Postgres Limitless went GA&lt;/a&gt;, which provides horizontal autoscaling of writer instances, but that's a topic for another day as it has some significant design details which need taking into consideration. Putting the Limitless product aside, this means that you will always be limited to a single writer instance within any single Aurora Postgres cluster. So the writer instance should &lt;strong&gt;only&lt;/strong&gt; be used for queries performing updates, with all other queries handled by autoscaling reader instances. But what other option do we have? If we're performing updates and then needing to read that data back with consistency, based on these findings we have to use the writer instance, or add in an artificial delay, don't we?&lt;/p&gt;

&lt;p&gt;This was true until the &lt;a href="https://aws.amazon.com/about-aws/whats-new/2024/10/amazon-aurora-postgresql-local-write-forwarding/" rel="noopener noreferrer"&gt;general availability launch&lt;/a&gt; of Aurora Postgres local write forwarding, which solves this particular problem, with some caveats.&lt;/p&gt;

&lt;p&gt;Local write forwarding is a feature of Aurora Postgres (and Aurora MySQL) which allows writes, tagged with a requested consistency level, to be sent to &lt;strong&gt;reader&lt;/strong&gt; instances. The reader instance receiving the traffic identifies the query as an update and forwards it on to the writer instance. Depending on the consistency level requested, it will then optionally wait for the change to become consistent to the specified level before confirming the transaction as a success to the client.&lt;/p&gt;

&lt;p&gt;Local write forwarding is &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-postgresql-write-forwarding-configuring.html" rel="noopener noreferrer"&gt;enabled at the cluster level&lt;/a&gt;, and once enabled clients can specify the level of consistency they need when using the feature:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;OFF&lt;/code&gt;: Disabled; updates sent to a reader instance will fail immediately.&lt;br&gt;
&lt;code&gt;SESSION&lt;/code&gt;: The default on a cluster with local write forwarding enabled. This means that any changes made within a single session will always be consistent within that session, but may not be consistent in other sessions.&lt;br&gt;
&lt;code&gt;EVENTUAL&lt;/code&gt;: This allows for updates to be sent to reader instances for forwarding to the writer instance, but provides absolutely &lt;strong&gt;no&lt;/strong&gt; guarantee that the data will be immediately consistent.&lt;br&gt;
&lt;code&gt;GLOBAL&lt;/code&gt;: The sledgehammer setting. This ensures that &lt;strong&gt;all&lt;/strong&gt; updates sent through the session are replicated to &lt;strong&gt;all&lt;/strong&gt; reader instances before the transaction returns.&lt;/p&gt;
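&lt;p&gt;Clients opt in per session. A sketch of what that looks like when connected to a &lt;strong&gt;reader&lt;/strong&gt; endpoint, assuming the session parameter name documented for Aurora Postgres write forwarding (&lt;code&gt;apg_write_forward.consistency_mode&lt;/code&gt;); the &lt;code&gt;counters&lt;/code&gt; table is made up for illustration:&lt;/p&gt;

```sql
-- Assumes the session parameter documented for Aurora Postgres local write
-- forwarding; the counters table is hypothetical.
SET apg_write_forward.consistency_mode TO 'session';

-- These can now be sent to a reader endpoint: the reader forwards the
-- UPDATE to the writer and won't confirm it until this session can see it.
UPDATE counters SET value = value + 1 WHERE id = 1;
SELECT value FROM counters WHERE id = 1;  -- returns the incremented value
```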

&lt;p&gt;This &lt;em&gt;should&lt;/em&gt; solve our page cache data consistency issue, and it even allows us to set the desired consistency on a per-client basis, which is fantastic. Let's check it works by repeating our previous tests. We'll start with &lt;code&gt;EVENTUAL&lt;/code&gt; consistency, where we should still expect to see failures.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29so4ubrnldcfa5mviw4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29so4ubrnldcfa5mviw4.png" alt="All queries to reader. Eventual write forwarding." width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As expected, we continue to see failures at a similar rate to before. Now let's try with &lt;code&gt;SESSION&lt;/code&gt; consistency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2yvx8s2iip4dlnl1lte5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2yvx8s2iip4dlnl1lte5.png" alt="All queries to reader. Session write forwarding." width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;No more failures! And finally, let's try with &lt;code&gt;GLOBAL&lt;/code&gt; consistency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qic2l7rf9km5ye2bats.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qic2l7rf9km5ye2bats.png" alt="All queries to reader. Global write forwarding." width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As expected, no failures. But at what cost? Let's have a look at those latency figures with the zero added delay scenario.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq18neq38kq1bi2h53rpd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq18neq38kq1bi2h53rpd.png" alt="Local write forwarding latency comparison." width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's look a bit closer at the numbers:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Consistency Level&lt;/th&gt;
&lt;th&gt;Latency (ms)&lt;/th&gt;
&lt;th&gt;Increase from disabled&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;disabled&lt;/td&gt;
&lt;td&gt;4.461590695&lt;/td&gt;
&lt;td&gt;0%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;EVENTUAL&lt;/td&gt;
&lt;td&gt;5.70614152&lt;/td&gt;
&lt;td&gt;27.89%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SESSION&lt;/td&gt;
&lt;td&gt;5.927728486&lt;/td&gt;
&lt;td&gt;32.86%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GLOBAL&lt;/td&gt;
&lt;td&gt;6.418921375&lt;/td&gt;
&lt;td&gt;43.87%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;A whopping 43.87% increase in latency compared to not using local write forwarding, &lt;strong&gt;and&lt;/strong&gt; this is on an otherwise completely empty, isolated and idle cluster, an entirely unrealistic prospect in the real world.&lt;/p&gt;

&lt;p&gt;Now that sounds like a big increase, but the latency figures are still under 10ms across the board. How that scales with a real-life production workload is entirely dependent on the workload in question. Using load testing tools such as &lt;a href="https://locust.io/" rel="noopener noreferrer"&gt;Locust&lt;/a&gt;, and some careful modelling of query patterns, it's possible to simulate such a workload and answer that question around scaling.&lt;/p&gt;

&lt;p&gt;When adding local write forwarding, Amazon have included &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-postgresql-write-forwarding-monitoring.html" rel="noopener noreferrer"&gt;new wait states to Performance Insights&lt;/a&gt;. The chart below shows a reader instance actively forwarding traffic to the writer instance. These new metrics will be really useful as production workloads move onto clusters with local write forwarding enabled, helping to diagnose situations when the feature is causing unexpected bottlenecks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft0wv21q15tw1nlshk86p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft0wv21q15tw1nlshk86p.png" alt="Local write forwarding new metrics." width="800" height="222"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Overall I would consider local write forwarding a big win, even with the latency penalty shown above. The ability to remove all traffic from the writer instance and throw everything at the readers makes life a lot simpler for developers, without having to worry about consistency issues. I highly recommend people have a play and see how it performs.&lt;/p&gt;

&lt;p&gt;If you're interested in the raw data behind this blog post, spot any inaccuracies, or would like to add anything, please do get in touch.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>aurora</category>
      <category>postgresql</category>
    </item>
    <item>
      <title>Accessing GitHub Action runners using Netbird</title>
      <dc:creator>Paul Elliott</dc:creator>
      <pubDate>Mon, 11 Nov 2024 16:57:21 +0000</pubDate>
      <link>https://dev.to/omahn/accessing-github-action-runners-using-netbird-1jih</link>
      <guid>https://dev.to/omahn/accessing-github-action-runners-using-netbird-1jih</guid>
      <description>&lt;p&gt;I've &lt;a href="https://medium.com/the-scale-factory/troubleshoot-github-actions-via-vpn-3d5d41b01ea4" rel="noopener noreferrer"&gt;written previously&lt;/a&gt; about using &lt;a href="https://www.wireguard.com/" rel="noopener noreferrer"&gt;WireGuard&lt;/a&gt; to get remote SSH access to GitHub Actions runners, something that can be really useful for troubleshooting build issues. Have a read over that blog post for the details, but in summary, we use a GitHub Action to provision a WireGuard tunnel between our machine and the runner. It's not the most straightforward solution, and a better alternative now exists which I'll run through here.&lt;/p&gt;

&lt;p&gt;WireGuard is a modern VPN technology built into the Linux kernel. It implements the bare bones of a VPN: just enough to get a secure tunnel established between two devices, with everything else left up to the user to manage. In my &lt;a href="https://github.com/marketplace/actions/wireguard-ssh" rel="noopener noreferrer"&gt;WireGuard SSH GitHub Action&lt;/a&gt;, I provided the extras required to make it possible to SSH to a GitHub Actions runner over WireGuard. But it still requires quite a few manual steps to get working.&lt;/p&gt;

&lt;p&gt;Recently I've been looking at VPN solutions, specifically ones built around WireGuard, that take away some of the manual steps required to manage a large scale deployment. After building proof of concept solutions with several of these offerings, I settled on &lt;a href="https://netbird.io" rel="noopener noreferrer"&gt;NetBird&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;NetBird provides a management layer, with a web UI, which handles all the background management tasks needed when running more complex WireGuard-based VPN solutions. It's BSD-licensed, so you can self-host, or you can use their hosted offering, which is free for small deployments and works perfectly for accessing GitHub Actions runners. Let's see how using NetBird can simplify remote access to runners.&lt;/p&gt;

&lt;p&gt;NetBird can register peers either through an identity provider, also known as an IdP, or through a secret key known as a setup key. I want this solution to be completely hands-off and automated, so I'm going to use setup keys. These are used by the NetBird client to automatically register new peers on demand. They can be constrained by expiry date or number of uses, or, more usefully in this case, they can mark the peer as ephemeral.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvllvtx9ttw40odjb5y30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvllvtx9ttw40odjb5y30.png" alt="Netbird Setup Keys" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7j3vs3sdizxlddsltn9t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7j3vs3sdizxlddsltn9t.png" alt="Setup Key Configuration" width="576" height="845"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Setup keys can also automatically assign peers registered with the key to a group. Groups in NetBird allow fine-grained access control lists to be created, restricting which traffic flows are permitted. In this case, we've assigned the setup key to a new group called &lt;code&gt;runners&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The key is set to be reusable, limited to the maximum of 365 days, and marked as ephemeral. When a setup key is marked ephemeral, peers registered with it are automatically cleaned up and removed from NetBird after 10 minutes of inactivity. This saves us the overhead of removing old runner peers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnmqs85h6ru745oppa1h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnmqs85h6ru745oppa1h.png" alt="Setup Key Created" width="448" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;⚠️ Store the setup key somewhere secure, it will only be displayed once at the time of creation.&lt;/p&gt;

&lt;p&gt;We're now ready to start registering our GitHub Action runners with NetBird. But before we do, we should register a client that we'll use to connect to the runners. We'll use another setup key for this, but this one will be far more restrictive, allowing only a single use and automatically registering peers into another group called &lt;code&gt;clients&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdg1kxg69pxobf5k9lz2t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdg1kxg69pxobf5k9lz2t.png" alt="Clients Setup Key" width="576" height="845"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once again, store the setup key somewhere secure, it will only be displayed once at the time of creation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajw51mzrdy2mym9krejw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajw51mzrdy2mym9krejw.png" alt="Client Setup Secret Key" width="448" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let's register the client device we'll use to connect to the GitHub Action runner. Go to &lt;a href="https://app.netbird.io/install" rel="noopener noreferrer"&gt;https://app.netbird.io/install&lt;/a&gt; and download the client software for your device, then use the client setup key to register it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;paul@lightoak:~$ sudo netbird up --setup-key 0CE86986-6B44-4B44-8037-D587DCE86DFC
Connected
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can now see our client peer in the NetBird &lt;a href="https://app.netbird.io/peer" rel="noopener noreferrer"&gt;peers dashboard&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fab516hdiwkje18xon5ht.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fab516hdiwkje18xon5ht.png" alt="Peers Dashboard" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clicking on the peer shows us further details:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkt4hqu54yzs880i0tggf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkt4hqu54yzs880i0tggf.png" alt="Peer Details" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The connection status can be checked at any time by running &lt;code&gt;sudo netbird status&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;paul@lightoak:~$ sudo netbird status
OS: linux/amd64
Daemon version: 0.29.3
CLI version: 0.29.3
Management: Connected
Signal: Connected
Relays: 3/3 Available
Nameservers: 0/0 Available
FQDN: lightoak.netbird.cloud
NetBird IP: 100.99.96.205/16
Interface type: Kernel
Quantum resistance: false
Routes: -
Peers count: 0/0 Connected
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we can see that we're connected to both the management and signal backends, and that there are &lt;code&gt;3&lt;/code&gt; relays available. The NetBird website has further details on &lt;a href="https://docs.netbird.io/about-netbird/how-netbird-works" rel="noopener noreferrer"&gt;NetBird's architecture&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We can also see that the 'Peers count' is &lt;code&gt;0/0&lt;/code&gt;, meaning there are no peers for us to connect to. Let's fix that by adding our GitHub Action runners.&lt;/p&gt;

&lt;p&gt;Luckily, there's no need to write my own GitHub Action this time around as &lt;a href="https://github.com/Alemiz112" rel="noopener noreferrer"&gt;Alemiz112&lt;/a&gt; has already done the hard work for us with the &lt;a href="https://github.com/marketplace/actions/netbird-connect" rel="noopener noreferrer"&gt;netbird-connect&lt;/a&gt; action.&lt;/p&gt;

&lt;p&gt;Let's create a minimal workflow we can use as an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Netbird demo&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;demo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Netbird Connect&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;netbird&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Alemiz112/netbird-connect@v1.0.1&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;setup-key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.NETBIRD_SETUP_KEY }}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install public SSH key&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;mkdir ~/.ssh&lt;/span&gt;
          &lt;span class="s"&gt;echo "${{ secrets.SSH_PUBLIC_KEY }}" &amp;gt; ~/.ssh/authorized_keys&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The action uses the &lt;code&gt;runners&lt;/code&gt; setup key we created earlier to authenticate to NetBird. This needs to be added as a secret in the repository settings, along with the public SSH key to install on the runner.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0snzpqsj4x8fu3v2vod.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0snzpqsj4x8fu3v2vod.png" alt="Repository Settings" width="800" height="694"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjso9o2ehz7582ei22ubv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjso9o2ehz7582ei22ubv.png" alt="Actions Secrets" width="793" height="654"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;NETBIRD_SETUP_KEY&lt;/code&gt; and &lt;code&gt;SSH_PUBLIC_KEY&lt;/code&gt; values need to be added as repository secrets, not as environment variables.&lt;/p&gt;
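&lt;p&gt;If you don't already have a keypair to hand, generating a dedicated one for runner access takes a couple of commands (the &lt;code&gt;netbird_runner_key&lt;/code&gt; filename here is arbitrary):&lt;/p&gt;

```shell
# Generate a dedicated ed25519 keypair for runner access.
# -N '' sets an empty passphrase; fine for short-lived demo runners,
# but use a passphrase (or an agent-backed key) for anything real.
ssh-keygen -t ed25519 -f netbird_runner_key -N '' -C 'github-runner-access'

# The public half is the value for the SSH_PUBLIC_KEY repository secret.
cat netbird_runner_key.pub
```

&lt;p&gt;When connecting later, point &lt;code&gt;ssh&lt;/code&gt; at the matching private key with &lt;code&gt;-i netbird_runner_key&lt;/code&gt;.&lt;/p&gt;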

&lt;p&gt;With everything in place, and the workflow config pushed to GitHub, we can now see a successful run when viewing the workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8iy068hr50r3fighcb2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8iy068hr50r3fighcb2.png" alt="Successful run" width="800" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When viewing the list of peers in the NetBird console, we can also see the peer for the runner.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8dw7e7awmfayk9ivugr7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8dw7e7awmfayk9ivugr7.png" alt="Peer List" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The grey circle to the left of the peer indicates that the peer is offline. This is expected: GitHub Action runners are ephemeral, and this one was destroyed after the workflow completed. Because we used an ephemeral setup key to register the runner, the NetBird backend will also automatically remove the peer after 10 minutes.&lt;/p&gt;

&lt;p&gt;Now, to validate that we can actually connect to a runner, we'll add a forced sleep to the workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Forced sleep&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sleep 30m&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F19znz8o9tb9iwl44d7zt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F19znz8o9tb9iwl44d7zt.png" alt="Peer List" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnxyu1f42h5rc2sz4jxq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnxyu1f42h5rc2sz4jxq.png" alt="Peer Status" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can see that the peer is still online and available, highlighted by the green circle next to the peer name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F356qzel5s07np9f4hw3v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F356qzel5s07np9f4hw3v.png" alt="Peer IP details" width="662" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the list of peers we can now get the private IP of the runner; in this case it's &lt;code&gt;100.99.231.31&lt;/code&gt;. All we need to do now is use &lt;code&gt;ssh&lt;/code&gt; to connect to that private IP over the NetBird-provided tunnel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;paul@lightoak:~$ ssh -o "StrictHostKeyChecking no" runner@100.99.231.31
Welcome to Ubuntu 22.04.5 LTS (GNU/Linux 6.5.0-1025-azure x86_64)

runner@fv-az1149-775:~$   
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this setup it's safe to disable strict host key checking, as we're connecting peer to peer over a private, encrypted, pre-authenticated tunnel.&lt;/p&gt;
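&lt;p&gt;Rather than passing &lt;code&gt;-o "StrictHostKeyChecking no"&lt;/code&gt; on every connection, the relaxed checking can be scoped to just the NetBird range in &lt;code&gt;~/.ssh/config&lt;/code&gt;. The &lt;code&gt;100.99.*&lt;/code&gt; pattern below matches the &lt;code&gt;100.99.0.0/16&lt;/code&gt; range this account was assigned; adjust it to your own:&lt;/p&gt;

```shell
# Scope relaxed host key checking to NetBird peers only.
# 100.99.0.0/16 is the range assigned in this walkthrough; yours may differ.
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host 100.99.*
    User runner
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOF
chmod 600 ~/.ssh/config
```

&lt;p&gt;With this in place, connecting to a runner is simply &lt;code&gt;ssh 100.99.231.31&lt;/code&gt;.&lt;/p&gt;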

&lt;p&gt;The solution could be further refined by using the &lt;a href="https://netbird.io/knowledge-hub/using-ssh-to-secure-remote-access" rel="noopener noreferrer"&gt;built-in SSH support&lt;/a&gt; that NetBird provides. This isn't something I've explored, as I've always had an existing SSH key infrastructure to use instead, but it could simplify the setup further and is definitely worth exploring.&lt;/p&gt;

&lt;p&gt;NetBird does a lot more, including network ACLs to restrict the traffic that can flow between peers. This is a killer feature compared with the other solutions and will be the topic of a future blog post.&lt;/p&gt;

</description>
      <category>github</category>
      <category>githubactions</category>
      <category>vpn</category>
      <category>ssh</category>
    </item>
  </channel>
</rss>
