<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Andree Toonk</title>
    <description>The latest articles on DEV Community by Andree Toonk (@atoonk).</description>
    <link>https://dev.to/atoonk</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F468970%2F75ddf76f-bc4d-4535-b06d-3b01d7a6531c.jpeg</url>
      <title>DEV Community: Andree Toonk</title>
      <link>https://dev.to/atoonk</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/atoonk"/>
    <language>en</language>
    <item>
      <title>A beautiful Kubernetes Web Client</title>
      <dc:creator>Andree Toonk</dc:creator>
      <pubDate>Fri, 03 Oct 2025 16:15:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/a-beautiful-kubernetes-web-client-25ao</link>
      <guid>https://dev.to/aws-builders/a-beautiful-kubernetes-web-client-25ao</guid>
      <description>&lt;p&gt;Kubernetes runs a massive chunk of the internet. It keeps containers alive, auto-scales your deployments when traffic spikes, and generally makes sure everything doesn’t catch fire. But let’s be real: accessing it often feels like you need a PhD in YAML archaeology just to tail some logs.&lt;/p&gt;

&lt;p&gt;Don’t get me wrong — Kubernetes is incredibly powerful. But powerful doesn’t mean it has to feel like you’re navigating a nuclear reactor control panel every time you need to exec into a pod.&lt;/p&gt;

&lt;p&gt;If you’re not living in kubectl 24/7, simple stuff like checking logs, exec’ing into a container, or bumping a deployment replica count feels way heavier than it should. Onboarding new teammates? Absolute nightmare. You’re passing kubeconfigs around Slack, setting up VPN tunnels, explaining context switching, and hoping nobody accidentally nukes production while learning the ropes.&lt;/p&gt;

&lt;p&gt;We figured there had to be a better way.&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://www.youtube.com/embed/V761PhKlQXY"&gt;&lt;/iframe&gt;&lt;/p&gt;

&lt;h2&gt;A Saner Way to Access Your Clusters&lt;/h2&gt;

&lt;p&gt;Border0 already helps teams access servers and databases without the VPN headaches. Our users loved that they could just SSO in and get secure, fully-audited access to their infrastructure. Pretty quickly, we kept hearing the same question: “This is great… but can you make Kubernetes this easy too?”&lt;/p&gt;

&lt;p&gt;So we built the Border0 Kubernetes Web Client — a clean, powerful way to work with your clusters directly from your browser. No kubeconfigs to wrangle. No VPN to configure. No desktop apps hogging RAM. Just authenticate with your existing identity provider (Google, Okta, GitHub, Entra ID, whatever your org already uses), open the portal, and you’re in.&lt;/p&gt;

&lt;p&gt;It’s the Kubernetes experience we always wanted: powerful enough for production workloads, but intuitive enough that you don’t need to memorize obscure kubectl flags or debug YAML indentation at midnight.&lt;/p&gt;

&lt;h2&gt;What It Actually Feels Like&lt;/h2&gt;

&lt;p&gt;Fire up the Border0 Kubernetes Web Client and your cluster comes alive in a clean, readable dashboard. Pods, deployments, services, node health — it’s all right there. Need to drop into a shell inside a pod? One click and you’re in a proper terminal. Logs? Live-tail them right in your browser with search and real-time streaming. Want to scale something? Drag the replica slider and hit apply. Even editing manifests doesn’t make you want to cry.&lt;/p&gt;

&lt;p&gt;And because it’s Border0, you don’t have to think about VPNs, SSH tunnels, or fragile kubeconfig files that break every three weeks. You just log in like a normal human and start working.&lt;/p&gt;

&lt;p&gt;👉 Want to see it in action? Check out the &lt;a href="https://www.youtube.com/watch?v=V761PhKlQXY" rel="noopener noreferrer"&gt;demo here &lt;/a&gt;and watch the Border0 Kubernetes Web Client do its thing.&lt;/p&gt;

&lt;h2&gt;Chat with Your Cluster (Yes, Really)&lt;/h2&gt;

&lt;p&gt;The Border0 Kubernetes Web Client also ships with Katy, our Kubernetes AI copilot. Katy speaks fluent Kubernetes so you don’t have to memorize every kubectl command and flag combination.&lt;/p&gt;

&lt;p&gt;Just open the chat and ask a question. You can ask it to show you all unhealthy pods in production, check which pods are eating the most memory right now, or list services with a specific label. When you’re ready to actually do something, Katy can help with that too — scale a deployment, restart a pod, roll back to a previous version — all while respecting your existing RBAC policies and access controls.&lt;/p&gt;

&lt;p&gt;It’s like having a really competent coworker who actually remembers Kubernetes syntax.&lt;/p&gt;

&lt;h2&gt;Use Whatever Tools You Want&lt;/h2&gt;

&lt;p&gt;If you’re already a kubectl power user, nothing changes. Keep running kubectl, k9s, Lens, or whatever toolchain you’ve memorized. Border0 doesn’t lock you into anything or break your existing workflow — you still get the same identity-based security, visibility, and audit trails regardless of how you connect.&lt;/p&gt;

&lt;p&gt;The web client is about adding a frictionless option for when you need it. It’s perfect when you just need to quickly check something, help a teammate debug an issue, or onboard someone who isn’t deeply familiar with Kubernetes yet. Instead of juggling kubeconfigs or filing access tickets, you just send them a link and they’re good to go.&lt;/p&gt;

&lt;h2&gt;Security That Doesn’t Get in the Way&lt;/h2&gt;

&lt;p&gt;Obviously this isn’t just about making developers happy. The slickest UX in the world doesn’t matter if it blows up your security model. Border0 keeps all the identity and policy controls your team already relies on, and bakes them directly into the web client.&lt;/p&gt;

&lt;p&gt;When someone signs in, they use the same SSO you already trust — Google, Okta, Entra ID, GitHub. No random kubeconfig files floating around in email or Slack DMs. Every action is tied to a real person, so you always know exactly who exec’d into that pod or scaled that deployment. If something goes sideways, you can replay the session and see exactly what happened.&lt;/p&gt;

&lt;p&gt;Policy management is straightforward too. Need to give a contractor access to one namespace for a day? Done. Need to revoke someone’s access immediately? One click and they’re out. All the compliance stuff — audit logs, onboarding/offboarding, just-in-time access — just works without any extra wiring.&lt;/p&gt;

&lt;p&gt;You get complete visibility into every Kubernetes session: who did what, when, and how.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqr4gtnm3503wzf0q67j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqr4gtnm3503wzf0q67j.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Give It a Spin&lt;/h2&gt;

&lt;p&gt;Kubernetes doesn’t have to be intimidating. With Border0, it finally feels approachable without sacrificing power or security. Whether you’re a developer who occasionally needs to check logs, a power user who practically lives in kubectl and k9s, or a security lead who needs visibility and control, the Border0 Kubernetes Web Client gives you what you actually need.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.border0.com/" rel="noopener noreferrer"&gt;Go forth and orchestrate. 🚀&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>k8s</category>
      <category>webdev</category>
      <category>distributedsystems</category>
    </item>
    <item>
      <title>Enhance DigitalOcean with AWS-Level SSM and SSO Features</title>
      <dc:creator>Andree Toonk</dc:creator>
      <pubDate>Wed, 21 Aug 2024 15:09:54 +0000</pubDate>
      <link>https://dev.to/atoonk/enhance-digitalocean-with-aws-level-ssm-and-sso-features-2ef7</link>
      <guid>https://dev.to/atoonk/enhance-digitalocean-with-aws-level-ssm-and-sso-features-2ef7</guid>
      <description>&lt;p&gt;If you’re anything like me, you appreciate DigitalOcean for its simplicity, cost-effectiveness, and ease of use. It’s an ideal platform for personal projects and smaller work-related tasks. However, as great as DigitalOcean is, it doesn’t offer some of the advanced features that larger cloud environments provide, like granular access control and integrated security with Single Sign-On (SSO) systems. These are the IAM and SSM capabilities that AWS users have come to rely on. But what if you could bring these powerful features to DigitalOcean without the added complexity or cost? That’s where Border0 comes in.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Challenge: Bridging the Gap Between Simplicity and Security
&lt;/h2&gt;

&lt;p&gt;DigitalOcean excels in user-friendliness and affordability, but when it comes to robust Identity and Access Management (IAM) and security controls, it falls short compared to giants like AWS and Google Cloud. Without built-in IAM, managing access to your Droplets (SSH), databases, or Kubernetes clusters using SSO credentials can be a bit of a headache. This often forces users to keep services more exposed than they’d like — especially in production environments where security is key.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution: Bringing AWS-Like IAM and SSO Magic to DigitalOcean
&lt;/h2&gt;

&lt;p&gt;That’s where Border0 steps in. With Border0, you can elevate your DigitalOcean workloads to meet the same security and access management standards that you’d expect from AWS or GCP — minus the headaches. Border0 provides you with the tools to control access to your DigitalOcean resources, whether it’s SSH access to Droplets, database connections, or Kubernetes clusters, all using your SSO credentials. Even better, this works seamlessly with resources in a private DigitalOcean VPC, giving you secure access without the need for a VPN.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo Time! 🚀
&lt;/h2&gt;

&lt;p&gt;Sounds too good to be true? The best part is that it’s incredibly straightforward to set up and use. In the video below, we’ll guide you through an example that shows just how easy it is.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/2SG4i0ZH69U"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup in Minutes ⏱️
&lt;/h2&gt;

&lt;p&gt;In the video, we kick things off by installing the Border0 connector from the DigitalOcean Marketplace as a 1-click Droplet. The entire setup takes about a minute — just enough time for the Droplet VM to boot and for you to click the Border0 login link. It’s fast, it’s simple, and it’s ready to go.&lt;/p&gt;

&lt;p&gt;Once the connector is deployed, we can start securing access to a Droplet (SSH), a MySQL database, and a Kubernetes cluster. These resources are safely tucked away in a private VPC, shielded from the public internet. And yet, thanks to Border0, you can access them effortlessly using your SSO identity — no need to configure complex VPNs or jump through hoops.&lt;/p&gt;

&lt;h2&gt;SSH Access Example 🔐&lt;/h2&gt;

&lt;p&gt;In the demo, you’ll see how we access a DigitalOcean Droplet VM that’s been deployed in a private VPC. No VPN required — I’m logging in using my existing SSO account. This approach isn’t just convenient; it’s also secure, with all access tied directly to your identity, whether that’s a Gmail, GitHub, Azure, or even your corporate Okta account.&lt;/p&gt;


&lt;h2&gt;
  
  
  Fine-Grained SSH Control 🛡️
&lt;/h2&gt;

&lt;p&gt;But wait, there’s more! Border0 doesn’t just give you access; it lets you control access with precision. You can enforce detailed SSH-specific access policies, such as allowing SSH access only as the ubuntu user while disallowing SFTP and TCP port forwarding. This keeps your environment secure by limiting access to only what’s necessary, minimizing potential attack surfaces.&lt;/p&gt;

&lt;h2&gt;
  
  
  Database Access Example 🗄️
&lt;/h2&gt;

&lt;p&gt;Next up in the demo, we’ll show you how to securely access a DigitalOcean-managed MySQL database using your SSO credentials. This database is hosted within the same private VPC, ensuring it remains isolated from the internet while still allowing seamless access. It’s like having the database right under your desk — without the risk of being wide open to the world.&lt;/p&gt;

&lt;p&gt;And here’s a bonus: with Border0, any database becomes accessible through our web-based database client. This WebAssembly-based client runs entirely in your browser, so you can access your databases from anywhere, on any device, without needing to install extra software. All you need is your SSO account.&lt;/p&gt;

&lt;h2&gt;
  
  
  Identity-Based Database Policies 🎯
&lt;/h2&gt;

&lt;p&gt;Just like with SSH, Border0 lets you enforce fine-grained access control for databases. You can define who has access to specific database schemas, what types of queries they can run, and even set conditions based on identity, network location, or time of day. It’s like having an SSO-based database firewall and VPN rolled into one, complete with full query recording for that extra layer of security.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes Access Example 📦
&lt;/h2&gt;

&lt;p&gt;Finally, we take a look at Kubernetes access. The video demo shows how to connect to your DigitalOcean Kubernetes cluster using kubectl. Even though the Kubernetes API is isolated from the internet, Border0 makes it feel like it’s right there at your fingertips, securely accessible with your SSO credentials.&lt;/p&gt;

&lt;p&gt;As with the other examples, you can create policies specifying who has access to which Kubernetes namespaces and what actions they can perform. For instance, you can control who has permission to use kubectl exec. And with full session logs, you can see exactly what actions were performed on which resources, and for kubectl exec, you even get session recordings—perfect for keeping tabs on what’s happening in your clusters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap Up 🎯
&lt;/h2&gt;

&lt;p&gt;With Border0, you get the best of both worlds: the simplicity and user-friendliness of DigitalOcean combined with the enterprise-grade security and access management features you expect from AWS or GCP. And the best part? You can set it all up in just a few minutes, thanks to the ease of a 1-click Droplet deployment. No complex VPNs or advanced configurations — just secure, streamlined access to your Droplets, databases, and Kubernetes clusters.&lt;/p&gt;

&lt;p&gt;Whether you’re managing Droplets, databases, or Kubernetes clusters, Border0 makes it effortless to use your SSO credentials for secure access. You retain the simplicity and ease of use that makes DigitalOcean so popular, while gaining the advanced security controls typically found in more complex cloud environments.&lt;/p&gt;

&lt;p&gt;You don’t need to be a security expert — Border0 and DigitalOcean together make it easy and pleasant to secure and manage your cloud infrastructure. Ready to enhance your DigitalOcean experience with Border0?&lt;a href="https://portal.border0.com/register" rel="noopener noreferrer"&gt; Get started today for free&lt;/a&gt; and enjoy the best of both worlds: simplicity and security.&lt;/p&gt;

</description>
      <category>dohackathon</category>
      <category>digitalocean</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>Using Okta to Control Access to Your Docker Containers</title>
      <dc:creator>Andree Toonk</dc:creator>
      <pubDate>Fri, 26 Apr 2024 17:12:30 +0000</pubDate>
      <link>https://dev.to/aws-builders/using-okta-to-control-access-to-your-docker-containers-410i</link>
      <guid>https://dev.to/aws-builders/using-okta-to-control-access-to-your-docker-containers-410i</guid>
      <description>&lt;p&gt;This article was originally published on &lt;a href="https://www.border0.com/blogs/using-okta-to-control-access-to-your-docker-containers-from-anywhere" rel="noopener noreferrer"&gt;our blog at border0.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Imagine you have a Docker host with various containers to which you need access. Depending on your scenario, that can be hard — very hard. Maybe your host is behind NAT, a firewall, in a private network, or even in a different region altogether. You need to access those containers, but security and logistics get in the way. What if you could access them securely, from anywhere, without the hassle?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets-global.website-files.com%2F632c21d06c7ca567865cf165%2F6626f9221f94c4b10682aa69_docker.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets-global.website-files.com%2F632c21d06c7ca567865cf165%2F6626f9221f94c4b10682aa69_docker.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this blog, I’ll introduce a new type of Border0 service that makes it all possible. This new service is specifically designed for Docker hosts, making it easy to access your containers from anywhere without the hassle of VPNs or compromising on security. &lt;a href="https://youtu.be/wLk2s0kfBDw" rel="noopener noreferrer"&gt;I’ll also demonstrate (video)&lt;/a&gt; how to add Okta to our setup, meaning we can access our containers securely using just our Okta credentials.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pain of Container Access
&lt;/h2&gt;

&lt;p&gt;You’re not alone if you’re struggling with accessing your containers remotely. Many teams face the same issue: you need to access containers on a Docker host, but VPNs are a hassle and provide too much network access. You wish you could simply use your browser, on any device (computer or phone), to access them securely. But, you can’t compromise on security — you need to control who has access without exposing your containers directly to the internet. It’s a challenging balance between security and convenience, and you’re stuck in the middle.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing The Docker Service type in Border0
&lt;/h2&gt;

&lt;p&gt;To address these challenges and help you with simple access to your containers from anywhere, we’ve added a new service (Socket) type specifically for Docker hosts. This new Border0 Socket is an SSH service, meaning it allows users to connect using SSH, with a new upstream type called ‘Docker Exec’.&lt;/p&gt;

&lt;p&gt;With this new service type, you can now expose containers to authenticated users, who will see a list of all containers they are authorized to access. Administrators have the flexibility to filter out which containers are visible to users, thus enhancing control over who has access to what containers.&lt;/p&gt;

&lt;p&gt;Now, accessing your containers is as easy as logging into a web application, using your favorite SSH client, or using the Border0 web client. Anytime, anywhere, and from any device!&lt;/p&gt;

&lt;h2&gt;
  
  
  Seamless Integration with your SSO providers
&lt;/h2&gt;

&lt;p&gt;Using Border0, you can easily add SSO authentication to services that don’t typically support it — like SSH, Kubernetes, Docker, and databases like MySQL, Postgres, and Microsoft SQL. You’ll get seamless, out-of-the-box SSO integration with leading providers like Google, GitHub, Microsoft, and even passwordless magic email links — all at no extra cost.&lt;/p&gt;

&lt;p&gt;For our premium users, we offer the flexibility to “bring your own identity provider”. This means you can connect your Okta Workforce, Google Workspace, or any SAML or OIDC provider to your Border0 account. This integration allows team members to access servers, containers, and databases using their existing enterprise credentials, ensuring a secure and seamless experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9vzpwbbf4khh41teeea9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9vzpwbbf4khh41teeea9.png" alt="Built-in and Custom Identity providers"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It gets better: you can synchronize your directory service — such as Okta, Google Workspace, or Microsoft Entra — with Border0, automatically importing users and groups from your enterprise directory. This allows you to create fine-grained access policies, ensuring that only authorized users — like those in your Okta group “SRE users” — can access specific resources, such as your Docker containers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo time — See for Yourself How Easy It Is
&lt;/h2&gt;

&lt;p&gt;Take a moment to watch the demo video below, which demonstrates the easy process of setting up Okta SSO and SCIM integration with Border0 and shows you how to use it to access your Docker containers.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/wLk2s0kfBDw"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;This brief demo shows how Okta and Border0 work together to provide secure and user-friendly access to your Docker containers, enhancing security without adding complexity to your workflow.&lt;/p&gt;


&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With the introduction of the Docker service type in Border0, accessing your Docker containers remotely has now become a lot easier. This new service type allows users to connect securely to Docker containers from anywhere, using any device, and without the complexities of VPNs, enhancing both convenience and security. You can now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Access your Docker containers from any location using just a web browser or CLI.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use Single Sign-On (SSO) with familiar credentials from identity providers like Okta, GitHub, or Google.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enjoy automatic directory synchronization and SCIM integration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Maintain control over who can access your resources with highly customizable fine-grained access policies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Benefit from session recording for enhanced oversight.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  Start Your Free Trial Today!
&lt;/h2&gt;

&lt;p&gt;Ready to simplify access to your infrastructure and Docker containers? &lt;a href="https://portal.border0.com/register" rel="noopener noreferrer"&gt;Start your free Border0 trial&lt;/a&gt; today and streamline your Docker container access!&lt;/p&gt;


</description>
      <category>docker</category>
      <category>okta</category>
      <category>sso</category>
      <category>webdev</category>
    </item>
    <item>
      <title>High-Speed Packet Processing in Go: From net.Dial to AF_XDP</title>
      <dc:creator>Andree Toonk</dc:creator>
      <pubDate>Tue, 12 Mar 2024 01:13:56 +0000</pubDate>
      <link>https://dev.to/aws-builders/high-speed-packet-processing-in-go-from-netdial-to-afxdp-5784</link>
      <guid>https://dev.to/aws-builders/high-speed-packet-processing-in-go-from-netdial-to-afxdp-5784</guid>
      <description>&lt;p&gt;This post was originally published on my Personal blog at &lt;a href="https://toonk.io/sending-network-packets-in-go/"&gt;https://toonk.io/sending-network-packets-in-go/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Pushing limits in Go: from net.Dial to syscalls, AF_PACKET, and lightning-fast AF_XDP. Benchmarking packet sending performance.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Recently, I wrote a Go program that &lt;a href="https://github.com/atoonk/ping-aws-ips"&gt;sends ICMP ping messages&lt;/a&gt; to millions of IP addresses. Obviously, I wanted this to be done as fast and efficiently as possible, so this prompted me to look into the various methods of interfacing with the network stack and sending packets, fast! It was a fun journey, so in this article, I’ll share some of my learnings and document them for my future self :) You’ll see how we get to 18.8 Mpps with just 8 cores. There’s also &lt;a href="https://github.com/atoonk/go-pktgen"&gt;this GitHub repo that has the example code&lt;/a&gt;, making it easy to follow along.&lt;/p&gt;

&lt;h2&gt;
  
  
  The use case
&lt;/h2&gt;

&lt;p&gt;Let’s start with a quick background of the problem statement. I want to be able to send as many packets per second as possible from a Linux machine. There are a few use cases: the ping example I mentioned earlier, but also something more generic like dpdk-pktgen, or even something like iperf. I guess you could summarize it as a packet generator.&lt;/p&gt;

&lt;p&gt;I’m using the Go programming language to explore the various options. In general, the explored methods could be used in any programming language since these are mostly Go-specific interfaces around what the Linux Kernel provides. However, you may be limited by the libraries or support that exist in your favorite programming language.&lt;/p&gt;

&lt;p&gt;Let’s start our adventure and explore the various ways to generate network packets in Go. I’ll go over the options, and we’ll end with a benchmark showing us which method is best for our use case. I’ve included examples of the various methods in a Go package; you can find the code in the &lt;a href="https://github.com/atoonk/go-pktgen"&gt;go-pktgen repo&lt;/a&gt;. We’ll use the same code to run a benchmark and see how the various methods compare.&lt;/p&gt;

&lt;h2&gt;
  
  
  The net.Dial method
&lt;/h2&gt;

&lt;p&gt;The net.Dial method is the most likely candidate for working with network connections in Go. It’s a high-level abstraction provided by the standard library’s net package, designed to establish network connections in an easy-to-use and straightforward manner. You would use this for bi-directional communication where you can simply read and write to a net.Conn (socket) without having to worry about the details.&lt;/p&gt;

&lt;p&gt;In our case, we’re primarily interested in sending traffic, using the net.Dial method that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;conn, err := net.Dial("udp", fmt.Sprintf("%s:%d", s.dstIP, s.dstPort))
if err != nil {
    return fmt.Errorf("failed to dial UDP: %w", err)
}
defer conn.Close()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, you can simply write bytes to your conn like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;conn.Write(payload)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can find our code for this in the file &lt;a href="https://github.com/atoonk/go-pktgen/blob/main/pktgen/af_inet.go"&gt;af_inet.go&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s it! Pretty simple, right? As we’ll see when we get to the benchmark, however, this is the slowest method and not the best for sending packets quickly. Using this method, we can get to about 697,277 pps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Raw Socket
&lt;/h2&gt;

&lt;p&gt;Moving deeper into the network stack, I decided to use raw sockets to send packets in Go. Unlike the more abstract net.Dial method, raw sockets provide a lower-level interface with the network stack, offering granular control over packet headers and content. This method allows us to craft entire packets, including the IP header, manually.&lt;/p&gt;

&lt;p&gt;To create a raw socket, we’ll have to make our own syscall, give it the correct parameters, and provide the type of traffic we’re going to send. We’ll then get back a file descriptor. We can then read and write to this file descriptor. This is what it looks like at the high level; see &lt;a href="https://github.com/atoonk/go-pktgen/blob/main/pktgen/rawsocket.go"&gt;rawsocket.go&lt;/a&gt; for the complete code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fd, err := syscall.Socket(syscall.AF_INET, syscall.SOCK_RAW, syscall.IPPROTO_RAW)
if err != nil {
    log.Fatalf("Failed to create raw socket: %v", err)
}
defer syscall.Close(fd)

// Set options: here, we enable IP_HDRINCL to manually include the IP header
if err := syscall.SetsockoptInt(fd, syscall.IPPROTO_IP, syscall.IP_HDRINCL, 1); err != nil {
    log.Fatalf("Failed to set IP_HDRINCL: %v", err)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it. Now we can write our raw packet to the file descriptor like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;err := syscall.Sendto(fd, packet, 0, dstAddr)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since I’m using &lt;code&gt;IPPROTO_RAW&lt;/code&gt;, we’re bypassing the transport layer of the kernel’s network stack, and the kernel expects us to provide a complete IP packet. We do that using the BuildPacket function. It’s slightly more work, but the neat thing about raw sockets is that you can construct whatever packet you want.&lt;/p&gt;

&lt;p&gt;We’re telling the kernel to just take our packet; it has less work to do, and thus this process is faster. All we’re really asking from the network stack is to take this IP packet, add the ethernet headers, and hand it to the network card for sending. It comes as no surprise, then, that this option is indeed faster than the net.Dial option. Using this method, we can reach about 793,781 pps, about 100k pps more than the net.Dial method.&lt;/p&gt;

&lt;h2&gt;
  
  
  The AF_INET Syscall Method
&lt;/h2&gt;

&lt;p&gt;Now that we’re used to using syscalls directly, we have another option. In this example, we create a UDP socket directly, like below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fd, err := syscall.Socket(syscall.AF_INET, syscall.SOCK_DGRAM, syscall.IPPROTO_UDP)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, we can simply write our payload to it using the Sendto syscall as before.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;err = syscall.Sendto(fd, payload, 0, dstAddr)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It looks similar to the raw socket example, but there are a few differences. The key one is that in this case we’ve created a socket of type UDP, which means we don’t need to construct the complete packet (IP and UDP header) like before. When using this method, the kernel manages the construction of the UDP header based on the destination IP and port we specify, and handles the encapsulation into an IP packet.&lt;/p&gt;

&lt;p&gt;In this case, the payload is just the UDP payload. In fact, this method is similar to the net.Dial method before, but with fewer abstractions.&lt;/p&gt;

&lt;p&gt;Compared to the raw socket method before, I’m now seeing 861,372 pps — that’s a 70k jump. We’re getting faster each step of the way. I’m guessing we get the benefit of some UDP optimizations in the kernel.&lt;/p&gt;
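&lt;p&gt;One detail worth spelling out is the dstAddr argument: syscall.Sendto wants a syscall.SockaddrInet4, not a string. A small sketch of constructing it and sending one datagram (the helper name is mine; no root privileges are needed for SOCK_DGRAM, unlike the raw-socket example):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"net"
	"syscall"
)

// udpSockaddr converts a dotted-quad IPv4 address and port into the
// syscall.SockaddrInet4 that syscall.Sendto expects.
func udpSockaddr(ip string, port int) (*syscall.SockaddrInet4, error) {
	parsed := net.ParseIP(ip).To4()
	if parsed == nil {
		return nil, fmt.Errorf("not an IPv4 address: %s", ip)
	}
	sa := &syscall.SockaddrInet4{Port: port}
	copy(sa.Addr[:], parsed)
	return sa, nil
}

func main() {
	// Create the UDP socket directly, as in the article.
	fd, err := syscall.Socket(syscall.AF_INET, syscall.SOCK_DGRAM, syscall.IPPROTO_UDP)
	if err != nil {
		panic(err)
	}
	defer syscall.Close(fd)

	dstAddr, err := udpSockaddr("127.0.0.1", 9999)
	if err != nil {
		panic(err)
	}
	// The payload here is just the UDP payload; the kernel builds
	// the UDP and IP headers for us.
	if err := syscall.Sendto(fd, []byte("hello"), 0, dstAddr); err != nil {
		panic(err)
	}
	fmt.Println("sent")
}
```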

&lt;h2&gt;
  
  
  The Pcap Method
&lt;/h2&gt;

&lt;p&gt;It may be surprising to see Pcap here for sending packets. Most folks know pcap from things like tcpdump or Wireshark to capture packets. But it’s also a fairly common way to send packets. In fact, if you look at many of the Go gopacket or Python Scapy examples, this is typically the method listed to send custom packets. So, I figured I should include it and see its performance. I was skeptical, but was pleasantly surprised when I saw the pps numbers!&lt;/p&gt;

&lt;p&gt;First, let’s take a look at what this looks like in Go; again, for the complete example, see my implementation in pcap.go here&lt;/p&gt;

&lt;p&gt;We start by creating a Pcap handle like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;handle, err := pcap.OpenLive(s.iface, 1500, false, pcap.BlockForever)
if err != nil {
    return fmt.Errorf("could not open device: %w", err)
}
defer handle.Close()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we create the packet manually, similar to the Raw socket method earlier, but in this case, we include the Ethernet headers.&lt;br&gt;
After that, we can write the packet to the pcap handle, and we’re done!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;err := handle.WritePacketData(packet)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To my surprise, this method resulted in quite a performance win. We surpassed the one million packets per second mark by quite a margin: 1,354,087 pps — almost a 500k pps jump!&lt;/p&gt;

&lt;p&gt;Note that towards the end of this article we’ll look at a caveat: this method stops working well when sending multiple streams (goroutines).&lt;/p&gt;

&lt;h2&gt;
  
  
  The AF_PACKET Method
&lt;/h2&gt;

&lt;p&gt;As we explore the layers of network packet crafting and transmission in Go, we next arrive at the AF_PACKET method. This method is popular with IDS systems on Linux, and for good reason!&lt;/p&gt;

&lt;p&gt;It gives us direct access to the network device layer, allowing for the transmission of packets at the link layer. This means we can craft packets, including the Ethernet header, and send them directly to the network interface, bypassing the higher networking layers. We can create a socket of type AF_PACKET using a syscall. In Go this will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fd, err := syscall.Socket(syscall.AF_PACKET, syscall.SOCK_RAW, int(htons(syscall.ETH_P_IP)))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This line of code creates a raw socket that can send packets at the Ethernet layer. With AF_PACKET, we specify SOCK_RAW to indicate that we are interested in raw network protocol access. By setting the protocol to &lt;code&gt;ETH_P_IP&lt;/code&gt;, we tell the kernel that we’ll be dealing with IP packets.&lt;/p&gt;
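&lt;p&gt;The htons helper used here isn’t part of Go’s standard library; it simply converts a 16-bit value from host to network (big-endian) byte order. A minimal version, assuming a little-endian host like x86, looks like this:&lt;/p&gt;

```go
package main

import "fmt"

// htons converts a short from host byte order to network (big-endian)
// byte order, as needed for the protocol field of AF_PACKET sockets.
// This assumes a little-endian host, which is where the swap matters.
func htons(v uint16) uint16 {
	return (v<<8)&0xff00 | v>>8
}

func main() {
	fmt.Printf("0x%04x\n", htons(0x0800)) // ETH_P_IP (0x0800) becomes 0x0008
}
```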

&lt;p&gt;After obtaining a socket descriptor, we must bind it to a network interface. This step ensures that our crafted packets are sent out through the correct network device:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;addr := &amp;amp;syscall.SockaddrLinklayer{
    Protocol: htons(syscall.ETH_P_IP),
    Ifindex:  ifi.Index,
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Crafting packets with AF_PACKET involves manually creating the Ethernet frame. This includes setting both source and destination MAC addresses and the EtherType to indicate what type of payload the frame is carrying (in our case, IP). We’re using the same BuildPacket function as we used for the Pcap method earlier.&lt;/p&gt;

&lt;p&gt;The packet is then ready to be sent directly onto the wire:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;syscall.Sendto(fd, packet, 0, addr)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The performance of the AF_PACKET method turns out to be almost identical to that achieved with the pcap method earlier. A quick Google search shows that libpcap, the library underlying tools like tcpdump and the Go pcap bindings, uses AF_PACKET for packet capture and injection on Linux platforms. So, that explains the performance similarities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using the AF_XDP Socket
&lt;/h2&gt;

&lt;p&gt;We have one more option to try. AF_XDP is a relatively recent development and promises impressive numbers! It is designed to dramatically increase the speed at which applications can send and receive packets from and to the network interface card (NIC) by providing a fast path that bypasses most of the traditional Linux network stack. Also see my earlier blog on XDP here.&lt;/p&gt;

&lt;p&gt;AF_XDP leverages the XDP (eXpress Data Path) framework. This capability not only provides minimal latency by avoiding kernel overhead but also maximizes throughput by enabling packet processing at the earliest possible point in the software stack.&lt;/p&gt;

&lt;p&gt;The Go standard library doesn’t natively support AF_XDP sockets, and I was only able to find one library to help with this. So it’s all relatively new still.&lt;/p&gt;

&lt;p&gt;I’m using this library github.com/asavie/xdp and this is how you can initiate an AF_XDP socket.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;xsk, err := xdp.NewSocket(link.Attrs().Index, s.queueID, nil)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that we need to provide a NIC queue; this is a clear indicator that we’re working at a lower level than before. The complete code is a bit more complicated than the other options, partially because we need to work with a user-space memory buffer (UMEM) for packet data. By crafting and injecting packets directly at the driver level, this method reduces the kernel’s involvement in packet processing, cutting down the time packets spend traversing system layers. So, instead of pasting the code, &lt;a href="https://github.com/atoonk/go-pktgen/blob/main/pktgen/af_xdp.go"&gt;please look at my code here.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The results look great; using this method, I can now generate 2,647,936 pps. That’s double the performance we saw with AF_PACKET! Whoohoo!&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap-up and some takeaways
&lt;/h2&gt;

&lt;p&gt;First off, this was fun to do and learn! We looked at the various options to generate packets from the traditional net.Dial method, to raw sockets, pcap, AF_PACKET and finally AF_XDP. The graph below shows the numbers per method (all using one CPU and one NIC queue). AF_XDP is the big winner!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9pq6vusrzwmq2vypepky.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9pq6vusrzwmq2vypepky.png" alt="The various ways to send network traffic in Go" width="800" height="542"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If interested, you can run the benchmarks yourself on a Linux system like below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./go-pktgen --dstip 192.168.64.2 --method benchmark \
 --duration 5 --payloadsize 64 --iface veth0

+-------------+-----------+------+
|   Method    | Packets/s | Mb/s |
+-------------+-----------+------+
| af_xdp      |   2647936 | 1355 |
| af_packet   |   1368070 |  700 |
| af_pcap     |   1354087 |  693 |
| udp_syscall |    861372 |  441 |
| raw_socket  |    793781 |  406 |
| net_conn    |    697277 |  357 |
+-------------+-----------+------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The important number to look at is packets per second, as that is the limiting factor for software network stacks. The Mb/s number is simply the packet size (in bits) multiplied by the pps number you can generate. It’s interesting to see the easy 2x jump from the traditional net.Dial approach to using AF_PACKET. And then another 2x jump when using AF_XDP. Certainly good to know if you’re interested in sending packets fast!&lt;/p&gt;
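&lt;p&gt;As a sanity check, you can reproduce the Mb/s column from the table: with a 64-byte payload, Mb/s is roughly pps × 64 × 8 / 1,000,000. A quick sketch:&lt;/p&gt;

```go
package main

import "fmt"

// mbps converts a packets-per-second rate into Mb/s for a given
// payload size in bytes, matching the benchmark table above.
func mbps(pps, payloadBytes int) int {
	return pps * payloadBytes * 8 / 1_000_000
}

func main() {
	fmt.Println(mbps(2647936, 64)) // af_xdp row: 1355
	fmt.Println(mbps(861372, 64))  // udp_syscall row: 441
}
```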

&lt;p&gt;The benchmark tool above uses one CPU and, thus, one NIC queue by default. The user can, however, elect to use more CPUs, which will start multiple goroutines to run the same tests in parallel. The screenshot below shows the tool running with eight streams (and 8 CPUs) using AF_XDP, generating 186Gb/s with 1200-byte packets (18.8Mpps)! That’s really quite impressive for a Linux box (and without using DPDK). It’s faster than what you can do with iperf3, for example.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2899fwsj1xpfdargohr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2899fwsj1xpfdargohr.png" alt="186 Gbs!" width="800" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Some caveats and things I’d like to look at in the future
&lt;/h2&gt;

&lt;p&gt;Running multiple streams (goroutines) using the pcap method doesn’t work well; the performance degrades significantly. The comparable AF_PACKET method, on the other hand, works well with multiple streams and goroutines.&lt;/p&gt;

&lt;p&gt;The AF_XDP library I’m using doesn’t seem to work well on most hardware NICs. I opened a GitHub issue for this and hope it will be resolved. It would be great to see this become more reliable, as it currently limits real-world AF_XDP Go applications. I did most of my testing using veth interfaces; I’d love to see how it works on a physical NIC with a driver that has XDP support.&lt;/p&gt;

&lt;p&gt;It turns out that AF_PACKET has a zero-copy mode, facilitated by the use of memory-mapped (mmap) ring buffers. This feature allows user-space applications to directly access packet data in kernel space without copying data between the kernel and user space, effectively reducing CPU usage and increasing packet processing speed. This means that, in theory, the performance of AF_PACKET and AF_XDP could be very similar. However, it appears the Go implementations of AF_PACKET either don’t support zero-copy mode or support it only for RX, not TX. So I wasn’t able to use it. I found this patch but unfortunately couldn’t get it to work within an hour or so, so I moved on. If this works, it will likely be the preferred approach, as you don’t have to rely on AF_XDP support.&lt;/p&gt;

&lt;p&gt;Finally, I’d love to include DPDK support in this pktgen library. It’s the last one missing. But that’s a whole beast on its own, and I need to rely on good Go DPDK libraries. Perhaps in the future!&lt;/p&gt;

&lt;p&gt;That’s it; you made it to the end! Thanks for reading!&lt;/p&gt;

&lt;p&gt;Cheers&lt;br&gt;
-Andree&lt;/p&gt;

</description>
      <category>go</category>
      <category>programming</category>
      <category>devops</category>
      <category>performance</category>
    </item>
    <item>
      <title>IPv4 surcharge — Your AWS bill is going up this February</title>
      <dc:creator>Andree Toonk</dc:creator>
      <pubDate>Wed, 31 Jan 2024 16:11:02 +0000</pubDate>
      <link>https://dev.to/aws-builders/ipv4-surcharge-your-aws-bill-is-going-up-this-february-4o56</link>
      <guid>https://dev.to/aws-builders/ipv4-surcharge-your-aws-bill-is-going-up-this-february-4o56</guid>
      <description>&lt;p&gt;Here’s how to save money and be more secure!&lt;/p&gt;

&lt;p&gt;As of tomorrow, your AWS bill will go up! Effective February 1, 2024, there will be a charge of $0.005 per IP per hour for all public IPv4 addresses, whether attached to a service or not. That’s a total of $43.80 per year, a pretty hefty number! The reason for this is outlined in the AWS announcement:&lt;br&gt;
‍&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;As you may know, IPv4 addresses are an increasingly scarce resource and the cost to acquire a single public IPv4 address has risen more than 300% over the past 5 years. This change reflects our own costs and is also intended to encourage you to be a bit more frugal with your use of public IPv4 addresses.‍&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this blog, I’ll cover how you can save money on your AWS bill by eliminating unnecessary public IPv4 addresses using Border0. But before we go there, let’s look at how many IPv4 addresses Amazon has, how much that’s worth, and how much AWS will make with this new charge to your monthly bill.‍&lt;/p&gt;

&lt;h2&gt;
  
  
  How many IPv4 Addresses does AWS have?
&lt;/h2&gt;

&lt;p&gt;Operating the Amazon infrastructure and keeping up with the incredible growth of AWS requires a massive amount of IP addresses. And so it comes as no surprise that over the years, Amazon has spent a lot of money acquiring an enormous number of IPv4 addresses. All so we can continue to spin up our ec2 instances, load balancers, and NAT gateways without worrying about IPv4 addresses.‍&lt;/p&gt;

&lt;p&gt;To determine exactly how many IPv4 addresses Amazon has, we can look at various publicly available data sets. The data I used is the AWS IP ranges JSON and the various whois (ARIN, RIPE, etc.) data entries.&lt;/p&gt;

&lt;p&gt;Crunching all that data, we can determine that Amazon has at least 131,932,752 IPv4 addresses.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Let’s round that up and say 132 Million IPv4 addresses! That’s the equivalent of almost eight /8’s 😮&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Curious about the data and what IPv4 addresses were included? See &lt;a href="https://gist.github.com/atoonk/0ee3f5bebcea874f6032215f16c3c30a"&gt;this link for the raw data&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How much is the Amazon IPv4 estate worth?
&lt;/h2&gt;

&lt;p&gt;IPv4 addresses are like digital real estate. These 32-bit integers have real monetary value and can be bought and sold. In fact, the price of IPv4 addresses has increased significantly over the last decade, and would have made an excellent investment if you got in early!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0gizb8iar6b2fq9onrc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0gizb8iar6b2fq9onrc.png" alt="https://auctions.ipv4.global/prior-sales" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So the next logical question is, how much is the Amazon IPv4 estate worth? Based on data from ipv4.global, the average price for an IPv4 address is currently ~35 dollars. With that data in hand, we can do our back-of-the-napkin math:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;So the approximate value of Amazon’s IPv4 estate today is about: &lt;strong&gt;$4.6 Billion dollars!&lt;/strong&gt; Not too shabby!‍&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  How much money will AWS make with the new IPv4 charge?
&lt;/h2&gt;

&lt;p&gt;Speaking of dollars, let’s see if we can make an educated guess about how much AWS will make from the new IPv4 charge. For that, we need the price per IP and the number of IPv4 addresses in use by AWS customers.&lt;/p&gt;

&lt;p&gt;We know the first variable: $0.005 per IP per hour, or $43.80 per year per IPv4 address. The second variable, the number of IPv4 addresses in use by AWS customers, is harder to determine. However, we can make some educated guesses for fun!&lt;/p&gt;

&lt;p&gt;As mentioned, the significant variable here is how many IP addresses are used at any given time by AWS customers. Let’s explore a few scenarios, starting with a very conservative estimate: say 10% of the IPv4 addresses published in the AWS JSON (79M IPv4 addresses) are used for a year. That’s 7.9 Million IPv4 addresses x $43.80, almost $346 Million a year. At 25% usage, that’s nearly $865 Million a year. And at 30% usage, that’s over a billion dollars!&lt;/p&gt;
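&lt;p&gt;This back-of-the-napkin math is easy to reproduce; here’s a small sketch, where the 79M published-address figure and the usage percentages are the same assumptions as above:&lt;/p&gt;

```go
package main

import "fmt"

const (
	perIPHour    = 0.005                 // USD per public IPv4 per hour
	perIPYear    = perIPHour * 24 * 365  // $43.80 per year
	publishedIPs = 79_000_000            // IPv4 addresses in the AWS JSON (assumption from the text)
)

func main() {
	fmt.Printf("per IP per year: $%.2f\n", perIPYear)
	for _, usage := range []float64{0.10, 0.25, 0.30} {
		revenue := publishedIPs * usage * perIPYear
		fmt.Printf("at %.0f%% usage: $%.0f Million/yr\n", usage*100, revenue/1e6)
	}
}
```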

&lt;p&gt;That gives us a pretty good indicator of the scale we’re talking about. Another approach is to try and measure it. How many IP addresses are alive within the AWS network right now? AWS conveniently publishes all addresses, so we could send an ICMP echo request (a ping) to all of them and measure how many send back an echo reply.‍&lt;/p&gt;

&lt;p&gt;That sounded like a fun project! So I wrote &lt;a href="https://github.com/atoonk/ping-aws-ips"&gt;a quick program &lt;/a&gt;that downloads the JSON with all the AWS IP addresses and filters out the categories “AMAZON,” “EC2,” and “GLOBAL ACCELERATOR.” We will assume these are all the customer-used (charged) IP addresses. I.e., we’re not going to ping services like Route53 Health Checks or Cloudfront as those won’t show up on your bill as an IPv4 charge.&lt;/p&gt;
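&lt;p&gt;As a hedged sketch of that filtering step, the snippet below parses the same structure as the published ip-ranges.json (a prefixes array with ip_prefix and service fields), run here on an inline sample rather than the live download; the exact set of service names to keep is the assumption from above:&lt;/p&gt;

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ipRanges matches the structure of https://ip-ranges.amazonaws.com/ip-ranges.json.
type ipRanges struct {
	Prefixes []struct {
		IPPrefix string `json:"ip_prefix"`
		Service  string `json:"service"`
	} `json:"prefixes"`
}

// filterPrefixes keeps only the prefixes whose service is in the wanted set.
func filterPrefixes(data []byte, services map[string]bool) ([]string, error) {
	var r ipRanges
	if err := json.Unmarshal(data, &r); err != nil {
		return nil, err
	}
	var out []string
	for _, p := range r.Prefixes {
		if services[p.Service] {
			out = append(out, p.IPPrefix)
		}
	}
	return out, nil
}

func main() {
	// Inline sample in the same shape as the real file; values are made up.
	sample := []byte(`{"prefixes":[
		{"ip_prefix":"3.2.34.0/26","service":"AMAZON"},
		{"ip_prefix":"52.94.76.0/22","service":"CLOUDFRONT"},
		{"ip_prefix":"18.0.0.0/15","service":"EC2"}]}`)
	want := map[string]bool{"AMAZON": true, "EC2": true, "GLOBALACCELERATOR": true}
	prefixes, _ := filterPrefixes(sample, want)
	fmt.Println(prefixes) // [3.2.34.0/26 18.0.0.0/15]
}
```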

&lt;p&gt;The program sends a single ICMP packet to all IP addresses and collects all the replies. With this, we have some actual measurement data, and we observe that we received a reply from roughly 6 Million IPv4 addresses.&lt;br&gt;
6 Million addresses x $43.80 is about $263 Million annually!&lt;/p&gt;

&lt;p&gt;That’s another good data point. However, remember that many ec2 instances and other services will have strict security groups and, by default, won’t respond to a ping packet. So, it’s fair to say that six million active IPs is the absolute minimum. The actual number of active IPv4 addresses could easily be double that given the various default security groups blocking ICMP.‍&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Given this data, I believe it’s fair to say that AWS will likely make anywhere &lt;strong&gt;between $400 Million and $1 Billion&lt;/strong&gt; dollars a year with this new IPv4 charge! &lt;br&gt;
That’s a nice bump for AWS, especially given that this was provided for free until today.‍&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Lowering your AWS bill with Border0
&lt;/h2&gt;

&lt;p&gt;Your AWS services, such as ec2 instances, may have public IPv4 addresses for various reasons. One common reason is to have management access to your servers. For example, using SSH or RDP. Or to access the app running on your machine, like a database or HTTP application.‍&lt;/p&gt;

&lt;p&gt;Some of your applications should likely only be accessible to authorized users and, ideally, not connected directly to the Internet. For example, &lt;a href="https://www.border0.com/blogs/help-my-postgres-database-was-compromised"&gt;recent Border0 research showed&lt;/a&gt; that botnets are actively compromising publicly accessible Mysql and Postgres servers! You don’t want these unprotected on the Internet for everyone to poke at.‍&lt;/p&gt;

&lt;p&gt;We recommend running your AWS infrastructure in a private subnet with only a NAT gateway for outbound Internet connectivity. This way, they’re shielded from the Internet, significantly reducing the risk of getting compromised. As a bonus, you save yourself the AWS IPv4 charge! (note: the charge is only for public IP addresses).&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/NHR3Do4WhVU"&gt;
&lt;/iframe&gt;
&lt;br&gt;
With the deployment of a Border0 connector in your private network, you and the rest of your team can still access all services using just your existing Single Sign-on credentials without needing a VPN.&lt;/p&gt;

&lt;p&gt;Deploying Border0 &lt;a href="https://youtu.be/e0KeS5x-GZ0"&gt;is easier than you may think&lt;/a&gt;! Curious to give it a try? Check out our &lt;a href="https://www.border0.com/blogs/border0-terraform-provider"&gt;terraform example&lt;/a&gt; or this &lt;a href="https://www.border0.com/blogs/elevating-security-and-simplifying-aws-access-with-border0"&gt;blog on Border0 for AWS&lt;/a&gt;.&lt;br&gt;
‍&lt;/p&gt;

&lt;p&gt;Border0 offers &lt;a href="https://portal.border0.com/register"&gt;a generous free tier&lt;/a&gt;, and getting started is easy!&lt;br&gt;
With Border0, access is easier and more secure; your engineers and security team will love it. And, since you’re saving on public IPs, your FinOps folks will be happy, too!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ipv4</category>
      <category>ipv6</category>
      <category>security</category>
    </item>
    <item>
      <title>Help! My database was compromised!</title>
      <dc:creator>Andree Toonk</dc:creator>
      <pubDate>Mon, 15 Jan 2024 15:26:31 +0000</pubDate>
      <link>https://dev.to/aws-builders/help-my-database-was-compromised-3o2j</link>
      <guid>https://dev.to/aws-builders/help-my-database-was-compromised-3o2j</guid>
      <description>&lt;p&gt;&lt;em&gt;A detailed look at an active PostgreSQL and MySQL Ransomware bot.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The other day, I noticed this tweet mentioning how someone’s test PostgreSQL database was compromised. This intrigued me and made me wonder how common this is. So, I decided to run an experiment. Let’s run a simple PostgreSQL server on a DigitalOcean VM and see what happens! And wow, I wasn’t disappointed.‍&lt;/p&gt;

&lt;p&gt;To my surprise, it only took a few hours to get compromised; in fact, I re-ran the experiment a few times and reproduced the compromise several times a day. It’s a scary world out there!‍&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9qitjx5zfxggto4a8o6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9qitjx5zfxggto4a8o6.png" alt="Compromised Database" width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In all cases, my database was wiped out (drop database). Interestingly, the attacker left a note in a newly created database called readme_to_recover with instructions for how to get our data back…‍&lt;/p&gt;

&lt;p&gt;So this made me curious: what exactly happened, and how common is it?‍&lt;/p&gt;

&lt;h2&gt;
  
  
  What exactly happened?
&lt;/h2&gt;

&lt;p&gt;To better understand exactly what happened during these events, I put my PostgreSQL database behind a simple custom PostgreSQL proxy. This authenticates the user and logs all queries, giving us good insight into exactly which queries the attacking bot executed.&lt;/p&gt;

&lt;p&gt;In this experiment, we made our test PostgreSQL database available from anywhere on the Internet, with username: postgres and password: password. It had an example database called books, with a few tables and some dummy data.‍&lt;/p&gt;

&lt;p&gt;With this setup, we should learn more about what exactly happened. I didn’t have to wait long; soon after starting my new setup, the bot came knocking again!&lt;br&gt;
The first log line shows me where the attack was coming from:‍&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2024/01/11 19:02:10 New connection accepted from 94.156.71.8:58212
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is an IP from a hosting provider out of The Netherlands, AS394711, also known as Limenet. The user authenticated with user: postgres and password: password.&lt;/p&gt;

&lt;p&gt;With our lightweight proxy, we can now see exactly what queries are being executed. The first few queries we see are intended to explore what’s in our database:&lt;br&gt;
&lt;code&gt;SELECT datname FROM pg_database;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This helps the bot to identify all available databases in our PostgreSQL server.&lt;/p&gt;

&lt;p&gt;Now that the bot knows what databases are available, it next tries to determine what tables are available for each database and what they look like. Then, it takes a snapshot of each table in the database before dropping both the tables and databases.‍&lt;/p&gt;

&lt;p&gt;These are the exact queries we recorded, all wrapped in a transaction per table. This was repeated for all tables in my books database.‍&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BEGIN
SELECT pg_database_size('books')
SELECT table_name FROM information_schema.tables WHERE table_schema = 'public' 
  AND table_type = 'BASE TABLE' AND table_catalog = 'books';
SELECT column_name FROM information_schema.columns WHERE table_schema = 'public'
  AND table_name = 'authors';
SELECT * FROM authors LIMIT 20;
DROP TABLE IF EXISTS authors CASCADE;
COMMIT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After processing all tables, it then dropped the database like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;DROP DATABASE books;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Before continuing, it also executed this query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT pg_terminate_backend(pg_stat_activity.pid) FROM pg_stat_activity WHERE 
pg_stat_activity.datname &amp;lt;&amp;gt; 'postgres' AND pid &amp;lt;&amp;gt; pg_backend_pid();‍
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This query terminates all backend processes connected to any database except the ‘postgres’ database, excluding the process running the query itself. It is likely used to terminate other active connections to the database, possibly to prevent administrators or automated systems from interrupting the attack.&lt;/p&gt;

&lt;h2&gt;
  
  
  A note was left — Ransom Demand
&lt;/h2&gt;

&lt;p&gt;As a last step, the attacker left us a note in a newly created database called readme_to_recover.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;CREATE DATABASE readme_to_recover TEMPLATE template0;&lt;/code&gt;&lt;br&gt;
Then, it uses this new database to add a readme like below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BEGIN
CREATE TABLE readme (text_field VARCHAR(255));
INSERT INTO readme (text_field) VALUES 
('All your data is backed up. You must pay 0.007 BTC to 164hyKPAoC5ecqkJ2ygeGoGFRcauWRLujV In 48 hours, your data will be publicly disclosed 
and deleted. (more information: go to http://iplis.ru/data3)');
INSERT INTO readme (text_field) VALUES ('After paying send mail to us: 
rambler+3uzdl@onionmail.org and we will provide a link for you to download your data. 
Your DBCODE is: 3UZDL');
COMMIT‍
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, a &lt;code&gt;SELECT text_field FROM readme;&lt;/code&gt; shows us how to recover our data.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;All your data is backed up. You must pay 0.007 BTC to 164hyKPAoC5ecqkJ2ygeGoGFRcauWRLujV In 48 hours, your data will be publicly disclosed and deleted. (more information: go to &lt;a href="http://iplis.ru/data3"&gt;http://iplis.ru/data3&lt;/a&gt;) After paying send mail to us: &lt;a href="mailto:rambler+3uzdl@onionmail.org"&gt;rambler+3uzdl@onionmail.org&lt;/a&gt; and we will provide a link for you to download your data. Your DBCODE is: 3UZDL&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;‍It conveniently informs us that we can get a copy of our data back for 0.007 BTC (about $327 USD at the time of writing this article). The data will be publicly disclosed and deleted if we don’t pay!‍&lt;/p&gt;

&lt;p&gt;But that’s a lie; we know the bot didn’t take all our data! The query logs clearly show that while it did a &lt;code&gt;SELECT *&lt;/code&gt; for each table, it also included a &lt;code&gt;LIMIT 20&lt;/code&gt;, which means the bot only selected the first 20 rows of each table. So even though it claims to have backed up all the data, that’s clearly not true, and it cannot disclose data it never took. This also means that paying the ransom of 0.007 BTC would be useless.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsnlumoqyxh5pgyhuaz23.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsnlumoqyxh5pgyhuaz23.png" alt="Queries executed by the attacking ransom bot" width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How much money is made?
&lt;/h2&gt;

&lt;p&gt;Taking a closer look at the Bitcoin address specified in the ransom note reveals its activity. Five separate transactions were made to this address in the last few days, bringing in a combined total of just over $2,400 USD. Notably, each time funds were transferred to it, they were swiftly moved to another wallet. Unfortunately, these victims will never receive their data back.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwrncepyigoavdic19lh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwrncepyigoavdic19lh.png" alt="https://bitinfocharts.com/bitcoin/address/164hyKPAoC5ecqkJ2ygeGoGFRcauWRLujV" width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The same bot is targeting MySQL databases
&lt;/h2&gt;

&lt;p&gt;Amid my investigation, an intriguing parallel surfaced involving MySQL databases. The same bot attacks MySQL databases, coming from various IP addresses within the same /24 range. However, there were subtle differences in its approach. Notably, when sifting through the data, the bot imposed a LIMIT of 10 rows, again stopping short of a complete data extraction. Following this, it systematically drops all tables and databases. In a move mirroring its PostgreSQL tactics, it established a new database named &lt;code&gt;RECOVER_YOUR_DATA&lt;/code&gt;, containing a table of the same name. This table contains the same ransom message we observed earlier, including the same Bitcoin wallet address. Curiously, though, the ransom price for MySQL databases is 0.017 BTC (USD $732), about double the PostgreSQL price.&lt;/p&gt;

&lt;p&gt;As a last step, the bot attempted to bring the MySQL server to a halt using the &lt;code&gt;SHUTDOWN&lt;/code&gt; command — a clear sign of its calculated and destructive intent, something that will undoubtedly get your attention.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exposing your database by accident is easier than you think.
&lt;/h2&gt;

&lt;p&gt;Ok, I know what you’re thinking: who the heck exposes their database to the Internet like that? You got what you deserved!&lt;br&gt;
‍&lt;br&gt;
Well, yes and no! Unfortunately, exposing your database publicly to the Internet is asking for trouble! So ideally, you don’t do that or find a way to protect yourself from drive-by attacks like these. At least set a strong password.&lt;br&gt;
So, although it’s not the best idea to leave your database wide open, it’s also not as uncommon as you might think. Perhaps you’ve done it yourself; sometimes it’s just convenient.‍&lt;/p&gt;

&lt;p&gt;A quick check at &lt;a href="https://www.shodan.io/search?query=PostgreSQL"&gt;Shodan.io&lt;/a&gt; shows us that there are about 833,666 publicly accessible PostgreSQL servers and over &lt;a href="https://www.shodan.io/search?query=mysql"&gt;3.2 Million MySQL databases&lt;/a&gt;! Lots of potential targets. Most of them run in public cloud providers like AWS, GCP, DigitalOcean, etc. But, interestingly, &lt;a href="https://www.shodan.io/search/facet?query=PostgreSQL&amp;amp;facet=org"&gt;the number two on the list&lt;/a&gt; is a Polish provider called home.pl (AS12824), for which Shodan reports over 82,000 publicly reachable PostgreSQL servers.&lt;/p&gt;

&lt;p&gt;It’s not surprising to see so many open database services in the public cloud. If you run your database in, say, DigitalOcean or even AWS, these providers don’t always make it easy to access your database from your desktop, or from a workload running in a different region or provider. You may have no option but to open it up to the world. So, while it’s bad practice, the sheer number of open databases isn’t all that surprising.&lt;/p&gt;
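Curious whether one of your own databases is part of that statistic? A plain TCP connect test against your server's public address is often all it takes for a first check. A minimal sketch (the hostname is a placeholder; 5432 and 3306 are the default PostgreSQL and MySQL ports):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run this from OUTSIDE your own network; "db.example.com" is a placeholder
# for your server's public address.
for port in (5432, 3306):
    state = "REACHABLE" if port_open("db.example.com", port) else "closed/filtered"
    print(f"port {port}: {state}")
```

If a port reports REACHABLE from an outside network, your database is one Shodan scan away from being found.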

&lt;blockquote&gt;
&lt;p&gt;Also, for Docker users, it’s important to know that using &lt;code&gt;docker run -p&lt;/code&gt; to publish a container’s port alters your iptables rules for DNAT-based port forwarding. This Docker feature manages external communication for containers and overrides any default deny settings in your iptables INPUT chain, thus making the port publicly accessible. If you only need local access, publish the port on the loopback interface instead, e.g. &lt;code&gt;docker run -p 127.0.0.1:5432:5432&lt;/code&gt;, which keeps it reachable only from the host itself.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  A better way to expose your database
&lt;/h2&gt;

&lt;p&gt;If you want to make your database available in an easy, low-friction way to just the users who should have access, without exposing it to the whole world, take a look at Border0!&lt;/p&gt;

&lt;p&gt;The video below demonstrates how easy it is to make your PostgreSQL database available using Border0.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/GW2f53GWt0s"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.border0.com/docs/making-a-database-resource-available"&gt;Using Border0&lt;/a&gt; you can make your database available to your fellow team members, contractors, and anyone else you determine should have access. There’s no need for a VPN, you can keep the database blocked off from the rest of the Internet, and users only need their single sign-on credentials. The best of both worlds!&lt;/p&gt;

&lt;p&gt;So why settle for the confines of conventional access management? Border0’s policies give you far more adaptability and governance over who can access what. Explore the advantages of Border0 firsthand by registering for our &lt;a href="https://portal.border0.com/register"&gt;free, full-featured community edition today&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>mysql</category>
      <category>postgres</category>
      <category>webdev</category>
      <category>security</category>
    </item>
    <item>
      <title>Passwordless Zero Trust Access to AWS RDS</title>
      <dc:creator>Andree Toonk</dc:creator>
      <pubDate>Thu, 16 Nov 2023 20:06:33 +0000</pubDate>
      <link>https://dev.to/aws-builders/passwordless-zero-trust-access-to-aws-rds-2417</link>
      <guid>https://dev.to/aws-builders/passwordless-zero-trust-access-to-aws-rds-2417</guid>
      <description>&lt;p&gt;Accessing databases can be a cumbersome process, often requiring VPNs, jumping through bastion hosts, and shared credentials. It doesn’t need to be hard though! The &lt;a href="https://www.border0.com"&gt;Border0.com&lt;/a&gt; integration with AWS RDS offers a seamless, passwordless experience using IAM-based authentication. This integration simplifies access and enhances security, visibility, and control.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/iJgslr5kZ24"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  IAM-Based Authentication for RDS
&lt;/h2&gt;

&lt;p&gt;IAM-based authentication for RDS lets users and systems authenticate to the database through AWS’ Identity and Access Management (IAM) system instead of traditional password-based logins. Because no static credentials are required, the risk of credential compromise is significantly reduced: users and systems access RDS instances using their existing AWS IAM credentials. This approach not only simplifies the user experience but also adds an extra layer of security.&lt;/p&gt;

&lt;h2&gt;
  
  
  Better together: RDS IAM and Border0
&lt;/h2&gt;

&lt;p&gt;Border0 integrates with and elevates IAM-based authentication for RDS by serving as an advanced, fine-grained database proxy that plugs into your Single Sign-On (SSO) systems and RDS instances. This setup also provides a rich set of policies that let administrators specify who should have access to the database through a set of granular conditions. And with our database-level security rules for MySQL, it even provides an L7 database firewall, all in one solution!&lt;/p&gt;

&lt;p&gt;One of the standout advantages of combining Border0 with RDS IAM authentication is passwordless access: neither the clients nor the Border0 connector needs to store database credentials, which further enhances security. As an additional bonus, Border0 also takes care of reaching private RDS instances, so there is no need for a VPN. Let’s take a look at how to set this up.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up IAM-Based Auth on RDS
&lt;/h2&gt;

&lt;p&gt;Setting up IAM-based authentication for your RDS cluster is easy; simply follow our docs here. For this example, we assume you already have a Border0 connector installed in your AWS environment. If you haven’t done so already, I recommend using either the AWS installer or the Terraform example.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Enable IAM Authentication&lt;/strong&gt;&lt;br&gt;
Ensure IAM-based authentication is enabled for your RDS cluster.‍&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Configure MySQL to accept IAM authentication&lt;/strong&gt;&lt;br&gt;
Execute the following MySQL queries to enable IAM authentication.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE USER 'Border0ConnectorUser'@'%' IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';
GRANT ALL ON MyAwesomeDatabase.* TO 'Border0ConnectorUser'@'%';
FLUSH PRIVILEGES;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Create a Border0 Database Socket&lt;/strong&gt;&lt;br&gt;
Create a Database Socket in the Border0 portal, link it to your Border0 connector, and select IAM authentication.&lt;/p&gt;

&lt;p&gt;Finally, make sure the connector has the proper AWS credentials. If you used the AWS installer (&lt;code&gt;border0 connector install --aws&lt;/code&gt;) or our Terraform example, the IAM credentials are already taken care of. Otherwise, make sure the connector host has &lt;code&gt;rds-db:connect&lt;/code&gt; IAM permissions like below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": [
        "arn:aws:rds-db:*:*:dbuser:*/Border0ConnectorUser"
      ]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For a detailed guide, refer to the Border0 documentation.‍&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing Access
&lt;/h2&gt;

&lt;p&gt;Once set up, you can test access through Border0’s client portal or use your preferred database client. Border0 offers close integration with a variety of popular database clients, such as MySQL Workbench, DBeaver, the mysql CLI, mycli, psql, and DataGrip.&lt;/p&gt;

&lt;h2&gt;
  
  
  Session and Query Logs
&lt;/h2&gt;

&lt;p&gt;Now that your users can access the database, we can see a report of all access attempts in the Border0 admin portal. These session logs offer valuable information, such as who (SSO identity) accessed specific databases, the location from which the access was made, and the timing of the access. Additionally, query logs record the exact queries executed, providing a clear picture of database activity. This level of detail is particularly useful for compliance, auditing, and security monitoring, addressing challenges like attributing queries to individual users in systems where shared credentials are used.‍‍&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g_n9F6b0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qk3hywl37m8j3cdsxvse.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g_n9F6b0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qk3hywl37m8j3cdsxvse.png" alt="Database Query logs per Single sign-on Identity.&amp;lt;br&amp;gt;
" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Zero Trust security model for database access
&lt;/h2&gt;

&lt;p&gt;Incorporating Border0 with AWS RDS IAM authentication significantly advances your journey toward a passwordless Zero Trust security model for database access. With Border0, you get fine-grained access controls that align with the Zero Trust principle of least privilege access. You can even specify the types of queries each user is allowed to execute. Unlike traditional methods that use shared database credentials, Border0 allows engineers to connect using their Single Sign-On (SSO) identity, ensuring each database query can be attributed to a user and is continuously authenticated and authorized. This fulfills the Zero Trust mandate of “never trust, always verify.”‍&lt;/p&gt;

&lt;p&gt;The integration also eliminates the need for passwords, reducing the attack surface and making unauthorized access more challenging. This supports the Zero Trust focus on minimizing credential-based attacks. Additionally, Border0 restricts users to specific databases, preventing lateral movement within the network, and allows for the definition of permissible queries. This is a shift from the broader network access granted by traditional VPNs or Bastion hosts and aligns with the Zero Trust principle of micro-segmentation.‍&lt;/p&gt;

&lt;p&gt;Border0’s detailed session and query logs offer extensive visibility, supporting the Zero Trust principle of continuous monitoring. The platform also allows for dynamic policies that adapt to real-time conditions, another cornerstone of Zero Trust. Moreover, all traffic between the end user and the connector (running in your environment) is end-to-end encrypted, ensuring that no intermediaries, including Border0 itself, can see or modify the database traffic.‍&lt;/p&gt;

&lt;p&gt;By integrating these features, Border0 and AWS RDS IAM authentication provide a robust framework that not only aligns with but also enhances the principles of a Zero Trust security model.‍&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this blog, we looked at how Border0 and AWS RDS’s IAM-based authentication come together to redefine database access, security, and auditability. This powerful integration offers a secure, streamlined, and entirely passwordless experience, aligning perfectly with the principles of a zero-trust security model.‍&lt;/p&gt;

&lt;p&gt;The advantages are manifold: for administrators, the ability to define fine-grained access policies, implement application-level security, and gain detailed query insights provides a level of visibility and control that is hard to match. For engineers, the user experience is significantly enhanced; they can effortlessly discover all databases they have access to and connect using either Border0’s web client or their preferred database client. All that is required are their SSO credentials, eliminating the need for VPNs — even when the RDS instance is hosted in a private AWS subnet.‍&lt;/p&gt;

&lt;p&gt;Border0 takes care of the complexities, allowing both administrators and engineers to focus on what matters most. The end result is a solution that boosts productivity without compromising on security. Whether you’re an administrator aiming for robust security controls or an engineer looking for seamless access, this integration has something for everyone. We invite you to experience these benefits firsthand by exploring Border0’s free, fully functional community edition and seeing for yourself how easy it is to start your zero trust journey. Within just a couple of minutes, you’ll transform the way your organization manages database access, security, and auditability.&lt;/p&gt;

</description>
      <category>rds</category>
      <category>zerotrust</category>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>SSH Shell access to your GitHub actions VM</title>
      <dc:creator>Andree Toonk</dc:creator>
      <pubDate>Wed, 01 Nov 2023 16:53:44 +0000</pubDate>
      <link>https://dev.to/aws-builders/ssh-shell-access-to-your-github-actions-vm-8dh</link>
      <guid>https://dev.to/aws-builders/ssh-shell-access-to-your-github-actions-vm-8dh</guid>
      <description>&lt;p&gt;Have you ever had a &lt;a href="https://github.com/marketplace/actions/border0-connect"&gt;GitHub action&lt;/a&gt; fail and wish you could just quickly log in to the build VM to troubleshoot? Well, good news! That’s the topic of today’s blog, in which we’ll explain how you can enhance your existing GitHub Actions with just a few lines of extra code so that you can get a shell to your runner container each time it fails.‍&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/tTrm3gBHFxQ"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  The need for direct access
&lt;/h2&gt;

&lt;p&gt;GitHub Actions has become an indispensable tool over the last few years, serving as a CI/CD pipeline for many code repositories. Like any codebase, bugs are inevitable. Fortunately, most are caught by integration tests run during the build process. The typical next step is to review the logs to identify the issue. However, sometimes the logs aren’t enough, and direct access to the build VM is needed for troubleshooting. Unfortunately, that’s not possible with GitHub Actions; today we’re solving that!&lt;/p&gt;

&lt;h2&gt;
  
  
  SSH into Your GitHub Actions VM with SSO Credentials
&lt;/h2&gt;

&lt;p&gt;Border0 allows users to access SSH, databases, and web applications without a VPN using their existing Single Sign-On (SSO) credentials. During a recent hackathon, we leveraged this feature to create a GitHub Actions plugin. This allows users to quickly and easily log in to their GitHub Actions VM using just their Gmail, GitHub, or Microsoft SSO credentials.&lt;/p&gt;

&lt;p&gt;Now, when your build action fails, you’ll receive a notification. You can then use our browser-based WASM SSH client to troubleshoot issues directly on the virtual machine, identify the problem, fix it, and move on.‍&lt;/p&gt;

&lt;h2&gt;
  
  
  Integrating Border0 into Your Pipeline
&lt;/h2&gt;

&lt;p&gt;To make things easy, we’ve made the &lt;a href="https://github.com/marketplace/actions/border0-connect"&gt;Border0 action available&lt;/a&gt; so that you can easily include it in your existing workflows. Make sure to check out our GitHub repository and explore the README and examples.&lt;/p&gt;

&lt;p&gt;To get started, you’ll need a Border0 account (&lt;a href="https://portal.border0.com/register"&gt;Register here&lt;/a&gt;) and an API token. You can easily add the Border0 GitHub Action as an additional step in your existing workflow, as shown below:‍&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: CI/CD Pipeline with Border0 GitHub Action
on:
  push:
  workflow_dispatch:
    inputs:
      debug:
        type: boolean
        description: Manual trigger
        required: false
        default: false

jobs:
  gh-action-job:
    runs-on: ubuntu-latest
    steps:
      - name: A step that will always fail
        run: |
            echo "this step will wait 10 seconds and always fails"
            sleep 10
            exit 1
      - name: Start Border0 in case of failure.
        if: failure()
        uses: borderzero/gh-action@v2
        with:
          token: ${{ secrets.BORDER0_ADMIN_TOKEN }}
          slack-webhook-url: ${{ secrets.SLACK_WEBHOOK_URL }}
          wait-for: 20
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From now on, the next time your action fails, you can log into your runner VM.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advanced Usage
&lt;/h2&gt;

&lt;p&gt;By default, the Border0 action creates an SSH service only when the action fails. However, you have more options. For instance, you may want the SSH service to run for every workflow run, regardless of its success or failure. An example of that can be found in the README.‍&lt;/p&gt;

&lt;p&gt;Another helpful addition is the Slack integration. This will notify you of the failed action and send the SSH URL to your Slack workspace.&lt;/p&gt;

&lt;p&gt;Using Border0 policies, you have complete control over who has access to your build virtual machine. Not only can you define which SSO identities have access, but you can also restrict access to specific IP addresses and countries and use date &amp;amp; time-based filters. Best of all, you’ll get a complete session log showing exactly who logged in, when, and from where, including a session recording. These recordings can be replayed to show exactly what commands were executed and what happened during the session. Great for compliance, troubleshooting, and even training purposes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap up
&lt;/h2&gt;

&lt;p&gt;Troubleshooting your GitHub Actions just got a whole lot easier! In this blog, we saw how easy it is to add the Border0 action to your workflow and get a shell to your build VM each time there’s a build failure. Sometimes staring at the logs just isn’t enough, and you really want a shell to poke around. This will allow you and the rest of your team to troubleshoot issues faster and more efficiently.&lt;/p&gt;

&lt;p&gt;Users can access their build VM using their existing SSO credentials, protected by Border0 policies, so it’s easy to define who should have access. They can use their regular SSH client or the Border0 web SSH client, allowing them to quickly troubleshoot even from a tablet or phone!&lt;/p&gt;

&lt;p&gt;To try out the &lt;a href="https://github.com/marketplace/actions/border0-connect"&gt;Border0 GitHub action&lt;/a&gt; today, sign up for our free, fully featured community edition and run the example for yourself! Alternatively, &lt;a href="https://www.border0.com/contact-us"&gt;schedule a demo&lt;/a&gt; and let us walk you through it.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Introducing The Border0 Terraform Provider</title>
      <dc:creator>Andree Toonk</dc:creator>
      <pubDate>Wed, 18 Oct 2023 15:35:30 +0000</pubDate>
      <link>https://dev.to/aws-builders/introducing-the-border0-terraform-provider-267a</link>
      <guid>https://dev.to/aws-builders/introducing-the-border0-terraform-provider-267a</guid>
      <description>&lt;p&gt;Terraform has become an essential tool for many developers who manage infrastructure and applications running in the cloud. Border0 customers build on cloud providers such as AWS, so it makes sense we added official support for Terraform and make sure you can easily create Border0 Sockets for all your AWS resources using Terraform.‍&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://registry.terraform.io/providers/borderzero/border0/latest/docs" rel="noopener noreferrer"&gt;Border0 Terraform provider&lt;/a&gt; further enhances how users interact with the Border0 API and is built on top of the &lt;a href="https://www.border0.com/blogs/using-the-go-programming-language-to-work-with-border0" rel="noopener noreferrer"&gt;Border0 Go SDK we released just a few weeks ago&lt;/a&gt;. As a Border0 customer, you can now use the Portal, the CLI, Terraform, our SDK, and the APIs to interact with Border0.‍&lt;/p&gt;

&lt;p&gt;This new Terraform provider allows you to manage and keep track of all your Border0 resources, such as Connectors, Sockets, Policies, and tokens, all using Terraform. In this blog, we will showcase some examples.‍&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/07PNUsGF09g"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Terraform?
&lt;/h2&gt;

&lt;p&gt;Terraform is a popular infrastructure as code tool developed by HashiCorp. Thousands of developers and infrastructure engineers use it to manage their cloud resources. Like many of our customers, we ourselves use it extensively for our AWS environments. Having a declarative way to manage your infrastructure makes deploying and tracking changes easy using typical GitOps best practices.‍&lt;/p&gt;

&lt;p&gt;Getting started with the Border0 Terraform provider&lt;br&gt;
To make it easy to get started with the Border0 Terraform provider, we published a &lt;a href="https://github.com/borderzero/terraform-examples" rel="noopener noreferrer"&gt;GitHub repository with a Getting Started example&lt;/a&gt;. The Terraform code has a few modules and at a high level does the following:&lt;/p&gt;

&lt;p&gt;The first module, called infrastructure, creates a new AWS VPC with two EC2 instances, an ECS cluster, and an RDS instance. All of this runs in a private subnet, isolated from direct Internet access for security purposes and closely mimicking a real-world scenario. Only outbound network access is allowed, through the NAT gateway.&lt;br&gt;
In the second module, called connector, we use Terraform to start a Border0 connector for this environment. It runs on an EC2 instance in the private subnet.&lt;br&gt;
Lastly, in the third module, called sockets, we use Terraform to create Border0 services (sockets) for the EC2, ECS, and RDS resources and link them to the Border0 connector. After completing this, you can access these private AWS resources through Border0 using just your SSO credentials. Best of all, bringing up this Terraform stack takes just a few minutes!&lt;br&gt;
At a high level, this diagram demonstrates what we’re deploying.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6cbk6pemmjankj2metv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6cbk6pemmjankj2metv.png" alt="The AWS demo setup"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Running the example
&lt;/h2&gt;

&lt;p&gt;Okay, it’s time to build! To follow along, check out our example repository like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/borderzero/terraform-examples.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have the example on our machine, we need to take care of a few prerequisites.‍&lt;/p&gt;

&lt;p&gt;AWS credentials&lt;br&gt;
If you have your AWS default credentials set up in your local environment and want to use these, you’re good to go. Otherwise, you can configure AWS credentials in one of the following ways.‍&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export AWS_ACCESS_KEY_ID="access-key"
export AWS_SECRET_ACCESS_KEY="secret-access"
export AWS_REGION="us-west-2"
export AWS_DEFAULT_REGION="us-west-2"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, you can update the variables.tf file, and add the AWS credentials there.‍&lt;/p&gt;

&lt;p&gt;Border0 API key&lt;br&gt;
Next, we’ll need to make sure Terraform has a Border0 API key to interact with the Border0 API. You can create an API key in the Border0 portal under &lt;a href="https://portal.border0.com/organizations/current?tab=tokens" rel="noopener noreferrer"&gt;Organization Setting &amp;gt; Access Token&lt;/a&gt;. We recommend you use a token with the ‘member’ role.&lt;/p&gt;

&lt;p&gt;The Border0 API token can be configured in the variables.tf file. Alternatively, you can set the Border0 API key as an environment variable like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;export BORDER0_TOKEN=YOUR_TOKEN&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Running Terraform
&lt;/h2&gt;

&lt;p&gt;Before creating a plan file and applying the changes, we need to run the &lt;code&gt;terraform init&lt;/code&gt; command to initialize our Terraform working directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we’ve populated the various variables and initialized the Terraform provider, we’re ready for the next steps. Good practice is to first run &lt;code&gt;terraform plan&lt;/code&gt;; this creates an execution plan and shows you exactly what will be created, changed, or deleted. After that, we can apply the plan with &lt;code&gt;terraform apply&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan &amp;amp;&amp;amp; terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Bringing up the various AWS resources will take a few minutes, so this is a good time to refill your coffee. Once finished, you’ll see three EC2 instances: two are example targets you can connect to, and one acts as the Border0 connector. You’ll also see an RDS MySQL server and an ECS cluster with two containers; finally, there’s an HTTP socket that makes the NGINX containers available.&lt;/p&gt;

&lt;p&gt;You’ll now be able to connect to these resources from the Border0 client portal.‍&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi31rn0am238deqhsx4g5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi31rn0am238deqhsx4g5.png" alt="Border0 client portal, showing the newly created resources"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Security as Code
&lt;/h2&gt;

&lt;p&gt;In the example repository, you’ll find a &lt;a href="https://github.com/borderzero/terraform-examples/blob/main/terraform-example/connector/policy.tf#L5C1-L29C2" rel="noopener noreferrer"&gt;file called policy.tf&lt;/a&gt;, which contains an example policy. This policy is attached to the newly created Border0 Sockets and, together with any organization-wide policies, controls who will be able to access your services.&lt;/p&gt;

&lt;p&gt;Managing access to your EC2, ECS, and RDS instances through Border0 is also a great example use case for GitOps. GitOps is a set of practices that uses Git as the single source of truth for declarative infrastructure and applications. By using Git as the cornerstone for managing your Border0 policies, you have one place to manage and track who has access to your infrastructure. Many of the good things that come for free with GitOps are also important security best practices: version control and declarative policies and infrastructure, and if you hook it up to your CI/CD, automated delivery, peer review, and audit trails.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap up
&lt;/h2&gt;

&lt;p&gt;Anyone managing infrastructure at scale will likely use some infrastructure-as-code tool. Terraform is the most popular option, so having an official Border0 Terraform provider is valuable to any security or infrastructure operator. You now have everything you need to manage your Border0 resources using Terraform and can even manage your policies as code. To learn more about the Border0 Terraform provider, check out the official Terraform Registry for Border0; it comes with plenty of documentation and examples.&lt;/p&gt;

&lt;p&gt;To try out Border0 with Terraform, &lt;a href="https://portal.border0.com/register" rel="noopener noreferrer"&gt;sign up&lt;/a&gt; for our free, fully featured community edition and run the example for yourself! Alternatively &lt;a href="https://www.border0.com/contact-us" rel="noopener noreferrer"&gt;schedule a demo&lt;/a&gt; and let us walk you through a custom demo; let’s geek out together 🤓&lt;/p&gt;

</description>
      <category>security</category>
      <category>terraform</category>
      <category>rds</category>
      <category>ec2</category>
    </item>
    <item>
      <title>Simplifying AWS Access with Border0</title>
      <dc:creator>Andree Toonk</dc:creator>
      <pubDate>Wed, 27 Sep 2023 18:16:29 +0000</pubDate>
      <link>https://dev.to/aws-builders/simplifying-aws-access-with-border0-eec</link>
      <guid>https://dev.to/aws-builders/simplifying-aws-access-with-border0-eec</guid>
      <description>&lt;p&gt;Remember the simplicity of managing your initial AWS infrastructure? A few EC2 instances and an RDS cluster were all manageable until your business and infrastructure grew. Now, you’re swamped with numerous AWS accounts, multiple VPCs, and a plethora of EC2 instances, ECS clusters, and RDS databases.&lt;/p&gt;

&lt;p&gt;With the growth of your business and infrastructure, your engineering team expanded, and the convenience of everyone having access to everything has now become a ticking time bomb and a significant liability, deviating sharply from the principle of least privilege.&lt;/p&gt;

&lt;p&gt;Sound familiar? You’re not alone! Many companies desire to reverse this trend, seeking more security, compartmentalization, control, and visibility. The ideal solution? One that integrates seamlessly with AWS, deploys in minutes, centers around Single Sign-On, and avoids complexities for engineering teams. That’s precisely what Border0 delivers!‍&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Curious to see what Border0 for AWS looks like? Check out this quick 5-minute video demo!&lt;/em&gt;&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/2SybY0rpbsw?start=1"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Border0 for AWS, a better together story
&lt;/h2&gt;

&lt;p&gt;At Border0, our mission is to simplify access management for your AWS services, empowering AWS administrators and security teams to reclaim control and visibility. So today, we’re proud to share more details about our integration with AWS, providing organizations with a streamlined and secure access management journey with Single Sign-On for everything at the center.&lt;br&gt;
&lt;br&gt;
Border0 gives you back visibility and control over your AWS environments by offering granular access control and providing comprehensive audit trails, session logs, and session recordings, allowing you to see exactly who logged in when and even replay the session. It integrates flawlessly with many AWS services, including EC2, ECS, RDS, SSM, EC2 Instance Connect, CloudWatch, and Secrets Manager, to name a few. A modern-day PAM (privileged access management) solution for the cloud! Let’s dive in and explore!&lt;/p&gt;

&lt;h2&gt;
  
  
  Seamless SSO integration: Forget about static and shared credentials
&lt;/h2&gt;

&lt;p&gt;Experience seamless Single Sign-On (SSO) integration for your AWS infrastructure and leave the complications of static and shared credentials behind. Border0 enables users to utilize their SSO credentials to access AWS EC2 instances, ECS containers, and RDS databases, eliminating the challenges associated with managing long-lived SSH keys and shared credentials.‍&lt;/p&gt;

&lt;h2&gt;
  
  
  Authorization and Fine-grained access control
&lt;/h2&gt;

&lt;p&gt;A significant part of the challenge is the sprawl of access that engineers have. With &lt;a href="https://www.border0.com/blogs/border0-policies-real-time-access-management-for-the-modern-world" rel="noopener noreferrer"&gt;Border0 policies&lt;/a&gt;, administrators can now establish dynamic access control rules to manage access to AWS resources based on specific SSO identities, conditions, and contexts, such as time of day, date, country, IP addresses, and even Pagerduty on-call status. For those seeking more customization, &lt;a href="https://www.border0.com/blogs/the-most-flexible-policy-engine-in-the-world" rel="noopener noreferrer"&gt;integration with existing policy systems&lt;/a&gt; or custom data sources is available, allowing the creation of even more tailored access control rules. This provides a centralized location to manage and enforce all access efficiently!‍&lt;/p&gt;

&lt;h2&gt;
  
  
  Consolidated visibility and Session recording
&lt;/h2&gt;

&lt;p&gt;Collect all access events across your entire infrastructure in one central place, enabling real-time analysis and session replays. See who accessed what AWS resources, when, and from where. Using the session recording capability, you can replay all sessions, letting you see exactly which database queries were executed by whom, or watch back a video recording of an SSH session! Use one of our integrations to notify your team of new sessions in real time by email or Slack, or export it all in real time to AWS CloudWatch for further analysis.&lt;/p&gt;

&lt;h2&gt;
  
  
  Zero Trust access for your infrastructure
&lt;/h2&gt;

&lt;p&gt;By moving to Border0 for access control, you also immediately move to a least privilege access model. You’re no longer providing users access to a network, like with a VPN, but only to the specific services you defined by policy. Moving away from a network-based perimeter security model limits attackers from pivoting and moving around laterally. Congratulations, you’re well on your way to implementing Zero Trust access for your infrastructure, even for resources in a private subnet!‍&lt;/p&gt;

&lt;h2&gt;
  
  
  Your engineers will love it!
&lt;/h2&gt;

&lt;p&gt;Border0 not only gives you back control and visibility over who’s accessing your AWS services, but your engineers will love it too!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3clyu86qisuz2e8dm47f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3clyu86qisuz2e8dm47f.png" alt="Your launchpad: Border0 client portal"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By using Border0, engineers can easily discover all the AWS resources they have access to. They can reach them using their preferred tools (it turns out folks are pretty picky about which SSH or database clients they use) or through our beautiful, easy-to-use web client, which lets users access EC2 instances, ECS containers, and even RDS databases using just their browser, any time, anywhere!&lt;/p&gt;

&lt;p&gt;Finally, engineers no longer have to worry about jumping on and off various VPNs. And because we’ve eliminated shared secrets for the users, all they need is their SSO account.‍&lt;/p&gt;

&lt;h2&gt;
  
  
  Easy to install and get started
&lt;/h2&gt;

&lt;p&gt;By now, you may be wondering how to get started. Good news: we’ve worked hard to make adding Border0 to your AWS infrastructure easy. To get started, you’ll need to install the Border0 connector into your existing AWS VPC(s). To help with this, we’ve made a CloudFormation template available that can be launched using a web-based wizard or the following CLI command.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;border0 connector install --aws&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will spin up an EC2 instance in the AWS VPC and subnet of your choice, make sure it has the correct IAM credentials, and about three minutes later, you’re ready to go! The Border0 connector registers itself, after which it will appear as online in the Border0 portal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1d878f5702jtcqo2sjz.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1d878f5702jtcqo2sjz.gif" alt="Install Border0 into your AWS environment with a single command"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Close integration with AWS
&lt;/h2&gt;

&lt;p&gt;Border0’s close integration with AWS services and protocols ensures that turning AWS resources into Border0 Services is a low-effort task. Using the AWS discovery plugins, resources like EC2 instances, ECS clusters, and RDS databases will show up as discovered resources within seconds. You can then add them to Border0 with a single click.‍&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmd9h3jq7n93omwbtyrai.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmd9h3jq7n93omwbtyrai.png" alt="AWS Service Discovery."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Border0 connector supports various upstream authentication methods, ensuring the right strategy is available for your use case. For example, in addition to static credentials such as usernames and passwords, SSH keys, and certificates, we also support AWS-specific methods such as EC2 Instance Connect and AWS Systems Manager (SSM), and for databases, IAM-based authentication.&lt;/p&gt;

&lt;p&gt;If you’re all in with AWS, make sure to also enable the AWS CloudWatch integration and send Border0 session logs and audit events to CloudWatch. Additionally, you can use &lt;a href="https://docs.border0.com/docs/using-secret-managers-to-store-credentials" rel="noopener noreferrer"&gt;external secret vaults&lt;/a&gt; for upstream credentials, including AWS Secrets Manager and the AWS SSM Parameter Store.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Transformation: Before and After Border0
&lt;/h2&gt;

&lt;p&gt;Before Border0, organizations struggled with high operational overhead, security challenges due to a lack of consolidated privilege management, over-provisioned access, shared secrets, and poor visibility. After implementing Border0, organizations can define granular access control rules that just make sense: they’re intuitive, build on your SSO system, and take real-time context into account. The additional visibility and control are a significant upgrade, and thanks to the close integration with AWS, deploying Border0 into existing environments takes less than five minutes!&lt;/p&gt;

&lt;p&gt;Best of all, your engineers will love it. With a single SSO login command, engineers can discover the AWS resources that are relevant to them, and log into EC2 instances, containers, databases, and HTTP services using just their SSO credentials.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap up
&lt;/h2&gt;

&lt;p&gt;Border0 provides a modern-day access management solution for AWS, built by and for security-conscious cloud-native organizations. It offers a blend of security, control, visibility, and simplicity, addressing the challenges of growing infrastructures and providing a seamless, secure, and efficient environment for organizations to thrive in the cloud-native era.&lt;/p&gt;

&lt;p&gt;But don’t just take my word for it; give it a try today. &lt;a href="https://portal.border0.com/register" rel="noopener noreferrer"&gt;Sign up&lt;/a&gt; for our fully featured free community edition or &lt;a href="https://www.border0.com/contact-us" rel="noopener noreferrer"&gt;schedule a custom demo&lt;/a&gt; and elevate your organization’s AWS access management with Border0.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>zerotrust</category>
      <category>devops</category>
      <category>sso</category>
    </item>
    <item>
      <title>AWS IPv4 Estate Now Worth $4.5 Billion</title>
      <dc:creator>Andree Toonk</dc:creator>
      <pubDate>Sun, 17 Sep 2023 22:14:39 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-ipv4-estate-now-worth-45-billion-4d05</link>
      <guid>https://dev.to/aws-builders/aws-ipv4-estate-now-worth-45-billion-4d05</guid>
      <description>&lt;p&gt;Three years ago, I wrote a blog titled “&lt;a href="https://toonk.io/aws-and-their-billions-in-ipv4-addresses/index.html"&gt;AWS and their Billions in IPv4 addresses&lt;/a&gt;“, in which I estimated AWS owned about $2.5 billion worth of IPv4 addresses. AWS has continued to grow incredibly fast, and so has its IPv4 usage. In fact, it’s grown so much that it will soon start to charge customers for IPv4 addresses! Enough reason to check in again, three years later, to see what AWS’ IPv4 estate looks like today.&lt;/p&gt;

&lt;h2&gt;
  
  
  A quick 2020 recap
&lt;/h2&gt;

&lt;p&gt;Let’s first quickly summarize what we learned when looking at AWS’s IPv4 usage in 2020. First, we observed that the total number of IPv4 addresses we could attribute to AWS was just over 100 million (100,750,168). That’s the equivalent of just over six /8 blocks.&lt;/p&gt;

&lt;p&gt;Second, just for fun, we tried to put a number on it; back then, I used $25 per IP, bringing the estimated value of their IPv4 estate to just over $2.5 billion.&lt;/p&gt;

&lt;p&gt;Third, AWS publishes their actively used IPv4 addresses in a JSON file. The JSON file contained references to roughly 53 million IPv4 addresses. That meant they still had ~47 million IPv4 addresses, or 47%, available for future allocations. That’s pretty healthy!&lt;/p&gt;

&lt;h2&gt;
  
  
  The 2023 numbers
&lt;/h2&gt;

&lt;p&gt;Okay, let’s look at the current data. Now, three years later, what does the IPv4 estate for AWS look like? I used the same scripts and methods as three years ago and found the following.&lt;/p&gt;

&lt;p&gt;First, we observe that AWS currently owns 127,972,688 IPv4 addresses, i.e. almost 128 million. That’s an increase of 27 million IPv4 addresses. In other words, AWS added the equivalent of 1.6 /8s, or 415 /16s, in three years!&lt;/p&gt;
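
&lt;p&gt;For readers who want to check the growth math, here’s a small Go sketch; the 2020 and 2023 totals are the counts quoted in this and the previous post:&lt;/p&gt;

```go
package main

import "fmt"

func main() {
	owned2020 := 100_750_168 // IPv4 addresses attributed to AWS in 2020
	owned2023 := 127_972_688 // IPv4 addresses attributed to AWS in 2023
	growth := owned2023 - owned2020

	// A /8 holds 2^24 addresses; a /16 holds 2^16.
	fmt.Printf("growth: %d addresses\n", growth)
	fmt.Printf("equivalent /8s:  %.1f\n", float64(growth)/(1<<24))
	fmt.Printf("equivalent /16s: %.0f\n", float64(growth)/(1<<16))
}
```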

&lt;p&gt;Second, what’s it worth? This is always tricky and just for fun. Let’s first assume the same $25 per IPv4 address we used in 2020.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;127,972,688 ipv4 addresses x $25 per IP = $3,199,317,200.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So, with the increase in IPv4 addresses, the value went up to ~$3.2 billion. That’s a $700 million increase since 2020.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;However, if we consider the increase in IPv4 prices over the last few years, this number will be higher. Below is the total value of 127M IPv4 addresses at different market prices.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Total number of IPv4 addresses: 127,972,688
value at $20 per IP: $2,559,453,760
value at $25 per IP: $3,199,317,200
value at $30 per IP: $3,839,180,640
value at $35 per IP: $4,479,044,080
value at $40 per IP: $5,118,907,520
value at $50 per IP: $6,398,634,400
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
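
&lt;p&gt;The table above is plain multiplication of the address count by a per-address market price; a few lines of Go reproduce the same numbers (printed here without thousands separators):&lt;/p&gt;

```go
package main

import "fmt"

func main() {
	const totalIPs = 127_972_688 // AWS-owned IPv4 addresses found in this analysis

	// Estate value at a range of per-address market prices.
	for _, price := range []int{20, 25, 30, 35, 40, 50} {
		fmt.Printf("value at $%d per IP: $%d\n", price, totalIPs*price)
		// e.g. value at $35 per IP: $4479044080
	}
}
```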



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Jetedoq8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bnrxnmx9ip0gjsydznsc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Jetedoq8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bnrxnmx9ip0gjsydznsc.png" alt="IPv4 prices over time. Source: ipv4.global" width="800" height="436"&gt;&lt;/a&gt;&lt;br&gt;
Based on this data from ipv4.global, the average price for an IPv4 address is currently about $35. With that estimate, we can put the value of AWS’s IPv4 estate today at roughly $4.5 billion, an increase of $2 billion compared to three years ago!&lt;/p&gt;

&lt;p&gt;Third, let’s compare the IPv4 data we found with what’s published in the JSON file AWS makes available. In the JSON today, we count about 73 million IPv4 addresses (72,817,397); three years ago, that was 53 million. So, an increase of 20 million IPv4 addresses allocated to AWS services.&lt;/p&gt;

&lt;p&gt;Finally, when we compare the ratio between what Amazon owns and what is allocated to AWS according to the JSON data, we observe that about 57% (72,817,397 / 127,972,688) of the IPv4 addresses have been (publicly) allocated to AWS services. That leaves about 43% available for future use, almost the same as three years ago, when it was 47%.&lt;br&gt;
(Note: this is an outsider’s perspective; we should likely assume not everything is used for AWS.)&lt;/p&gt;

&lt;h2&gt;
  
  
  Where did the growth come from
&lt;/h2&gt;

&lt;p&gt;A quick comparison between the results from three years ago and now shows the following significant new additions to AWS’ IPv4 estate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Two new /11 allocations: 13.32.0.0/11 and 13.192.0.0/11. This whole 13/8 block was formerly owned by Xerox.
(Note: it appears AWS owned 13.32.0.0/12 already in 2020).&lt;/li&gt;
&lt;li&gt;A new /12 allocation: 13.224.0.0/12 (see above as well). It appears they continued purchasing from that 13/8 block.&lt;/li&gt;
&lt;li&gt;I’m also seeing more consolidation in the 16.0.0.0/8 block. AWS used to have quite a few /16 allocations from that block, which are now consolidated into three /12 allocations: 16.16.0.0/12, 16.48.0.0/12, and 16.112.0.0/12.&lt;/li&gt;
&lt;li&gt;Finally, the 63.176.0.0/12 allocation is new.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  AWS is starting to charge for IPv4 addresses
&lt;/h2&gt;

&lt;p&gt;In August of this year, AWS announced they will start charging their customers for IPv4 addresses as of 2024.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Effective February 1, 2024 there will be a charge of $0.005 per IP per hour for all public IPv4 addresses, whether attached to a service or not (there is already a charge for public IPv4 addresses you allocate in your account but don’t attach to an EC2 instance).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That’s a total of $43.80 per year per IPv4 address; that’s a pretty hefty number! The reason for this is outlined in the same AWS blog:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;As you may know, IPv4 addresses are an increasingly scarce resource and the cost to acquire a single public IPv4 address has risen more than 300% over the past 5 years. This change reflects our own costs and is also intended to encourage you to be a bit more frugal with your use of public IPv4 addresses&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The 300% cost increase to acquire an IPv4 address is interesting and is somewhat reflected in our valuation calculation above (though we used a more conservative number).&lt;/p&gt;

&lt;p&gt;So, how much money will AWS make from this new IPv4 charge? The significant variable here is how many IP addresses are used at any given time by AWS customers. Let’s explore a few scenarios, starting with a very conservative estimate, say 10% of what is published in their IPv4 JSON is in use for a year. That’s 7.3 million IPv4 addresses x $43.80, almost $320 million a year. At 25% usage, that’s nearly $800 million a year. And at 31% usage, that’s a billion dollars!&lt;/p&gt;

&lt;p&gt;Notice that I’m using fairly conservative numbers here, so it’s not unlikely for &lt;strong&gt;AWS to make between $500 million and a billion dollars a year with this new charge!&lt;/strong&gt;&lt;/p&gt;
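
&lt;p&gt;The back-of-the-envelope revenue math above can be sketched in a few lines of Go; the usage percentages are my illustrative scenarios, not AWS figures:&lt;/p&gt;

```go
package main

import "fmt"

func main() {
	const hourlyRate = 0.005                // USD per public IPv4 address per hour (from the AWS announcement)
	const perIPYear = hourlyRate * 24 * 365 // $43.80 per address per year
	const jsonIPs = 72_817_397              // IPv4 addresses published in the AWS JSON

	fmt.Printf("per-IP yearly charge: $%.2f\n", perIPYear) // $43.80

	// Illustrative usage scenarios (author's guesses, not AWS figures).
	for _, usage := range []float64{0.10, 0.25, 0.31} {
		revenue := float64(jsonIPs) * usage * perIPYear
		fmt.Printf("revenue at %.0f%% usage: $%.0f million\n", usage*100, revenue/1e6)
	}
}
```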

&lt;h2&gt;
  
  
  The data
&lt;/h2&gt;

&lt;p&gt;You can find the data I used for this analysis at the link below. There, you’ll also find all the IPv4 prefixes and a brief summary. &lt;a href="https://gist.github.com/atoonk/d8bded9d1137b26b3c615ab614222afd"&gt;https://gist.github.com/atoonk/d8bded9d1137b26b3c615ab614222afd&lt;/a&gt;&lt;br&gt;
Similar data from 2020 can be &lt;a href="https://gist.github.com/atoonk/b749305012ae5b86bacba9b01160df9f"&gt;found here&lt;/a&gt;.&lt;br&gt;
PS: If anyone knows of AWS resources registered with LACNIC or AFRINIC, let me know, as those are not included in this data set.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap up
&lt;/h2&gt;

&lt;p&gt;In this article, we saw how, over the last three years, AWS grew its IPv4 estate by an additional 27 million IP addresses, to now owning 128 million IPv4 addresses. &lt;strong&gt;At a value of $35 per IPv4 address, the total value of AWS’ IPv4 estate is ~$4.5 billion, an increase of $2 billion&lt;/strong&gt; compared to what we looked at three years ago!&lt;/p&gt;

&lt;p&gt;Regarding IPv4 capacity planning, the unallocated IPv4 address pool (defined as addresses not in the AWS JSON) has remained stable, and quite a few IPv4 addresses are still available for future use.&lt;/p&gt;

&lt;p&gt;All this buying of IPv4 addresses is expensive, and in response to the increase in IPv4 prices, AWS will soon start charging its customers for IPv4 usage. &lt;strong&gt;Based on my estimates, it’s not unlikely that AWS will generate between $500 million and $1 billion in additional revenue with this new charge. Long live IPv4!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cheers&lt;br&gt;
Andree&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ipv4</category>
      <category>ipv6</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>A Go net.Listen() function that includes SSO, AuthZ, sessions Management and Anycast</title>
      <dc:creator>Andree Toonk</dc:creator>
      <pubDate>Mon, 21 Aug 2023 23:01:52 +0000</pubDate>
      <link>https://dev.to/aws-builders/a-go-netlisten-function-that-includes-sso-authz-sessions-management-and-anycast-4id4</link>
      <guid>https://dev.to/aws-builders/a-go-netlisten-function-that-includes-sso-authz-sessions-management-and-anycast-4id4</guid>
      <description>&lt;p&gt;At &lt;a href="https://www.border0.com"&gt;Border0&lt;/a&gt;, we’re big users and fans of the Go programming language; almost all of our code is written in Go. So it only made sense to open source our Go SDK for Border0. This SDK will make it easier for Go enthusiasts, novice or expert, to manage their Border0 resources or, even better, put SSO authentication in front of any net.Listener! This allows you to embed Border0 directly into your applications; let’s dive in!&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/y55nzjzeN10"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Building TCP services using the Border0 net.Listener type
&lt;/h2&gt;

&lt;p&gt;The Border0 SDK is a powerful tool for Go developers, designed to seamlessly integrate robust authentication and granular access control into your applications. In addition to managing Border0 resources, the SDK provides support for the &lt;a href="https://pkg.go.dev/net"&gt;net.Listener&lt;/a&gt; interface.&lt;/p&gt;

&lt;p&gt;Most Go developers are familiar with the net.Listener interface. Border0’s implementation takes that familiar interface and supercharges it with Border0 authentication, authorization capabilities, and a global anycast network.&lt;/p&gt;

&lt;p&gt;So while it retains the simplicity of Go’s standard net.Listener, it comes with Border0’s advanced features built in.&lt;/p&gt;

&lt;p&gt;When you use Border0’s net.Listener implementation, you’re not just opening a port for communication; you’re also ensuring that every request coming through is authenticated and continuously authorized. This means that your application is shielded from unauthorized access right from the entry point. The listener leverages Border0’s policies, allowing developers to specify precisely which Single Sign-On (SSO) identities can access the service and under what conditions. This granular control ensures that your services are both secure and compliant. Additionally, you get audit and session log capabilities, providing you with insights into who connected to the listener, thus enhancing your auditing capabilities. Furthermore, the listener is integrated into the Border0 anycast platform, ensuring low latency and a seamless user experience.‍&lt;/p&gt;

&lt;h2&gt;
  
  
  A simple HTTP server example with Border0
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VPzZtWZx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vje60xaynh5p43jygvda.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VPzZtWZx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vje60xaynh5p43jygvda.png" alt="Learn how to build web applications with Go and Border0. Also see example code on Github" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Learn how to build web applications with Go and Border0. Also see example code on Github&lt;br&gt;
&lt;a href="https://github.com/borderzero/border0-go/blob/main/examples/06-http-listener/main.go"&gt;‍The example below&lt;/a&gt; demonstrates that with just a few lines of code, developers can harness the power of Border0, combining the simplicity of Go’s standard library with enterprise-grade security and scalability.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"fmt"&lt;/span&gt;
    &lt;span class="s"&gt;"log"&lt;/span&gt;
    &lt;span class="s"&gt;"net/http"&lt;/span&gt;
    &lt;span class="s"&gt;"os"&lt;/span&gt;
    &lt;span class="s"&gt;"github.com/borderzero/border0-go"&lt;/span&gt;
    &lt;span class="s"&gt;"github.com/borderzero/border0-go/listen"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;listener&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;border0&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
   &lt;span class="c"&gt;// use the Border0 socket name defined here &lt;/span&gt;
   &lt;span class="c"&gt;// socket will be created if not exists&lt;/span&gt;
   &lt;span class="n"&gt;listen&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithSocketName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"sdk-socket-http"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; 
   &lt;span class="c"&gt;// Let's attach a policy; make sure this policy exist&lt;/span&gt;
   &lt;span class="n"&gt;listen&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithPolicies&lt;/span&gt;&lt;span class="p"&gt;([]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;"production-engineers-only"&lt;/span&gt;&lt;span class="p"&gt;}),&lt;/span&gt;
   &lt;span class="c"&gt;// if not provided, Border0 SDK will use BORDER0_AUTH_TOKEN env var&lt;/span&gt;
   &lt;span class="n"&gt;listen&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithAuthToken&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"BORDER0_AUTH_TOKEN"&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt; 
  &lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatalln&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"failed to start listener:"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="n"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HandlerFunc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ResponseWriter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c"&gt;// Border0 will set various HTTP headers related to the users' identity.&lt;/span&gt;
  &lt;span class="c"&gt;// We can use this to build identity aware applications&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Header&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"X-Auth-Name"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 
    &lt;span class="n"&gt;email&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Header&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"X-Auth-Email"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fprintf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;w&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"Hello, %s %s! This is Border0-go + standard library http."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;

  &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatalln&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Serve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;listener&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it: with just a few lines of Go code, you’ve implemented an HTTP server that listens for requests on the Border0 listener. Note that this listener does not listen on a local port; it’s only reachable through Border0, so there’s no secret bypass!&lt;/p&gt;

&lt;p&gt;To run the server, simply execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go run main.go
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The program automatically creates a Border0 socket for your Go web server, making it globally accessible via our anycast infrastructure. We handle your SSL certificates and DNS, and enforce built-in SSO (Single Sign-On) authentication. Additionally, session logs give you insight into who accesses your service. Plus, with the Border0 listener, you can operate from behind NAT without any open inbound TCP ports.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/EYad-f5o4_k"&gt;
&lt;/iframe&gt;
&lt;br&gt;
&lt;em&gt;Check out this video, in which we build the app above, using the Border0 Go SDK.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Managing Border0 Sockets and Policies in Go
&lt;/h2&gt;

&lt;p&gt;Using the SDK, you can also manage the main components you work with in Border0: Sockets and Policies. You can think of Sockets as virtual hosts or proxy servers, just for you, behind SSO. These come in various flavors, for example HTTP(S), SSH, Database, and TCP sockets. Each can be configured to your unique requirements, and most importantly, each has a set of Border0 policies. These &lt;a href="https://www.border0.com/blogs/border0-policies-real-time-access-management-for-the-modern-world"&gt;policies allow you to configure who (by SSO identity) should have access&lt;/a&gt; to what resources and under what conditions.&lt;/p&gt;

&lt;p&gt;All of this can be configured using our admin portal or the Border0 CLI. Both use the public REST API, available at &lt;a href="https://api.border0.com/"&gt;api.border0.com&lt;/a&gt;. Anyone with a Border0 account can use this API to automate their unique requirements. If your favorite language is Go, the easiest way is to use our SDK; it abstracts some of the lower-level API handling, making it a pleasure to work with the Border0 API.&lt;/p&gt;

&lt;p&gt;Using the Border0 Go SDK, you can quickly list all your sockets and policies, create new ones, or manage and delete existing ones. All you need is an API token, and you’re off to the races! To make it easy to get started, we put together a bunch of common examples; check out &lt;a href="https://github.com/borderzero/border0-go/tree/main/examples"&gt;all the examples here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;Network socket programming in Go is fun, and with the Border0 Go SDK, you get a lot of extra features for free, making it even more enjoyable! Now you don’t have to worry about SSL certificates, DNS names, ports, firewalls, or load balancers; pretty magical, right?‍&lt;/p&gt;

&lt;p&gt;Getting started is easiest with some examples, so make sure to check out the &lt;a href="https://github.com/borderzero/border0-go/tree/main/examples"&gt;examples folder here&lt;/a&gt;. The first few examples show you how to manage Border0 Sockets and Policies using the SDK, followed by various net.Listener examples that show you how to build a simple Go web application with built-in support for Border0, or even an authenticated reverse proxy that performs content rewriting!&lt;/p&gt;

&lt;p&gt;Excited to try this? Check out our &lt;a href="https://portal.border0.com/register"&gt;fully featured free community edition&lt;/a&gt;, or &lt;a href="https://www.border0.com/contact-us"&gt;schedule a demo&lt;/a&gt; and let us walk you through a custom demo; let’s geek out together 🤓&lt;br&gt;
Happy hacking!&lt;/p&gt;

</description>
      <category>go</category>
      <category>sso</category>
      <category>aws</category>
      <category>security</category>
    </item>
  </channel>
</rss>
