In an era where cloud and on-premises environments increasingly converge, the ability to integrate these ecosystems seamlessly has never been more critical. This story explores what I like to call "AWS Anywhere" - an overarching concept for the suite of AWS capabilities that enable hybrid operations. From establishing a Site-to-Site VPN to bridge your on-premises network with AWS, through configuring Route 53 Inbound Resolvers for private access to VPC Interface Endpoints, leveraging IAM Roles Anywhere for secure identity management, and setting up the SSM Agent for streamlined operations, this journey culminates in deploying EKS Hybrid Nodes.
This builds on my earlier story, AWS Landing Zone 3: Hybrid Networking, where I explored hybrid networking fundamentals. This time, I'm going a step further by providing source code and guides, making it easier for anyone to replicate the setup and put these concepts into action.
This story takes a practical approach, using a Raspberry Pi as the on-premises node to simulate real-world scenarios at home. By the end, you'll understand how these AWS services combine to create a unified hybrid infrastructure.
EKS Hybrid Nodes TL;DR
Amazon EKS Hybrid Nodes, introduced at AWS re:Invent 2024, allow businesses to run Kubernetes workloads seamlessly across on-premises, edge, and cloud environments. This solution simplifies Kubernetes management by offloading control plane availability and scalability to AWS while integrating with services like centralized logging and monitoring. It enables organizations to maximize existing infrastructure while modernizing deployments with AWS cloud capabilities. This unified approach reduces operational complexity and accelerates application modernization.
Before we get there, let me take you through the foundational steps that must be set up first.
Source code & guides
Terraform
The Terraform configuration files are available via the link below. These files include switches - disabled by default - represented by boolean variables that enable specific functionalities, all of which are detailed in the sections below.
Guides.md
I've also utilized a combination of templatefile() and local_file resources to generate Markdown guides. These guides provide step-by-step instructions and commands to configure the on-premises side of the setup, whether you're using a Raspberry Pi or another machine. Once a functionality is enabled and applied, a tailored guide file is generated, complete with references and values specific to your environment.
AWS Anywhere
Raspberry Pi
For my setup, I used my Raspberry Pi - the same one I used a few years ago to simulate hybrid networking, as mentioned in the introduction. It runs Ubuntu 24.04, which is compatible with all the installations needed to make this integration work seamlessly.
$ ssh pi
Welcome to Ubuntu 24.04.1 LTS (GNU/Linux 6.8.0-1017-raspi aarch64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/pro
* Strictly confined Kubernetes makes edge and IoT secure. Learn how MicroK8s
just raised the bar for easy, resilient and secure K8s cluster deployment.
https://ubuntu.com/engage/secure-kubernetes-at-the-edge
Last login: Sat Dec 28 14:37:17 2024 from 192.168.101.10
seb@pi:~$ neofetch
.-/+oossssoo+/-. seb@pi
`:+ssssssssssssssssss+:` ------
-+ssssssssssssssssssyyssss+- OS: Ubuntu 24.04.1 LTS aarch64
.ossssssssssssssssssdMMMNysssso. Host: Raspberry Pi 4 Model B Rev 1.4
/ssssssssssshdmmNNmmyNMMMMhssssss/ Kernel: 6.8.0-1017-raspi
+ssssssssshmydMMMMMMMNddddyssssssss+ Uptime: 3 days, 5 hours, 24 mins
/sssssssshNMMMyhhyyyyhmNMMMNhssssssss/ Packages: 810 (dpkg)
.ssssssssdMMMNhsssssssssshNMMMdssssssss. Shell: bash 5.2.21
+sssshhhyNMMNyssssssssssssyNMMMysssssss+ Terminal: /dev/pts/0
ossyNMMMNyMMhsssssssssssssshmmmhssssssso CPU: (4) @ 1.800GHz
ossyNMMMNyMMhsssssssssssssshmmmhssssssso Memory: 399MiB / 7802MiB
+sssshhhyNMMNyssssssssssssyNMMMysssssss+
.ssssssssdMMMNhsssssssssshNMMMdssssssss.
/sssssssshNMMMyhhyyyyhdNMMMNhssssssss/
+sssssssssdmydMMMMMMMMddddyssssssss+
/ssssssssssshdmNNNNmyNMMMMhssssss/
.ossssssssssssssssssdMMMNysssso.
-+sssssssssssssssssyyyssss+-
`:+ssssssssssssssssss+:`
.-/+oossssoo+/-.
Once it's up and running, we can move on to the next step.
AWS VPC defaults
By default, apart from the Transit Gateway responsible for integrating different networks, the module configures a Transit VPC. This VPC hosts VPC Interface Endpoints and can also act as a centralized egress point in your landing zone setup. Here's what you get:
AWS Site-to-Site VPN
AWS Site-to-Site VPN is a flexible and cost-effective solution for securely connecting on-premises networks to AWS. While AWS Direct Connect offers a more reliable, dedicated connection with lower latency, Site-to-Site VPN is often chosen for its quicker, simpler setup or when Direct Connect isn't available. The VPN connection uses IPsec tunnels over the Internet, ensuring secure communication between local networks and AWS resources.
On the AWS side, this is a managed service. On my Pi, I configured the VPN connection using StrongSwan, an open-source IPsec-based VPN solution. StrongSwan provides flexible configuration and integrates seamlessly with diverse network setups. By pairing StrongSwan with AWS's managed service, I maintain granular control over the VPN configuration while benefiting from AWS's operational simplicity.
To get it configured, start with:
on_prem_s2s_vpn_enabled = true
on_prem_props = { ... }
and upon a successful Terraform apply you'll get a guide on how to configure StrongSwan. For testing purposes, feel free to limit it to a single VPN tunnel. When you're done, here's what you get:
The private subnet is included in case you wish to spin up an EC2 instance for testing purposes, such as confirming successful connectivity to and from a real server running in the VPC.
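To give a sense of what the generated StrongSwan guide walks through, here is a minimal, illustrative sketch for a single tunnel. All IPs, CIDRs, and the pre-shared key below are placeholders; the real values come from your VPN connection's tunnel configuration and the generated guide, which may differ in detail from this sketch.

```bash
# Illustrative only - all addresses and the PSK are placeholders.
sudo apt-get install -y strongswan

# Single-tunnel connection definition towards AWS (Tunnel1).
sudo tee /etc/ipsec.conf > /dev/null <<'EOF'
conn Tunnel1
    auto=start
    type=tunnel
    authby=secret
    keyexchange=ikev2
    left=%defaultroute
    leftid=203.0.113.10           # your public IP (customer gateway)
    leftsubnet=192.168.101.0/24   # on-premises CIDR
    right=35.0.0.1                # AWS tunnel outside IP (placeholder)
    rightsubnet=10.0.0.0/16       # VPC / Transit Gateway CIDR (placeholder)
EOF

# Pre-shared key for the tunnel (placeholder).
sudo tee /etc/ipsec.secrets > /dev/null <<'EOF'
203.0.113.10 35.0.0.1 : PSK "replace-with-tunnel1-psk"
EOF

sudo ipsec restart
sudo ipsec status Tunnel1   # should report ESTABLISHED once the tunnel is up
```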
Hybrid DNS and private access to VPC Interface Endpoints
Hybrid DNS and private access to VPC Interface Endpoints, alongside AWS PrivateLink, enable secure, private connectivity to AWS services. By integrating a local Bind DNS server with Route 53 Inbound Resolver over the configured Site-to-Site VPN, DNS queries for AWS services are routed privately within AWS. This setup allows the use of VPC Interface Endpoints to connect to AWS APIs. PrivateLink ensures that traffic to services such as S3, Systems Manager, or EKS remains within the AWS network, avoiding exposure to the public Internet and enhancing both security and performance.
To get it configured, start with:
r53_inbound_resolver_enabled = true
and upon a successful Terraform apply you'll get a guide on how to configure Bind. When you're done, here's what you get:
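For a rough idea of the Bind side, a forward zone that sends the region's AWS API domains to the Route 53 Inbound Resolver endpoints over the VPN could look like this. The resolver IPs and region are placeholders; the generated guide contains the real ones.

```bash
# Illustrative only - the inbound resolver endpoint IPs below are placeholders.
sudo apt-get install -y bind9

# Forward queries for the region's AWS API domain to the Route 53 Inbound
# Resolver, so interface endpoints resolve to private VPC addresses.
sudo tee -a /etc/bind/named.conf.local > /dev/null <<'EOF'
zone "eu-west-1.amazonaws.com" {
    type forward;
    forward only;
    forwarders { 10.0.10.11; 10.0.11.11; };
};
EOF

sudo named-checkconf && sudo systemctl restart bind9

# Quick test: the SSM endpoint should now resolve to a private IP.
dig +short ssm.eu-west-1.amazonaws.com @127.0.0.1
```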
IAM Roles Anywhere
With the Site-to-Site VPN in place, I can securely extend my on-premises network to AWS, providing seamless communication between local infrastructure and AWS resources. Integrating IAM Roles Anywhere enables secure, temporary access to AWS services from on-prem systems: it exchanges X.509 certificates issued by a trusted certificate authority for temporary credentials tied to IAM roles, so no long-lived access keys need to live on the Pi. Additionally, with private access to VPC Interface Endpoints, on-prem systems resolve AWS service API addresses to private IPs, keeping traffic within the AWS network and avoiding public Internet exposure.
To get it configured, start with:
iam_roles_anywhere_enabled = true
# NOTE: Terraform must be re-run once the CA cert is uploaded to SSM PS
and upon a successful Terraform apply you'll get a guide on how to configure a local CA, generate certificates, and leverage IAM Roles Anywhere to access AWS services. When you're done, here's what you get:
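As an illustration of those steps, the sketch below creates a local CA and wires the AWS credential helper into an AWS CLI profile. The ARNs, file paths, and profile name are placeholders (the generated guide provides the real values, including the client certificate signing steps omitted here).

```bash
# Illustrative only - ARNs, paths, and the profile name are placeholders.

# 1. Create a local CA; its certificate is what gets uploaded to SSM Parameter
#    Store before re-running Terraform (see the NOTE above). Client certs are
#    then issued from this CA - omitted here for brevity.
openssl req -x509 -newkey rsa:4096 -days 365 -nodes \
  -keyout ca.key -out ca.crt -subj "/CN=home-lab-ca"

# 2. Use the IAM Roles Anywhere credential helper to exchange the client
#    certificate for temporary credentials (credential_process must be one line).
cat >> ~/.aws/config <<'EOF'
[profile pi-anywhere]
credential_process = /usr/local/bin/aws_signing_helper credential-process --certificate /etc/pki/pi.crt --private-key /etc/pki/pi.key --trust-anchor-arn arn:aws:rolesanywhere:eu-west-1:111111111111:trust-anchor/EXAMPLE --profile-arn arn:aws:rolesanywhere:eu-west-1:111111111111:profile/EXAMPLE --role-arn arn:aws:iam::111111111111:role/on-prem-role
EOF

# 3. Verify - the call should travel over the VPN to the private endpoints.
aws sts get-caller-identity --profile pi-anywhere
```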
SSM Agent
With the Site-to-Site VPN in place, private access to AWS services via VPC Interface Endpoints configured, and IAM Roles Anywhere enabling secure role-based access, the next step is integrating the SSM Agent. The SSM Agent facilitates secure, managed access to machines via AWS Systems Manager - not only EC2 instances, but also on-premises servers registered through a hybrid activation. By leveraging IAM roles, the SSM Agent ensures that commands and configurations are executed securely, enabling full management of infrastructure across both on-premises and AWS environments. Additionally, communication between the agent and the service remains private, as all traffic flows through the established private connections to AWS services, avoiding the public Internet.
To get it configured, start with:
ssm_hybrid_activation_registred = true
ssm_advanced_instances_tier_enabled = true
and upon a successful Terraform apply you'll get a guide on how to configure the SSM Agent. When you're done, here's what you get:
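In essence, the on-prem side boils down to installing the agent and registering it against the hybrid activation. The sketch below assumes a deb-based install (so the service is amazon-ssm-agent) and uses placeholder activation values and region; your generated guide holds the real ones.

```bash
# Illustrative only - activation code/ID and region are placeholders; install
# the arm64 amazon-ssm-agent package per the generated guide / AWS docs first.
sudo systemctl stop amazon-ssm-agent
sudo amazon-ssm-agent -register \
  -code "hybrid-activation-code" \
  -id "hybrid-activation-id" \
  -region "eu-west-1"
sudo systemctl start amazon-ssm-agent

# The Pi should now show up in Systems Manager as an "mi-" managed instance.
aws ssm describe-instance-information --profile pi-anywhere \
  --query 'InstanceInformationList[].{Id:InstanceId,Ping:PingStatus}'
```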
EKS Hybrid Nodes continued
The functionalities described and configured above provide a robust foundation for integrating on-premises systems with AWS. They also cover the key prerequisites for connecting an on-premises Kubernetes node to an EKS cluster in AWS.
The source code does not include specific EKS cluster configuration, as such setups vary by use case. Instead, it assumes an EKS cluster with hybrid node support enabled is already running in a separate VPC. The provided configuration focuses on integrating the cluster with the Transit Gateway and the necessary IAM-related resources.
AWS provides comprehensive guidance for configuring everything from scratch, especially the CNI-related aspects, which can be found here, while most of the non-EKS-specific prerequisites have already been covered in this post.
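For reference, the CNI step usually amounts to installing Cilium (or Calico) on the cluster with a pod CIDR that is routable from the VPC. A minimal, hypothetical Helm invocation could look like the one below; the chart version defaults and the pod CIDR are placeholders and must match the remote pod network configured on your EKS cluster.

```bash
# Illustrative only - the pod CIDR is a placeholder and must be routable
# from the VPC / Transit Gateway towards the hybrid node.
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium \
  --namespace kube-system \
  --set ipam.mode=cluster-pool \
  --set ipam.operator.clusterPoolIPv4PodCIDRList='{10.200.0.0/16}' \
  --set operator.replicas=1
```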
And don't forget, we're only simulating this at home!
To get it configured, start with:
eks_hybrid_nodes_enabled = true
eks_props = { ... }
and upon a successful Terraform apply you'll get a guide on how to configure your node (the Pi). When you're done, here's what you'll most likely get (depending on your individual EKS cluster setup):
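On the Pi itself, the generated guide essentially comes down to running nodeadm with an SSM-based node configuration. Here's a rough, illustrative version; the cluster name, region, Kubernetes version, and activation values are placeholders, and nodeadm itself needs to be downloaded for arm64 as described in the EKS Hybrid Nodes documentation.

```bash
# Illustrative only - cluster name, region, Kubernetes version, and the SSM
# activation values are placeholders.
cat > nodeConfig.yaml <<'EOF'
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    name: my-hybrid-cluster
    region: eu-west-1
  hybrid:
    ssm:
      activationCode: "hybrid-activation-code"
      activationId: "hybrid-activation-id"
EOF

sudo ./nodeadm install 1.31 --credential-provider ssm
sudo ./nodeadm init -c file://nodeConfig.yaml

# From the cloud side, the Pi should register as an "mi-..." node.
kubectl get nodes -o wide
```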
This is it!
Wrap-up
What started as a personal Proof of Concept turned into a hands-on guide for building a hybrid infrastructure that connects on-premises systems with AWS. Using a Raspberry Pi as a simulated on-prem node, this journey explored key AWS services and functionalities required to establish a route to EKS Hybrid Nodes.
The lessons learned here - from setting up a Site-to-Site VPN to enabling private API access and configuring a hybrid Kubernetes node - showcase practical steps that can inform professional designs. These configurations pave the way for securely running Kubernetes workloads across hybrid environments, leveraging AWS for control plane management while maintaining local resources.
The benefits of this approach are clear: enhanced security with private connectivity, simplified operations through managed AWS services, and the ability to experiment and learn on a small scale while gaining insights applicable to enterprise-grade scenarios. This PoC not only highlights the potential of hybrid cloud architectures but also demonstrates how such integrations can help modernize on-prem systems and provide flexibility for diverse business needs.