When we were tasked with an application rebuild, a unique networking challenge came to light: we needed to sync an existing legacy MySQL database (in our general AWS organization) to a new Postgres database (in a client-specific organization).
The Challenge
Cross-organization communication usually suggests VPC peering. However, we hit a major roadblock: both VPCs used the same CIDR block, and VPC peering does not support overlapping address ranges, so it was out of the question. We could have re-addressed one of the VPCs, but with existing infrastructure in place, that would have meant destroying and rebuilding resources, significantly extending the timeline.
After researching ways to tunnel traffic without merging address spaces, I landed on AWS PrivateLink. It was a great fit: it provides private connectivity between VPCs even when their IP ranges conflict, and it integrates cleanly into Infrastructure as Code (in this case, Terraform).
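For illustration (the actual ranges are redacted), the conflict looked something like this: both organizations' VPCs claimed the same address range, so a peering connection would have no unambiguous route between them.

```hcl
# Hypothetical example; the real CIDRs are redacted.
# Legacy org's VPC
resource "aws_vpc" "legacy" {
  cidr_block = "10.0.0.0/16"
}

# Client org's VPC
resource "aws_vpc" "rebuild" {
  cidr_block = "10.0.0.0/16" # identical range, so peering cannot route between them
}
```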
The "Provider" Side (Legacy Application)
On the legacy side, we need to expose the RDS instance. Since RDS doesn't have a static IP, we look up its Elastic Network Interface (ENI) to find the current private IP, then target that IP with a Network Load Balancer (NLB), which PrivateLink requires as its entry point.
Note: in the below code, the client name has been redacted from resource names.
```hcl
# Grab the network interfaces attached to the RDS instance
data "aws_network_interfaces" "rds_eni" {
  filter {
    name   = "description"
    values = ["RDSNetworkInterface"]
  }
  filter {
    name   = "vpc-id"
    values = [module.production.virtual_network.id]
  }
}

# Pull that interface to get the current private IP of the RDS instance
data "aws_network_interface" "rds_specific_eni" {
  id = data.aws_network_interfaces.rds_eni.ids[0]
}

# Create a target group for the RDS instance
resource "aws_lb_target_group" "rds_target" {
  name        = "rds-privatelink-target"
  port        = 3306
  protocol    = "TCP"
  target_type = "ip"
  vpc_id      = module.production.virtual_network.id
}

# Register the RDS instance's current private IP with the target group
resource "aws_lb_target_group_attachment" "rds_attachment" {
  target_group_arn = aws_lb_target_group.rds_target.arn
  target_id        = data.aws_network_interface.rds_specific_eni.private_ip
  port             = 3306
}
```
```hcl
# The NLB acts as the entry point for PrivateLink
resource "aws_lb" "rds_nlb" {
  name               = "rds-provider-nlb"
  internal           = true
  load_balancer_type = "network"
  subnets = [
    module.production.private1_subnet.id,
    module.production.private2_subnet.id,
    module.production.private3_subnet.id
  ]
}

# Forward MySQL traffic from the NLB to the target group
resource "aws_lb_listener" "rds_listener" {
  load_balancer_arn = aws_lb.rds_nlb.arn
  port              = 3306
  protocol          = "TCP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.rds_target.arn
  }
}
```
```hcl
# The actual Endpoint Service that the other org will connect to
resource "aws_vpc_endpoint_service" "rds_service" {
  acceptance_required        = false
  network_load_balancer_arns = [aws_lb.rds_nlb.arn]
}

# The consumer side needs this service name to create its endpoint
output "rds_endpoint_service_name" {
  value = aws_vpc_endpoint_service.rds_service.service_name
}
```
```hcl
# Security: allow the NLB to reach the RDS instance for traffic and health checks
resource "aws_security_group_rule" "allow_nlb_health_checks" {
  type              = "ingress"
  from_port         = 3306
  to_port           = 3306
  protocol          = "tcp"
  security_group_id = var.rds_security_group_id
  cidr_blocks = [
    module.production.private1_subnet.cidr_block,
    module.production.private2_subnet.cidr_block,
    module.production.private3_subnet.cidr_block
  ]
  description = "Allow NLB Health Checks for PrivateLink tunnel"
}
```
The "Consumer" Side (Rebuild Application)
On the new application side, we create the VPC Endpoint. This creates a local IP address in the new VPC that "tunnels" traffic back to the legacy database.
```hcl
# Create the security group for the local endpoint
resource "aws_security_group" "legacy_rds_proxy_endpoint_sg" {
  name        = "legacy-db-proxy-endpoint-sg"
  description = "Allows the app to talk to the PrivateLink Endpoint"
  vpc_id      = module.production.virtual_network.id

  ingress {
    from_port = 3306
    to_port   = 3306
    protocol  = "tcp"
    # Allow traffic from the application's security group
    security_groups = [module.production.app_security_group.id]
  }
}
```
```hcl
# Create the link to the service in the other organization.
# Note: the endpoint must live in the same VPC as its security group above.
resource "aws_vpc_endpoint" "legacy_db_link" {
  vpc_id              = module.production.virtual_network.id
  service_name        = var.legacy_db_service_name # The output from the provider side
  vpc_endpoint_type   = "Interface"
  security_group_ids  = [aws_security_group.legacy_rds_proxy_endpoint_sg.id]
  subnet_ids          = [module.production.private1_subnet.id, module.production.private2_subnet.id]
  private_dns_enabled = false
}
```
```hcl
# The DNS name the new application uses to reach the legacy database
output "legacy_db_connect_string" {
  value = aws_vpc_endpoint.legacy_db_link.dns_entry[0].dns_name
}
```
Conclusion
By using PrivateLink, we bypassed the subnet overlap issue entirely. The new application simply sees a local DNS name that behaves like a database within its own network, while AWS handles the heavy lifting of routing that traffic across organization boundaries securely.