
Jeff Loughridge for AWS Community Builders

Originally published at konekti.us

First Class Container Networking with EC2 IP Prefix Assignments

Container networking in the AWS VPC is now much simpler following AWS’s announcement of EC2 IP prefix assignments. Want to get rid of overlays and bridge networking? Let’s examine how the new IP prefix assignment functions and how it can be used to enable containers as first class citizens on the network.

Feature Overview

Prior to this feature's announcement, there was no way to reserve a contiguous block of IPv4 or IPv6 addresses for use by an EC2 instance. The new feature lets you assign multiple IPv4 and IPv6 prefixes to a network interface: IPv4 prefixes are /28s and IPv6 prefixes are /80s.

The number of prefixes you can assign is limited by the number of IP addresses supported by the instance type, as documented at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html. The limits now apply to the cumulative number of addresses and prefixes. The documentation gives the example of a c5.large instance, which supports 10 IPv4 addresses and 10 IPv6 addresses per network interface; each assigned /28 or /80 counts against those limits just like an individual address, so the combined number of addresses and prefixes on an interface cannot exceed 10 per address family.
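
If you'd rather not dig through the table on that page, something along these lines should pull the per-interface limits straight from the API (the query expression reflects my reading of the describe-instance-types output and is worth verifying):

aws ec2 describe-instance-types --instance-types c5.large \
  --query 'InstanceTypes[0].NetworkInfo.[Ipv4AddressesPerInterface,Ipv6AddressesPerInterface]' \
  --output text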

A major benefit of this approach is the cost savings: additional IP addresses no longer force you onto more expensive instances. Previously, the requirement for a large number of IP addresses drove the selection of larger, more expensive instance types. Now small instances can support many more IP addresses.

As of August 2021, the EC2 IP prefix assignment feature is supported only in the latest release of the AWS CLI v2. You cannot assign prefixes to elastic network interfaces (ENIs) in the console or in CloudFormation. Prefix assignment can be automatic or manual. With automatic assignment, AWS selects a free /28 or /80 from the subnet in which the ENI resides; with manual assignment, you specify the exact /28 or /80 yourself. In either mode, the prefix can be assigned when the ENI is created or at a later time.
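
As a rough sketch of the two modes (the ENI ID and the /28 below are placeholders, and the option names assume the current AWS CLI v2):

# Automatic: let AWS pick a free /28 from the ENI's subnet
aws ec2 assign-private-ip-addresses \
  --network-interface-id eni-0123456789abcdef0 \
  --ipv4-prefix-count 1

# Manual: request a specific /28 from the subnet
aws ec2 assign-private-ip-addresses \
  --network-interface-id eni-0123456789abcdef0 \
  --ipv4-prefixes 100.64.0.16/28

# The IPv6 equivalents use assign-ipv6-addresses with --ipv6-prefix-count or --ipv6-prefixes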

Container Networking

In the ideal scenario for container networking, each container gets a unique IP address on the primary network substrate; NAT, bridges, and overlays are obviated. There are a number of ways to accomplish this, including Docker's macvlan and ipvlan network drivers. AWS's VPC networking doesn't play nicely with additional MAC addresses on an interface, so macvlan is a nonstarter. In the ipvlan approach, multiple IP addresses share the MAC address of the host's ENI.

Since IPv4 prefix assignment covers private IPv4 addresses only, the EC2 IP prefix assignment feature really shines for IPv6, where you can assign globally reachable addresses. An IPv6-only service may not be viable for serving content to end users, although many use cases exist for IoT and other machine-to-machine communication. All is not lost for IPv4: you can put a load balancer in front of the containers and reach them on their private IPv4 addresses.

It's Demo Time

Let’s configure a docker host in a public subnet and run webservers in containers with IPv6 addresses. You can follow along with this demo using the terraform at https://github.com/jeffbrl/terraform-examples/tree/master/networking-with-ec2-prefixes.

Here is the network configuration used in my templates (note that I am using the shared address space block from RFC 6598 as private IPv4 space); a rough Terraform sketch of these pieces follows the list.

VPC IPv4 CIDR - 100.64.0.0/16

VPC IPv6 CIDR - Assigned dynamically from Amazon IPv6 space

Public “1A” Subnet - 100.64.0.0/24
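
As a sketch of those pieces in Terraform (resource names are mine, not necessarily those used in the linked repository):

# VPC with the RFC 6598 IPv4 range and an Amazon-provided IPv6 block
resource "aws_vpc" "main" {
  cidr_block                       = "100.64.0.0/16"
  assign_generated_ipv6_cidr_block = true
}

# Public "1A" subnet carved out of both address families
resource "aws_subnet" "public_1a" {
  vpc_id                          = aws_vpc.main.id
  availability_zone               = "us-east-1a"
  cidr_block                      = "100.64.0.0/24"
  ipv6_cidr_block                 = cidrsubnet(aws_vpc.main.ipv6_cidr_block, 8, 0)
  map_public_ip_on_launch         = true
  assign_ipv6_address_on_creation = true
}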

Because the AWS provider for Terraform currently lacks support for EC2 IP prefixes, I use a local-exec provisioner on a null resource to call the AWS CLI, specifically “aws ec2 assign-private-ip-addresses” and “aws ec2 assign-ipv6-addresses”.

Once the docker host is online, I request a /28 and a /80 to be associated with the host’s ENI using the AWS CLI.

resource "null_resource" "ec2_prefix_assignment_v6" {
  provisioner "local-exec" {
    command = "aws ec2 assign-ipv6-addresses --network-interface-id ${aws_network_interface.docker_eni.id} --ipv6-prefix-count=1 --region us-east-1"
  }
}

resource "null_resource" "ec2_prefix_assignment_v4" {
  provisioner "local-exec" {
    command = "aws ec2 assign-private-ip-addresses --network-interface-id ${aws_network_interface.docker_eni.id} --ipv4-prefix-count=1 --region us-east-1"
  }
}


Now we’ll configure the docker network on the host. We want the docker host to dole out IPs from the ranges obtained above, while the containers should see that they are attached to the 100.64.0.0/24 and the IPv6 /64 associated with the public “1A” subnet.

sudo docker network create --ipv6 -d ipvlan -o parent=eth0 --subnet 100.64.0.0/24 \
--ip-range=$ipv4_prefix --subnet $public_subnet_ipv6 --ip-range $ipv6_prefix dockernet

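To sanity-check that Docker registered both address families and the narrower allocation ranges, you can dump the network's IPAM configuration:

sudo docker network inspect -f '{{json .IPAM.Config}}' dockernet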

The “ipv4_prefix” and “ipv6_prefix” variables hold the /28 and /80 obtained from the request for EC2 IP prefixes.
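
One way to populate those shell variables is to query the ENI after the prefixes have been assigned; the JMESPath expressions below reflect my reading of the describe-network-interfaces output and are worth double-checking:

eni_id=eni-0123456789abcdef0   # placeholder ENI ID

ipv4_prefix=$(aws ec2 describe-network-interfaces \
  --network-interface-ids "$eni_id" \
  --query 'NetworkInterfaces[0].Ipv4Prefixes[0].Ipv4Prefix' --output text)

ipv6_prefix=$(aws ec2 describe-network-interfaces \
  --network-interface-ids "$eni_id" \
  --query 'NetworkInterfaces[0].Ipv6Prefixes[0].Ipv6Prefix' --output text)

# The subnet's /64 for the second --subnet argument (placeholder subnet ID)
public_subnet_ipv6=$(aws ec2 describe-subnets --subnet-ids subnet-0123456789abcdef0 \
  --query 'Subnets[0].Ipv6CidrBlockAssociationSet[0].Ipv6CidrBlock' --output text)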

We’ll run three containers based on a custom container image I built that displays the container’s IPv4 and IPv6 addresses.

sudo docker run --rm -d --net dockernet gcr.io/stately-minutia-658/jweb
sudo docker run --rm -d --net dockernet gcr.io/stately-minutia-658/jweb
sudo docker run --rm -d --net dockernet gcr.io/stately-minutia-658/jweb


Docker hands out IPv6 addresses consecutively from the base of the IPv6 prefix we supplied in the “--ip-range” parameter. We can observe the IPv6 address of a given container using “docker inspect <container ID>”.
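
For example, a format filter along these lines prints just the global IPv6 address (the container ID is a placeholder):

sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.GlobalIPv6Address}}{{end}}' <container ID>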

Navigating to the three URLs using the bracket syntax for IPv6 literals yields the following.

[Screenshots: each of the three containers serving its page over a distinct global IPv6 address]
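
From the command line, the equivalent check looks like this, with a documentation-prefix address standing in for a real container address:

curl -g 'http://[2001:db8::10]/'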

Voila! We have a docker container host in which each container has a globally unique IPv6 address.

Conclusion

This post described the new AWS VPC feature for assigning contiguous IPv4 and IPv6 prefixes to EC2 instances (to their ENIs, to be specific). I demonstrated how this drastically simplifies container networking, making containers first class citizens on the network rather than leaving them stuck behind NAT or other kludges. I’m eager to experiment further with this feature, particularly its uses for networking appliances in the cloud.
