<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jayesh Kumar Tank</title>
    <description>The latest articles on DEV Community by Jayesh Kumar Tank (@k8sdev).</description>
    <link>https://dev.to/k8sdev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F292145%2F1a6f106b-d9e2-467f-8b52-d52a27b63c8d.png</url>
      <title>DEV Community: Jayesh Kumar Tank</title>
      <link>https://dev.to/k8sdev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/k8sdev"/>
    <language>en</language>
    <item>
      <title>Deploying Statefulset on Private EKS on Fargate Cluster with EFS</title>
      <dc:creator>Jayesh Kumar Tank</dc:creator>
      <pubDate>Mon, 11 Jan 2021 10:45:56 +0000</pubDate>
      <link>https://dev.to/k8sdev/deploying-statefulset-on-private-eks-on-fargate-cluster-with-efs-4gph</link>
      <guid>https://dev.to/k8sdev/deploying-statefulset-on-private-eks-on-fargate-cluster-with-efs-4gph</guid>
      <description>&lt;p&gt;&lt;em&gt;Because Kubernetes has transformed the way organisations do application deployments and data is an integral part of the applications and can't be left behind...&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In a previous &lt;a href="https://dev.to/k8sdev/setup-a-fully-private-amazon-eks-on-aws-fargate-cluster-10cb"&gt;article&lt;/a&gt;, we discussed setting up a fully private EKS on Fargate cluster that fulfils the security requirements of certain regulated industries. This post is a follow-up in which we add persistent storage and host StatefulSets on that fully private Fargate cluster, while adhering to compliance requirements.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Originally published on my blog: &lt;a href="https://k8s-dev.github.io" rel="noopener noreferrer"&gt;https://k8s-dev.github.io&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;EFS support for EKS on Fargate is a recent feature; see the &lt;a href="https://aws.amazon.com/blogs/aws/new-aws-fargate-for-amazon-eks-now-supports-amazon-efs" rel="noopener noreferrer"&gt;release blog post&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Short intro to StatefulSet
&lt;/h2&gt;

&lt;p&gt;A StatefulSet is a Kubernetes construct that helps manage stateful applications while guaranteeing the ordering of deployment and the uniqueness of the pods it creates. The sticky identity a StatefulSet maintains is particularly useful when hosting databases on Kubernetes, such as MySQL with read replicas.&lt;/p&gt;

&lt;p&gt;A StatefulSet is very similar to a Deployment, but differs in that it maintains a sticky identity for its pods: they are created in order and persist across rescheduling cycles. Some features of StatefulSets are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stable (persistent across pod re-scheduling), unique network identifiers&lt;/li&gt;
&lt;li&gt;Stable (persistent across pod re-scheduling), persistent storage&lt;/li&gt;
&lt;li&gt;Ordered, graceful deployment and scaling&lt;/li&gt;
&lt;li&gt;Ordered, automated rolling updates&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Elastic File System (EFS) is a scalable, fully managed shared file system on AWS that integrates with EKS on Fargate to provide persistent storage. EFS is highly elastic: it automatically grows and shrinks on demand, while encrypting data at rest and in transit. In this solution we will make use of an EFS VPC endpoint, which is ideal for security-sensitive workloads running on AWS Fargate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pre-Requisite
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A Fully Private EKS on Fargate Cluster setup (following the &lt;a href="https://k8s-dev.github.io/posts/eksfargate" rel="noopener noreferrer"&gt;article&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Design
&lt;/h2&gt;

&lt;p&gt;Kubernetes supports persistent storage via the Container Storage Interface (CSI) standard. Application pods running on Fargate use the EFS CSI driver to access an EFS file system through standard Kubernetes APIs. EFS supports encryption of data at rest, and when the EFS CSI driver is used, all data in transit is encrypted by default, which is a &lt;strong&gt;compliance requirement&lt;/strong&gt; in most regulated industries.&lt;/p&gt;

&lt;p&gt;Whenever a pod running on Fargate is terminated and relaunched, the CSI driver reconnects the EFS file system, even when the pod is relaunched in a different AWS Availability Zone; this makes EFS an appealing solution for persistent storage. The EFS CSI driver comes pre-installed in the Fargate stack, so support for EFS is provided out of the box and updates are managed transparently by AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fozaektj2i5e6xxsb3fuv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fozaektj2i5e6xxsb3fuv.png" alt="Design Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The EFS integration with EKS on Fargate leverages three Kubernetes constructs: StorageClass, PersistentVolume (PV) and PersistentVolumeClaim (PVC). This allows a segregation of duties when operating the cluster: a storage admin (or cluster admin) configures the StorageClass and EFS and creates PVs from them, while the developer team uses the available PVs to create PVCs as and when required to deploy applications.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;While a Kubernetes StorageClass allows volumes to be created both dynamically and statically, in this experiment we will use the EFS CSI driver to create the volume statically, which is the only provisioning mode the EFS CSI driver supports as yet.&lt;/p&gt;
&lt;/blockquote&gt;
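
&lt;p&gt;The manifests later in this post reference a StorageClass named efs-sc, which is not defined in them. As a minimal sketch (the class name is only a convention; it just needs to match what the PV and PVC reference), it can be created with a heredoc:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

cat &amp;lt;&amp;lt;'EOF' | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
EOF

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;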

&lt;h2&gt;
  
  
  Implementation Steps
&lt;/h2&gt;

&lt;p&gt;Before proceeding, ensure you have an EKS on Fargate cluster up and running.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Create VPC Endpoint for EFS&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The EFS VPC endpoint provides secure connectivity to the Amazon EFS API without requiring an internet gateway, NAT instance, or virtual private network (VPN) connection. Follow the guide at &lt;a href="https://docs.aws.amazon.com/efs/latest/ug/efs-vpc-endpoints.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/efs/latest/ug/efs-vpc-endpoints.html&lt;/a&gt; to set up a VPC endpoint for EFS in the same region where the EKS on Fargate cluster resides.&lt;/p&gt;
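
&lt;p&gt;As a sketch, the same endpoint can be created from the AWS CLI; the VPC, subnet and security group IDs below are placeholders, and the service name follows the com.amazonaws.&amp;lt;region&amp;gt;.elasticfilesystem pattern:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws ec2 create-vpc-endpoint --vpc-endpoint-type Interface \
    --vpc-id &amp;lt;vpc-id&amp;gt; \
    --service-name com.amazonaws.us-east-1.elasticfilesystem \
    --subnet-ids &amp;lt;private-subnet-ids&amp;gt; \
    --security-group-ids &amp;lt;sg-id&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;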

&lt;h3&gt;
  
  
  &lt;strong&gt;Create Elastic File System&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;First, we need to create an EFS file system in the same AWS Region where the EKS on Fargate cluster resides. Follow the &lt;a href="https://docs.aws.amazon.com/efs/latest/ug/getting-started.html" rel="noopener noreferrer"&gt;EFS getting started guide&lt;/a&gt;, and configure EFS in the same private subnets used while creating the EKS on Fargate cluster. We will keep EFS encryption enabled for this experiment.&lt;/p&gt;

&lt;p&gt;Because EFS is mounted over NFS, we need to add a rule allowing inbound NFS traffic (TCP port 2049) to the EKS on Fargate security group.&lt;/p&gt;
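
&lt;p&gt;As a sketch, the rule can be added with the AWS CLI, using the cluster security group as its own source (the security group ID is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws ec2 authorize-security-group-ingress \
    --group-id &amp;lt;eks-fargate-sg-id&amp;gt; \
    --protocol tcp --port 2049 \
    --source-group &amp;lt;eks-fargate-sg-id&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;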

&lt;p&gt;Also make sure to add the EKS on Fargate security group to the EFS mount target configuration.&lt;/p&gt;

&lt;p&gt;Finally, note down the file system ID after successful creation; it will be needed further down when we create a persistent volume backed by it.&lt;/p&gt;
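
&lt;p&gt;If needed later, the file system ID can also be retrieved with the AWS CLI:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws efs describe-file-systems --query 'FileSystems[].FileSystemId' --output text

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;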

&lt;h3&gt;
  
  
  &lt;strong&gt;Create Fargate Profile for StatefulSets&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Fargate allows hands-free management of Kubernetes clusters. To direct EKS to schedule pods on Fargate, we need to create a Fargate profile, which is a combination of a namespace and labels. In this experiment we create a profile with the namespace 'efs-statefulset'; all of the objects, including the pods, service, persistent volume and persistent volume claim, will be created in this namespace.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;As mentioned in previous &lt;a href="https://k8s-dev.github.io/posts/eksfargate" rel="noopener noreferrer"&gt;article&lt;/a&gt;, make sure to run these commands from Bastion host created in public subnet in the EKS on Fargate VPC.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws eks create-fargate-profile &lt;span class="nt"&gt;--fargate-profile-name&lt;/span&gt; fargate-statefulset &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1 &lt;span class="nt"&gt;--cluster-name&lt;/span&gt; private-fargate-cluster &lt;span class="nt"&gt;--pod-execution-role-arn&lt;/span&gt; arn:aws:iam::1234567890:role/private-fargate-pod-execution-role &lt;span class="nt"&gt;--subnets&lt;/span&gt; subnet-01b3ae56696b33747 subnet-0e639397d1f12500a subnet-039f4170f8a820afc &lt;span class="nt"&gt;--selectors&lt;/span&gt; &lt;span class="nv"&gt;namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;efs-statefulset


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;StorageClass, PV and PVC&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A StorageClass is the equivalent of a storage profile in Kubernetes; it provides a way for admins to specify the type of storage, quality of service, backup policy or any other arbitrary policy.&lt;/p&gt;

&lt;p&gt;A PersistentVolume is a Kubernetes resource that allows data to be stored persistently across the life cycle of a pod. A PV can be created dynamically or statically using a StorageClass. A PVC is a request for storage by a pod; by using the PVC object, Kubernetes abstracts away the implementation details of the PV.&lt;/p&gt;

&lt;p&gt;Let's use the following YAML to roll out the PV and PVC along with a sample application based on the amazon-linux:2 image. All the application does is repeatedly append the date and time to a file hosted on the persistent volume in EFS.&lt;/p&gt;

&lt;p&gt;Ensure the amazon-linux:2 container image has been pushed to an ECR repo prior to deploying the StatefulSet. Steps to create, tag and push images to ECR are covered in the previous post.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolume&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;efs-pv&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;efs-statefulset&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;capacity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5Gi&lt;/span&gt;
  &lt;span class="na"&gt;volumeMode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Filesystem&lt;/span&gt;
  &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteMany&lt;/span&gt;
  &lt;span class="na"&gt;persistentVolumeReclaimPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Retain&lt;/span&gt;
  &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;efs-sc&lt;/span&gt;
  &lt;span class="na"&gt;csi&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;efs.csi.aws.com&lt;/span&gt;
    &lt;span class="na"&gt;volumeHandle&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;EFS filesystem ID&amp;gt;&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolumeClaim&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;efs-claim&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;efs-statefulset&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteMany&lt;/span&gt;
  &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;efs-sc&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5Gi&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;StatefulSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;efs-app-sts&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;efs-statefulset&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-efs&lt;/span&gt;
  &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;efs-app&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;efs-statefulset&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-efs&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;terminationGracePeriodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;linux&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;01234567890.dkr.ecr.us-east-1.amazonaws.com/amazon-linux2&lt;/span&gt;
        &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/bin/sh"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;while&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;do&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;echo&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$(date&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-u)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;gt;&amp;gt;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;/efs-data/out.txt;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;sleep&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;5;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;done"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;efs-storage&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/efs-data&lt;/span&gt;
      &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;efs-storage&lt;/span&gt;
        &lt;span class="na"&gt;persistentVolumeClaim&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;claimName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;efs-claim&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Deploy the YAML using the command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; &amp;lt;file.yaml&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once deployed, we can check the status of the PV and PVC with the following commands:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;


&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pv &lt;span class="nt"&gt;-n&lt;/span&gt; efs-statefulset

NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS   REASON   AGE
efs-pv   5Gi        RWX            Retain           Bound    efs-statefulset/efs-claim   efs-sc                  17s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pvc &lt;span class="nt"&gt;-n&lt;/span&gt; efs-statefulset

NAME        STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
efs-claim   Bound    efs-pv   5Gi        RWX            efs-sc         48s


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This shows that the PV is allocated and the PVC is in the Bound state. The StatefulSet pods have also been rolled out and are running.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get statefulsets &lt;span class="nt"&gt;-n&lt;/span&gt; efs-statefulset &lt;span class="nt"&gt;-o&lt;/span&gt; wide

NAME          READY   AGE   CONTAINERS   IMAGES
efs-app-sts   3/3     27m   linux        1234567890.dkr.ecr.us-east-1.amazonaws.com/amazon-linux2


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
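
&lt;p&gt;To confirm the pods are actually writing to the shared volume, we can tail the output file from any pod; since all replicas mount the same EFS file system, entries from every pod land in the same file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl exec efs-app-sts-0 -n efs-statefulset -- tail /efs-data/out.txt

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;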
&lt;h3&gt;
  
  
  Experiments
&lt;/h3&gt;

&lt;p&gt;To appreciate the way a StatefulSet maintains sticky identity, we will perform a few experiments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scale exercise:&lt;/strong&gt; Let's scale the StatefulSet down to 2 replicas. This terminates one of the running pods, and we will see that efs-app-sts-2 goes first, as it was the last one to be deployed.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl scale sts efs-app-sts &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2 &lt;span class="nt"&gt;-n&lt;/span&gt; efs-statefulset

statefulset.apps/efs-app-sts scaled

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; efs-statefulset &lt;span class="nt"&gt;-w&lt;/span&gt;

NAME            READY   STATUS    RESTARTS       AGE
efs-app-sts-0   1/1     Running       0          29m
efs-app-sts-1   1/1     Running       0          28m
efs-app-sts-2   0/1     Terminating   0          5s


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To see the ordering behaviour during re-creation, let's do another experiment. In one terminal execute the pod delete command, while in the other put a watch on the StatefulSet's pods. We can observe that once the pods are gone, the StatefulSet controller re-creates them in order: efs-app-sts-0 is brought fully to Running before efs-app-sts-1 is started.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete pods &lt;span class="nt"&gt;--selector&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;test-efs &lt;span class="nt"&gt;-n&lt;/span&gt; efs-statefulset

pod &lt;span class="s2"&gt;"efs-app-sts-0"&lt;/span&gt; deleted
pod &lt;span class="s2"&gt;"efs-app-sts-1"&lt;/span&gt; deleted


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In second terminal, execute the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; efs-statefulset &lt;span class="nt"&gt;-w&lt;/span&gt;

NAME            READY   STATUS             RESTARTS   AGE
efs-app-sts-0   1/1     Running            0          33m
efs-app-sts-1   1/1     Running            0          32m
efs-app-sts-0   1/1     Terminating        0          34m
efs-app-sts-1   1/1     Terminating        0          33m
efs-app-sts-0   0/1     Terminating        0          34m
efs-app-sts-1   0/1     Terminating        0          33m
efs-app-sts-1   0/1     Terminating        0          33m
efs-app-sts-1   0/1     Terminating        0          33m
efs-app-sts-0   0/1     Terminating        0          34m
efs-app-sts-0   0/1     Terminating        0          34m
efs-app-sts-0   0/1     Pending            0          0s
efs-app-sts-0   0/1     Pending            0          1s
efs-app-sts-0   0/1     Pending            0          63s
efs-app-sts-0   0/1     ContainerCreating  0          63s
efs-app-sts-0   1/1     Running            0          73s
efs-app-sts-1   0/1     Pending            0          1s
efs-app-sts-1   0/1     Pending            0          2s
efs-app-sts-1   0/1     Pending            0          57s
efs-app-sts-1   0/1     ContainerCreating  0          57s
efs-app-sts-1   1/1     Running            0          68s


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Et voilà! After the pods were deleted, the StatefulSet controller created replacements in the same order and re-attached them to the EFS volume.&lt;/p&gt;

&lt;p&gt;For more information about StatefulSets, see the &lt;a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/"&gt;Kubernetes Documentation&lt;/a&gt;.&lt;/p&gt;




&lt;h3&gt;
  
  
  Closing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;In this article we created a private EFS endpoint and used it to host persistent data with a StatefulSet in Kubernetes.&lt;/li&gt;
&lt;li&gt;This deployment solves some of the compliance challenges faced by BFSI and other regulated sectors, given the private deployment and encryption support.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Please let me know if you face challenges replicating this in your own AWS environment. Happy Learning!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>security</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Building a fully Private Amazon EKS on AWS Fargate Cluster</title>
      <dc:creator>Jayesh Kumar Tank</dc:creator>
      <pubDate>Sun, 03 Jan 2021 05:33:17 +0000</pubDate>
      <link>https://dev.to/k8sdev/setup-a-fully-private-amazon-eks-on-aws-fargate-cluster-10cb</link>
      <guid>https://dev.to/k8sdev/setup-a-fully-private-amazon-eks-on-aws-fargate-cluster-10cb</guid>
      <description>&lt;p&gt;Regulated industries needs to host their Kubernetes workloads in most secure ways and fully private EKS on Fargate cluster attempts to solve this problem. Each pod running on Fargate has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another pod, which makes it secure from compliance point of view. With AWS Fargate, you no longer have to provision, configure, or scale groups of virtual machines to run containers. This removes the need to choose server types, decide when to scale your node groups, or optimise cluster packing.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Originally published on my blog: &lt;a href="https://k8s-dev.github.io" rel="noopener noreferrer"&gt;https://k8s-dev.github.io&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Source code for this post is hosted at: &lt;a href="https://github.com/k8s-dev/private-eks-fargate" rel="noopener noreferrer"&gt;https://github.com/k8s-dev/private-eks-fargate&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Constraints
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;No internet connectivity to the Fargate cluster and no public subnets, except for the bastion host&lt;/li&gt;
&lt;li&gt;Fully private access to the Amazon EKS cluster's Kubernetes API server endpoint&lt;/li&gt;
&lt;li&gt;All AWS services communicate with this cluster using VPC or gateway endpoints, essentially using private AWS access&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A &lt;strong&gt;VPC endpoint&lt;/strong&gt; enables private connections between your VPC and supported AWS services. Traffic between the VPC and the other AWS service does not leave the Amazon network, so this solution does not require an internet gateway or a NAT device, except for the bastion host subnet.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pre-Requisite
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;At least 2 private subnets in the VPC, because pods running on Fargate are only supported in private subnets&lt;/li&gt;
&lt;li&gt;A bastion host in a public subnet in the same VPC to connect to the EKS on Fargate cluster via kubectl&lt;/li&gt;
&lt;li&gt;The AWS CLI installed and configured with a default region on this bastion host&lt;/li&gt;
&lt;li&gt;VPC endpoints and gateway endpoints for the AWS services that your cluster uses&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Design
&lt;/h3&gt;

&lt;p&gt;This diagram shows the high-level design of the implementation. The EKS on Fargate cluster spans 2 private subnets, and a bastion host is provisioned in a public subnet with internet connectivity. All communication with the EKS cluster will be initiated from this bastion host. The EKS cluster is fully private and communicates with various AWS services via VPC and gateway endpoints.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdtw8g6q8sr6x0ow6huba.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdtw8g6q8sr6x0ow6huba.png" alt="Design Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Implementation Steps
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Fargate pod execution role&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a Fargate pod execution role, which allows the Fargate infrastructure to make calls to AWS APIs on your behalf to do things like pull container images from Amazon ECR or route logs to other AWS services. Follow: &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-sg-pod-execution-role" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-sg-pod-execution-role&lt;/a&gt;&lt;/p&gt;
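
&lt;p&gt;As a sketch, the role can be created with the AWS CLI; the role name here is arbitrary (it matches the one used later in this post), while the service principal and managed policy are the ones the linked guide prescribes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws iam create-role --role-name private-fargate-pod-execution-role \
    --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "eks-fargate-pods.amazonaws.com"},
        "Action": "sts:AssumeRole"
      }]
    }'

aws iam attach-role-policy --role-name private-fargate-pod-execution-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;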

&lt;p&gt;&lt;strong&gt;Create AWS Services VPC Endpoints&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Since we are rolling out a fully private EKS on Fargate cluster, it should have private-only access to the various AWS services it needs, such as ECR, CloudWatch, Elastic Load Balancing and S3.&lt;/p&gt;

&lt;p&gt;This step is &lt;strong&gt;essential&lt;/strong&gt; so that pods running on the Fargate cluster can pull container images, push logs to CloudWatch and interact with load balancers.&lt;/p&gt;

&lt;p&gt;See the entire list of endpoints that your cluster may use here: &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/private-clusters.html#vpc-endpoints-private-clusters" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/eks/latest/userguide/private-clusters.html#vpc-endpoints-private-clusters&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Set up the VPC endpoints and gateway endpoints in the same VPC for the services that you plan to use in your EKS on Fargate cluster, by following the steps at: &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#create-interface-endpoint" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#create-interface-endpoint&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We need to provision the following endpoints at the minimum:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Interface endpoints for ECR (both ecr.api and ecr.dkr) to pull container images&lt;/li&gt;
&lt;li&gt;A gateway endpoint for S3 to pull the actual image layers&lt;/li&gt;
&lt;li&gt;An interface endpoint for EC2&lt;/li&gt;
&lt;li&gt;An interface endpoint for STS to support Fargate and IAM Roles for Service Accounts&lt;/li&gt;
&lt;li&gt;An interface endpoint for CloudWatch logging (logs) if CloudWatch logging is required&lt;/li&gt;
&lt;/ul&gt;
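
&lt;p&gt;As an illustrative sketch, the interface endpoints above can be created in a loop, with the S3 gateway endpoint handled separately; the VPC, subnet, security group and route table IDs are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

for svc in ecr.api ecr.dkr ec2 sts logs; do
    aws ec2 create-vpc-endpoint --vpc-endpoint-type Interface \
        --vpc-id &amp;lt;vpc-id&amp;gt; \
        --service-name com.amazonaws.us-east-1.${svc} \
        --subnet-ids &amp;lt;private-subnet-ids&amp;gt; \
        --security-group-ids &amp;lt;sg-id&amp;gt; \
        --private-dns-enabled
done

aws ec2 create-vpc-endpoint --vpc-endpoint-type Gateway \
    --vpc-id &amp;lt;vpc-id&amp;gt; \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids &amp;lt;route-table-id&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;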

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fe5witoap0uaydfluoips.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fe5witoap0uaydfluoips.png" alt="Interface Endpoints"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create Cluster with Private API-Server Endpoint&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use the AWS CLI to create the EKS cluster in the designated VPC. Replace the cluster name, Kubernetes version, pod execution role ARN, private subnet IDs and security group ID with your actual values before you run the command. Note that it may take 10-15 minutes for the cluster to reach the Ready state.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws eks create-cluster &lt;span class="nt"&gt;--name&lt;/span&gt; &amp;lt;private-fargate-cluster&amp;gt; &lt;span class="nt"&gt;--kubernetes-version&lt;/span&gt; 1.18 &lt;span class="nt"&gt;--role-arn&lt;/span&gt; arn:aws:iam::1234567890:role/private-fargate-pod-execution-role
&lt;span class="nt"&gt;--resources-vpc-config&lt;/span&gt; &lt;span class="nv"&gt;subnetIds&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;subnet-01b3ae56696b33747,subnet-0e639397d1f12500a,subnet-039f4170f8a820afc&amp;gt;,securityGroupIds&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;sg-0c867ffb5ec31bb6b&amp;gt;,endpointPublicAccess&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;,endpointPrivateAccess&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
&lt;span class="nt"&gt;--logging&lt;/span&gt; &lt;span class="s1"&gt;'{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above command creates a private EKS cluster with a private API endpoint and enables logging for Kubernetes control plane components such as the API server and scheduler. Tweak the logging configuration as per your compliance needs.&lt;/p&gt;

&lt;p&gt;You can connect to this private cluster in three ways: via a bastion host, AWS Cloud9, or a connected network, as listed at &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#private-access" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#private-access&lt;/a&gt;. For simplicity, we will go with the EC2 bastion host approach.&lt;/p&gt;

&lt;p&gt;While you are waiting for the cluster to become ready, let's set up the bastion host. This is needed because the EKS on Fargate cluster sits in private subnets only, without internet connectivity, and API server access is set to private. The bastion host will be created in a public subnet in the same VPC and used to reach the EKS on Fargate cluster. The only prerequisites are kubectl and the AWS CLI. &lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="noopener noreferrer"&gt;kubectl can be downloaded from here&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-linux.html" rel="noopener noreferrer"&gt;AWS CLI v2 can be downloaded from here&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;All subsequent commands should be run from the bastion host.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Setup access to EKS on Fargate Cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;kubectl on the bastion host needs to talk to the API server in order to communicate with the EKS on Fargate cluster. Run the following command to set up this communication, making sure the AWS CLI is configured first.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws eks &lt;span class="nt"&gt;--region&lt;/span&gt; &amp;lt;region-code&amp;gt; update-kubeconfig &lt;span class="nt"&gt;--name&lt;/span&gt; &amp;lt;cluster_name&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This saves the kubeconfig file to the ~/.kube/config path, which enables running kubectl commands against the EKS cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enable Logging (Optional)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Logging is very useful for debugging issues while rolling out applications. Set up logging in the EKS on Fargate cluster by following &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html&lt;/a&gt;.&lt;/p&gt;
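&lt;p&gt;As a concrete sketch of what that guide configures: Fargate pods ship logs through Fluent Bit when an &lt;code&gt;aws-logging&lt;/code&gt; ConfigMap exists in the &lt;code&gt;aws-observability&lt;/code&gt; namespace. The region and log group name below are example values:&lt;/p&gt;

```shell
# Writes the Fargate logging manifest locally; apply it with kubectl once
# your kubeconfig points at the cluster. Region and log group are examples.
cat > aws-logging.yaml <<'EOF'
kind: Namespace
apiVersion: v1
metadata:
  name: aws-observability
  labels:
    aws-observability: enabled
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match *
        region us-east-1
        log_group_name fargate-pod-logs
        log_stream_prefix fargate-
        auto_create_group true
EOF
# kubectl apply -f aws-logging.yaml
```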

&lt;p&gt;&lt;strong&gt;Create Fargate Profile&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To schedule pods on Fargate in your cluster, you must define a Fargate profile that specifies which pods should use Fargate when they are launched. Fargate profiles are immutable. A profile is essentially a combination of a namespace and, optionally, labels. Pods that match a selector (by matching the selector's namespace and all of its labels) are scheduled on Fargate.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws eks create-fargate-profile &lt;span class="nt"&gt;--fargate-profile-name&lt;/span&gt; fargate-custom-profile &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--cluster-name&lt;/span&gt; private-fargate-cluster &lt;span class="nt"&gt;--pod-execution-role-arn&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
arn:aws:iam::01234567890:role/private-fargate-pod-execution-role &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--subnets&lt;/span&gt; subnet-01b3ae56696b33747 subnet-0e639397d1f12500a &lt;span class="se"&gt;\&lt;/span&gt;
subnet-039f4170f8a820afc &lt;span class="nt"&gt;--selectors&lt;/span&gt; &lt;span class="nv"&gt;namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;custom-space


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This command creates a Fargate profile in the cluster and tells EKS to schedule pods in the 'custom-space' namespace on Fargate. However, we need to make a few changes to our cluster before we can run applications on it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fuh4e4kztsjabhv9viqo4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fuh4e4kztsjabhv9viqo4.png" alt="Private EKS on Fargate Cluster"&gt;&lt;/a&gt; &lt;br&gt;
&lt;strong&gt;Update CoreDNS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By default, CoreDNS is configured to run on Amazon EC2 infrastructure on Amazon EKS clusters. Since our cluster has no EC2 nodes, we need to update the CoreDNS deployment to remove the &lt;code&gt;eks.amazonaws.com/compute-type: ec2&lt;/code&gt; annotation, and create a Fargate profile that the CoreDNS pods can run under. Update the following Fargate profile JSON with your own cluster name, account ID and role ARN, and save it to a local file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="pi"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;fargateProfileName"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;coredns"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;clusterName"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;private-fargate-cluster"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;podExecutionRoleArn"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;arn:aws:iam::1234567890:role/private-fargate-pod-execution-role"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;subnets"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;subnet-01b3ae56696b33747"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;subnet-0e639397d1f12500a"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;subnet-039f4170f8a820afc"&lt;/span&gt;
    &lt;span class="pi"&gt;],&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;selectors"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;
        &lt;span class="pi"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;namespace"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;kube-system"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
            &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;labels"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;
                &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;k8s-app"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;kube-dns"&lt;/span&gt;
            &lt;span class="pi"&gt;}&lt;/span&gt;
        &lt;span class="pi"&gt;}&lt;/span&gt;
    &lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="pi"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Apply this JSON with the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws eks create-fargate-profile &lt;span class="nt"&gt;--cli-input-json&lt;/span&gt; file://updated-coredns.json


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The next step is to remove the annotation from the CoreDNS deployment, allowing its pods to be scheduled on Fargate:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl patch deployment coredns &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system &lt;span class="nt"&gt;--type&lt;/span&gt; json &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-p&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The final step to make CoreDNS function properly is to recreate its pods; we will use a rollout restart for this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl rollout restart &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system deployment.apps/coredns


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After a short while, double-check that the CoreDNS pods are in the Running state in the kube-system namespace before proceeding further.&lt;/p&gt;
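&lt;p&gt;One way to check is &lt;code&gt;kubectl wait&lt;/code&gt;; the &lt;code&gt;DRY_RUN&lt;/code&gt; guard below is just an illustrative convention so the sketch can be read and exercised without cluster access:&lt;/p&gt;

```shell
# Block until the CoreDNS deployment reports Available, or print the command
# when DRY_RUN=1 (the default here, since this sketch may run off-cluster).
CMD="kubectl wait --for=condition=Available deployment/coredns -n kube-system --timeout=180s"
if [ "${DRY_RUN:-1}" = "1" ]; then
  echo "would run: $CMD"
else
  $CMD
fi
```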

&lt;p&gt;After these steps, the private EKS on Fargate cluster is up and running. Because the cluster has no internet connectivity, pods scheduled on it cannot pull container images from a public registry such as Docker Hub. The solution is to set up an ECR repository, host private images in it, and reference those images in pod manifests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setup ECR registry&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This involves creating an ECR repository, pulling the image locally, tagging it appropriately and pushing it to the ECR registry. The container image can be copied to ECR from the bastion host and accessed by EKS on Fargate via the ECR VPC endpoints.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws ecr create-repository &lt;span class="nt"&gt;--repository-name&lt;/span&gt; nginx
docker pull nginx:latest
docker tag nginx &amp;lt;1234567890&amp;gt;.dkr.ecr.&amp;lt;region-code&amp;gt;.amazonaws.com/nginx:v1
aws ecr get-login-password &lt;span class="nt"&gt;--region&lt;/span&gt; &amp;lt;region-code&amp;gt; | docker login &lt;span class="nt"&gt;--username&lt;/span&gt; AWS &lt;span class="nt"&gt;--password-stdin&lt;/span&gt; &amp;lt;1234567890&amp;gt;.dkr.ecr.&amp;lt;region-code&amp;gt;.amazonaws.com
docker push &amp;lt;1234567890&amp;gt;.dkr.ecr.&amp;lt;region-code&amp;gt;.amazonaws.com/nginx:v1


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Run Application in EKS on Fargate Cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now that we have pushed the nginx image to ECR, we can reference it in the deployment YAML.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-app&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;custom-space&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;124567890&amp;gt;.dkr.ecr.&amp;lt;region-code&amp;gt;.amazonaws.com/nginx:v1&lt;/span&gt;
          &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After a short while, cross-check that the nginx application pods are in the Running state.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="s"&gt;kubectl get pods -n custom-space&lt;/span&gt;
&lt;span class="s"&gt;NAME                          READY   STATUS      RESTARTS   AGE&lt;/span&gt;
&lt;span class="s"&gt;sample-app-578d67447d-fw6kp   1/1     Running     0          8m52s&lt;/span&gt;
&lt;span class="s"&gt;sample-app-578d67447d-r68v2   1/1     Running     0          8m52s&lt;/span&gt;
&lt;span class="s"&gt;sample-app-578d67447d-vbbfr   1/1     Running     0          8m52s&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Describing the deployment should produce output like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="s"&gt;kubectl describe deployment -n custom-space&lt;/span&gt;
&lt;span class="na"&gt;Name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;                   &lt;span class="s"&gt;sample-app&lt;/span&gt;
&lt;span class="na"&gt;Namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;              &lt;span class="s"&gt;custom-space&lt;/span&gt;
&lt;span class="na"&gt;CreationTimestamp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;      &lt;span class="s"&gt;Tue, 19 Dec 2020 23:43:04 +0000&lt;/span&gt;
&lt;span class="na"&gt;Labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;                 &lt;span class="s"&gt;&amp;lt;none&amp;gt;&lt;/span&gt;
&lt;span class="na"&gt;Annotations:            deployment.kubernetes.io/revision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;span class="na"&gt;Selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;               &lt;span class="s"&gt;app=nginx&lt;/span&gt;
&lt;span class="na"&gt;Replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;               &lt;span class="s"&gt;3 desired | 3 updated | 3 total | 3 available | 0 unavailable&lt;/span&gt;
&lt;span class="na"&gt;StrategyType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;           &lt;span class="s"&gt;RollingUpdate&lt;/span&gt;
&lt;span class="na"&gt;MinReadySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;        &lt;span class="m"&gt;0&lt;/span&gt;
&lt;span class="na"&gt;RollingUpdateStrategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="s"&gt;25% max unavailable, 25% max surge&lt;/span&gt;
&lt;span class="na"&gt;Pod Template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="s"&gt;app=nginx&lt;/span&gt;
  &lt;span class="na"&gt;Containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;nginx&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;        &lt;span class="s"&gt;1234567890.dkr.ecr.&amp;lt;region-code&amp;gt;.amazonaws.com/nginx:v1&lt;/span&gt;
    &lt;span class="na"&gt;Port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;         &lt;span class="s"&gt;80/TCP&lt;/span&gt;
    &lt;span class="na"&gt;Host Port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;    &lt;span class="s"&gt;0/TCP&lt;/span&gt;
    &lt;span class="na"&gt;Environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="s"&gt;&amp;lt;none&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;Mounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;       &lt;span class="s"&gt;&amp;lt;none&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;Volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;        &lt;span class="s"&gt;&amp;lt;none&amp;gt;&lt;/span&gt;
&lt;span class="na"&gt;Conditions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="s"&gt;Type           Status  Reason&lt;/span&gt;
  &lt;span class="s"&gt;----           ------  ------&lt;/span&gt;
  &lt;span class="s"&gt;Available      True    MinimumReplicasAvailable&lt;/span&gt;
  &lt;span class="s"&gt;Progressing    True    NewReplicaSetAvailable&lt;/span&gt;
&lt;span class="na"&gt;OldReplicaSets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="s"&gt;&amp;lt;none&amp;gt;&lt;/span&gt;
&lt;span class="na"&gt;NewReplicaSet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;   &lt;span class="s"&gt;sample-app-578d67447d (3/3 replicas created)&lt;/span&gt;
&lt;span class="na"&gt;Events&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="s"&gt;Type    Reason             Age   From                   Message&lt;/span&gt;
  &lt;span class="s"&gt;----    ------             ----  ----                   -------&lt;/span&gt;
  &lt;span class="s"&gt;Normal  ScalingReplicaSet  10m   deployment-controller  Scaled up replica set sample-app-578d67447d to &lt;/span&gt;&lt;span class="m"&gt;3&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To confirm that the nginx application is actually running on Fargate infrastructure, check the EC2 console: you will not find any EC2 instances running as part of the EKS cluster, because the underlying infrastructure is fully managed by AWS. The worker nodes show up as Fargate nodes instead:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl get nodes
NAME                                    STATUS   ROLES    AGE   VERSION
fargate-ip-10-172-36-16.ec2.internal    Ready    &amp;lt;none&amp;gt;   12m   v1.18.8-eks-7c9bda
fargate-ip-10-172-10-28.ec2.internal    Ready    &amp;lt;none&amp;gt;   12m   v1.18.8-eks-7c9bda
fargate-ip-10-172-62-216.ec2.internal   Ready    &amp;lt;none&amp;gt;   12m   v1.18.8-eks-7c9bda


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Another experiment to try is creating a new deployment with an image that is not in ECR: the pod will sit in the ImagePullBackOff state, because the private cluster cannot reach Docker Hub on the public internet to pull the image.&lt;/p&gt;
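&lt;p&gt;To try this out, a throwaway deployment pointing at a public-registry image (busybox here, purely illustrative) is enough:&lt;/p&gt;

```shell
# Writes a deployment that references a public-registry image; in this
# private cluster the pod should stay in ImagePullBackOff after applying.
cat > public-image-test.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pull-test
  namespace: custom-space
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pull-test
  template:
    metadata:
      labels:
        app: pull-test
    spec:
      containers:
        - name: busybox
          image: busybox:latest
          command: ["sleep", "3600"]
EOF
# kubectl apply -f public-image-test.yaml
# kubectl get pods -n custom-space    # expect ImagePullBackOff
```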

&lt;h3&gt;
  Closing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;In this exercise, we created a private EKS on Fargate cluster with private API endpoints.&lt;/li&gt;
&lt;li&gt;This deployment addresses some of the compliance challenges faced by BFSI (banking, financial services and insurance) and other regulated sectors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Please let me know if you face challenges replicating this in your own environment. PRs to improve this documentation are welcome. Happy learning!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>security</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
