<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: AnupamMahapatra</title>
    <description>The latest articles on DEV Community by AnupamMahapatra (@anupamncsu).</description>
    <link>https://dev.to/anupamncsu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F346881%2F6fc559cc-f555-4d57-9bd7-2f7306696a5a.jpeg</url>
      <title>DEV Community: AnupamMahapatra</title>
      <link>https://dev.to/anupamncsu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/anupamncsu"/>
    <language>en</language>
    <item>
      <title>Istio - An Introduction to service mesh and its dashboard</title>
      <dc:creator>AnupamMahapatra</dc:creator>
      <pubDate>Mon, 11 May 2020 04:31:42 +0000</pubDate>
      <link>https://dev.to/anupamncsu/istio-an-introduction-to-service-mesh-and-its-dashboard-5gj3</link>
      <guid>https://dev.to/anupamncsu/istio-an-introduction-to-service-mesh-and-its-dashboard-5gj3</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ya_KI0cU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ohjb053zreuo7lrq61vz.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ya_KI0cU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ohjb053zreuo7lrq61vz.jpeg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a service mesh and why do we need it?
&lt;/h2&gt;

&lt;p&gt;The term service mesh describes the network of microservices that make up a distributed application, together with the interactions between them. As a service mesh grows in size and complexity, it becomes harder to understand and manage. Its requirements can include discovery, load balancing, failure recovery, metrics, and monitoring. A service mesh also often has more complex operational requirements, like A/B testing, canary releases, rate limiting, access control, and end-to-end authentication.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Istio?
&lt;/h2&gt;

&lt;p&gt;Istio service mesh is a &lt;strong&gt;sidecar container implementation&lt;/strong&gt; of the features and functions needed to provide &lt;strong&gt;behavioral insights and operational control&lt;/strong&gt; over the microservices cluster as a whole. We get these benefits with &lt;strong&gt;no changes to our source code.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
By using the sidecar model, Istio runs in a Linux container in our Kubernetes pods (much like a sidecar rides alongside a motorcycle) and injects and extracts functionality and information based on configuration that lives outside of our code, reducing complexity in the application itself. It also &lt;strong&gt;moves operational aspects away from code development&lt;/strong&gt; and into the domain of operations.&lt;/p&gt;

&lt;p&gt;Istio's functionality running outside of our source code introduces the concept of a service mesh: a coordinated group of one or more binaries that make up a &lt;strong&gt;mesh of networking functions&lt;/strong&gt; such as:   &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Traffic Management:&lt;/strong&gt; Control the flow of traffic and API calls between services, make calls more reliable, and make the network more robust in the face of adverse conditions. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Observability:&lt;/strong&gt; Gain understanding of the dependencies between services and the nature and flow of traffic between them, providing the ability to quickly identify issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Policy Enforcement:&lt;/strong&gt; Apply organizational policy to the interaction between services, ensure access policies are enforced and resources are fairly distributed among consumers. Policy changes are made by configuring the mesh, not by changing application code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service Identity and Security:&lt;/strong&gt; Provide services in the mesh with a verifiable identity and provide the ability to protect service traffic as it flows over networks of varying degrees of trustability.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  Istio Architecture
&lt;/h2&gt;

&lt;p&gt;An Istio service mesh is logically split into a &lt;strong&gt;data plane and a control plane.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;data plane&lt;/strong&gt; is composed of a set of intelligent &lt;strong&gt;Envoy proxies&lt;/strong&gt; deployed as sidecars. These proxies mediate and control all network communication between microservices along with &lt;strong&gt;Mixer&lt;/strong&gt;, a general-purpose policy and telemetry hub.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;control plane&lt;/strong&gt; manages and configures the proxies to route traffic. Additionally, the control plane configures Mixers to enforce policies and collect telemetry.&lt;/p&gt;

&lt;p&gt;The following diagram shows the different components that make up each plane: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xx4t4t5b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/q3dtas78fochxd620uv2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xx4t4t5b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/q3dtas78fochxd620uv2.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Envoy:&lt;/strong&gt; Mediate all inbound and outbound traffic for all services in the service mesh.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mixer:&lt;/strong&gt; Enforces access control and usage policies across the service mesh, and collects telemetry data from the Envoy proxy.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pilot:&lt;/strong&gt; Pilot provides service discovery for the sidecars, traffic management capabilities for intelligent routing (e.g., A/B tests, canary deployments, etc.), and resiliency (timeouts, retries, circuit breakers, etc.). It converts high level routing rules into Envoy-specific configurations, and propagates them to the sidecars at runtime.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Citadel:&lt;/strong&gt; Citadel provides strong service-to-service and end-user authentication with built-in identity and credential management. &lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;For this setup I am using a local Kubernetes cluster on my machine; Docker Desktop now ships with a built-in Kubernetes setup. I assume you have a cluster ready and kubectl configured to talk to it.&lt;/p&gt;

&lt;p&gt;Once the Kubernetes cluster has started, we create a &lt;strong&gt;namespace: istio-system&lt;/strong&gt; and start all of the Istio-related components there. From then on, as we create projects and pods, Istio adds configuration information to our deployments, and our pods will use Istio. &lt;/p&gt;

&lt;p&gt;Istio is installed in two parts. The first part is the CLI tooling used to deploy and manage Istio-backed services. The second part configures the Kubernetes cluster to support Istio.&lt;/p&gt;
&lt;h3&gt;
  
  
  Install CLI tooling
&lt;/h3&gt;

&lt;p&gt;The following command downloads the latest Istio release (1.5.2 at the time of writing) to your local machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -L https://istio.io/downloadIstio | sh -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, add the istioctl client in the downloaded folder to your path:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd istio-1.5.2
export PATH=$PWD/bin:$PATH
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configure Istio
&lt;/h3&gt;

&lt;p&gt;For this blog, we will apply the &lt;strong&gt;demo&lt;/strong&gt; configuration profile.&lt;br&gt;&lt;br&gt;
The profiles provide customization of the Istio control plane and of the sidecars for the Istio data plane. You can start with one of Istio’s built-in &lt;a href="https://istio.io/docs/setup/additional-setup/config-profiles/"&gt;configuration profiles&lt;/a&gt; and then further customize the configuration for your specific needs. &lt;/p&gt;

&lt;p&gt;For this demo, please fork the files from &lt;a href="https://github.com/anupam-ncsu/AWS-KubernetesResources/tree/master/Istio/manifest"&gt;here&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;istioctl manifest apply --set profile=demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Add a &lt;a href="https://github.com/anupam-ncsu/AWS-KubernetesResources/blob/master/Istio/manifest/namespace-dev.yaml"&gt;namespace&lt;/a&gt; label to instruct Istio to automatically inject Envoy sidecar proxies when you deploy your application later. In this case, every app deployed to &lt;strong&gt;namespace: dev&lt;/strong&gt; will have Istio enabled.
&lt;/li&gt;
&lt;/ul&gt;
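&lt;p&gt;For reference, the namespace manifest itself can be as small as this (an illustrative sketch; see the linked repo for the actual file):&lt;/p&gt;

```yaml
# Minimal sketch of a manifest like namespace-dev.yaml
# (illustrative; the repo's actual file may carry extra metadata)
apiVersion: v1
kind: Namespace
metadata:
  name: dev
```

&lt;p&gt;The istio-injection=enabled label is then attached with kubectl as shown next; it could equally be declared in the manifest's metadata.labels.&lt;/p&gt;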

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply - f manifest/namespace-dev.yaml
kubectl label namespace dev istio-injection=enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deploy Applications
&lt;/h3&gt;

&lt;p&gt;For simplicity, we will use the famous &lt;a href="https://github.com/anupam-ncsu/AWS-KubernetesResources/blob/master/Istio/manifest/bookinginfo-dev.yaml"&gt;Bookinfo sample published by the Istio team, with minor modifications&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f manifest/bookinginfo-dev.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The application will start. As each pod becomes ready, the Istio sidecar will deploy along with it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get services -n dev

kubectl get pods -n dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait till the pods are in the ready state.&lt;br&gt;&lt;br&gt;
Verify everything is working correctly up to this point. Run this command to see if the app is running inside the cluster and serving HTML pages by checking for the page title in the response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec -it $(kubectl get pod -l app=ratings -n dev -o jsonpath='{.items[0].metadata.name}') -n  dev -c ratings -- curl productpage:9080/productpage | grep -o "&amp;lt;title&amp;gt;.*&amp;lt;/title&amp;gt;"

&amp;lt;title&amp;gt;Simple Bookstore App&amp;lt;/title&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Istio Ingress: Open application to traffic
&lt;/h3&gt;

&lt;p&gt;The Bookinfo application is deployed but not accessible from the outside.   &lt;/p&gt;

&lt;p&gt;To make it accessible, you need to create an &lt;a href="https://github.com/anupam-ncsu/AWS-KubernetesResources/blob/master/Istio/manifest/bookinginfo-ingress.yaml"&gt;Istio Ingress Gateway&lt;/a&gt;, which maps a path to a route at the edge of your mesh.&lt;br&gt;&lt;br&gt;
Gateway configurations are applied to standalone Envoy proxies running at the edge of the mesh, rather than to the sidecar Envoy proxies running alongside your service workloads. Unlike other mechanisms for controlling traffic entering your systems, such as the Kubernetes Ingress APIs, Istio gateways let you use the full power and flexibility of Istio’s traffic routing. This is because Istio’s Gateway resource only configures layer 4-6 load-balancing properties such as ports to expose, TLS settings, and so on. Instead of adding application-layer (L7) traffic routing to the same API resource, you bind a regular Istio virtual service to the gateway. This lets you manage gateway traffic like any other data-plane traffic in an Istio mesh.&lt;br&gt;
&lt;/p&gt;
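&lt;p&gt;The shape of such a manifest looks roughly like the following sketch, modeled on Istio's Bookinfo gateway sample (resource names assumed; the linked repo has the actual file):&lt;/p&gt;

```yaml
# Illustrative Gateway + VirtualService pair for Bookinfo
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
  namespace: dev
spec:
  selector:
    istio: ingressgateway   # use Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
  namespace: dev
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    route:
    - destination:
        host: productpage
        port:
          number: 9080
```

&lt;p&gt;Note the split of responsibilities: the Gateway only exposes port 80 at the edge, while the VirtualService bound to it carries the L7 routing to the productpage service.&lt;/p&gt;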

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f bookinginfo-ingress.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Confirm that the gateway was created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get gateway -n dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Ingress IP and Ports
&lt;/h3&gt;

&lt;p&gt;Check the ingress gateway External IP&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                                                                                      AGE
istio-ingressgateway   LoadBalancer   10.111.214.48   localhost     15020:30172/TCP,80:30126/TCP,443:30000/TCP,15029:30787/TCP,15030:30922/TCP,15031:31764/TCP,15032:31819/TCP,31400:31537/TCP,15443:31256/TCP   17h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is &amp;lt;none&amp;gt; (or perpetually &amp;lt;pending&amp;gt;), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service’s node port.&lt;/p&gt;
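&lt;p&gt;If you do need the node-port fallback, commands along these lines (following the pattern from the Istio docs, and assuming the demo profile's http2/https port names) pull the ports out of the service:&lt;/p&gt;

```shell
# Fetch the node ports for HTTP and HTTPS on the ingress gateway;
# the port names "http2" and "https" are those used by the demo profile
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
```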

&lt;p&gt;As I am running the Kubernetes cluster locally, the hostname is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;127.0.0.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The port is the one we specified in our gateway definition: 80.&lt;br&gt;
The apps are exposed through the VirtualService, which links to the &lt;strong&gt;productpage&lt;/strong&gt; app. &lt;/p&gt;
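&lt;p&gt;Putting host and port together, the product page URL for this local cluster can be composed as follows (values taken from the sections above):&lt;/p&gt;

```shell
# Compose the gateway URL for the local cluster
INGRESS_HOST=127.0.0.1
INGRESS_PORT=80
GATEWAY_URL="$INGRESS_HOST:$INGRESS_PORT"
echo "http://$GATEWAY_URL/productpage"
# prints http://127.0.0.1:80/productpage
```

&lt;p&gt;Opening that URL in a browser should show the Bookinfo product page.&lt;/p&gt;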
&lt;h2&gt;
  
  
  Microservice structure visualization
&lt;/h2&gt;

&lt;p&gt;We can visualize the app through Istio's default dashboard, Kiali.&lt;br&gt;
Open a new terminal instance and execute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;istioctl dashboard kiali
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The visualization makes the app simple to understand and debug.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TpWigq6S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/f0zezxg32ad9b00nc7j3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TpWigq6S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/f0zezxg32ad9b00nc7j3.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice that we linked only the product page through the ingress virtual service. The rest of the mesh is connected through its own service definitions.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>istio</category>
      <category>microservices</category>
      <category>aws</category>
    </item>
    <item>
      <title>Deploying EKS the Hard Way.</title>
      <dc:creator>AnupamMahapatra</dc:creator>
      <pubDate>Sat, 18 Apr 2020 06:36:12 +0000</pubDate>
      <link>https://dev.to/anupamncsu/deploying-eks-the-hard-way-28d8</link>
      <guid>https://dev.to/anupamncsu/deploying-eks-the-hard-way-28d8</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f__pEpsb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fujk4sav6mtehobc7kyq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f__pEpsb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fujk4sav6mtehobc7kyq.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/eks/"&gt;EKS&lt;/a&gt; is a managed kubernetes service from Amazon AWS. There is nothing difficult about it(Sorry for the misleading title!). That being said, there are still a lot of moving pieces in the backend, to make this peachy platform work.&lt;br&gt;&lt;br&gt;
So instead of going the universally acclaimed easiest route of deploying though &lt;a href="https://eksctl.io/"&gt;EKSCTL&lt;/a&gt;, I will be going into the individual components that we need to make this cluster work, there by understanding more about them instead of just spinning up through a cli command.&lt;/p&gt;
&lt;h4&gt;
  
  
  Assuming you have a fresh AWS account with just a root user to sign in with, we will do this setup in three phases.
&lt;/h4&gt;
&lt;h3&gt;
  
  
  1. Create a highly available, three-subnet network stack on AWS.
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://dev.to/anupamncsu/aws-virtual-private-cloud-setup-as-a-city-analogy-3ekc"&gt;Please read my blog&lt;/a&gt; to understand and create it. It will be just a single Cloudformation stack to get this setup. We need this to deploy our worker nodes of EKS cluster into.&lt;/p&gt;
&lt;h3&gt;
  
  
  2. Create an admin user to deploy and administer the EKS cluster.
&lt;/h3&gt;

&lt;p&gt;It is always best practice not to use the root account for day-to-day AWS work. &lt;a href="https://dev.to/anupamncsu/aws-access-through-users-groups-3253"&gt;In this blog I explain how to create a user&lt;/a&gt; with another CloudFormation stack. That blog creates two users, but for simplicity we will use only the admin user in this tutorial. It also explains how to configure user access from your local desktop. We will need this access later to deploy apps into EKS; the famous kubectl has it as a prerequisite. Going forward, I assume you have completed this step and have a working AWS CLI with an admin profile.&lt;/p&gt;
&lt;h3&gt;
  
  
  3. Deploy the cluster
&lt;/h3&gt;

&lt;p&gt;This section consists of creating the IAM resources for the cluster and then the compute resources. This is the meat and potatoes of this blog, so let's explain them in detail.&lt;/p&gt;


&lt;h4&gt;
  
  
  EKS IAM Resources
&lt;/h4&gt;

&lt;p&gt;The EKS cluster consists of two defined sections: the EKS &lt;strong&gt;service&lt;/strong&gt; that AWS maintains, and the worker &lt;strong&gt;node&lt;/strong&gt; cluster for which you are responsible for providing the details and security. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Both these sections have two different IAM roles with a defined set of managed policies attached to them. These can be defined as:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;###### EKS Service Role ######
  EksClusterServiceRole:
    Type: AWS::IAM::Role
    Properties: 
      AssumeRolePolicyDocument: 
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
              - eks.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      Description: EksClusterServiceRole
      ManagedPolicyArns: 
        - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
        - arn:aws:iam::aws:policy/AmazonEKSServicePolicy
      MaxSessionDuration: 3600
      Path: /
      RoleName: !Ref EksClusterServiceRoleName
      Tags: 
        - Key: Environment
          Value: !Ref Environment

###### EKS Node Role ######
  EksClusterNodeRole:
    Type: AWS::IAM::Role
    Properties: 
      AssumeRolePolicyDocument: 
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
              - ec2.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      Description: EksClusterNodeRole
      ManagedPolicyArns: 
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
      MaxSessionDuration: 3600
      Path: /
      RoleName: !Ref EksClusterNodeRoleName
      Tags: 
        - Key: Environment
          Value: !Ref Environment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Both these compute sections need their own security groups. To design these groups, we have to think beyond the ports they need to reach each other for the cluster to function (&lt;strong&gt;ports 1025-65535&lt;/strong&gt;) and consider who else accesses them. The service section is accessed from outside when we run kubectl from our desktops; we can allow a definite IP or a known CIDR range if we know what those IPs might be. In this case, for simplicity, I have opened it to the whole world. The worker nodes need to be accessed by whatever consumes the apps running inside them, which again needs further scrutiny based on what accesses your apps. For example, if you plan to deploy web applications and expose them only to an internal load balancer, you can keep a defined port open to just that load balancer, which keeps things secure. For the purpose of simplicity, I have kept the HTTPS port 443 open to the whole world.
These two groups can be defined as follows:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## Service Security Group ##
  EksServiceSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties: 
      GroupDescription: EksServiceSecurityGroup
      GroupName: !Ref EksServiceSecurityGroupName
      Tags: 
        - Key: Environment
          Value: !Ref Environment
      VpcId: !Ref Vpc

## Node Security Group ##
  EksNodeSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    DependsOn: EksServiceSecurityGroup
    Properties: 
      GroupDescription: EksNodeSecurityGroup
      GroupName: !Ref EksNodeSecurityGroupName
      Tags: 
        - Key: Environment
          Value: !Ref Environment
      VpcId: !Ref Vpc 

### Service  Security group Ingress ###
  EksServiceSecurityGroupIngress1:
    Type: AWS::EC2::SecurityGroupIngress
    DependsOn: EksNodeSecurityGroup
    Properties:
      GroupId: !Ref EksServiceSecurityGroup
      IpProtocol: tcp
      FromPort: 443
      ToPort: 443
      SourceSecurityGroupId: !Ref EksNodeSecurityGroup

### Service  Security group Egress ###
  EksServiceSecurityGroupEgress1:    
    Type: AWS::EC2::SecurityGroupEgress
    DependsOn: EksNodeSecurityGroup
    Properties: 
      DestinationSecurityGroupId: !Ref EksNodeSecurityGroup
      FromPort: 1025
      ToPort: 65535 
      IpProtocol: tcp
      GroupId: !Ref EksServiceSecurityGroup

  EksServiceSecurityGroupEgress2:    
    Type: AWS::EC2::SecurityGroupEgress
    DependsOn: EksNodeSecurityGroup
    Properties: 
      DestinationSecurityGroupId: !Ref EksNodeSecurityGroup
      FromPort: 443
      ToPort: 443 
      IpProtocol: tcp
      GroupId: !Ref EksServiceSecurityGroup

### Node Security group Ingress ###
  # Open every port to itself
  EksNodeSecurityGroupIngress1:
    Type: AWS::EC2::SecurityGroupIngress
    DependsOn: EksNodeSecurityGroup
    Properties:
      GroupId: !Ref EksNodeSecurityGroup
      IpProtocol: -1
      FromPort: -1
      ToPort: -1
      SourceSecurityGroupId: !Ref EksNodeSecurityGroup

  # open 1024-65535 to service SG
  EksNodeSecurityGroupIngress2:
    Type: AWS::EC2::SecurityGroupIngress
    DependsOn: EksNodeSecurityGroup
    Properties:
      IpProtocol: tcp
      FromPort: 1025
      ToPort: 65535 
      GroupId: !Ref EksNodeSecurityGroup
      SourceSecurityGroupId: !Ref EksServiceSecurityGroup 

  # open 443 to Service SG
  EksNodeSecurityGroupIngress3:
    Type: AWS::EC2::SecurityGroupIngress
    DependsOn: EksNodeSecurityGroup
    Properties:
      IpProtocol: tcp
      FromPort: 443
      ToPort: 443 
      GroupId: !Ref EksNodeSecurityGroup
      SourceSecurityGroupId: !Ref EksServiceSecurityGroup

### Node Security group Egress ###
  # All open
  EksNodeSecurityGroupEgress1:    
    Type: AWS::EC2::SecurityGroupEgress
    DependsOn: EksNodeSecurityGroup
    Properties: 
      FromPort: -1
      ToPort: -1 
      GroupId: !Ref EksNodeSecurityGroup
      IpProtocol: -1
      CidrIp: 0.0.0.0/0 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;The last resource we need is a KMS key the cluster will use to encrypt data at rest. We will use AWS KMS to create, store, and manage access to this key. The EKS cluster will use it to encrypt the secrets and configurations stored in it. The key has access defined for the admin user and root. We can grant more principals access to this key as needed. For example, if a user on AWS Lambda queries the EKS cluster for some data, that user will need certain permissions on this key (List, Decrypt, etc.) to be able to read the data from the EKS cluster.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## SSH Key for cluster secret encryption
  EksKMS:
    Type: AWS::KMS::Key
    Properties:
      Description: Key for EKS cluster to use when encrypting your Kubernetes secrets
      KeyPolicy:
        Version: '2012-10-17'
        Id: EKS-key
        Statement:
        - Sid: Enable IAM User Permissions
          Effect: Allow
          Principal:
            AWS: !Sub arn:aws:iam::${AWS::AccountId}:root
          Action: kms:*
          Resource: '*'
        - Sid: Allow administration of the key
          Effect: Allow
          Principal:
            AWS: !Sub 
            - 'arn:aws:iam::${AWS::AccountId}:user/${AWSaccountAdminUserName}'
            - {AWSaccountAdminUserName: !Ref AWSaccountAdminUserName}
          Action:
          - kms:Create*
          - kms:Describe*
          - kms:Enable*
          - kms:List*
          - kms:Put*
          - kms:Update*
          - kms:Revoke*
          - kms:Disable*
          - kms:Get*
          - kms:Delete*
          - kms:ScheduleKeyDeletion
          - kms:CancelKeyDeletion
          Resource: '*'

  EksKMSAlias:
    Type: AWS::KMS::Alias
    Properties: 
      AliasName: !Ref EksKMSAliasName
      TargetKeyId: !GetAtt EksKMS.Arn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This complete set of resources will spin up as a single CloudFormation stack. &lt;a href="https://github.com/anupam-ncsu/AWS-CloudResources/blob/master/EKScluster/EKS-IAMresources.yaml"&gt;Please fork my github file here.&lt;/a&gt;&lt;/p&gt;
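&lt;p&gt;One way to deploy it from the command line (the stack name and profile name are illustrative; add parameter overrides to match the template's parameters):&lt;/p&gt;

```shell
# Deploy the IAM stack; CAPABILITY_NAMED_IAM is required because
# the template creates roles with explicit RoleName values
aws cloudformation deploy \
  --template-file EKS-IAMresources.yaml \
  --stack-name eks-iam-resources \
  --capabilities CAPABILITY_NAMED_IAM \
  --profile admin
```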
&lt;h3&gt;
  
  
  EKS Compute Resource
&lt;/h3&gt;

&lt;p&gt;This section deploys the actual bare metal of your cluster (&lt;strong&gt;hah, gotcha! There is no bare metal; it's AWS, it's up on some cloud in the sky&lt;/strong&gt;). &lt;br&gt;
We deploy the service section of EKS first, and after that the worker node cluster. Every parameter we use here comes from the last two templates. &lt;br&gt;
It is worth mentioning that the smallest EC2 t-shirt size you can use is t2.small, and the smallest disk that can be attached to it is 4 GB.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## EKS Build ##
  EKS:
    Type: AWS::EKS::Cluster
    Properties:
      Name: !Ref EKSClusterName
      ResourcesVpcConfig:
        SecurityGroupIds: !Ref EksServiceSecurityGroupID
        SubnetIds: !Ref SubnetIds
      RoleArn: !Ref EksClusterServiceRole
      Version: !Ref KubernetesVersion

  EKSNodegroup:
    Type: AWS::EKS::Nodegroup
    DependsOn: EKS
    Properties:
      AmiType: !Ref AmiType
      ClusterName: !Ref EKS
      DiskSize: !Ref DiskSize
      ForceUpdateEnabled: !Ref ForceUpdateEnabled
      InstanceTypes:
        - !Ref InstanceTypes
      NodegroupName: !Ref EKSManagedNodeGroupName
      NodeRole: !Ref EksClusterNodeRoleIP
      ScalingConfig:
        MinSize: !Ref NodeAutoScalingGroupMinSize
        DesiredSize: !Ref NodeAutoScalingGroupDesiredSize
        MaxSize: !Ref NodeAutoScalingGroupMaxSize
      Subnets: !Ref SubnetIds
      RemoteAccess:
        Ec2SshKey: !Ref Ec2SshKey
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This complete set of resources will spin up as a single CloudFormation stack. &lt;a href="https://github.com/anupam-ncsu/AWS-CloudResources/blob/master/EKScluster/EKS-ComputeResources.yaml"&gt;Please fork my github file here.&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;The cluster takes about 10-15 mins to be ready. To access the cluster, we will need to install &lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/"&gt;kubectl&lt;/a&gt; on our local system. Kubectl is a command line tool for controlling Kubernetes clusters, allowing you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.&lt;/p&gt;
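&lt;p&gt;For example, on macOS one common route is Homebrew (an assumption on my part; the linked kubectl docs cover other platforms and methods):&lt;/p&gt;

```shell
# Install kubectl via Homebrew and confirm the client version
brew install kubectl
kubectl version --client
```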

&lt;p&gt;We need to configure kubectl to talk to our EKS cluster. For this, we edit the file &lt;strong&gt;~/.kube/config&lt;/strong&gt; with the details of the cluster, found on the EKS tab of the AWS console. The format of the file is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;clusters:
- cluster:
    certificate-authority-data: &amp;lt;&amp;lt; TOKEN &amp;gt;&amp;gt;
    server: &amp;lt;&amp;lt; ENDPOINT URL &amp;gt;&amp;gt;
  name: &amp;lt;&amp;lt; ARN OF THE CLUSTER &amp;gt;&amp;gt;
contexts:
- context:
    cluster: &amp;lt;&amp;lt; ARN OF THE CLUSTER &amp;gt;&amp;gt;
    user: &amp;lt;&amp;lt; ARN OF THE CLUSTER &amp;gt;&amp;gt;
  name: &amp;lt;&amp;lt; CLUSTER ID &amp;gt;&amp;gt;
current-context: &amp;lt;&amp;lt; CLUSTER ID &amp;gt;&amp;gt;
kind: Config
preferences: {}
users:
- name: &amp;lt;&amp;lt; CLUSTER ID &amp;gt;&amp;gt;
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - &amp;lt;&amp;lt; CLUSTER ID &amp;gt;&amp;gt;
      - -r
      - &amp;lt;&amp;lt; ADMIN USER ROLE ARN &amp;gt;&amp;gt;
      command: aws-iam-authenticator
      env:
      - name: AWS_PROFILE
        value: &amp;lt;&amp;lt; AWS PROFILE &amp;gt;&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; This is error prone; a much easier way is to leverage the AWS CLI and its admin profile to modify this file for you by executing the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks update-kubeconfig --name my-eks-cluster --profile &amp;lt;ADMIN-PROFILE&amp;gt; --region &amp;lt;EKS-Cluster-Region&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything is good up to this point, you should be able to run a sane kubectl command to see what the fuss is all about.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  WARNING:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Enjoy your K8s responsibly.&lt;br&gt;
Please don't keep this setup running. CloudFormation stacks are super easy to recreate, so take this down once you are done for the day. Mr. Bezos has enough $$$s as we speak.&lt;/strong&gt; &lt;/p&gt;

</description>
      <category>aws</category>
      <category>eks</category>
      <category>kubernetes</category>
      <category>docker</category>
    </item>
    <item>
      <title>AWS access through Users/Groups</title>
      <dc:creator>AnupamMahapatra</dc:creator>
      <pubDate>Mon, 13 Apr 2020 23:52:24 +0000</pubDate>
      <link>https://dev.to/anupamncsu/aws-access-through-users-groups-3253</link>
      <guid>https://dev.to/anupamncsu/aws-access-through-users-groups-3253</guid>
<description>&lt;p&gt;When we open an account in AWS, we start as the root user. While you can do anything with that user, it is important to follow best practice and create groups and users for each type of access you might need.&lt;br&gt;
In my case, I was spinning up an EKS cluster, and I thought it gave me an opportunity to write a small blog on how to go about it.&lt;br&gt;
We can create resources and query them through the AWS API using the AWS CLI. The CLI gives us a lot of power to do things through the API, but it needs access. Access is granted to users, and each account can have multiple users. Instead of attaching permissions directly to users, we put users into a group. These groups then get access through policies. Each group can have one or more users.&lt;/p&gt;

&lt;p&gt;Each &lt;strong&gt;USER&lt;/strong&gt; is created within a certain &lt;strong&gt;GROUP&lt;/strong&gt;, and each group is given access to resources dictated by &lt;strong&gt;POLICIES&lt;/strong&gt;. This way, you can manage access for multiple users at once.&lt;/p&gt;

&lt;p&gt;In my use case, I am creating an admin group for the entire AWS account and an EKS admin group just for the EKS service.&lt;/p&gt;

&lt;p&gt;Both of these can be created using CloudFormation through your root account.&lt;/p&gt;

&lt;p&gt;The following creates an admin user and group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Admin user and group for an account'

Parameters:
  UserName:
    Type: String
    Description: User name MUST be unique per account or it will create an irreversible error

  AWSAdminPassword:
    Type: String

Resources:
## Custom Group ###
  AWSAdminIAMGroup:
    Type: AWS::IAM::Group
    Properties: 
      GroupName: AWS-admins
      Path: /
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AdministratorAccess

### Custom User ###
  AWSAminIAMUser:
    Type: AWS::IAM::User
    Properties: 
      LoginProfile:
        Password: !Ref AWSAdminPassword
      Groups: 
        - !Ref AWSAdminIAMGroup
      Path: /
      UserName: !Ref UserName
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This user is supposed to have access through the console, hence I provided the username and password.&lt;/p&gt;

&lt;p&gt;The second user is for programmatic access to the EKS API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Setting up a group and a user for EKS admin'

Resources:

## Custom Group ###
  EKSAminIAMGroup:
    Type: AWS::IAM::Group
    Properties: 
      GroupName: EKS-admins
      Path: /


### Custom User ###
  EKSAminIAMUser:
    Type: AWS::IAM::User
    Properties: 
      Groups: 
        - !Ref EKSAminIAMGroup
      Path: /

### Custom Policy ###
  EKSAdminIAMpolicy:
    Type: AWS::IAM::Policy
    Properties: 
      Groups:
        - !Ref EKSAminIAMGroup
      PolicyDocument: 
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - 'eks:*'
                Resource: '*'
      PolicyName: EKSAdminIAMpolicy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
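

&lt;p&gt;Either template can also be deployed straight from the CLI. A sketch, assuming the file is saved as &lt;strong&gt;admin-iam.yml&lt;/strong&gt; (the file name, stack name, and parameter values here are placeholders); the IAM resources require the CAPABILITY_NAMED_IAM acknowledgement:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws cloudformation deploy \
    --template-file admin-iam.yml \
    --stack-name admin-iam \
    --capabilities CAPABILITY_NAMED_IAM \
    --parameter-overrides UserName=&amp;lt;UNIQUE-USER-NAME&amp;gt; AWSAdminPassword=&amp;lt;A-STRONG-PASSWORD&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;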



&lt;ul&gt;
&lt;li&gt;Create CloudFormation stacks with these files to create the resources.&lt;/li&gt;
&lt;li&gt;Each file creates a group, a user, and a policy attached to them.&lt;/li&gt;
&lt;li&gt;To access a user from a local terminal, we need to configure that user's keys as an AWS profile.&lt;/li&gt;
&lt;li&gt;After the user is created, go to: AWS &amp;gt; IAM &amp;gt; Users &amp;gt; [User] &amp;gt; Security Credentials&lt;/li&gt;
&lt;li&gt;Create &lt;strong&gt;Access Keys&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Copy the &lt;strong&gt;Access Key ID&lt;/strong&gt; and &lt;strong&gt;Secret Access Key&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Open the credentials file in your terminal:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano ~/.aws/configure

$credentials
[SOME-PROFILE-NAME]
aws_access_key_id=&amp;lt;COPIED FROM AWS&amp;gt;
aws_secret_access_key=&amp;lt;COPIED FROM AWS&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;After configuring both user accounts, my credentials file looks as such:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[p-admin]
aws_access_key_id=FAKEFAKEFAKEFAKEFAKEFAKEFAKEFAKE
aws_secret_access_key=FAKEFAKEFAKEFAKEFAKEFAKEFAKEFAKE

[p-eks]
aws_access_key_id=FAKEFAKEFAKEFAKEFAKEFAKEFAKEFAKE
aws_secret_access_key=FAKEFAKEFAKEFAKEFAKEFAKEFAKEFAKE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;After the profile is configured, we can run AWS CLI commands referencing the profile:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks list-clusters --profile [p-eks] --region [REGION-NAME]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
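

&lt;p&gt;A quick way to confirm a profile's keys are being picked up is to ask AWS who you are:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws sts get-caller-identity --profile p-eks
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;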



&lt;p&gt;The AWS admin user can be used to log in to the console as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to : &lt;a href="https://aws.amazon.com/console/"&gt;https://aws.amazon.com/console/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Choose to log in as an IAM user and NOT the root user&lt;/li&gt;
&lt;li&gt;Once logged in, please set up &lt;strong&gt;Multi-Factor Authentication (MFA)&lt;/strong&gt; at:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;IAM &amp;gt; User &amp;gt; [User Name]&amp;gt; [Security Credentials] &amp;gt; Assigned MFA device &amp;gt; Virtual MFA device
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>aws</category>
      <category>iam</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Just a HelloWorld Nginx webserver !. with SSL</title>
      <dc:creator>AnupamMahapatra</dc:creator>
      <pubDate>Tue, 07 Apr 2020 13:18:46 +0000</pubDate>
      <link>https://dev.to/anupamncsu/just-a-helloworld-nginx-webserver-with-ssl-2hjg</link>
      <guid>https://dev.to/anupamncsu/just-a-helloworld-nginx-webserver-with-ssl-2hjg</guid>
<description>&lt;p&gt;There are many occasions when, to test out a system, we want a running example webserver at the end of it. An example with no bells and whistles. Just a plain webserver with a self-signed cert will do. So let me dumb it down.&lt;/p&gt;

&lt;p&gt;The following script will give you just that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download and install the nginx web server.&lt;/li&gt;
&lt;li&gt;Generate a self-signed cert using openssl.&lt;/li&gt;
&lt;li&gt;Generate a couple of static webpages to be served through nginx at different paths.&lt;/li&gt;
&lt;li&gt;Edit the nginx configuration to point to the webpages and serve them with the SSL certificate.&lt;/li&gt;
&lt;li&gt;Install firewalld and open firewall permissions for http and https.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Pre-requisite:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Running on a CentOS/RedHat Linux with yum installed.&lt;/li&gt;
&lt;li&gt;openssl installed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Create the following script (&lt;strong&gt;SinglePageNginx.sh&lt;/strong&gt;), give it execute access (&lt;strong&gt;chmod 755 SinglePageNginx.sh&lt;/strong&gt;), and run it (&lt;strong&gt;./SinglePageNginx.sh&lt;/strong&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
set -e

# Create self signed cert for HTTPS reverse proxy as Nginx
openssl genrsa -out /tmp/app.key 2048
openssl req -new -key /tmp/app.key -out /tmp/app.csr -subj "/C=CA/ST=ON/L=Toronto/O=Digital/OU=IT/CN=app.local.com"
openssl x509 -req -days 365 -in /tmp/app.csr -signkey /tmp/app.key -out /tmp/app.crt
chmod 644 /tmp/app.crt /tmp/app.key
echo "self signed cert done" &amp;gt;&amp;gt; /tmp/debug.log

# Install and configure nginx for HTTPS
yum -y install nginx
mkdir -p /etc/nginx/ssl
mv -f /tmp/app.key /etc/nginx/ssl/app.key
mv -f /tmp/app.crt /etc/nginx/ssl/app.crt
chmod 755 /etc/nginx/ssl
chmod -R 644 /etc/nginx/ssl/*
mv -f /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
# mv -f /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.bak

######################
# STATIC WEB PAGE
######################
mkdir -p /etc/nginx/www
cat &amp;gt; /etc/nginx/www/index.html &amp;lt;&amp;lt;'EOF'  
&amp;lt;h1&amp;gt; Hello There&amp;lt;/h1&amp;gt;
  &amp;lt;p&amp;gt;
    This webpage is served through nginx
  &amp;lt;/p&amp;gt;
EOF
chmod 0755  /etc/nginx/www
chmod 644 /etc/nginx/www/index.html
echo "index webpage created "  &amp;gt;&amp;gt; /tmp/debug.log


# set conf in nginx
cat &amp;gt; /etc/nginx/nginx.conf &amp;lt;&amp;lt;'EOF'  
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}

http {
  log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
  '$status $body_bytes_sent "$http_referer" '
  '"$http_user_agent" "$http_x_forwarded_for"';
  access_log  /var/log/nginx/access.log  main;
  sendfile            on;
  tcp_nopush          on;
  tcp_nodelay         on;
  keepalive_timeout   65;
  types_hash_max_size 2048;
  include /etc/nginx/mime.types;
  default_type        application/octet-stream;
  include /etc/nginx/conf.d/*.conf;
}
EOF


# set app conf in nginx
cat &amp;gt; /etc/nginx/conf.d/app.conf &amp;lt;&amp;lt;'EOF'  
server {
    listen 443 ssl;
    server_name localhost;
    root /etc/nginx/www;
    error_log /var/log/nginx/app-server-error.log notice;
    index demo-index.html index.html;
    expires -1;

    ssl_certificate           /etc/nginx/ssl/app.crt;
    ssl_certificate_key       /etc/nginx/ssl/app.key;

    ssl_session_cache  builtin:1000  shared:SSL:10m;
    ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    access_log  /var/log/nginx/app.access.log;

    sub_filter_once off;
    sub_filter 'server_hostname' '$hostname';
    sub_filter 'server_address'  '$server_addr:$server_port';
    sub_filter 'server_url'      '$request_uri';
    sub_filter 'remote_addr'     '$remote_addr:$remote_port';
    sub_filter 'server_date'     '$time_local';
    sub_filter 'client_browser'  '$http_user_agent';
    sub_filter 'request_id'      '$request_id';
    sub_filter 'nginx_version'   '$nginx_version';
    sub_filter 'document_root'   '$document_root';
    sub_filter 'proxied_for_ip'  '$http_x_forwarded_for';

    location / {
      index index.html;
    }
}
EOF
chmod -R 644  /etc/nginx/ssl/* /etc/nginx/nginx.conf /etc/nginx/conf.d/app.conf
echo "nginx installation done" &amp;gt;&amp;gt; /tmp/debug.log


yum -y install firewalld
systemctl unmask firewalld
systemctl restart firewalld
firewall-cmd --zone=public --permanent --add-service=http
firewall-cmd --zone=public --permanent --add-service=https
firewall-cmd --reload
systemctl enable firewalld
systemctl restart firewalld


# Start and enable on boot, nginx as a service
systemctl enable nginx
systemctl restart nginx
echo "nginx start done" &amp;gt;&amp;gt; /tmp/debug.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should now have nginx running locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://localhost:443 --insecure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
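

&lt;p&gt;If the curl does not respond, a couple of first things to check: validate the config nginx actually loaded, and peek at its error log:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nginx -t
systemctl status nginx
tail -n 20 /var/log/nginx/error.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;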



&lt;p&gt;Now let's make an improvement on the above script, making it even simpler with a single configuration file and a couple of webpages at different paths. &lt;br&gt;
Again:&lt;/p&gt;

&lt;p&gt;Create the following script (&lt;strong&gt;DoublePageNginx.sh&lt;/strong&gt;), give it execute access (&lt;strong&gt;chmod 755 DoublePageNginx.sh&lt;/strong&gt;), and run it (&lt;strong&gt;./DoublePageNginx.sh&lt;/strong&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
set -e

# Create self signed cert for HTTPS reverse proxy as Nginx
openssl genrsa -out /tmp/app.key 2048
openssl req -new -key /tmp/app.key -out /tmp/app.csr -subj "/C=CA/ST=ON/L=Toronto/O=Digital/OU=IT/CN=app.local.com"
openssl x509 -req -days 365 -in /tmp/app.csr -signkey /tmp/app.key -out /tmp/app.crt
chmod 644 /tmp/app.crt /tmp/app.key
echo "self signed cert done" &amp;gt;&amp;gt; /tmp/debug.log

yum -y install nginx
mkdir -p /etc/nginx/ssl
cp -f /tmp/app.key /etc/nginx/ssl/app.key
cp -f /tmp/app.crt /etc/nginx/ssl/app.crt
chmod 755 /etc/nginx/ssl &amp;amp;&amp;amp; chmod -R 644 /etc/nginx/ssl/*
mv -f /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
echo "nginx installed" &amp;gt;&amp;gt; /tmp/debug.log

# 
cat &amp;gt; /etc/nginx/nginx.conf &amp;lt;&amp;lt;'EOF'  
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
http {
  log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
  '$status $body_bytes_sent "$http_referer" '
  '"$http_user_agent" "$http_x_forwarded_for"';
  access_log  /var/log/nginx/access.log  main;
  sendfile            on;
  tcp_nopush          on;
  tcp_nodelay         on;
  keepalive_timeout   65;
  types_hash_max_size 2048;
  include /etc/nginx/mime.types;
  default_type        application/octet-stream;
  include /etc/nginx/conf.d/*.conf;
  server 
  {
          listen       443 ssl http2 default_server;
          listen       [::]:443 ssl http2 default_server;
          server_name  _;
          root         /etc/nginx/www;
          index index.html index.htm;
          ssl_certificate "/etc/nginx/ssl/app.crt";
          ssl_certificate_key "/etc/nginx/ssl/app.key";
          ssl_session_cache shared:SSL:1m;
          ssl_session_timeout  10m;
          ssl_ciphers HIGH:!aNULL:!MD5;
          ssl_prefer_server_ciphers on;
          # Load configuration files for the default server block.
          include /etc/nginx/default.d/*.conf;

          location / {
            # it picks up default root and checks for default index.html file at the path
            }

          location /bar {
            # it picks up default root, adds /bar to the root and looks for the default index.html file at the path
           }


          error_page 404 /404.html;
              location = /404.html {
          }

          error_page 500 502 503 504 /50x.html;
              location = /50x.html {
          }
    }
}
EOF

## Create static webpages to serve
mkdir -p /etc/nginx/www
cat &amp;gt; /etc/nginx/www/index.html &amp;lt;&amp;lt;'EOF'  
&amp;lt;h1&amp;gt; Hello There&amp;lt;/h1&amp;gt;
  &amp;lt;p&amp;gt;
    This webpage is served through nginx at the default root path
  &amp;lt;/p&amp;gt;
EOF
chmod 0755  /etc/nginx/www
chmod 644 /etc/nginx/www/index.html
echo "index webpage created "  &amp;gt;&amp;gt; /tmp/debug.log

mkdir -p /etc/nginx/www/bar
cat &amp;gt; /etc/nginx/www/bar/index.html &amp;lt;&amp;lt;'EOF'  
&amp;lt;h1&amp;gt; Hello There&amp;lt;/h1&amp;gt;
  &amp;lt;p&amp;gt;
    This webpage is served through nginx at path /$root/bar
  &amp;lt;/p&amp;gt;
EOF
chmod 0755  /etc/nginx/www/bar
chmod 644 /etc/nginx/www/bar/index.html
echo "index webpage created for /bar"  &amp;gt;&amp;gt; /tmp/debug.log

## firewalld
yum -y install firewalld
systemctl unmask firewalld
systemctl restart firewalld
firewall-cmd --zone=public --permanent --add-service=http
firewall-cmd --zone=public --permanent --add-service=https
firewall-cmd --reload
systemctl enable firewalld
systemctl restart firewalld

systemctl restart nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should now have nginx running locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://localhost:443 --insecure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
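

&lt;p&gt;This version also serves a second page, so hit the /bar path as well (the trailing slash lets nginx resolve the index file directly instead of issuing a redirect):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://localhost/bar/ --insecure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;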



</description>
    </item>
    <item>
      <title>Self-Referencing Security Groups on AWS</title>
      <dc:creator>AnupamMahapatra</dc:creator>
      <pubDate>Mon, 06 Apr 2020 17:57:20 +0000</pubDate>
      <link>https://dev.to/anupamncsu/self-referencing-security-groups-on-aws-59gb</link>
      <guid>https://dev.to/anupamncsu/self-referencing-security-groups-on-aws-59gb</guid>
<description>&lt;p&gt;A snapshot of a self-referencing security group on AWS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
Description: Create a VPC with a SG which references itself
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  vpctester:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 172.16.0.0/23
      EnableDnsSupport: false
      EnableDnsHostnames: false
      InstanceTenancy: default
      Tags:
      - Key: Name
        Value: vpctester
  sgtester:
    Type: AWS::EC2::SecurityGroup
    DependsOn: vpctester
    Properties:
      GroupDescription: vpc tester sg
      VpcId:
        Ref: vpctester
  sgtesteringress:
    Type: AWS::EC2::SecurityGroupIngress
    DependsOn: sgtester
    Properties:
      GroupId:
        Ref: sgtester
      IpProtocol: tcp
      FromPort: '0'
      ToPort: '65535'
      SourceSecurityGroupId:
        Ref: sgtester
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
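

&lt;p&gt;To try it out, the template can be launched as a stack from the CLI (the stack and file names below are just placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws cloudformation create-stack \
    --stack-name sg-self-ref-test \
    --template-body file://self-ref-sg.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;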



</description>
    </item>
    <item>
      <title>COVID-19 Daily Data for Analytics - JSON and CSV</title>
      <dc:creator>AnupamMahapatra</dc:creator>
      <pubDate>Fri, 03 Apr 2020 00:56:46 +0000</pubDate>
      <link>https://dev.to/anupamncsu/covid-19-data-for-analytics-json-and-csv-10fe</link>
      <guid>https://dev.to/anupamncsu/covid-19-data-for-analytics-json-and-csv-10fe</guid>
<description>&lt;p&gt;This post is to socialize an API publishing COVID-19 spread by country and date, and to reshape the data points into a Google Sheet version that can be consumed by Tableau.&lt;/p&gt;

&lt;p&gt;DataPoints:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;COUNTRY 
- DATE  
- COVID19-CONFIRMED 
- COVID19-DEATHS    
- COVID19-RECOVERED     
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
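

&lt;p&gt;The upstream API keys the data by country name, with one entry per date. Roughly (the numbers here are illustrative, not real readings):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Canada": [
    { "date": "2020-4-1", "confirmed": 9560, "deaths": 109, "recovered": 1592 },
    ...
  ],
  ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;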



&lt;p&gt;We wrote a Python script to consume the API's JSON object and write it to a Google Sheet using the Google API every day.&lt;br&gt;
For now it's running from my local server and has not been hosted anywhere, but you can easily use the code to host your own.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import time
import requests
import gspread 
import oauth2client
import csv
from datetime import datetime, date, timedelta

from oauth2client.service_account import ServiceAccountCredentials
scope = ["https://spreadsheets.google.com/feeds",'https://www.googleapis.com/auth/spreadsheets',"https://www.googleapis.com/auth/drive.file","https://www.googleapis.com/auth/drive"]
creds = ServiceAccountCredentials.from_json_keyfile_name("conf.json", scope)

print("start running code")

response = requests.get('https://pomber.github.io/covid19/timeseries.json')

countries = response.json().keys()

count = 0

for key in countries : 
    #print(key)
    entries = response.json().get(key)
    for entry in entries:
        #print(entry)
        if(datetime.strptime(str(entry.get("date")), '%Y-%m-%d').date() == date.today() - timedelta(days=1)):
            data = [ key, str(entry.get("date")), str(entry.get("confirmed")), str(entry.get("deaths")), str(entry.get("recovered")) ]
            # print(data)
            creds = ServiceAccountCredentials.from_json_keyfile_name("conf.json", scope)
            client = gspread.authorize(creds)
            sheet = client.open("Covid19DataStream").sheet1
            sheet.insert_row(data)
            count = count+1
            if(count == 500):
                print("sleeping")
                time.sleep(110)
                count = 0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
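

&lt;p&gt;To run it every day from a local server, a plain cron entry is enough (the script path below is hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# run the sheet updater daily at 06:00, logging output for debugging
0 6 * * * /usr/bin/python3 /home/me/covid/covid_to_sheet.py &amp;gt;&amp;gt; /tmp/covid_cron.log 2&amp;gt;&amp;amp;1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;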



&lt;p&gt;The referenced conf.json stores your private key for writing to the Google Sheets API.&lt;br&gt;
Please use this reference to create your own:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=cnPlKLEGR7E"&gt;https://www.youtube.com/watch?v=cnPlKLEGR7E&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The source of the data is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://pomber.github.io/covid19/timeseries.json"&gt;https://pomber.github.io/covid19/timeseries.json&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this code running, you will find the JSON data updated every day in my Google Sheet, which is open to public read access:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://tinyurl.com/vrap9ox"&gt;https://tinyurl.com/vrap9ox&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Countries covered:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Afghanistan
Albania
Algeria
Andorra
Angola
Antigua and Barbuda
Argentina
Armenia
Australia
Austria
Azerbaijan
Bahamas
Bahrain
Bangladesh
Barbados
Belarus
Belgium
Benin
Bhutan
Bolivia
Bosnia and Herzegovina
Brazil
Brunei
Bulgaria
Burkina Faso
Cabo Verde
Cambodia
Cameroon
Canada
Central African Republic
Chad
Chile
China
Colombia
Congo (Brazzaville)
Congo (Kinshasa)
Costa Rica
Cote d'Ivoire
Croatia
Diamond Princess
Cuba
Cyprus
Czechia
Denmark
Djibouti
Dominican Republic
Ecuador
Egypt
El Salvador
Equatorial Guinea
Eritrea
Estonia
Eswatini
Ethiopia
Fiji
Finland
France
Gabon
Gambia
Georgia
Germany
Ghana
Greece
Guatemala
Guinea
Guyana
Haiti
Holy See
Honduras
Hungary
Iceland
India
Indonesia
Iran
Iraq
Ireland
Israel
Italy
Jamaica
Japan
Jordan
Kazakhstan
Kenya
Korea, South
Kuwait
Kyrgyzstan
Latvia
Lebanon
Liberia
Liechtenstein
Lithuania
Luxembourg
Madagascar
Malaysia
Maldives
Malta
Mauritania
Mauritius
Mexico
Moldova
Monaco
Mongolia
Montenegro
Morocco
Namibia
Nepal
Netherlands
New Zealand
Nicaragua
Niger
Nigeria
North Macedonia
Norway
Oman
Pakistan
Panama
Papua New Guinea
Paraguay
Peru
Philippines
Poland
Portugal
Qatar
Romania
Russia
Rwanda
Saint Lucia
Saint Vincent and the Grenadines
San Marino
Saudi Arabia
Senegal
Serbia
Seychelles
Singapore
Slovakia
Slovenia
Somalia
South Africa
Spain
Sri Lanka
Sudan
Suriname
Sweden
Switzerland
Taiwan*
Tanzania
Thailand
Togo
Trinidad and Tobago
Tunisia
Turkey
Uganda
Ukraine
United Arab Emirates
United Kingdom
Uruguay
US
Uzbekistan
Venezuela
Vietnam
Zambia
Zimbabwe
Dominica
Grenada
Mozambique
Syria
Timor-Leste
Belize
Laos
Libya
West Bank and Gaza
Guinea-Bissau
Mali
Saint Kitts and Nevis
Kosovo
Burma
MS Zaandam
Botswana
Burundi
Sierra Leone
Malawi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Stay home stay safe !&lt;/p&gt;

</description>
      <category>corona</category>
      <category>covid19</category>
      <category>datascience</category>
      <category>tableau</category>
    </item>
    <item>
      <title>Docker + Jenkins: Chemistry </title>
      <dc:creator>AnupamMahapatra</dc:creator>
      <pubDate>Thu, 02 Apr 2020 01:38:26 +0000</pubDate>
      <link>https://dev.to/anupamncsu/docker-jenkins-chemistry-4aoa</link>
      <guid>https://dev.to/anupamncsu/docker-jenkins-chemistry-4aoa</guid>
      <description>&lt;h3&gt;
  
  
  With Jenkins you can use Docker in two contexts.
&lt;/h3&gt;

&lt;p&gt;1 - Use a Docker container from a private repo as the Jenkins node to build your code&lt;br&gt;&lt;br&gt;
2 - Use a Jenkins node to build your Docker container and, probably, push it to your private repo&lt;/p&gt;

&lt;p&gt;This was confusing when I started, so I want to declutter it here.&lt;/p&gt;

&lt;p&gt;Assuming:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have a Jenkins single-agent cluster with the Docker daemon running on that agent.&lt;/li&gt;
&lt;li&gt;A private Docker registry with credentials to pull and push.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Use a Docker container from a private repo as the Jenkins node to build your code
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Install plugin &lt;strong&gt;pipeline model&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
(&lt;a href="https://jenkins.io/doc/book/pipeline/"&gt;https://jenkins.io/doc/book/pipeline/&lt;/a&gt;)&lt;br&gt;
(&lt;a href="https://plugins.jenkins.io/pipeline-model-definition/"&gt;https://plugins.jenkins.io/pipeline-model-definition/&lt;/a&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The pipeline below specifies that you want Jenkins to pull the node:7-alpine image from a private registry and run the build inside it. This can pretty much be any image you like; your build steps and stages will run inside it. The registry credentials are stored in the Jenkins credentials store for this purpose.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline {

    /* outermost agent */
    agent any

    /* This stage fetches a container definition from the registered repo and runs the steps inside it. */
    stages {
        stage('Test') {
            agent {
                    docker { 
                        image 'node:7-alpine' 
                        registryUrl 'https://my-docker-virtual.artifactory.rogers.com/'
                        registryCredentialsId 'ARTIFACTORY_USER'
                        }
            }
            steps {
                sh 'node --version'
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;IMPORTANT: Inside this container, I have no context of the Docker daemon running on the bare-metal Jenkins host, and hence no context of the registry. I cannot write a push command in the steps, as I am inside the container.&lt;/li&gt;
&lt;li&gt;I can also use a Dockerfile as the agent definition instead of fetching a Docker image from the registry:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline {
    agent { dockerfile true }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
                sh 'svn --version'
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Here Jenkins looks in the repository root for a Dockerfile, which it builds and uses as the build agent.&lt;/li&gt;
&lt;/ul&gt;
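

&lt;p&gt;For completeness, a minimal sketch of what such a Dockerfile could look like for the steps above (the base image and package are just examples):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:7-alpine
# subversion so that 'svn --version' works in the build steps
RUN apk add --no-cache subversion
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;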

&lt;h2&gt;
  
  
  Use a Jenkins node to build your Docker container and, probably, push it to your private repo
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node {

    checkout scm

    docker.withRegistry('https://registry.hub.docker.com','dockerhubcredentails'){
    def myImage= docker.build("digitaldockerimages/busybox:0.0.1")

    /*push container to the registry*/
    myImage.push()
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here I am using Docker Hub as the registry and fetching its credentials from the Jenkins secret storage.  &lt;/p&gt;

&lt;p&gt;Ref:&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=z32yzy4TrKM"&gt;https://www.youtube.com/watch?v=z32yzy4TrKM&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://jenkins.io/doc/book/pipeline/docker/#sidecar"&gt;https://jenkins.io/doc/book/pipeline/docker/#sidecar&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/jenkinsci/pipeline-model-definition-plugin/wiki/Controlling-your-build-environment"&gt;https://github.com/jenkinsci/pipeline-model-definition-plugin/wiki/Controlling-your-build-environment&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://issues.jenkins-ci.org/browse/JENKINS-39684"&gt;https://issues.jenkins-ci.org/browse/JENKINS-39684&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>vpc</category>
      <category>cloud</category>
      <category>subnets</category>
    </item>
    <item>
      <title>Pod Security Policy on EKS</title>
      <dc:creator>AnupamMahapatra</dc:creator>
      <pubDate>Mon, 30 Mar 2020 00:44:38 +0000</pubDate>
      <link>https://dev.to/anupamncsu/pod-security-policy-on-eks-mp9</link>
      <guid>https://dev.to/anupamncsu/pod-security-policy-on-eks-mp9</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cNIERU4N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jubdxnpnq08i8ckz0w5z.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cNIERU4N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jubdxnpnq08i8ckz0w5z.jpeg" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Kubernetes is a platform for building platforms. It's a better place to start; not the endgame.
- Kelsey Hightower ( @kelseyhightower)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What is EKS?
&lt;/h2&gt;

&lt;p&gt;Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service running on AWS. It takes away the bulk of the pain of managing a Kubernetes service by running the master tier for you. As with all AWS services, security follows a Shared Responsibility Model: Amazon ensures the security of the master tier, but what you run inside the cluster – that's up to you.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Pod Security Policy?
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, workloads are deployed as Pods, which expose a lot of the functionality of running Docker containers.&lt;br&gt;
Pod Security Policies are cluster-wide resources that control security sensitive aspects of pod specification. PSP objects define a set of conditions that a pod must run with in order to be accepted into the system, as well as defaults for their related fields. PodSecurityPolicy is an optional admission controller that is enabled by default through the API, thus policies can be deployed without the PSP admission plugin enabled. This functions as a validating and mutating controller simultaneously.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pod Security Policies allow you to control:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The running of privileged containers&lt;/li&gt;
&lt;li&gt;Usage of host namespaces&lt;/li&gt;
&lt;li&gt;Usage of host networking and ports&lt;/li&gt;
&lt;li&gt;Usage of volume types&lt;/li&gt;
&lt;li&gt;Usage of the host filesystem&lt;/li&gt;
&lt;li&gt;A white list of Flexvolume drivers&lt;/li&gt;
&lt;li&gt;The allocation of an FSGroup that owns the pod’s volumes&lt;/li&gt;
&lt;li&gt;Requirements for use of a read only root file system&lt;/li&gt;
&lt;li&gt;The user and group IDs of the container&lt;/li&gt;
&lt;li&gt;Escalations of root privileges&lt;/li&gt;
&lt;li&gt;Linux capabilities, SELinux context, AppArmor, seccomp, sysctl profile&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's approach this subject from an application standpoint. Most applications are deployed into EKS in the form of deployments running pods. Our sample deployment will be as follows.&lt;br&gt;
Assuming we have a green-field EKS cluster with no special security controls on the cluster/namespaces:&lt;br&gt;
In the manifest &lt;strong&gt;alpine-restricted.yml&lt;/strong&gt;, we are defining a few security contexts at the pod and container level.&lt;br&gt;
&lt;strong&gt;runAsUser: 1000&lt;/strong&gt; means all containers in the pod will run as UID 1000&lt;br&gt;
&lt;strong&gt;fsGroup: 2000&lt;/strong&gt; means the owner of mounted volumes, and of any files created in those volumes, will be GID 2000&lt;br&gt;
&lt;strong&gt;allowPrivilegeEscalation: false&lt;/strong&gt; means the container cannot escalate privileges&lt;br&gt;
&lt;strong&gt;readOnlyRootFilesystem: true&lt;/strong&gt; means the container can only read the root filesystem&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hrJarSwJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pwtsbifjjfu3ooyup3wu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hrJarSwJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pwtsbifjjfu3ooyup3wu.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To enforce PSPs of our own, we first need to delete one. By default, pods actually are validated against a PSP – it just allows everything and is usable by everyone. It’s called &lt;strong&gt;eks.privileged&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Note that by doing this you can break your cluster – make sure you’re ready to &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/pod-security-policy.html"&gt;recreate it.&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl delete psp eks.privileged
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: alpine-restricted
  labels:
    app: alpine-restricted
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alpine-restricted
  template:
    metadata:
      labels:
        app: alpine-restricted
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 2000
      volumes:
      - name: sec-ctx-vol
        emptyDir: {}
      containers:
      - name: alpine-restricted
        image: alpine:3.9
        command: ["sleep", "3600"]
        volumeMounts:
        - name: sec-ctx-vol
          mountPath: /data/demo
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Upon deployment it will fail to launch with the following error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; kubectl apply -f alpine-restricted.yaml
&amp;gt; kubectl get deploy,rs,pod
# output

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/alpine-test   0/1     0            0           67s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.extensions/alpine-test-85c976cdd   1         0         0       67s

&amp;gt; kubectl describe replicaset.extensions/alpine-test-85c976cdd | tail -n3
# output

 Type     Reason        Age                    From                   Message
  ----     ------        ----                   ----                   -------
  Warning  FailedCreate  114s (x16 over 4m38s)  replicaset-controller  Error creating: pods "alpine-test-85c976cdd-" is forbidden: unable to validate against any pod security policy: []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Without a usable Pod Security Policy, the replicaset controller cannot create the pod. Let's deploy a restricted PSP and create a ClusterRole and ClusterRoleBinding that allow the controller service accounts to use it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.restricted
  annotations:
    seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default'
spec:
  readOnlyRootFilesystem: true
  privileged: false
  allowPrivilegeEscalation: false
  runAsUser:
    rule: 'MustRunAsNonRoot'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
    - min: 1
      max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
    - min: 1
      max: 65535
  seLinux:
    rule: 'RunAsAny'
  volumes:
  - configMap
  - emptyDir
  - secret
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:restricted
rules:
- apiGroups:
  - policy
  resourceNames:
  - psp.restricted
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp:restricted:binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:restricted
subjects:
  # Apply the PSP to the service accounts of the kube-system controllers that bring up pods.
  # DaemonSets are excluded since they are used to interact with the host.
  - kind: ServiceAccount
    namespace: kube-system
    name: replication-controller
  - kind: ServiceAccount
    namespace: kube-system
    name: replicaset-controller
  - kind: ServiceAccount
    namespace: kube-system
    name: job-controller
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The PSP &lt;strong&gt;psp.restricted&lt;/strong&gt; places some restrictions on pods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the root filesystem must be &lt;strong&gt;read-only&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;it doesn't allow for privileged containers&lt;/li&gt;
&lt;li&gt;privilege escalation is forbidden by &lt;strong&gt;allowPrivilegeEscalation: false&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;the user must be nonroot &lt;strong&gt;runAsUser: rule: 'MustRunAsNonRoot'&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;fsGroup and supplementalGroups cannot be root – the &lt;strong&gt;defined allowed ranges start at 1&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;the pod can only use the volumes specified: &lt;strong&gt;configMap, emptyDir and secret&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;the annotations section defines a seccomp profile for the containers. Because of these annotations, the PSP will mutate the pod spec before it is admitted; if we check the pod manifest in Kubernetes, we will see the &lt;strong&gt;seccomp&lt;/strong&gt; annotation defined there as well.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Re-deploy &lt;strong&gt;alpine-restricted.yml&lt;/strong&gt; and inspect the pod annotations&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pod/alpine-test-85c976cdd-k69cn -o jsonpath='{.metadata.annotations}'
# output
map[kubernetes.io/psp:psp.restricted seccomp.security.alpha.kubernetes.io/pod:docker/default]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see that there are two annotations set:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;kubernetes.io/psp:psp.restricted&lt;/li&gt;
&lt;li&gt;seccomp.security.alpha.kubernetes.io/pod:docker/default&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first is the PodSecurityPolicy (&lt;strong&gt;psp.restricted&lt;/strong&gt;) used by the pod. &lt;br&gt;
The second is the seccomp profile used by the pod. Seccomp (secure computing mode) is a Linux kernel feature used to restrict the actions available inside a container.&lt;/p&gt;

&lt;p&gt;Let's edit our deployment a bit and deploy a more privileged version: &lt;strong&gt;alpine-privileged.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: alpine-privileged
  namespace: privileged
  labels:
    app: alpine-privileged
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alpine-privileged
  template:
    metadata:
      labels:
        app: alpine-privileged
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 2000
      volumes:
      - name: sec-ctx-vol
        emptyDir: {}
      containers:
      - name: alpine-privileged
        image: alpine:3.9
        command: ["sleep", "1800"]
        volumeMounts:
        - name: sec-ctx-vol
          mountPath: /data/demo
        securityContext:
          allowPrivilegeEscalation: true
          readOnlyRootFilesystem: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This deployment wants to start a pod with write permissions on the root filesystem and with privilege escalation enabled.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe replicaset.extensions/alpine-privileged-7bdb64b569 -n privileged | tail -n3
# output
  Type     Reason        Age                   From                   Message
  ----     ------        ----                  ----                   -------
  Warning  FailedCreate  29s (x17 over 5m56s)  replicaset-controller  Error creating: pods "alpine-privileged-7bdb64b569-" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.readOnlyRootFilesystem: Invalid value: false: ReadOnlyRootFilesystem must be set to true spec.containers[0].securityContext.allowPrivilegeEscalation: Invalid value: true: Allowing privilege escalation for containers is not allowed]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A long error message, but it essentially states the following:&lt;br&gt;
The pod is created by the &lt;strong&gt;replicaset-controller&lt;/strong&gt; service account, which is allowed to use the &lt;strong&gt;psp.restricted&lt;/strong&gt; policy. Pod creation failed because the pod spec contains the &lt;strong&gt;allowPrivilegeEscalation: true&lt;/strong&gt; and &lt;strong&gt;readOnlyRootFilesystem: false&lt;/strong&gt; security contexts; as far as &lt;strong&gt;psp.restricted&lt;/strong&gt; is concerned, these are invalid values.&lt;/p&gt;

&lt;p&gt;To get this deployed, let's route the deployment through a service account that is allowed to use a more permissive PSP. Make the following modification in the deployment manifest so that it uses the &lt;strong&gt;privileged-sa&lt;/strong&gt; service account&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
  template:
    metadata:
      labels:
        app: alpine-privileged
    spec:
      serviceAccountName: privileged-sa
      securityContext:
        runAsUser: 1
        fsGroup: 1
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now deploy a psp.privileged policy and create a Role and a RoleBinding to allow the privileged-sa to use the defined PSP.&lt;br&gt;
First, create a separate namespace and a service account in it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace privileged
kubectl create sa privileged-sa -n privileged
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create the PSP and the Role and RoleBinding to go with it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  creationTimestamp: null
  name: psp.privileged
spec:
  readOnlyRootFilesystem: false
  privileged: true
  allowPrivilegeEscalation: true
  runAsUser:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  volumes:
  - configMap
  - emptyDir
  - secret
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: psp:privileged
  namespace: privileged
rules:
- apiGroups:
  - policy
  resourceNames:
  - psp.privileged
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp:privileged:binding
  namespace: privileged
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: psp:privileged
subjects:
  - kind: ServiceAccount
    name: privileged-sa
    namespace: privileged
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The psp.privileged policy contains &lt;strong&gt;readOnlyRootFilesystem: false&lt;/strong&gt; and &lt;strong&gt;allowPrivilegeEscalation: true.&lt;/strong&gt; The &lt;strong&gt;privileged-sa&lt;/strong&gt; service account in the privileged namespace allows us to use the &lt;strong&gt;psp.privileged&lt;/strong&gt; policy, so, if we deploy the modified &lt;strong&gt;alpine-privileged.yml&lt;/strong&gt;, the pod should start.&lt;br&gt;
Deploy the pod and inspect the pod annotation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pod/alpine-privileged-667dc5c859-q7mf6 -n privileged -o jsonpath='{.metadata.annotations}'
# output
map[kubernetes.io/psp:psp.privileged]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What if we have multiple PSP
&lt;/h2&gt;

&lt;p&gt;When multiple policies are available, the pod security policy admission controller selects a policy in the following order:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Policies that successfully validate the pod without altering it are preferred.&lt;/li&gt;
&lt;li&gt;If the pod must be defaulted or mutated, the first valid policy in alphabetical order is used (for pod creation requests).&lt;/li&gt;
&lt;li&gt;For pod update requests, an error is returned instead, because pod mutations are disallowed during update operations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We'll create two Pod Security Policies with RBAC:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.1
  annotations:
    seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default'
spec:
  readOnlyRootFilesystem: false
  privileged: false
  allowPrivilegeEscalation: true
  runAsUser:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  volumes:
  - configMap
  - emptyDir
  - secret
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.2
  annotations:
    seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default'
spec:
  readOnlyRootFilesystem: false
  privileged: false
  allowPrivilegeEscalation: false
  runAsUser:
    rule: 'MustRunAsNonRoot'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  volumes:
  - configMap
  - emptyDir
  - secret
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: psp:multy
  namespace: multy
rules:
- apiGroups:
  - policy
  resourceNames:
  - psp.1
  - psp.2
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp:multy:binding
  namespace: multy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: psp:multy
subjects:
  - kind: ServiceAccount
    name: replicaset-controller
    namespace: kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are two almost identical PSPs in psp-multiple.yaml. The Role and RoleBinding in the multy namespace allow the replicaset-controller to use both &lt;strong&gt;psp.1&lt;/strong&gt; and &lt;strong&gt;psp.2&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get psp

# output
NAME             PRIV    CAPS   SELINUX    RUNASUSER          FSGROUP     SUPGROUP    READONLYROOTFS   VOLUMES
...
psp.1            false          RunAsAny   RunAsAny           RunAsAny    RunAsAny    true             configMap,emptyDir,secret
psp.2            false          RunAsAny   MustRunAsNonRoot   RunAsAny    RunAsAny    true             configMap,emptyDir,secret
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creating a deployment&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: alpine-multy
  namespace: multy
  labels:
    app: alpine-multy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alpine-multy
  template:
    metadata:
      labels:
        app: alpine-multy
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 2000
      containers:
      - name: alpine-multy
        image: alpine:3.9
        command: ["sleep", "2400"]
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Checking the annotation again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pod alpine-multy-5864df9bc8-mgg68 -n multy -o jsonpath='{.metadata.annotations}'
# output
map[kubernetes.io/psp:psp.1 seccomp.security.alpha.kubernetes.io/pod:docker/default]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, this pod uses the psp.1 PodSecurityPolicy. Both policies would mutate the pod via their seccomp annotations, so the second rule of PSP selection applies: the first valid policy in alphabetical order is used.  &lt;/p&gt;

&lt;p&gt;Let's modify &lt;strong&gt;psp.2&lt;/strong&gt; by deleting the seccomp annotations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.2
spec:
  readOnlyRootFilesystem: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Delete and re-deploy the pod after modifying &lt;strong&gt;psp.2&lt;/strong&gt;, then check the annotations again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pod alpine-multy-5864df9bc8-lj2fz -n multy -o jsonpath='{.metadata.annotations}'
# output
map[kubernetes.io/psp:psp.2]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the pod uses &lt;strong&gt;psp.2&lt;/strong&gt;: with the seccomp annotations deleted, the policy validates the pod without altering it, so the first rule of PSP selection applies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The security model around nodes is well thought out, and you cannot escalate to cluster admin just by compromising a node. However, without PSPs, anything already running on that node is fair game. Configuring appropriate PSPs closes off these avenues of exploitation.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>security</category>
    </item>
    <item>
      <title>AWS Virtual Private Cloud setup as a City Analogy</title>
      <dc:creator>AnupamMahapatra</dc:creator>
      <pubDate>Fri, 20 Mar 2020 07:07:24 +0000</pubDate>
      <link>https://dev.to/anupamncsu/aws-virtual-private-cloud-setup-as-a-city-analogy-3ekc</link>
      <guid>https://dev.to/anupamncsu/aws-virtual-private-cloud-setup-as-a-city-analogy-3ekc</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mxMrJ23j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nt47un3f1wnmnb53cvrr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mxMrJ23j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nt47un3f1wnmnb53cvrr.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this post, we're going to walk through the core concepts of AWS Virtual Private Clouds (VPCs) in the context of an analogy. &lt;br&gt;
My main objectives are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Explore each core component as part of the analogy&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Relate each component to the overall setup&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Create a CloudFormation template to build the setup in AWS&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When we start using AWS we probably don't want all of our servers, services, etc just thrown into a big melting pot. In this type of ecosystem:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;everything shares the same network&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;everything can step on everything else's toes&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;resource management becomes an army of naming conventions&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, this can work if you're just using an S3 bucket, a random EC2 instance, or experimenting. But when we go to build serious cloud infrastructure for an enterprise, this hillbilly ecosystem isn't going to cut it.&lt;br&gt;
An enterprise setup, among other things, asks for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Control over the organization of resources.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Control of security.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Control of traffic between our services.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Control to keep differing architectures completely separate from each other.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These controls can be related to the analogy of creating a new city. If you have ever played games like SimCity, you will know that we cannot throw things around randomly. The city should be organized: airports, schools, houses, and roads need to be laid out so that things can be controlled and organized. In this post, we will draw from that analogy and build a city-like VPC. Call it Inception City.&lt;br&gt;
Before we dive into the specifics, the overall city will look like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3I7EEehE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/rkj1b8uz7hj9kp5eurhm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3I7EEehE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/rkj1b8uz7hj9kp5eurhm.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  The VPC: Inception City
&lt;/h2&gt;

&lt;p&gt;In the analogy of the city, we will draw a piece of land on which to build. The land must be vast enough to accommodate various zonal divides: East, West, and Central. Now, to make the city addressable to the rest of the world, we need an address / zip code. In the computer world, this is the CIDR block.&lt;br&gt;
In the resources section of our CloudFormation template (city plan), we define the VPC as follows&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref VpcCIDR
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags:
      - Key: Name
        Value: Inception-VPC
      - Key: "Year"
        Value: "2020"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The !Ref pulls that value in as an input from the Parameters section at creation time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Parameters:
  VpcCIDR:
    Type: String
    Default: 192.168.0.0/16
    Description: The CIDR range for the VPC. This should be a valid private (RFC 1918) CIDR range.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Availability Zones: City zones
&lt;/h2&gt;

&lt;p&gt;Like every good city plan, Inception will have different isolated zones: typically the suburbs, downtown, and the industrial zones. Each represents a physically different location in our analogy, and very much so in the cloud architecture. This arrangement helps us isolate any physical failure that might affect our application. The cloud region you choose comes with its zones, and we don't need to create them, but we do have to actively use them to build our city.&lt;/p&gt;

&lt;h2&gt;
  
  
  Subnets: Neighbourhoods
&lt;/h2&gt;

&lt;p&gt;Inception has a vast resource of land/postal codes (CIDR addresses), but we don't want to build things randomly. Cities plan this out: hospitals closer to houses, airports closer to industries, stadiums and museums downtown. Again, we don't have all those buildings (applications) open to anyone in the world. Only a city member can enjoy the library. Only a city member uses the municipal office. But a museum is open to everyone; so is an airport. To plan these out, we divide the entire city into divisions called subnets (neighbourhoods). Some are public (open to the outside world) and some are private (the outside world cannot directly go in). We define them as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resources:
  PublicSubnet01:
    Type: AWS::EC2::Subnet
    Metadata:
      Comment: Public Subnet 01
    Properties:
      AvailabilityZone:
        Fn::Select:
        - '0'
        - Fn::GetAZs:
            Ref: AWS::Region
      CidrBlock:
        Ref: PublicSubnet01CIDR
      VpcId:
        Ref: VPC
      Tags:
      - Key: Name
        Value: Inception-Public-Subnet-01
  PrivateSubnet01:
    Type: AWS::EC2::Subnet
    Metadata:
      Comment: Private Subnet 01
    Properties:
      AvailabilityZone:
        Fn::Select:
        - '0'
        - Fn::GetAZs:
            Ref: AWS::Region
      CidrBlock:
        Ref: PrivateSubnet01CIDR
      VpcId:
        Ref: VPC
      Tags:
      - Key: Name
        Value: Inception-Private-Subnet-01
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This snippet creates a public subnet and a private subnet, with the CIDRs coming from the Parameters section as input. Now we have to make sure that the CIDR (postal code) of each subnet is a subset of the city's (VPC's) postal code (CIDR). In our template it will be as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Parameters:
  PublicSubnet01CIDR:
    Type: String
    Default: 192.168.0.0/19
    Description: CidrBlock for public subnet 01 within the VPC.
  PrivateSubnet01CIDR:
    Type: String
    Default: 192.168.128.0/19
    Description: CidrBlock for private subnet 01 within the VPC.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
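&lt;p&gt;As a quick sanity check outside of CloudFormation, Python's standard &lt;strong&gt;ipaddress&lt;/strong&gt; module can confirm that these subnet CIDRs really are non-overlapping subsets of the VPC range. This is just an illustrative sketch using the default values above, not part of the template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import ipaddress

# Defaults from the Parameters sections above.
vpc = ipaddress.ip_network("192.168.0.0/16")
public_subnet = ipaddress.ip_network("192.168.0.0/19")
private_subnet = ipaddress.ip_network("192.168.128.0/19")

# Each subnet's postal code must fall inside the city's,
# and the two neighbourhoods must not overlap each other.
assert public_subnet.subnet_of(vpc)
assert private_subnet.subnet_of(vpc)
assert not public_subnet.overlaps(private_subnet)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;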



&lt;h2&gt;
  
  
  Route Tables: Roads
&lt;/h2&gt;

&lt;p&gt;We've defined our separate postal codes within our city. Everything is split into geographical regions. Now we need to give "traffic" a way to move in and out of these different areas. What do we do? We build roads. So a postal code has a series of roads that allow traffic to move in and out of it.&lt;br&gt;
In VPCs, even though we have these different subnets, we need to allow traffic to flow between them. We do this with Route Tables. &lt;strong&gt;A Route Table is just a list of routes: CIDR blocks (IP ranges) that our traffic can leave to and come from.&lt;/strong&gt; By default, a newly created Route Table has a route for the CIDR of our VPC. This means that traffic from anywhere within our VPC is allowed.&lt;br&gt;
In addition to the list of IP ranges our Route Table connects traffic between, it also has Subnet Associations. Simply put, these are "which subnets use this route table." In our city analogy, it'd be "which postal codes are connected to these roads." A Route Table can have many subnets, but a subnet can belong to only one Route Table.&lt;br&gt;
To sum up – &lt;strong&gt;a subnet is associated with a Route Table, and the Route Table dictates what traffic can enter and leave the subnet.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resources:
  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
      - Key: Name
        Value: Inception-Public-RouteTable
      - Key: Network
        Value: Public
      - Key: "Year"
        Value: "2020"
  Route:
    DependsOn: VPCGatewayAttachment
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway
  PrivateRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
      - Key: Name
        Value: Inception-Private-RouteTable
      - Key: Network
        Value: Private
      - Key: "Year"
        Value: "2020"
  PrivateRoute1:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      # Route internet-bound traffic through the NAT Gateway:
      NatGatewayId: !Ref NATGateway01
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Internet Gateway: The Highway connection to Inception city
&lt;/h2&gt;

&lt;p&gt;Since the default Route Table only allows a subnet to route traffic within the VPC, such a subnet is known as a private subnet. How do we make it public? Well, first, what does public even mean? It just means it can connect to the internet. How do we do that? We tell our Route Table it's allowed to do so by attaching an Internet Gateway.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;An Internet Gateway is a portal to the internet. In terms of an analogy, think of it like the highway.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Imagine that our Government and Private Warehousing postal codes both share a series of roads that only navigate within our city. This makes them private. Now, imagine that one of our commercial codes uses a series of roads that also connect to both the rest of the city AND the highway. That makes it public.&lt;/p&gt;

&lt;p&gt;Similarly, &lt;strong&gt;a subnet that's associated with a Route Table that's connected to an internet gateway is public. A subnet with a Route Table that's not connected to an internet gateway is private.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resources:
  InternetGateway:
    Type: "AWS::EC2::InternetGateway"
    Properties:
      Tags:
      - Key: Name
        Value: Inception-IGW
      - Key: "Year"
        Value: "2020"
  VPCGatewayAttachment:
    Type: "AWS::EC2::VPCGatewayAttachment"
    Properties:
      InternetGatewayId: !Ref InternetGateway
      VpcId: !Ref VPC
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  NAT Gateway: Private Society Gate
&lt;/h2&gt;

&lt;p&gt;In our city, traffic in the Government and Private Warehouse postal codes can't get to the highway. Traffic can only move about within the city. What if it needs to leave? Instead of building more highway on-ramps, we'd probably just tell the traffic to (a) navigate to one of the public postal codes and (b) get on the highway from there.&lt;br&gt;
That's essentially what a NAT Gateway does. &lt;strong&gt;When our subnets connected to the Private Route Table need access to the internet, we set up a NAT Gateway in a public subnet.&lt;/strong&gt; We then add a rule to our Private Route Table saying that all traffic looking to go to the internet should point to the NAT Gateway.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resource:
ElasticIPAddress01:
    Type: AWS::EC2::EIP
    Properties:
      Domain: VPC
NATGateway01:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt ElasticIPAddress01.AllocationId
      SubnetId: !Ref PublicSubnet01
      Tags:
      - Key: Name
        Value: Inception-NATGateway01
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
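&lt;p&gt;The rule in the Private Route Table that points internet-bound traffic at the NAT Gateway would then look something like this (a sketch; it assumes a PrivateRouteTable resource defined elsewhere in the template):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resources:
  PrivateRoute:
    Type: "AWS::EC2::Route"
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      DestinationCidrBlock: "0.0.0.0/0"
      NatGatewayId: !Ref NATGateway01
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;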



&lt;h2&gt;
  
  
  Network ACLs: City Border and Immigration Office
&lt;/h2&gt;

&lt;p&gt;It turns out we've built our city in a very dark time where not everyone can be allowed to just come and go as they please. We need security gates and perimeters outside of each of our postal code areas to ensure only the right traffic is entering and leaving. That means we'll check traffic as it comes in to see if it's permitted AND we'll check as it leaves to make sure it can exit.&lt;br&gt;
In our VPC, these are the &lt;strong&gt;Network ACLs. They dictate what traffic is allowed to enter and leave the subnets they're associated with.&lt;/strong&gt;&lt;br&gt;
NACLs are stateless. They check traffic explicitly in both directions: a response to a request that was allowed in is not automatically allowed back out; it must match an outbound rule too. We will see a very similar resource going forward called "Security Groups". Security Groups are stateful: whatever traffic is allowed in has its responses implicitly allowed back out.&lt;/p&gt;
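<p>&lt;p&gt;As a sketch of the explicit, numbered rules NACLs use (the resource names and CIDR blocks here are illustrative), here is a Network ACL that denies one CIDR block outright and allows HTTPS in from everywhere else. Rules are evaluated in ascending RuleNumber order, so the deny at 50 wins over the allow at 100:&lt;/p&gt;</p>

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resources:
  PublicNACL:
    Type: "AWS::EC2::NetworkAcl"
    Properties:
      VpcId: !Ref VPC
  DenyBadActorInbound:
    Type: "AWS::EC2::NetworkAclEntry"
    Properties:
      NetworkAclId: !Ref PublicNACL
      RuleNumber: 50
      Protocol: -1
      RuleAction: deny
      Egress: false
      CidrBlock: "198.51.100.0/24"
  AllowHttpsInbound:
    Type: "AWS::EC2::NetworkAclEntry"
    Properties:
      NetworkAclId: !Ref PublicNACL
      RuleNumber: 100
      Protocol: 6
      RuleAction: allow
      Egress: false
      CidrBlock: "0.0.0.0/0"
      PortRange:
        From: 443
        To: 443
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;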

&lt;h2&gt;
  
  
  Servers and Services: The Buildings
&lt;/h2&gt;

&lt;p&gt;Our city has all sorts of surrounding infrastructure and logic, but we're missing buildings! Now that we have logical, gated, and accessible areas, we can group the buildings we create sensibly. Each building also receives its own address.&lt;br&gt;
This is the simplest comparison of all. &lt;strong&gt;Servers and Services launched into our VPC are the buildings of our city.&lt;/strong&gt; They receive a "private IP address" when created. We can also set up our subnets to assign "public IP addresses" as well. A public IP is required if we'd like a launched server/service to communicate with the internet via the Internet Gateway.&lt;/p&gt;
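&lt;p&gt;Telling a subnet to hand out public IP addresses at launch is a single property on the subnet resource. A sketch (the CIDR block and Availability Zone are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resources:
  PublicSubnet01:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: "10.0.1.0/24"
      AvailabilityZone: "us-east-1a"
      MapPublicIpOnLaunch: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;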

&lt;h2&gt;
  
  
  Security Groups: Building Security
&lt;/h2&gt;

&lt;p&gt;It turns out that our city isn't just in a dark time, it's practically organized chaos. Because of this, we have security guards outside of every building. These guards concern themselves with what traffic is allowed to enter and leave the building. They're not concerned with what traffic is DENIED to enter or leave.&lt;br&gt;
In our VPC, these are our &lt;strong&gt;Security Groups. They protect our servers/services at the resource level instead of at a subnet level.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unlike Network ACLs, Security Groups only care about whether or not traffic is allowed to enter or leave.&lt;/strong&gt; I'm reiterating this because of how subtle and yet how different this is.&lt;br&gt;
With ACLs, we can explicitly deny traffic coming from a specific IP CIDR block. With Security Groups we cannot. Instead, we can only allow traffic from specific IP CIDR blocks.&lt;br&gt;
Therefore, if we wanted to block traffic from a known bad IP, with Network ACLs we could just slap a deny on inbound traffic from that IP. With Security Groups, we'd have to go through and allow everything EXCEPT that IP.&lt;br&gt;
Additionally, responses to outbound traffic, such as a request that a service within our VPC initiates, are allowed back in. For example, if one of our instances (servers) makes an API call, the response to that call is allowed back in EVEN if we do not allow traffic from that IP range.&lt;br&gt;
Now, this analogy needs one more addition to make it really complete. &lt;strong&gt;Our Security Groups are like a group of security guard employees that work for a security company.&lt;br&gt;
Instead of picking individual security guards, our buildings pick a security company to guard them. The benefit is that buildings that belong to the same security company share the same set of rules.&lt;/strong&gt;&lt;br&gt;
This means we can define a set of security rules on one Security Group, and have it used on multiple servers and services. You can also tell Security Groups to allow traffic from other Security Groups, which in our analogy would be like saying, "any buildings that belong to Security Corp BRAVO can enter buildings that belong to Security Corp ALFA".&lt;br&gt;
Finally, unlike ACLs and Route Tables, we can attach multiple security groups to servers/services. The resulting security is the sum of the security group rules, i.e. if one allows HTTP traffic and the other allows SSH traffic, the result is allowed traffic from both.&lt;/p&gt;
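&lt;p&gt;Sketching the security company in CloudFormation (the names are illustrative, and BastionSecurityGroup is assumed to be defined elsewhere in the template), here is a Security Group that allows HTTP from anywhere and SSH only from resources guarded by another Security Group:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resources:
  WebSecurityGroup:
    Type: "AWS::EC2::SecurityGroup"
    Properties:
      GroupDescription: Allow HTTP from anywhere and SSH from the bastion group
      VpcId: !Ref VPC
      SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        CidrIp: "0.0.0.0/0"
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        SourceSecurityGroupId: !Ref BastionSecurityGroup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;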

&lt;h2&gt;
  
  
  The Analogy all together:
&lt;/h2&gt;

&lt;p&gt;Please fork this analogy from my GitHub and run it on your AWS account to spin up your own city.&lt;br&gt;
&lt;a href="https://github.com/anupam-ncsu/AWS-CloudResources/tree/master/Inception"&gt;https://github.com/anupam-ncsu/AWS-CloudResources/tree/master/Inception&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;NOTE: the charges associated with this spin-up come from the Elastic IP attached to the NAT Gateway, and they amount to only a few cents. Still, be responsible and delete the city once the job is done.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>vpc</category>
      <category>cloud</category>
      <category>subnets</category>
    </item>
    <item>
      <title>Kubernetes Secrets</title>
      <dc:creator>AnupamMahapatra</dc:creator>
      <pubDate>Fri, 06 Mar 2020 15:10:26 +0000</pubDate>
      <link>https://dev.to/anupamncsu/kubernetes-secrets-1cb8</link>
      <guid>https://dev.to/anupamncsu/kubernetes-secrets-1cb8</guid>
      <description>&lt;h2&gt;
  
  
  Secrets
&lt;/h2&gt;

&lt;p&gt;Kubernetes Secrets let you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. Storing confidential information in a Secret is safer and more flexible than putting it verbatim in a Pod definition or a container image. Kubernetes itself uses Secrets internally, for example to generate access tokens for its API.&lt;br&gt;&lt;br&gt;
Secrets are managed and distributed by Kubernetes internally. A Secret can be used in the following ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Secret as &lt;strong&gt;environment variables&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Secret as a &lt;strong&gt;file&lt;/strong&gt;, exposed to the container by mounting a volume that contains it.&lt;/li&gt;
&lt;li&gt;Secret used by the kubelet as credentials when pulling your container image from a private registry.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Create Secret
&lt;/h2&gt;
&lt;h4&gt;
  
  
  Generate secret from file
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub

 secret "ssh-key-secret" created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Generate secret using a yaml
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;the values are Base64 values of the actual string.&lt;br&gt;
&lt;/p&gt;
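&lt;p&gt;The encoded values can be produced and checked on the command line with base64 (GNU coreutils syntax shown here):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ echo -n 'admin' | base64
YWRtaW4=
$ echo 'MWYyZDFlMmU2N2Rm' | base64 --decode
1f2d1e2e67df
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note the -n flag: without it, echo appends a newline, which changes the encoded output.&lt;/p&gt;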

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create -f secret.yaml

secret "mysecret" created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Using Secret
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Pod using secret as env variable
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
      - name: SECRET_USERNAME
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: username
      - name: SECRET_PASSWORD
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Pod using secret from a volume
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
  labels:
    name: secret-test
spec:
  volumes:
  - name: secret-volume
    secret:
      secretName: ssh-key-secret
  containers:
  - name: ssh-test-container
    image: mySshImage
    volumeMounts:
    - name: secret-volume
      readOnly: true
      mountPath: "/etc/secret-volume"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A volume of type secret is created; it fetches the Secret from the Kubernetes Secrets store and holds its contents. The container then consumes the secret through the volume mount.&lt;br&gt;&lt;br&gt;
The container can now read the secret files from the mount path:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/etc/secret-volume/ssh-publickey
/etc/secret-volume/ssh-privatekey
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create a secret.yaml file and deploy it. The Secret is now stored on the Kubernetes cluster.&lt;/li&gt;
&lt;li&gt;In the Pod deployment, define a volume that fetches the Secret, and mount that volume in the container to read it.&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
  </channel>
</rss>
