<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: marocz</title>
    <description>The latest articles on DEV Community by marocz (@marocz).</description>
    <link>https://dev.to/marocz</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1117821%2F750cf086-4b99-4047-a7ed-b13d87b60715.jpeg</url>
      <title>DEV Community: marocz</title>
      <link>https://dev.to/marocz</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/marocz"/>
    <language>en</language>
    <item>
      <title>Automating EKS Deployment and NGINX Setup Using Helm with AWS CDK in Python</title>
      <dc:creator>marocz</dc:creator>
      <pubDate>Thu, 25 Apr 2024 23:00:00 +0000</pubDate>
      <link>https://dev.to/marocz/automating-eks-deployment-and-nginx-setup-using-helm-with-aws-cdk-in-python-27mn</link>
      <guid>https://dev.to/marocz/automating-eks-deployment-and-nginx-setup-using-helm-with-aws-cdk-in-python-27mn</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Amazon Elastic Kubernetes Service (EKS) simplifies the process of running Kubernetes on AWS. When combined with the power of Helm and the AWS Cloud Development Kit (CDK), you can automate the deployment of Kubernetes resources and applications efficiently. This guide will walk you through deploying an EKS cluster and setting up NGINX using Helm, all automated with AWS CDK in Python.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS CLI configured with administrator access.&lt;/li&gt;
&lt;li&gt;AWS CDK installed (&lt;code&gt;npm install -g aws-cdk&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Python 3.x installed.&lt;/li&gt;
&lt;li&gt;Docker installed (for building the CDK app).&lt;/li&gt;
&lt;/ul&gt;
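&lt;p&gt;Before starting, a quick sanity check that each tool is on your PATH can save a failed deploy later (this loop and its output format are just an illustrative sketch):&lt;/p&gt;

```shell
# Report which of the prerequisite tools are installed
for tool in aws cdk python3 docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: MISSING"
  fi
done
```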

&lt;h2&gt;
  
  
  Step 1: Bootstrap Your CDK Project
&lt;/h2&gt;

&lt;p&gt;First, create a new directory for your CDK project and initialize a new CDK app:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;eks-cdk-nginx
&lt;span class="nb"&gt;cd &lt;/span&gt;eks-cdk-nginx
cdk init app &lt;span class="nt"&gt;--language&lt;/span&gt; python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzq4uf0vv4nijecp7cogv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzq4uf0vv4nijecp7cogv.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Install Dependencies
&lt;/h2&gt;

&lt;p&gt;Ensure all the required dependencies are listed in your &lt;code&gt;requirements.txt&lt;/code&gt; file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
aws-cdk-lib==2.133.0
constructs&amp;gt;=10.0.0,&amp;lt;11.0.0
python-dotenv==1.0.1
boto3==1.34.71
pytest==6.2.5
moto==5.0.4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Install the necessary CDK libraries for EKS:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirement.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Understanding the AWS CDK Initialization Process
&lt;/h3&gt;

&lt;p&gt;When you initialize a new AWS Cloud Development Kit (CDK) project with &lt;code&gt;cdk init app --language python&lt;/code&gt;, several things happen:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Project Structure Creation: CDK creates a new directory with the name of your project and establishes a standard project structure within it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Initialization of CDK App: A CDK application is initialized within this directory. This app will serve as the container for your CDK stacks and constructs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Generation of Configuration Files and Directories: The command generates several configuration files and directories essential for your project, including:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- `app.py`: The entry point for your CDK application.
- `cdk.json`: Contains configuration for the CDK app, like which command to use for synthesizing CloudFormation templates.
- `requirements.txt`: Lists the Python packages required by your CDK app.
- `setup.py`: A setup script for installing the module (app) and its dependencies.
- A `.env` directory for your virtual environment and a `source.bat` or `source.sh` script (depending on your OS) to activate it.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Virtual Environment Setup: It suggests commands to set up a virtual environment for Python and to install the dependencies listed in &lt;code&gt;requirements.txt&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
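&lt;p&gt;On macOS/Linux, those suggested commands look like this (a sketch; the &lt;code&gt;.venv&lt;/code&gt; path matches what &lt;code&gt;cdk init&lt;/code&gt; generates):&lt;/p&gt;

```shell
# Create and activate a virtual environment for the project
python3 -m venv .venv
. .venv/bin/activate
# Install the dependencies if a requirements file is present
[ -f requirements.txt ] && pip install -r requirements.txt || true
```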

&lt;h3&gt;
  
  
  How to Structure Your CDK Project
&lt;/h3&gt;

&lt;p&gt;Your AWS CDK project should be structured in a way that supports scalability and maintainability:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;App and Stack Files: The &lt;code&gt;app.py&lt;/code&gt; file is where you instantiate your app and stacks. Each stack should be defined in its separate Python file for clarity, e.g., &lt;code&gt;my_stack.py&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Resource Constructs: Individual constructs, representing AWS resources, should be defined within stack files or in separate files if they are custom constructs or if you plan to reuse them across different stacks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Lib Directory: For larger projects, you might want to organize your constructs and stacks in a &lt;code&gt;lib/&lt;/code&gt; directory. Each stack can have its own file within this directory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test Directory: Tests for your CDK constructs and stacks should be placed in a &lt;code&gt;test/&lt;/code&gt; directory. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Assets: Store any assets (like Lambda code or Dockerfiles) in an &lt;code&gt;assets/&lt;/code&gt; or &lt;code&gt;resources/&lt;/code&gt; directory.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
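&lt;p&gt;Putting these conventions together, a small project following this layout might look like the tree below (illustrative only; your file and directory names will differ):&lt;/p&gt;

```text
eks-cdk-nginx/
├── app.py                      # instantiates the app and stacks
├── cdk.json                    # CDK app configuration
├── requirements.txt            # Python dependencies
├── lib/
│   └── eks_cdk_nginx_stack.py  # stack definition
├── tests/
│   └── test_eks_cdk_nginx_stack.py
└── assets/                     # Lambda code, Dockerfiles, etc.
```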

&lt;h3&gt;
  
  
  Creating a Construct
&lt;/h3&gt;

&lt;p&gt;A construct in CDK is a building block of your AWS infrastructure, representing an AWS resource or a group of related resources. Here’s how to create a simple S3 bucket construct within a stack:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Define Your Construct&lt;/strong&gt;: In your stack file (&lt;code&gt;my_stack.py&lt;/code&gt;), import the necessary AWS modules and define your construct:&lt;/p&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aws_cdk&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;aws_s3&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;s3&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aws_cdk&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MyStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Stack&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Construct&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="nf"&gt;super&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MyFirstBucket&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;versioned&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;removal_policy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RemovalPolicy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DESTROY&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Creating a Stack
&lt;/h3&gt;

&lt;p&gt;A stack in CDK represents a collection of AWS resources that you deploy together. The &lt;code&gt;MyStack&lt;/code&gt; class in the example above is already a stack. You instantiate this stack in your &lt;code&gt;app.py&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aws_cdk&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;my_stack&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MyStack&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;App&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="nc"&gt;MyStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MyStack&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synth&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Running &lt;code&gt;cdk synth&lt;/code&gt; and What to Expect
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;cdk synth&lt;/code&gt; command synthesizes your CDK code into an AWS CloudFormation template.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Command&lt;/strong&gt;: Run &lt;code&gt;cdk synth&lt;/code&gt; from the root directory of your CDK project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Output&lt;/strong&gt;: This command outputs a CloudFormation template in YAML format to your terminal. This template describes all the AWS resources you've defined in your CDK code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CloudFormation Template&lt;/strong&gt;: The template can be found in the &lt;code&gt;cdk.out&lt;/code&gt; directory, named after your stack, e.g., &lt;code&gt;MyStack.template.json&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;What to Look For&lt;/strong&gt;: Verify that the resources defined in your CDK stack, such as the S3 bucket in our example, are correctly represented in the CloudFormation template.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This synthesized template is what AWS CloudFormation uses to deploy and manage the defined cloud resources, making &lt;code&gt;cdk synth&lt;/code&gt; an essential step in the development process to validate your infrastructure definitions before deployment.&lt;/p&gt;
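&lt;p&gt;To make the "what to look for" step concrete, here is a standard-library-only sketch of inspecting a synthesized template for the S3 bucket defined earlier. The JSON below is a hand-trimmed stand-in for real &lt;code&gt;cdk synth&lt;/code&gt; output, and the hash-suffixed logical ID is made up for illustration; real templates carry far more metadata.&lt;/p&gt;

```python
import json

# Hand-trimmed stand-in for cdk.out/MyStack.template.json (illustrative)
template_json = """
{
  "Resources": {
    "MyFirstBucketB8884501": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "VersioningConfiguration": {"Status": "Enabled"}
      },
      "UpdateReplacePolicy": "Delete",
      "DeletionPolicy": "Delete"
    }
  }
}
"""

template = json.loads(template_json)

# Collect the logical IDs of all S3 buckets in the template
buckets = [
    logical_id
    for logical_id, resource in template["Resources"].items()
    if resource["Type"] == "AWS::S3::Bucket"
]
print(buckets)
```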
&lt;h2&gt;
  
  
  Step 2: Define Your EKS Cluster in CDK
&lt;/h2&gt;

&lt;p&gt;In the &lt;code&gt;eks_cdk_nginx&lt;/code&gt; directory, modify the &lt;code&gt;eks_cdk_nginx_stack.py&lt;/code&gt; file to define your EKS cluster. The following example creates an EKS cluster and an EC2 instance as the worker node:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aws_cdk&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aws_cdk&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;aws_eks&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;eks&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aws_cdk&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;aws_ec2&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;ec2&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;EksCdkNginxStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Stack&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Construct&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;construct_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;super&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;construct_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Define the VPC
&lt;/span&gt;        &lt;span class="n"&gt;vpc_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vpc-xxxxxxxx&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="n"&gt;vpc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ec2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_lookup&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vpc&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;vpc_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;vpc_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;This is the vpc &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;vpc_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;vpc_subnets&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;subnetType&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ec2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SubnetType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;PUBLIC&lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;
        &lt;span class="c1"&gt;#    vpc_subnets=vpc.select_subnets(subnet_type=ec2.SubnetType.PUBLIC).subnets
&lt;/span&gt;
        &lt;span class="c1"&gt;# create eks admin role
&lt;/span&gt;        &lt;span class="n"&gt;eks_master_role&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;iam&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Role&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;EksMasterRole&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                                   &lt;span class="n"&gt;role_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;EksAdminRole&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                                   &lt;span class="n"&gt;assumed_by&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;iam&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;AccountRootPrincipal&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
                                   &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Define the EKS cluster
&lt;/span&gt;       &lt;span class="n"&gt;cluster&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;eks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Cluster&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Cluster&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                              &lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                              &lt;span class="n"&gt;version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;eks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;KubernetesVersion&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;V1_25&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                              &lt;span class="n"&gt;masters_role&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;eks_master_role&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                              &lt;span class="n"&gt;default_capacity&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                              &lt;span class="n"&gt;vpc_subnets&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;vpc_subnets&lt;/span&gt;
                              &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Step 3: Add NGINX Ingress Using Helm
&lt;/h2&gt;

&lt;p&gt;The AWS CDK’s EKS module allows you to define Helm charts as part of your infrastructure. Extend your stack to include the NGINX ingress controller from Helm:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
        &lt;span class="c1"&gt;# Add NGINX ingress using Helm
&lt;/span&gt;        &lt;span class="n"&gt;eks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;HelmChart&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;NginxIngress&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;cluster&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;chart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ingress-nginx&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;repository&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://kubernetes.github.io/ingress-nginx&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ingress-nginx&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;values&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;helm_values&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Step 4: Deploy Your CDK Stack
&lt;/h2&gt;

&lt;p&gt;Ensure you are in the root of your CDK project and run the following commands to deploy your EKS cluster along with NGINX:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cdk synth
cdk deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This process might take several minutes as it sets up the EKS cluster and deploys the NGINX ingress controller.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Verify the NGINX Deployment
&lt;/h2&gt;

&lt;p&gt;Once the deployment is complete, you can verify the NGINX ingress controller is running by fetching the Helm releases in your cluster:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Update your &lt;code&gt;kubeconfig&lt;/code&gt; (the &lt;code&gt;cdk deploy&lt;/code&gt; output includes a ready-made &lt;code&gt;update-kubeconfig&lt;/code&gt; command containing the generated cluster name):&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws eks update-kubeconfig &lt;span class="nt"&gt;--name&lt;/span&gt; MyCluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="2"&gt;
&lt;li&gt;List Helm releases to see NGINX installed:&lt;/li&gt;
&lt;/ol&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Adding Unit Tests to Your AWS CDK Project
&lt;/h3&gt;

&lt;p&gt;Unit testing is an essential part of the development process, ensuring that your infrastructure as code behaves as expected. AWS CDK projects, being code-based, allow for straightforward unit testing of your infrastructure definitions. This section will guide you through setting up and writing unit tests for your AWS CDK project using Python.&lt;/p&gt;

&lt;h4&gt;
  
  
  Setting Up Your Testing Environment
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Install Testing Libraries&lt;/strong&gt;: To write and run tests, you'll need &lt;code&gt;pytest&lt;/code&gt;, a popular testing framework. The &lt;code&gt;aws_cdk.assertions&lt;/code&gt; module, which provides utilities for asserting CDK-specific conditions, already ships with &lt;code&gt;aws-cdk-lib&lt;/code&gt; in CDK v2, so no extra package is needed. Add &lt;code&gt;pytest&lt;/code&gt; to your &lt;code&gt;requirements.txt&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pytest
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then, install it using pip:&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Organize Test Directory&lt;/strong&gt;: Create a &lt;code&gt;tests&lt;/code&gt; directory in your project root. This is where all your test files will reside. You might structure it further into &lt;code&gt;unit&lt;/code&gt; and &lt;code&gt;integration&lt;/code&gt; tests if needed.&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;tests
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Writing Unit Tests
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Create a Test File&lt;/strong&gt;: Inside the &lt;code&gt;tests&lt;/code&gt; directory, create a Python file for your tests, for example, &lt;code&gt;test_my_stack.py&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Import Testing Modules&lt;/strong&gt;: At the beginning of your test file, import the necessary modules, including the CDK constructs you wish to test, &lt;code&gt;pytest&lt;/code&gt;, and any CDK assertions you need.&lt;/p&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pytest&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aws_cdk&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aws_cdk.assertions&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Template&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;my_cdk_app.my_stack&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MyStack&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Write Test Cases&lt;/strong&gt;: Write functions to test various aspects of your stack. Use the &lt;code&gt;Template.from_stack&lt;/code&gt; function to create a template object from your stack, which you can then assert against.&lt;/p&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;test_s3_bucket_created&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;App&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;stack&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MyStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MyStack&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;template&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Template&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_stack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;stack&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;template&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;has_resource_properties&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AWS::S3::Bucket&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;VersioningConfiguration&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Enabled&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This test checks that your stack creates an S3 bucket with versioning enabled.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Testing Custom Constructs&lt;/strong&gt;: If you have custom constructs, you can test them in isolation by instantiating them within a test stack in your test case, then making assertions about their template representation.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
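&lt;p&gt;Under the hood, &lt;code&gt;Template.from_stack&lt;/code&gt; synthesizes the stack and matches your expected properties against the resulting CloudFormation JSON. The matching idea can be sketched with the standard library alone; the helper below and its hand-written template dict are illustrative, not the real &lt;code&gt;aws_cdk.assertions&lt;/code&gt; implementation:&lt;/p&gt;

```python
def has_resource_properties(template: dict, res_type: str, props: dict) -> bool:
    """Return True if some resource of res_type carries all expected properties."""
    for resource in template.get("Resources", {}).values():
        if resource.get("Type") != res_type:
            continue
        actual = resource.get("Properties", {})
        if all(actual.get(key) == value for key, value in props.items()):
            return True
    return False

# Hand-written stand-in for synthesized output
template = {
    "Resources": {
        "MyFirstBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"VersioningConfiguration": {"Status": "Enabled"}},
        }
    }
}

print(has_resource_properties(
    template, "AWS::S3::Bucket",
    {"VersioningConfiguration": {"Status": "Enabled"}},
))
```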

&lt;h4&gt;
  
  
  Running Your Tests
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Execute your tests by running &lt;code&gt;pytest&lt;/code&gt; in your project's root directory. &lt;code&gt;pytest&lt;/code&gt; will automatically discover and run all test files in the &lt;code&gt;tests&lt;/code&gt; directory.&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pytest
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If your tests pass, &lt;code&gt;pytest&lt;/code&gt; will indicate success. If not, it will provide detailed output about which tests failed and why.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Conclusion on Unit Testing
&lt;/h4&gt;

&lt;p&gt;Unit testing your AWS CDK project is crucial for maintaining high-quality infrastructure code. By testing your stacks and constructs, you ensure that your cloud infrastructure behaves as expected, reducing the likelihood of deployment errors and potential runtime issues. Incorporating these tests into a CI/CD pipeline can further automate your testing and deployment process, leading to more reliable and efficient infrastructure management.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You've successfully automated the deployment of an Amazon EKS cluster and set up NGINX using Helm, all with the AWS Cloud Development Kit (CDK) in Python. This approach not only simplifies the process of deploying and managing Kubernetes resources on AWS but also leverages the power of infrastructure as code for repeatable and consistent deployments.&lt;/p&gt;

&lt;p&gt;Embrace the flexibility and efficiency of automating your cloud infrastructure with CDK, and explore further integrations and optimizations for your Kubernetes deployments on AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrnzh7usqh8wkq0yl7of.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrnzh7usqh8wkq0yl7of.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cdk</category>
      <category>helm</category>
      <category>eks</category>
    </item>
    <item>
      <title>Building Container Images Securely on AWS EKS with Kaniko</title>
      <dc:creator>marocz</dc:creator>
      <pubDate>Thu, 11 Apr 2024 23:00:00 +0000</pubDate>
      <link>https://dev.to/marocz/building-container-images-securely-on-aws-eks-with-kaniko-153n</link>
      <guid>https://dev.to/marocz/building-container-images-securely-on-aws-eks-with-kaniko-153n</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;In the world of Kubernetes, building container images securely and efficiently is a common challenge. This is where Kaniko comes in. Kaniko is a tool that builds container images from a Dockerfile, inside a container or Kubernetes cluster, without requiring a Docker daemon or privileged access. This post will delve into Kaniko's capabilities and provide a guide on setting it up within an AWS Elastic Kubernetes Service (EKS) cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Kaniko?
&lt;/h2&gt;

&lt;p&gt;Kaniko is an open-source tool developed by Google to build container images from a Dockerfile, securely in a Kubernetes cluster. Unlike traditional Docker builds that require privileged root access to perform tasks, Kaniko doesn't need Docker or privileged access, mitigating the security risks associated with container image builds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1c5x4fm5ge4qbqm1he7.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1c5x4fm5ge4qbqm1he7.jpeg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use Kaniko on AWS EKS?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;: Builds container images without a Docker daemon, reducing the attack surface.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility&lt;/strong&gt;: Integrates seamlessly with various CI/CD tools and services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficiency&lt;/strong&gt;: Leverages Kubernetes cluster resources for image builds.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setting Up Kaniko on AWS EKS
&lt;/h2&gt;

&lt;p&gt;Let's walk through setting up Kaniko on an AWS EKS cluster to build and push a container image to Amazon Elastic Container Registry (ECR).&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with access to EKS and ECR.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kubectl&lt;/code&gt; configured to interact with your EKS cluster.&lt;/li&gt;
&lt;li&gt;AWS CLI configured on your machine.&lt;/li&gt;
&lt;li&gt;Docker installed on your machine.&lt;/li&gt;
&lt;li&gt;Credentials configured for Amazon ECR.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 1: Create an ECR Repository
&lt;/h3&gt;

&lt;p&gt;First, create a repository in Amazon ECR where Kaniko will push the built images.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws ecr create-repository &lt;span class="nt"&gt;--repository-name&lt;/span&gt; my-kaniko-example


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Step 2: Configure IAM Permissions
&lt;/h3&gt;

&lt;p&gt;Kaniko needs permissions to push images to ECR. Create an IAM policy that grants the required permissions and attach it to the role associated with your EKS nodes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create an IAM policy&lt;/strong&gt; (kaniko-ecr-policy.json):&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"ecr:GetAuthorizationToken"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"ecr:BatchCheckLayerAvailability"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"ecr:CompleteLayerUpload"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"ecr:InitiateLayerUpload"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"ecr:PutImage"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"ecr:UploadLayerPart"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create the policy&lt;/strong&gt;:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws iam create-policy &lt;span class="nt"&gt;--policy-name&lt;/span&gt; KanikoECRPolicy &lt;span class="nt"&gt;--policy-document&lt;/span&gt; file://kaniko-ecr-policy.json


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Attach the policy to your EKS node role&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After creating the IAM policy that grants Kaniko permissions to push images to Amazon ECR, you need to attach this policy to the IAM role associated with your EKS nodes. This step is essential to grant the Kaniko pod the required AWS permissions.&lt;/p&gt;

&lt;h4&gt;
  
  
  Find Your EKS Node IAM Role
&lt;/h4&gt;

&lt;p&gt;First, identify the IAM role used by your EKS nodes. You can find this information in the Amazon EKS console or by describing your EKS node group via AWS CLI:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws eks describe-nodegroup &lt;span class="nt"&gt;--cluster-name&lt;/span&gt; your-cluster-name &lt;span class="nt"&gt;--nodegroup-name&lt;/span&gt; your-nodegroup-name


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Look for the &lt;code&gt;nodeRole&lt;/code&gt; in the output, which will be the ARN of the IAM role.&lt;/p&gt;
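&lt;p&gt;If you only need the role ARN, a &lt;code&gt;--query&lt;/code&gt; filter trims the output to just that field (the cluster and node group names below are placeholders for your own):&lt;/p&gt;

```shell
# Print only the node role ARN for the given node group
aws eks describe-nodegroup \
  --cluster-name your-cluster-name \
  --nodegroup-name your-nodegroup-name \
  --query 'nodegroup.nodeRole' \
  --output text
```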

&lt;h4&gt;
  
  
  Attach the IAM Policy to the EKS Node Role
&lt;/h4&gt;

&lt;p&gt;Once you have your EKS node IAM role ARN, attach the KanikoECRPolicy to it. You can do this through the AWS Management Console or the AWS CLI.&lt;/p&gt;

&lt;p&gt;Using the AWS CLI:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws iam attach-role-policy &lt;span class="nt"&gt;--role-name&lt;/span&gt; YourEKSNodeRoleName &lt;span class="nt"&gt;--policy-arn&lt;/span&gt; arn:aws:iam::your-account-id:policy/KanikoECRPolicy


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Replace &lt;code&gt;YourEKSNodeRoleName&lt;/code&gt; with the name of your EKS node IAM role (not the ARN) and &lt;code&gt;your-account-id&lt;/code&gt; with your AWS account ID.&lt;/p&gt;

&lt;h3&gt;
  
  
  Verifying the Policy Attachment
&lt;/h3&gt;

&lt;p&gt;Ensure the policy was attached successfully by listing the policies attached to your EKS node role:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws iam list-attached-role-policies &lt;span class="nt"&gt;--role-name&lt;/span&gt; YourEKSNodeRoleName


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should see &lt;code&gt;KanikoECRPolicy&lt;/code&gt; in the list of attached policies.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Prepare Your Kubernetes Cluster
&lt;/h3&gt;

&lt;p&gt;Create a Kubernetes secret to store your ECR credentials, which Kaniko will use to authenticate.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl create secret docker-registry regcred &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--docker-server&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;AWS_REGION&amp;gt;.amazonaws.com &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--docker-username&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;AWS &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--docker-password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;aws ecr get-login-password&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--docker-email&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;YOUR_EMAIL&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Step 4: Deploy Kaniko Pod
&lt;/h3&gt;

&lt;p&gt;Deploy a pod that uses Kaniko to build and push an image to your ECR repository. Define your deployment in a YAML file (&lt;code&gt;kaniko-pod.yaml&lt;/code&gt;):&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kaniko&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kaniko&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gcr.io/kaniko-project/executor:latest&lt;/span&gt;
    &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--dockerfile=Dockerfile"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
           &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--context=git://github.com/&amp;lt;your-repo&amp;gt;.git#refs/heads/master"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
           &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--destination=&amp;lt;AWS_ACCOUNT_ID&amp;gt;.dkr.ecr.&amp;lt;AWS_REGION&amp;gt;.amazonaws.com/my-kaniko-example:latest"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kaniko-secret&lt;/span&gt;
        &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/kaniko/.docker&lt;/span&gt;
  &lt;span class="na"&gt;restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Never&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kaniko-secret&lt;/span&gt;
      &lt;span class="na"&gt;secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;regcred&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Replace placeholders with your specific details. Deploy the pod:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; kaniko-pod.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Step 5: Verify the Image Build and Push
&lt;/h3&gt;

&lt;p&gt;Monitor the pod's logs to ensure the build and push process completes successfully:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl logs kaniko


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Additional Method: Running Kaniko Inside a Docker Container
&lt;/h3&gt;

&lt;p&gt;For local development or in CI/CD pipelines where Kubernetes isn't available, you can use Docker to run Kaniko using the Executor image from GCR. This approach runs the same executor image outside Kubernetes, allowing you to build and push container images to a registry like Amazon ECR without needing a full Kubernetes setup.&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 1: Pull the Kaniko Executor Image
&lt;/h4&gt;

&lt;p&gt;First, pull the Kaniko Executor image from Google Container Registry:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker pull gcr.io/kaniko-project/executor:latest


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  Step 2: Prepare Your Build Context and Dockerfile
&lt;/h4&gt;

&lt;p&gt;Ensure your Dockerfile and any necessary files for the build (the build context) are located in a specific directory. This directory will be mounted into the Docker container running Kaniko.&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 3: Run Kaniko in Docker
&lt;/h4&gt;

&lt;p&gt;To build and push an image using Kaniko inside a Docker container, use the following command, adjusting paths and variables for your environment:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; /path/to/your/build/context:/workspace &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; /path/to/.docker:/kaniko/.docker &lt;span class="se"&gt;\&lt;/span&gt;
  gcr.io/kaniko-project/executor:latest &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dockerfile&lt;/span&gt; /workspace/Dockerfile &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--context&lt;/span&gt; &lt;span class="nb"&gt;dir&lt;/span&gt;:///workspace/ &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--destination&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;AWS_ACCOUNT_ID&amp;gt;.dkr.ecr.&amp;lt;AWS_REGION&amp;gt;.amazonaws.com/my-kaniko-example:latest


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;/path/to/your/build/context&lt;/code&gt; is the local path to your Dockerfile and any files it needs.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/path/to/.docker&lt;/code&gt; should contain your &lt;code&gt;config.json&lt;/code&gt; with the ECR credentials.&lt;/li&gt;
&lt;li&gt;Adjust the &lt;code&gt;--destination&lt;/code&gt; flag to point to your target ECR repository.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Step 4: Authenticate Docker with ECR
&lt;/h4&gt;

&lt;p&gt;Ensure Docker is authenticated with Amazon ECR to allow pushing the built image. You can authenticate Docker using the AWS CLI:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws ecr get-login-password &lt;span class="nt"&gt;--region&lt;/span&gt; &amp;lt;AWS_REGION&amp;gt; | docker login &lt;span class="nt"&gt;--username&lt;/span&gt; AWS &lt;span class="nt"&gt;--password-stdin&lt;/span&gt; &amp;lt;AWS_ACCOUNT_ID&amp;gt;.dkr.ecr.&amp;lt;AWS_REGION&amp;gt;.amazonaws.com

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  Step 5: Verify the Image in ECR
&lt;/h4&gt;

&lt;p&gt;After the build completes, check your Amazon ECR repository to verify that the image has been pushed successfully.&lt;/p&gt;
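&lt;p&gt;One way to check from the command line is to list the image tags in the repository created in Step 1 (the repository name below matches that step; adjust if yours differs):&lt;/p&gt;

```shell
# List the tags of images pushed to the repository
aws ecr describe-images \
  --repository-name my-kaniko-example \
  --query 'imageDetails[].imageTags' \
  --output json
```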

&lt;h2&gt;
  
  
  Combining the Best of Both Worlds
&lt;/h2&gt;

&lt;p&gt;Running Kaniko within a Docker container offers a versatile solution for building container images, especially when Kubernetes isn't part of your immediate toolchain. This method provides a bridge between local development environments and cloud-native technologies, allowing for a seamless transition to Kubernetes when ready.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Kaniko emerges as a robust tool for building container images securely, whether in a Kubernetes cluster with AWS EKS or locally using Docker. Its ability to run without a Docker daemon makes it a safer and more compliant choice for CI/CD pipelines, fitting well into various development and deployment workflows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2yllxfqtkplw05jxy87.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2yllxfqtkplw05jxy87.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Mastering Kubernetes Admission Controllers: Setup and Use Cases</title>
      <dc:creator>marocz</dc:creator>
      <pubDate>Sun, 07 Apr 2024 13:20:15 +0000</pubDate>
      <link>https://dev.to/marocz/mastering-kubernetes-admission-controllers-setup-and-use-cases-2n1</link>
      <guid>https://dev.to/marocz/mastering-kubernetes-admission-controllers-setup-and-use-cases-2n1</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Kubernetes Admission Controllers sit in the Kubernetes API server's request pipeline, governing the objects being created, modified, or deleted. These controllers act as gatekeepers, enforcing policies and ensuring that the cluster's state remains consistent and secure. This guide explores what Admission Controllers do, why they matter, and how to set up a basic one for your cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Kubernetes Admission Controllers?
&lt;/h2&gt;

&lt;p&gt;Admission Controllers are plugins that intercept requests to the Kubernetes API server before the persistence of the object but after the request is authenticated and authorized. They can mutate (modify) or validate requests, offering a powerful mechanism to introduce custom logic and enforce policies across all Kubernetes resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69mmfyj3myd4ivvdzz7u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69mmfyj3myd4ivvdzz7u.png" alt="Image description" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Types of Admission Controllers
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Validating Admission Webhooks&lt;/strong&gt;: These inspect the requests and determine whether they should be allowed based on specific criteria.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mutating Admission Webhooks&lt;/strong&gt;: They can modify requests (e.g., adding labels or annotations) before they are processed by the validating webhooks.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why Use Admission Controllers?
&lt;/h2&gt;

&lt;p&gt;Admission Controllers enable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Security Enhancements&lt;/strong&gt;: Enforcing best practices and security policies, like preventing privileged containers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Management&lt;/strong&gt;: Ensuring that resource requests, limits, and namespaces follow specific rules.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance and Governance&lt;/strong&gt;: Applying organizational policies and compliance requirements automatically.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setting Up a Kubernetes Admission Controller
&lt;/h2&gt;

&lt;p&gt;Let’s set up a simple Validating Admission Webhook to understand the process. We’ll create a webhook to validate Pods, ensuring they have a specific label before being admitted to the cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Deploy a Webhook Server
&lt;/h3&gt;

&lt;p&gt;First, you need a server that Kubernetes can call to validate objects. For this example, let’s assume you have a server running with an endpoint &lt;code&gt;/validate&lt;/code&gt; that validates if incoming Pods have the label &lt;code&gt;secure: "true"&lt;/code&gt;.&lt;/p&gt;
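&lt;p&gt;As a rough illustration of what such a server might look like, here is a minimal sketch in Python using only the standard library. It is not the implementation assumed by this guide, just one possible shape: the &lt;code&gt;review_response&lt;/code&gt; and &lt;code&gt;serve&lt;/code&gt; names and the port are illustrative, and TLS wrapping (required by the API server) is left as a comment.&lt;/p&gt;

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def review_response(review):
    """Build an AdmissionReview response allowing only Pods labeled secure: "true"."""
    request = review.get("request", {})
    labels = request.get("object", {}).get("metadata", {}).get("labels", {})
    allowed = labels.get("secure") == "true"
    response = {"uid": request.get("uid", ""), "allowed": allowed}
    if not allowed:
        response["status"] = {"message": "Pod must carry the label secure: true"}
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    }


class ValidateHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the AdmissionReview sent by the API server and answer with
        # an allow/deny decision.
        length = int(self.headers.get("Content-Length", 0))
        review = json.loads(self.rfile.read(length))
        payload = json.dumps(review_response(review)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)


def serve(port=8443):
    # TLS termination is omitted here; wrap the server socket with
    # ssl.SSLContext using the tls.crt/tls.key generated in Step 2
    # before exposing this endpoint to the API server.
    HTTPServer(("", port), ValidateHandler).serve_forever()
```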

&lt;h3&gt;
  
  
  Step 2: Create a TLS Certificate
&lt;/h3&gt;

&lt;p&gt;Admission Webhooks require HTTPS endpoints with a valid TLS certificate. The Kubernetes API server must trust the signing CA; for a self-signed certificate, the certificate itself is supplied as the &lt;code&gt;caBundle&lt;/code&gt; in the webhook configuration (see Step 4).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Generate a self-signed certificate and key&lt;/span&gt;
openssl req &lt;span class="nt"&gt;-x509&lt;/span&gt; &lt;span class="nt"&gt;-newkey&lt;/span&gt; rsa:4096 &lt;span class="nt"&gt;-sha256&lt;/span&gt; &lt;span class="nt"&gt;-days&lt;/span&gt; 3650 &lt;span class="nt"&gt;-nodes&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-keyout&lt;/span&gt; tls.key &lt;span class="nt"&gt;-out&lt;/span&gt; tls.crt &lt;span class="nt"&gt;-subj&lt;/span&gt; &lt;span class="s2"&gt;"/CN=admission-controller.default.svc"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Create a Kubernetes Secret
&lt;/h3&gt;

&lt;p&gt;Store the generated certificate and key as a secret in your Kubernetes cluster. Note that the &lt;code&gt;$(...)&lt;/code&gt; expressions below are placeholders: substitute the base64-encoded file contents before applying the manifest, or create the secret directly with &lt;code&gt;kubectl create secret tls admission-tls --cert=tls.crt --key=tls.key&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;admission-tls&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tls.crt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(base64 -w0 &amp;lt; tls.crt)&lt;/span&gt;
  &lt;span class="na"&gt;tls.key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(base64 -w0 &amp;lt; tls.key)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Register the Admission Webhook
&lt;/h3&gt;

&lt;p&gt;Define a &lt;code&gt;ValidatingWebhookConfiguration&lt;/code&gt; that points to your webhook server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;admissionregistration.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ValidatingWebhookConfiguration&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example-validating-webhook&lt;/span&gt;
&lt;span class="na"&gt;webhooks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example.validator.local&lt;/span&gt;
    &lt;span class="na"&gt;clientConfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;admission-controller&lt;/span&gt;
        &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/validate"&lt;/span&gt;
      &lt;span class="na"&gt;caBundle&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(cat tls.crt | base64 | tr -d '\n')&lt;/span&gt;
    &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;operations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CREATE"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;apiGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;apiVersions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;v1"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pods"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;admissionReviewVersions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;v1"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;sideEffects&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;None&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;caBundle&lt;/code&gt; with the base64 encoded content of your &lt;code&gt;tls.crt&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Testing the Admission Controller
&lt;/h3&gt;

&lt;p&gt;Deploy a Pod to your cluster and observe if it gets admitted based on the presence of the &lt;code&gt;secure: "true"&lt;/code&gt; label.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-pod&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;secure&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:1.14.2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
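&lt;p&gt;To confirm the webhook also rejects non-compliant workloads, try the negative case: a Pod without the &lt;code&gt;secure&lt;/code&gt; label should be denied by the API server with the webhook's error message (the Pod name below is illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-unlabeled
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
```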



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Kubernetes Admission Controllers are a powerful feature for enhancing cluster security, enforcing policies, and ensuring compliance across all Kubernetes resources. By setting up your Admission Controller, you can take control of what gets deployed in your cluster, making your infrastructure more secure and reliable. Dive deeper into specific controllers and explore how they can help meet your organizational needs.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Comprehensive Guide: Deploying a Java App to Google Cloud Functions with gcloud</title>
      <dc:creator>marocz</dc:creator>
      <pubDate>Fri, 19 Jan 2024 23:00:00 +0000</pubDate>
      <link>https://dev.to/marocz/comprehensive-guide-deploying-a-java-app-to-google-cloud-functions-with-gcloud-2i6o</link>
      <guid>https://dev.to/marocz/comprehensive-guide-deploying-a-java-app-to-google-cloud-functions-with-gcloud-2i6o</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Google Cloud Functions offers a serverless execution environment, perfect for running lightweight Java applications. This guide provides an end-to-end walkthrough on deploying a Java app to Google Cloud Functions, including detailed gcloud setup and function configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A Google Cloud account with billing enabled.&lt;/li&gt;
&lt;li&gt;Basic knowledge of Java and Maven.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Installing and Configuring gcloud
&lt;/h2&gt;

&lt;p&gt;Before deploying your Java application, set up the Google Cloud SDK (gcloud) on your local machine.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing gcloud
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Download the Google Cloud SDK installer&lt;/strong&gt; for your operating system from the &lt;a href="https://cloud.google.com/sdk/docs/install" rel="noopener noreferrer"&gt;Google Cloud SDK page&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Follow the installation instructions&lt;/strong&gt; specific to your OS.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Authenticating gcloud
&lt;/h3&gt;

&lt;p&gt;After installation, authenticate gcloud and configure your project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud auth login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command opens a new browser window asking you to log in with your Google credentials.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting the Default Project
&lt;/h3&gt;

&lt;p&gt;Set your default project in gcloud:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud config set project YOUR_PROJECT_ID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;YOUR_PROJECT_ID&lt;/code&gt; with your actual Google Cloud project ID.&lt;/p&gt;
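
&lt;p&gt;As a quick sanity check, you can inspect what gcloud will use by default. The region below is only an example; pick the one you actually deploy to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud auth list
gcloud config set functions/region europe-west1
gcloud config list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;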

&lt;h2&gt;
  
  
  Step 2: Setting Up Your Java Application
&lt;/h2&gt;

&lt;p&gt;Create a basic Java function to deploy. This example uses Maven for building the application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9yheabyjgoaw4dsaqbof.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9yheabyjgoaw4dsaqbof.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating a Simple Java Function
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Create a new Maven project.&lt;/li&gt;
&lt;li&gt;Add Google Cloud Functions dependency in your pom.xml:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
  &amp;lt;groupId&amp;gt;com.google.cloud.functions&amp;lt;/groupId&amp;gt;
  &amp;lt;artifactId&amp;gt;functions-framework-api&amp;lt;/artifactId&amp;gt;
  &amp;lt;version&amp;gt;1.0.4&amp;lt;/version&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Create a Java class implementing HttpFunction:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import com.google.cloud.functions.HttpFunction;
import com.google.cloud.functions.HttpRequest;
import com.google.cloud.functions.HttpResponse;
import java.io.BufferedWriter;

public class HelloWorld implements HttpFunction {
  @Override
  public void service(HttpRequest request, HttpResponse response)
      throws Exception {
    BufferedWriter writer = response.getWriter();
    writer.write("Hello World");
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


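&lt;p&gt;Before deploying, you can exercise the function locally with the Functions Framework. A sketch using the optional &lt;code&gt;function-maven-plugin&lt;/code&gt; follows; the version shown is illustrative, so check the plugin's documentation for the current release:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;plugin&amp;gt;
  &amp;lt;groupId&amp;gt;com.google.cloud.functions&amp;lt;/groupId&amp;gt;
  &amp;lt;artifactId&amp;gt;function-maven-plugin&amp;lt;/artifactId&amp;gt;
  &amp;lt;version&amp;gt;0.9.1&amp;lt;/version&amp;gt;
  &amp;lt;configuration&amp;gt;
    &amp;lt;functionTarget&amp;gt;HelloWorld&amp;lt;/functionTarget&amp;gt;
  &amp;lt;/configuration&amp;gt;
&amp;lt;/plugin&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With the plugin in place, &lt;code&gt;mvn function:run&lt;/code&gt; serves the function on &lt;code&gt;localhost:8080&lt;/code&gt;, where you can hit it with &lt;code&gt;curl&lt;/code&gt;.&lt;/p&gt;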

&lt;h2&gt;
  
  
  Step 3: Configuring Your Cloud Function
&lt;/h2&gt;

&lt;p&gt;Before deploying, you need to configure your Cloud Function in the Java project.&lt;/p&gt;

&lt;h3&gt;
  
  
  Defining the Function Handler
&lt;/h3&gt;

&lt;p&gt;The entry point is the &lt;code&gt;HelloWorld&lt;/code&gt; class. If the class lives in a package, reference it by its fully qualified name (for example, &lt;code&gt;com.example.HelloWorld&lt;/code&gt;) in the deployment step.&lt;/p&gt;

&lt;h3&gt;
  
  
  Building the Deployment Artifact
&lt;/h3&gt;

&lt;p&gt;Build your application into a deployable JAR file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mvn clean package
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Deploying to Google Cloud Functions
&lt;/h2&gt;

&lt;p&gt;Now, deploy your function using the gcloud CLI.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploying the Function
&lt;/h3&gt;

&lt;p&gt;Deploy the function with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud functions deploy hello-world-function \
  --entry-point HelloWorld \
  --runtime java11 \
  --trigger-http \
  --memory 512MB \
  --allow-unauthenticated
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;code&gt;hello-world-function&lt;/code&gt; is your function's name.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--entry-point&lt;/code&gt; specifies the class implementing the function.&lt;/li&gt;
&lt;li&gt;Other flags configure the runtime, trigger, memory, and authentication.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Verifying the Deployment
&lt;/h3&gt;

&lt;p&gt;After deployment, gcloud will output the URL of your function. Test it by sending a request to this URL.&lt;/p&gt;
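
&lt;p&gt;For example, you can fetch the trigger URL and call it from the command line (this assumes a first-generation HTTP function, as deployed above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;URL=$(gcloud functions describe hello-world-function --format='value(httpsTrigger.url)')
curl "$URL"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;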

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You've now successfully deployed a Java application to Google Cloud Functions using gcloud. This serverless solution is perfect for applications that require scalability and minimal infrastructure management. Experiment with more complex functions and integrate them with other Google Cloud services for advanced use cases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuatl4bww02zouowbbqsl.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuatl4bww02zouowbbqsl.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>gcloud</category>
      <category>cloudfunctions</category>
      <category>serverless</category>
      <category>java</category>
    </item>
    <item>
      <title>Comprehensive Guide: Setting Up Cert-Manager on EKS, GKE, and AKS Using Terraform</title>
      <dc:creator>marocz</dc:creator>
      <pubDate>Wed, 17 Jan 2024 23:00:00 +0000</pubDate>
      <link>https://dev.to/marocz/comprehensive-guide-setting-up-cert-manager-on-eks-gke-and-aks-using-terraform-2hce</link>
      <guid>https://dev.to/marocz/comprehensive-guide-setting-up-cert-manager-on-eks-gke-and-aks-using-terraform-2hce</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Managing SSL/TLS certificates in a Kubernetes environment can be challenging. Cert-manager simplifies this by automating the process of obtaining, renewing, and using those certificates. This post will guide you through setting up cert-manager on three major Kubernetes services: Amazon EKS, Google GKE, and Azure AKS using Terraform. &lt;/p&gt;

&lt;h2&gt;
  
  
  What is Cert-Manager?
&lt;/h2&gt;

&lt;p&gt;Cert-manager is a Kubernetes add-on that automates the management and issuance of TLS certificates from various issuing sources such as Let’s Encrypt. It ensures certificates are valid and up to date, renewing them before they expire.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Use Cert-Manager with Terraform?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automation&lt;/strong&gt;: Terraform automates the deployment and configuration of cert-manager across different cloud providers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistency&lt;/strong&gt;: Ensures a consistent setup across various Kubernetes environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure as Code&lt;/strong&gt;: Leverage the benefits of defining your Kubernetes resources and cert-manager configuration as code.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step-by-Step Setup
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Common Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Terraform installed on your local machine.&lt;/li&gt;
&lt;li&gt;Access to an existing Kubernetes cluster on EKS, GKE, or AKS.&lt;/li&gt;
&lt;li&gt;kubectl installed and configured for cluster access.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 1: Define the Terraform Variable for the Issuer Email
&lt;/h3&gt;

&lt;p&gt;Add the following variable to your Terraform configuration. This variable will be used for the Cert-Manager cluster issuer's email.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"ISSUER_EMAIL"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"cert manager cluster issuer email"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
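
&lt;p&gt;The value can then be supplied through a &lt;code&gt;terraform.tfvars&lt;/code&gt; file or a &lt;code&gt;-var&lt;/code&gt; flag; the address below is a placeholder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ISSUER_EMAIL = "admin@example.com"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;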



&lt;h3&gt;
  
  
  Step 2: Deploy Cert-Manager Using Helm
&lt;/h3&gt;

&lt;p&gt;Utilize the helm_release resource to deploy Cert-Manager into your EKS cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "helm_release" "cert-manager" {
  name             = "cert-manager"
  repository       = "https://charts.jetstack.io"
  chart            = "cert-manager"
  version          = "v1.12.4"
  create_namespace = true
  namespace        = "cert-manager"
  cleanup_on_fail  = true

  set {
    name  = "installCRDs"
    value = true
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
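
&lt;p&gt;After applying, it is worth confirming that the cert-manager pods are running and its CRDs were installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods -n cert-manager
kubectl get crds | grep cert-manager.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;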



&lt;h3&gt;
  
  
  Step 3: Create a ClusterIssuer Resource
&lt;/h3&gt;

&lt;p&gt;After deploying Cert-Manager, define a kubernetes_manifest resource for the ClusterIssuer, using the email variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_manifest" "clusterissuer_letsencrypt_prod" {
  depends_on = [
    helm_release.cert-manager
  ]
  manifest = {
    "apiVersion" = "cert-manager.io/v1"
    "kind" = "ClusterIssuer"
    "metadata" = {
      "name" = "letsencrypt-prod"
    }
    "spec" = {
      "acme" = {
        "email" = var.ISSUER_EMAIL
        "privateKeySecretRef" = {
          "name" = "letsencrypt-prod"
        }
        "server" = "https://acme-v02.api.letsencrypt.org/directory"
        "solvers" = [
          {
            "http01" = {
              "ingress" = {
                "class" = "nginx"
              }
            }
          }
        ]
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
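
&lt;p&gt;Once the ClusterIssuer is ready, certificates can be requested declaratively. For instance, an Ingress annotated as below (host, service, and secret names are placeholders) will have its TLS certificate issued and renewed automatically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: my-app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;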



&lt;h3&gt;
  
  
  Setting Up Cert-Manager on AKS, EKS, and GKE Using Terraform
&lt;/h3&gt;

&lt;p&gt;The steps for setting up Cert-Manager on Azure AKS, Amazon EKS, or Google GKE are similar to those above; the main difference is the cloud-specific provider configuration and credentials. Ensure you have the appropriate provider configured in your Terraform setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cgYqy4dg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ajqg2xgmksbkodjri2nx.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cgYqy4dg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ajqg2xgmksbkodjri2nx.jpg" alt="Image description" width="800" height="549"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1: Define the Terraform Variable
&lt;/h4&gt;

&lt;p&gt;Ensure the ISSUER_EMAIL variable is present in your Azure Terraform configuration.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 2: Deploy Cert-Manager and Create a ClusterIssuer
&lt;/h4&gt;

&lt;p&gt;Follow the same steps as in the EKS setup. The Terraform code remains largely the same for deploying Cert-Manager and creating a ClusterIssuer in an AKS environment.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3: Setting Up Providers and Backend
&lt;/h4&gt;

&lt;p&gt;Before deploying Cert-Manager, configure your Terraform providers and backend. This configuration is crucial for managing the state of your resources and interacting with the cloud services.&lt;/p&gt;

&lt;h4&gt;
  
  
  For Google Cloud (GCP):
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;backend&lt;/span&gt; &lt;span class="s2"&gt;"gcs"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;bucket&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"your-bucket-name"&lt;/span&gt;
    &lt;span class="nx"&gt;prefix&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform/state"&lt;/span&gt;
    &lt;span class="nx"&gt;credentials&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"path/to/credentials.json"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;google&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/google"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;gt;= 4.52.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nx"&gt;kubernetes&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/kubernetes"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"= 2.17.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nx"&gt;helm&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/helm"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"= 2.8.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"google"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;project&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PROJECT_ID&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;REGION&lt;/span&gt;
  &lt;span class="nx"&gt;credentials&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;GCP_CRED&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"google-beta"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;project&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PROJECT_ID&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;REGION&lt;/span&gt;
  &lt;span class="nx"&gt;credentials&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;GCP_CRED&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"kubernetes"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;config_path&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~/.kube/config"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"helm"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;kubernetes&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;config_path&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~/.kube/config"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
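
&lt;p&gt;With the providers and backend in place, the usual Terraform workflow applies (the email below is a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init
terraform plan -var="ISSUER_EMAIL=admin@example.com"
terraform apply -var="ISSUER_EMAIL=admin@example.com"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;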



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;With Cert-Manager now set up in your EKS, GKE, or AKS clusters, you have automated the management of TLS certificates, ensuring secure communications within your Kubernetes environments. The power of Terraform allows you to replicate this setup across different environments and cloud providers, maintaining consistency and efficiency in your infrastructure management.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ljja_9xr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/grb31a8z1ybv8xu6628e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ljja_9xr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/grb31a8z1ybv8xu6628e.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>terraform</category>
      <category>certmanager</category>
      <category>devops</category>
    </item>
    <item>
      <title>Integrating Trivy with GitLab CI, Azure DevOps, and GitHub Actions for Enhanced Security</title>
      <dc:creator>marocz</dc:creator>
      <pubDate>Sun, 31 Dec 2023 23:00:00 +0000</pubDate>
      <link>https://dev.to/marocz/integrating-trivy-with-gitlab-ci-azure-devops-and-github-actions-for-enhanced-security-7jo</link>
      <guid>https://dev.to/marocz/integrating-trivy-with-gitlab-ci-azure-devops-and-github-actions-for-enhanced-security-7jo</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the world of continuous integration and deployment (CI/CD), security is paramount. Trivy, a simple and comprehensive vulnerability scanner, is a key tool for scanning your applications and infrastructure for security issues. In this post, I'll discuss how to integrate Trivy into GitLab CI, Azure DevOps, and GitHub Actions, enhancing the security of your CI/CD pipelines.&lt;/p&gt;

&lt;h2&gt;
  
  
  Trivy: A Brief Overview
&lt;/h2&gt;

&lt;p&gt;Trivy is an open-source vulnerability scanner for container images and filesystems. It's easy to integrate into CI/CD pipelines and provides comprehensive vulnerability detection.&lt;/p&gt;

&lt;p&gt;Trivy is a comprehensive and easy-to-use vulnerability scanner designed for modern CI/CD pipelines. It specializes in scanning container images and filesystems for security vulnerabilities. Here are some key features that make Trivy stand out:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Wide Range of Vulnerability Detections&lt;/strong&gt;: Trivy can detect vulnerabilities from various sources, including OS packages (Alpine, Red Hat, etc.) and application dependencies (NPM, RubyGems, etc.).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Simple Installation and Operation&lt;/strong&gt;: Unlike other scanners that require pre-requisites or complex setup, Trivy is easy to install and can be run with a single command, making it ideal for integration into CI/CD pipelines.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;High Accuracy&lt;/strong&gt;: Trivy minimizes false positives and negatives, providing reliable and accurate scanning results. It regularly updates its vulnerability database to ensure it can detect the latest known vulnerabilities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DevSecOps Friendly&lt;/strong&gt;: Trivy fits perfectly in the DevSecOps model, allowing developers and security teams to work together. Its integration into CI/CD pipelines ensures that security is a shared responsibility and part of the daily workflow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Comprehensive Reports&lt;/strong&gt;: Trivy generates detailed and understandable reports, making it easier for developers to identify and address vulnerabilities.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Integrating Trivy with GitLab CI
&lt;/h2&gt;

&lt;p&gt;GitLab CI/CD is a powerful platform for automating your software development process. To integrate Trivy with GitLab CI, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Create a &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; File&lt;/strong&gt;&lt;br&gt;
Start by creating a &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file in your repository. This file defines your CI pipeline.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Add Trivy Scan Job&lt;/strong&gt;&lt;br&gt;
Within the &lt;code&gt;.gitlab-ci.yml&lt;/code&gt;, define a job for Trivy scanning:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   trivy_scan:
     image: docker:latest
     services:
       - docker:dind
     script:
       - docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v $(pwd):/root aquasec/trivy:latest image &amp;lt;your_image_name&amp;gt;
     only:
       - master

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;your_image_name&amp;gt;&lt;/code&gt; with the name of the Docker image you want to scan.&lt;/p&gt;
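
&lt;p&gt;By default the job above only reports findings. To make the pipeline fail on serious vulnerabilities, you can add Trivy's &lt;code&gt;--exit-code&lt;/code&gt; and &lt;code&gt;--severity&lt;/code&gt; flags, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run --rm -v /var/run/docker.sock:/var/run/docker.sock aquasec/trivy:latest image --exit-code 1 --severity HIGH,CRITICAL &amp;lt;your_image_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;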

&lt;h2&gt;
  
  
  Integrating Trivy with Azure DevOps
&lt;/h2&gt;

&lt;p&gt;For Azure DevOps users, integrating Trivy into your pipelines is straightforward.&lt;/p&gt;

&lt;h3&gt;
  
  
  Edit Your Azure Pipeline
&lt;/h3&gt;

&lt;p&gt;In your Azure DevOps project, edit your pipeline YAML file.&lt;/p&gt;

&lt;h3&gt;
  
  
  Add Trivy Task
&lt;/h3&gt;

&lt;p&gt;Add the following task to your pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- script: |
    docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v $(System.DefaultWorkingDirectory):/root aquasec/trivy:latest image &amp;lt;your_image_name&amp;gt;
  displayName: 'Run Trivy vulnerability scanner'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, replace &lt;code&gt;&amp;lt;your_image_name&amp;gt;&lt;/code&gt; with your Docker image name.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integrating Trivy with GitHub Actions
&lt;/h2&gt;

&lt;p&gt;GitHub Actions makes it easy to automate all your software workflows. To add Trivy scanning to a GitHub Actions workflow:&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a Workflow File
&lt;/h3&gt;

&lt;p&gt;In your repository, create a new file under .github/workflows/ (e.g., trivy-scan.yml).&lt;/p&gt;

&lt;h3&gt;
  
  
  Define the Trivy Scan Workflow
&lt;/h3&gt;

&lt;p&gt;Use the following template for your workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Trivy Scan

on:
  push:
    branches: [ master ]

jobs:
  trivy_scan:
    runs-on: ubuntu-latest

    steps:
    - name: Check out code
      uses: actions/checkout@v2

    - name: Run Trivy vulnerability scanner
      run: |
        docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v $(pwd):/root aquasec/trivy:latest image &amp;lt;your_image_name&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Modify &lt;code&gt;&amp;lt;your_image_name&amp;gt;&lt;/code&gt; to match your Docker image.&lt;/p&gt;
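
&lt;p&gt;Alternatively, Aqua Security publishes an official &lt;code&gt;trivy-action&lt;/code&gt; for GitHub Actions, which avoids the Docker-in-Docker invocation. A sketch of the scan step follows; in practice, pin a released version rather than &lt;code&gt;@master&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    - name: Run Trivy vulnerability scanner
      uses: aquasecurity/trivy-action@master
      with:
        image-ref: &amp;lt;your_image_name&amp;gt;
        severity: HIGH,CRITICAL
        exit-code: '1'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;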

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Integrating Trivy into your CI/CD pipelines is a crucial step in identifying and mitigating vulnerabilities early in the development process. Whether you're using GitLab CI, Azure DevOps, or GitHub Actions, adding Trivy ensures that your deployments are more secure and reliable. Stay vigilant and proactive in your approach to software security!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--t3Kqxu_H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jemto9oki4j0aral6ls2.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--t3Kqxu_H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jemto9oki4j0aral6ls2.jpeg" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cicd</category>
      <category>trivy</category>
      <category>security</category>
    </item>
    <item>
      <title>In-Depth Guide: Setting Up a NAT Gateway in AWS Using CloudFormation</title>
      <dc:creator>marocz</dc:creator>
      <pubDate>Thu, 28 Dec 2023 16:26:44 +0000</pubDate>
      <link>https://dev.to/marocz/in-depth-guide-setting-up-a-nat-gateway-in-aws-using-cloudformation-5c54</link>
      <guid>https://dev.to/marocz/in-depth-guide-setting-up-a-nat-gateway-in-aws-using-cloudformation-5c54</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Managing network traffic and ensuring secure internet access for resources in AWS is a critical aspect of cloud architecture. A Network Address Translation (NAT) Gateway plays a pivotal role in this. In this comprehensive guide, we'll explore what a NAT Gateway is, its features, and step-by-step instructions on setting it up in AWS using CloudFormation.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a NAT Gateway?
&lt;/h2&gt;

&lt;p&gt;A NAT (Network Address Translation) Gateway in AWS allows resources within a private subnet to access the internet or other AWS services, while preventing the Internet from initiating a connection with those resources. It's used to provide internet traffic to EC2 instances in a private subnet in a secure manner.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features of NAT Gateway
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;: It allows instances in a private subnet to initiate outbound IPv4 traffic to the internet, while not allowing inbound traffic from the internet.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High Availability&lt;/strong&gt;: AWS NAT Gateway is designed to be highly available within an Availability Zone.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bandwidth Scaling&lt;/strong&gt;: Automatically scales its bandwidth up to 45 Gbps without any manual intervention.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No Need for Patching&lt;/strong&gt;: Being a managed service, it does not require any patch management.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account&lt;/li&gt;
&lt;li&gt;Basic knowledge of AWS VPC, subnets, and CloudFormation&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step-by-Step Setup Using CloudFormation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Understanding the Architecture
&lt;/h3&gt;

&lt;p&gt;The architecture involves a VPC with both public and private subnets. The NAT Gateway is placed in the public subnet, providing outbound internet access to instances in the private subnet.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Writing the CloudFormation Template
&lt;/h3&gt;

&lt;p&gt;Create a file named &lt;code&gt;nat-gateway.yaml&lt;/code&gt;. This CloudFormation script creates the necessary components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;VPC (MyVPC)&lt;/strong&gt;: This acts as the networking backbone.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subnets (PublicSubnet and PrivateSubnet)&lt;/strong&gt;: For segregating resources. The NAT Gateway resides in the public subnet.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internet Gateway (InternetGateway)&lt;/strong&gt;: To provide access to the internet for the public subnet.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Elastic IP (NatGatewayEIP)&lt;/strong&gt;: A static IPv4 address used by the NAT Gateway for sending traffic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NAT Gateway (NatGateway)&lt;/strong&gt;: The managed NAT service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Route Tables and Associations&lt;/strong&gt;: To route traffic appropriately from the private subnet to the NAT Gateway and from the public subnet to the internet.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSTemplateFormatVersion: '2010-09-09'
Description: 'CloudFormation Template for NAT Gateway Setup'

Resources:
  MyVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true

  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVPC
      CidrBlock: 10.0.1.0/24
      MapPublicIpOnLaunch: true

  PrivateSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVPC
      CidrBlock: 10.0.2.0/24

  InternetGateway:
    Type: AWS::EC2::InternetGateway

  AttachGateway:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref MyVPC
      InternetGatewayId: !Ref InternetGateway

  NatGatewayEIP:
    Type: AWS::EC2::EIP
    DependsOn: AttachGateway
    Properties:
      Domain: vpc

  NatGateway:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt NatGatewayEIP.AllocationId
      SubnetId: !Ref PublicSubnet

  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref MyVPC

  PrivateRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref MyVPC

  PublicRoute:
    Type: AWS::EC2::Route
    DependsOn: AttachGateway
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway

  PrivateRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway

  AssociatePublicSubnet:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet
      RouteTableId: !Ref PublicRouteTable

  AssociatePrivateSubnet:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PrivateSubnet
      RouteTableId: !Ref PrivateRouteTable
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Deploying the Template
&lt;/h3&gt;

&lt;p&gt;To deploy this template, navigate to the AWS CloudFormation console, choose 'Create stack', and upload the nat-gateway.yaml file. Follow the prompts to create the stack. You can also use the AWS CLI to deploy the stack.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws cloudformation create-stack --stack-name my-nat-gateway --template-body file://nat-gateway.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
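
&lt;p&gt;The CLI can also wait for the stack to finish creating and then list the resources it provisioned:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws cloudformation wait stack-create-complete --stack-name my-nat-gateway
aws cloudformation describe-stack-resources --stack-name my-nat-gateway
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;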



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You have successfully created a NAT Gateway in your AWS environment using CloudFormation. This setup will enable your instances in a private subnet to securely access the internet while maintaining the security and privacy of your resources. The power of CloudFormation allows you to easily replicate this setup in different environments or regions, ensuring consistency and efficiency in your cloud infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--F6oYrc48--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/alem33l52x5ddxug20t9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--F6oYrc48--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/alem33l52x5ddxug20t9.png" alt="Image description" width="626" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudformation</category>
      <category>natgateway</category>
      <category>devops</category>
    </item>
    <item>
      <title>Deploying SentinelOne Agent to EKS Using Terraform</title>
      <dc:creator>marocz</dc:creator>
      <pubDate>Sat, 28 Oct 2023 20:41:06 +0000</pubDate>
      <link>https://dev.to/marocz/deploying-sentinelone-agent-to-eks-using-terraform-5a5a</link>
      <guid>https://dev.to/marocz/deploying-sentinelone-agent-to-eks-using-terraform-5a5a</guid>
      <description>&lt;h4&gt;
  
  
  A step-by-step guide to deploy SentinelOne Agent and S1 Helper to your EKS cluster using Terraform.
&lt;/h4&gt;




&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;When it comes to managing and securing Kubernetes clusters, having the right set of tools is crucial. SentinelOne, a cybersecurity solution, provides an agent that helps in monitoring and protecting your EKS (Elastic Kubernetes Service) cluster. In this guide, I will walk you through the process of deploying the SentinelOne Agent and S1 Helper to your EKS cluster using Terraform, which will provide an automated and reproducible deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account and an EKS cluster up and running.&lt;/li&gt;
&lt;li&gt;Terraform installed on your local machine.&lt;/li&gt;
&lt;li&gt;SentinelOne account with necessary credentials.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7fRzjvsE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q9cvemnipa1upjtkyc8z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7fRzjvsE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q9cvemnipa1upjtkyc8z.png" alt="Image description" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Preparing Your Terraform Environment
&lt;/h2&gt;

&lt;p&gt;Before we dive into the Terraform code, ensure you have your AWS credentials configured properly. You can set up your credentials using the AWS CLI or by configuring environment variables.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_ACCESS_KEY_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your-access-key-id"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your-secret-access-key"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_DEFAULT_REGION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your-region"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
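
&lt;p&gt;Before running Terraform, it is worth confirming that the credentials are actually being picked up. A quick sanity check:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws sts get-caller-identity
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This should print the account ID and IAM identity Terraform will operate as.&lt;/p&gt;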



&lt;h2&gt;
  
  
  Step 2: Setting Up Terraform Configuration
&lt;/h2&gt;

&lt;p&gt;Create a file named &lt;code&gt;main.tf&lt;/code&gt; and add the following Terraform configuration to define your provider and the required resources.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-west-2"&lt;/span&gt;  &lt;span class="c1"&gt;# Change to your AWS region&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"kubernetes"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;config_path&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~/.kube/config"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"kubernetes_namespace"&lt;/span&gt; &lt;span class="s2"&gt;"s1"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;metadata&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"sentinelone"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"kubernetes_deployment"&lt;/span&gt; &lt;span class="s2"&gt;"s1_agent"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;metadata&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"s1-agent"&lt;/span&gt;
    &lt;span class="nx"&gt;namespace&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;kubernetes_namespace&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;s1&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;spec&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;replicas&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;

    &lt;span class="nx"&gt;selector&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;match_labels&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"s1-agent"&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nx"&gt;template&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;metadata&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;labels&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"s1-agent"&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;

      &lt;span class="nx"&gt;spec&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;container&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"sentinelone/agent:latest"&lt;/span&gt;  &lt;span class="c1"&gt;# Replace with the correct image&lt;/span&gt;
          &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"s1-agent"&lt;/span&gt;

          &lt;span class="nx"&gt;env&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"S1_API_TOKEN"&lt;/span&gt;
            &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"your-s1-api-token"&lt;/span&gt;
          &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Deploying S1 Helper
&lt;/h2&gt;

&lt;p&gt;The S1 Helper is a crucial component that assists in the management of the SentinelOne Agent. Add the following to your &lt;code&gt;main.tf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"kubernetes_deployment"&lt;/span&gt; &lt;span class="s2"&gt;"s1_helper"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;metadata&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"s1-helper"&lt;/span&gt;
    &lt;span class="nx"&gt;namespace&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;kubernetes_namespace&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;s1&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;spec&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;replicas&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;

    &lt;span class="nx"&gt;selector&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;match_labels&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"s1-helper"&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nx"&gt;template&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;metadata&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;labels&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"s1-helper"&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;

      &lt;span class="nx"&gt;spec&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;container&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"sentinelone/helper:latest"&lt;/span&gt;  &lt;span class="c1"&gt;# Replace with the correct image&lt;/span&gt;
          &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"s1-helper"&lt;/span&gt;

          &lt;span class="nx"&gt;env&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"S1_API_TOKEN"&lt;/span&gt;
            &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"your-s1-api-token"&lt;/span&gt;
          &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
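
&lt;p&gt;Hard-coding the API token in &lt;code&gt;main.tf&lt;/code&gt; is fine for a demo, but in practice you may prefer to keep it in a Kubernetes Secret and reference it from both deployments. A minimal sketch (the secret and variable names here are illustrative, not part of SentinelOne's documentation):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;resource "kubernetes_secret" "s1_token" {
  metadata {
    name      = "s1-api-token"
    namespace = kubernetes_namespace.s1.metadata[0].name
  }

  data = {
    token = var.s1_api_token  # supplied via TF_VAR_s1_api_token, not committed to source control
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The deployments' &lt;code&gt;env&lt;/code&gt; blocks can then read the token through &lt;code&gt;value_from&lt;/code&gt; with a &lt;code&gt;secret_key_ref&lt;/code&gt; instead of a literal string.&lt;/p&gt;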



&lt;h2&gt;
  
  
  Step 4: Applying Your Configuration
&lt;/h2&gt;

&lt;p&gt;With your configuration ready, initialize Terraform and apply your configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
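
&lt;p&gt;Once the apply completes, verify that the pods came up in the namespace created earlier:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods -n sentinelone
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;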



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You've now automated the deployment of SentinelOne Agent and S1 Helper to your EKS cluster using Terraform. This setup not only enhances the security posture of your cluster but also provides a streamlined and reproducible deployment process. Feel free to tweak the Terraform configurations to meet your specific use case and security requirements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZES-3uMk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rnp2iuef8vqxu0b6snof.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZES-3uMk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rnp2iuef8vqxu0b6snof.jpeg" alt="Image description" width="800" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eks</category>
      <category>sentinelone</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Caching Content with AWS CloudFront: A Detailed Guide</title>
      <dc:creator>marocz</dc:creator>
      <pubDate>Sat, 14 Oct 2023 23:29:52 +0000</pubDate>
      <link>https://dev.to/marocz/caching-content-with-aws-cloudfront-a-detailed-guide-12ck</link>
      <guid>https://dev.to/marocz/caching-content-with-aws-cloudfront-a-detailed-guide-12ck</guid>
      <description>&lt;p&gt;Hello, Community! &lt;/p&gt;

&lt;p&gt;Today, we're exploring the acceleration of web content delivery using AWS CloudFront. &lt;br&gt;
Additionally, we'll delve into automating this setup with Terraform, ensuring you have an efficient, replicable, and maintainable infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding AWS CloudFront
&lt;/h2&gt;

&lt;p&gt;AWS CloudFront is a content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds. CloudFront is integrated with AWS, both through physical edge locations connected directly to the AWS global infrastructure and through other AWS services.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Edge Locations&lt;/strong&gt;: CloudFront caches copies of your content in edge locations across the globe ensuring fast delivery to users.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Origin Fetches&lt;/strong&gt;: When content is not cached, CloudFront fetches it from specified origins, like S3 buckets or HTTP servers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content Delivery&lt;/strong&gt;: CloudFront provides a secure and optimized delivery of your content to users via HTTPS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Invalidation&lt;/strong&gt;: You can remove cached content to refresh it with updated versions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customization&lt;/strong&gt;: Customize content delivery by configuring cache behaviors, geo-restrictions, and more.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration&lt;/strong&gt;: Seamlessly integrate CloudFront with other AWS services like AWS WAF, AWS Shield, and Lambda@Edge for enhanced security and functionality.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Benefits:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: Reduced latency due to proximity-based content delivery.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: Smooth handling of traffic spikes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration&lt;/strong&gt;: Compatibility with other AWS services like Amazon S3, EC2, and Lambda.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;: Features HTTPS, AWS WAF integration, and DDoS protection.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;CloudFront Cache Invalidation&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you update your content and want to remove the old content from CloudFront edge locations, you need to create an invalidation.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to the Distribution.&lt;/li&gt;
&lt;li&gt;Invalidations tab → Create Invalidation.&lt;/li&gt;
&lt;li&gt;Enter the path for the content to invalidate (e.g., /images/*).&lt;/li&gt;
&lt;/ol&gt;
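
&lt;p&gt;The same invalidation can be issued from the AWS CLI (replace the distribution ID with your own):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws cloudfront create-invalidation --distribution-id YOUR_DISTRIBUTION_ID --paths "/images/*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;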

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Cn9CuKAc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ltw2cv3l0azwzel0sddk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Cn9CuKAc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ltw2cv3l0azwzel0sddk.png" alt="AWS Cloudfront Integration with S3 bucket" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Setting Up CloudFront Manually&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Prerequisites:&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account.&lt;/li&gt;
&lt;li&gt;Content to distribute, e.g., a website on S3 or EC2.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Procedure:&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Login&lt;/strong&gt; to the AWS Management Console.&lt;/li&gt;
&lt;li&gt;Go to &lt;strong&gt;CloudFront&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create Distribution&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Configure Distribution&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Origin Settings&lt;/strong&gt;: Define where CloudFront fetches content.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Default Cache Behavior Settings&lt;/strong&gt;: Set policies, like redirecting HTTP to HTTPS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Distribution Settings&lt;/strong&gt;: Define price class, logging, SSL, etc.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Create Distribution&lt;/strong&gt;. Upon creation, you'll receive a unique CloudFront URL.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Testing&lt;/strong&gt;: Access content via the CloudFront URL to verify.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Setting Up AWS CloudFront using AWS CLI&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Create an S3 Bucket&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws s3api create-bucket &lt;span class="nt"&gt;--bucket&lt;/span&gt; my-bucket-name &lt;span class="nt"&gt;--region&lt;/span&gt; us-west-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Configure CloudFront Distribution&lt;/strong&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Create the distribution from the command line, specifying your S3 bucket as the origin:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws cloudfront create-distribution &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--origin-domain-name&lt;/span&gt; my-bucket-name.s3.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Set Cache Behavior&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Cache behaviors are part of the distribution configuration itself, so define them in a JSON configuration file and pass it to the CLI.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws cloudfront create-distribution &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--distribution-config&lt;/span&gt; file://distribution-config.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Automating Setup with Terraform&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Prerequisites:&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Terraform installed and configured.&lt;/li&gt;
&lt;li&gt;AWS CLI set up with the necessary permissions.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Procedure using Terraform:&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Initialize Configuration&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-west-1"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Define S3 Bucket&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_s3_bucket"&lt;/span&gt; &lt;span class="s2"&gt;"b"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my-tf-test-bucket"&lt;/span&gt;
  &lt;span class="nx"&gt;acl&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"private"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
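
&lt;p&gt;Note that in version 4 and later of the Terraform AWS provider the inline &lt;code&gt;acl&lt;/code&gt; argument is deprecated; on a newer provider, the ACL moves into its own resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;resource "aws_s3_bucket_acl" "b" {
  bucket = aws_s3_bucket.b.id
  acl    = "private"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;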



&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Define CloudFront Distribution&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_cloudfront_distribution"&lt;/span&gt; &lt;span class="s2"&gt;"s3_distribution"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;origin&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;domain_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_s3_bucket&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bucket_regional_domain_name&lt;/span&gt;
    &lt;span class="nx"&gt;origin_id&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"S3-BUCKET-ORIGIN-ID"&lt;/span&gt;

    &lt;span class="nx"&gt;s3_origin_config&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;origin_access_identity&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"origin-access-identity/cloudfront/ID_GOES_HERE"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;enabled&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;is_ipv6_enabled&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;default_root_object&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"index.html"&lt;/span&gt;

  &lt;span class="nx"&gt;default_cache_behavior&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;allowed_methods&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"DELETE"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"GET"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"HEAD"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"OPTIONS"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"PATCH"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"POST"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"PUT"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;cached_methods&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"GET"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"HEAD"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;target_origin_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"S3-BUCKET-ORIGIN-ID"&lt;/span&gt;

    &lt;span class="nx"&gt;forwarded_values&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;query_string&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;

      &lt;span class="nx"&gt;cookies&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;forward&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"none"&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nx"&gt;viewer_protocol_policy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"allow-all"&lt;/span&gt;
    &lt;span class="nx"&gt;min_ttl&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="nx"&gt;default_ttl&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3600&lt;/span&gt;
    &lt;span class="nx"&gt;max_ttl&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;86400&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;price_class&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"PriceClass_100"&lt;/span&gt;

  &lt;span class="nx"&gt;restrictions&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;geo_restriction&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;restriction_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"none"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;viewer_certificate&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;cloudfront_default_certificate&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;&lt;strong&gt;Deploy&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;terraform init&lt;/code&gt; to initialize.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;terraform plan&lt;/code&gt; to preview.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;terraform apply&lt;/code&gt; to deploy.&lt;/li&gt;
&lt;/ul&gt;
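
&lt;p&gt;It is handy to surface the distribution's domain name as a Terraform output so you can test it immediately after the apply. A small addition, assuming the resource names used above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;output "cloudfront_domain_name" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;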

&lt;p&gt;In short, AWS CloudFront is a powerful tool to cache and deliver content efficiently. By creating an S3 bucket, configuring a CloudFront distribution, and setting up cache behaviors, you can significantly accelerate content delivery to end users.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Best Practices&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Use CloudFront with &lt;strong&gt;S3 Origin Access Identity&lt;/strong&gt; to restrict direct bucket access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enable Gzip compression&lt;/strong&gt; for optimized data transfer.&lt;/li&gt;
&lt;li&gt;Employ &lt;strong&gt;Lambda@Edge&lt;/strong&gt; for advanced content handling.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Implement asset versioning&lt;/strong&gt; to reduce the need for cache invalidations.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Pairing AWS CloudFront with Terraform offers both speed in content delivery and efficiency in infrastructure management. Whether serving small sites or global apps, this combo ensures swift, secure content delivery. Happy caching! 🚀&lt;/p&gt;




&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x8OBMx56--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x3izxtp8cg4u1q1w74bc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x8OBMx56--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x3izxtp8cg4u1q1w74bc.png" alt="AWS Cloud Front" width="670" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awscloudfront</category>
      <category>terraform</category>
      <category>awscli</category>
    </item>
    <item>
      <title>My Journey Upgrading GitLab and GitLab Runner on AWS EC2</title>
      <dc:creator>marocz</dc:creator>
      <pubDate>Sun, 08 Oct 2023 19:13:06 +0000</pubDate>
      <link>https://dev.to/marocz/my-journey-upgrading-gitlab-and-gitlab-runner-on-aws-ec2-4anf</link>
      <guid>https://dev.to/marocz/my-journey-upgrading-gitlab-and-gitlab-runner-on-aws-ec2-4anf</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;Recently, I embarked on the journey of upgrading our GitLab server and GitLab Runner hosted on an AWS EC2 instance. Though the process is documented, there's nothing like hands-on experience to understand the nuances. In this article, I'll share my step-by-step approach, the challenges I faced, and the commands that saved my day!&lt;/p&gt;

&lt;h2&gt;
  
  
  📦 Prerequisites:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;An EC2 instance with GitLab already humming along.&lt;/li&gt;
&lt;li&gt;A GitLab runner, tirelessly building and testing our projects.&lt;/li&gt;
&lt;li&gt;Trusty SSH access to the EC2 instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🛡️ Step 1: Backing Up Data
&lt;/h2&gt;

&lt;p&gt;Before diving into the upgrade, I made sure to back everything up. Can't stress this enough!&lt;/p&gt;

&lt;h2&gt;
  
  
  GitLab:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo gitlab-rake gitlab:backup:create
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
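
&lt;p&gt;On an Omnibus install the backup archive lands in &lt;code&gt;/var/opt/gitlab/backups&lt;/code&gt; by default. Note the timestamp in the filename; you will need it if you ever restore:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls /var/opt/gitlab/backups
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;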



&lt;h2&gt;
  
  
  GitLab Runner:
&lt;/h2&gt;

&lt;p&gt;Backing up the runner configuration was a breeze:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cp /etc/gitlab-runner/config.toml ~/gitlab-runner-config-backup.toml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🚀 Step 2: Upgrading the GitLab Server:
&lt;/h2&gt;

&lt;p&gt;Checking Available Versions:&lt;br&gt;
My curiosity led me to check which versions were available:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum list available gitlab-ce --showduplicates | sort -r
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Updating Repository Information:&lt;/p&gt;

&lt;p&gt;Ensuring I had the latest repository info was crucial:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.rpm.sh | sudo bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Actual Upgrade:&lt;/p&gt;

&lt;p&gt;Choosing the desired version, I proceeded:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install gitlab-ce-&amp;lt;version_number&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verification:&lt;br&gt;
Post-upgrade, I had to ensure everything was in order:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo gitlab-rake gitlab:env:info
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
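
&lt;p&gt;A quick check that all GitLab services came back up cleanly after the upgrade:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo gitlab-ctl status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;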



&lt;h2&gt;
  
  
  🏃 Step 3: Upgrading the GitLab Runner:
&lt;/h2&gt;

&lt;p&gt;Repository Update:&lt;br&gt;
A quick refresh of the repository information:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.rpm.sh | sudo bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Runner Upgrade:&lt;br&gt;
With a leap of faith:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install gitlab-runner
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🔍 Step 4: Verifying the GitLab Runner Upgrade:
&lt;/h2&gt;

&lt;p&gt;I made sure the runner was back on its feet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo gitlab-runner restart
sudo gitlab-runner status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🚒 Step 5: Potential Rollback:
&lt;/h2&gt;

&lt;p&gt;Though everything went smoothly, I was prepared for a rollback:&lt;/p&gt;

&lt;p&gt;GitLab:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo gitlab-rake gitlab:backup:restore BACKUP=&amp;lt;backup timestamp&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;GitLab Runner:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cp ~/gitlab-runner-config-backup.toml /etc/gitlab-runner/config.toml
sudo gitlab-runner restart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
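&lt;p&gt;Both rollback paths assume the backups were taken before the upgrade. A minimal sketch of that prep step, assuming default Omnibus paths; the scratch-directory copy at the end just demonstrates the runner-config backup so the snippet can run anywhere:&lt;/p&gt;

```shell
# On the real GitLab host, the pre-upgrade backups would be:
#   sudo gitlab-backup create       # application data -> /var/opt/gitlab/backups
#   sudo cp /etc/gitlab-runner/config.toml ~/gitlab-runner-config-backup.toml
# Demonstration of the config-copy step on a scratch file:
tmp=$(mktemp -d)
printf 'concurrent = 1\n' > "$tmp/config.toml"
cp "$tmp/config.toml" "$tmp/gitlab-runner-config-backup.toml"
echo "runner config backed up"
```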



&lt;h3&gt;
  
  
  🌟 Conclusion:
&lt;/h3&gt;

&lt;p&gt;My adventure upgrading GitLab and GitLab Runner on AWS EC2 was both challenging and rewarding. While the process is generally straightforward, it's the small nuances that make the experience unique. &lt;/p&gt;

&lt;p&gt;To all the DevOps enthusiasts out there, always backup, stay updated, and happy coding!&lt;/p&gt;

</description>
      <category>gitlab</category>
      <category>aws</category>
      <category>devops</category>
      <category>ec2</category>
    </item>
    <item>
      <title>My Journey with AWS LocalStack and Terraform: A Local AWS Environment Setup</title>
      <dc:creator>marocz</dc:creator>
      <pubDate>Fri, 29 Sep 2023 08:15:53 +0000</pubDate>
      <link>https://dev.to/marocz/my-journey-with-aws-localstack-and-terraform-a-local-aws-environment-setup-i50</link>
      <guid>https://dev.to/marocz/my-journey-with-aws-localstack-and-terraform-a-local-aws-environment-setup-i50</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the realm of cloud computing, gaining hands-on experience is pivotal. However, it can sometimes be expensive or complex to do this on real AWS environments. That's where AWS LocalStack paired with Terraform comes into the picture. This setup allows you to mock AWS resources locally, providing a cost-effective and straightforward way to test your cloud applications. In this post, I'll share how I set up a local AWS environment using Terraform and LocalStack to manage resources like S3, EC2, and ECR.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS CLI&lt;/li&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;li&gt;Terraform&lt;/li&gt;
&lt;li&gt;LocalStack&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Installing LocalStack
&lt;/h2&gt;

&lt;p&gt;Ensure Docker is running on your machine, then install LocalStack using pip:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;p&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;localstack&lt;br&gt;
Certainly! Below is your post formatted according to the Dev.to markdown styling:&lt;/p&gt;

&lt;p&gt;&lt;span class="sb"&gt;```&lt;/span&gt;markdown&lt;br&gt;
&lt;span class="nt"&gt;---&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span class="c"&gt;## Step 2: Configuring LocalStack&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Create a &lt;span class="sb"&gt;&lt;code&gt;&amp;lt;/span&amp;gt;docker-compose.yaml&amp;lt;span class="sb"&amp;gt;&lt;/code&gt;&lt;/span&gt; file with the following content to configure the services you&lt;span class="s1"&gt;'ll need:&lt;/span&gt;&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;/span&amp;gt;3.1&amp;lt;span&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;class="s1"&amp;gt;'&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;localstack&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localstack/localstack&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;4566:4566"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;4571:4571"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SERVICES=s3,ec2,ecr&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;DEBUG=1&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;DATA_DIR=/tmp/localstack/data&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./tmp/localstack:/tmp/localstack"&lt;/span&gt;


&lt;span class="s"&gt;&amp;lt;/span&amp;gt;&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;h2&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;&amp;lt;a name="step-3-launching-localstack" href="#step-3-launching-localstack"&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;&amp;lt;/a&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;Step 3&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Launching LocalStack&lt;/span&gt;
&lt;span class="s"&gt;&amp;lt;/h2&amp;gt;&lt;/span&gt;

&lt;span class="s"&gt;&amp;lt;p&amp;gt;To launch LocalStack, run the following command:&amp;lt;/p&amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;&amp;lt;div class="highlight"&amp;gt;&amp;lt;pre class="highlight shell"&amp;gt;&amp;lt;code&amp;gt;&lt;/span&gt;

&lt;span class="s"&gt;docker-compose up&lt;/span&gt;


&lt;span class="s"&gt;&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;h2&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;&amp;lt;a name="step-4-prepping-terraform" href="#step-4-prepping-terraform"&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;&amp;lt;/a&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;Step 4&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prepping Terraform&lt;/span&gt;
&lt;span class="s"&gt;&amp;lt;/h2&amp;gt;&lt;/span&gt;

&lt;span class="s"&gt;&amp;lt;p&amp;gt;Construct a &amp;lt;code&amp;gt;main.tf&amp;lt;/code&amp;gt; file with the subsequent content to establish the AWS resources using Terraform:&amp;lt;/p&amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;&amp;lt;div class="highlight"&amp;gt;&amp;lt;pre class="highlight hcl"&amp;gt;&amp;lt;code&amp;gt;&lt;/span&gt;

&lt;span class="s"&gt;&amp;lt;span class="nx"&amp;gt;provider&amp;lt;/span&amp;gt; &amp;lt;span class="s2"&amp;gt;"aws"&amp;lt;/span&amp;gt; &amp;lt;span class="p"&amp;gt;{&amp;lt;/span&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;&amp;lt;span class="nx"&amp;gt;endpoints&amp;lt;/span&amp;gt; &amp;lt;span class="p"&amp;gt;{&amp;lt;/span&amp;gt;&lt;/span&gt;
    &lt;span class="s"&gt;&amp;lt;span class="nx"&amp;gt;s3&amp;lt;/span&amp;gt;      &amp;lt;span class="p"&amp;gt;=&amp;lt;/span&amp;gt; &amp;lt;span class="s2"&amp;gt;"http://localhost:4566"&amp;lt;/span&amp;gt;&lt;/span&gt;
    &lt;span class="s"&gt;&amp;lt;span class="nx"&amp;gt;ec2&amp;lt;/span&amp;gt;     &amp;lt;span class="p"&amp;gt;=&amp;lt;/span&amp;gt; &amp;lt;span class="s2"&amp;gt;"http://localhost:4566"&amp;lt;/span&amp;gt;&lt;/span&gt;
    &lt;span class="s"&gt;&amp;lt;span class="nx"&amp;gt;ecr&amp;lt;/span&amp;gt;     &amp;lt;span class="p"&amp;gt;=&amp;lt;/span&amp;gt; &amp;lt;span class="s2"&amp;gt;"http://localhost:4566"&amp;lt;/span&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;&amp;lt;span class="p"&amp;gt;}&amp;lt;/span&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;&amp;lt;span class="nx"&amp;gt;region&amp;lt;/span&amp;gt;                      &amp;lt;span class="p"&amp;gt;=&amp;lt;/span&amp;gt; &amp;lt;span class="s2"&amp;gt;"us-east-1"&amp;lt;/span&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;&amp;lt;span class="nx"&amp;gt;skip_credentials_validation&amp;lt;/span&amp;gt; &amp;lt;span class="p"&amp;gt;=&amp;lt;/span&amp;gt; &amp;lt;span class="kc"&amp;gt;true&amp;lt;/span&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;&amp;lt;span class="nx"&amp;gt;skip_metadata_api_check&amp;lt;/span&amp;gt;     &amp;lt;span class="p"&amp;gt;=&amp;lt;/span&amp;gt; &amp;lt;span class="kc"&amp;gt;true&amp;lt;/span&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;&amp;lt;span class="nx"&amp;gt;skip_requesting_account_id&amp;lt;/span&amp;gt;  &amp;lt;span class="p"&amp;gt;=&amp;lt;/span&amp;gt; &amp;lt;span class="kc"&amp;gt;true&amp;lt;/span&amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;&amp;lt;span class="p"&amp;gt;}&amp;lt;/span&amp;gt;&lt;/span&gt;

&lt;span class="s"&gt;&amp;lt;span class="nx"&amp;gt;resource&amp;lt;/span&amp;gt; &amp;lt;span class="s2"&amp;gt;"aws_s3_bucket"&amp;lt;/span&amp;gt; &amp;lt;span class="s2"&amp;gt;"my_bucket"&amp;lt;/span&amp;gt; &amp;lt;span class="p"&amp;gt;{&amp;lt;/span&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;&amp;lt;span class="nx"&amp;gt;bucket&amp;lt;/span&amp;gt; &amp;lt;span class="p"&amp;gt;=&amp;lt;/span&amp;gt; &amp;lt;span class="s2"&amp;gt;"my-bucket"&amp;lt;/span&amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;&amp;lt;span class="p"&amp;gt;}&amp;lt;/span&amp;gt;&lt;/span&gt;

&lt;span class="s"&gt;&amp;lt;span class="nx"&amp;gt;resource&amp;lt;/span&amp;gt; &amp;lt;span class="s2"&amp;gt;"aws_ec2_instance"&amp;lt;/span&amp;gt; &amp;lt;span class="s2"&amp;gt;"my_instance"&amp;lt;/span&amp;gt; &amp;lt;span class="p"&amp;gt;{&amp;lt;/span&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;&amp;lt;span class="nx"&amp;gt;ami&amp;lt;/span&amp;gt;           &amp;lt;span class="p"&amp;gt;=&amp;lt;/span&amp;gt; &amp;lt;span class="s2"&amp;gt;"ami-abcdef01"&amp;lt;/span&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;&amp;lt;span class="nx"&amp;gt;instance_type&amp;lt;/span&amp;gt; &amp;lt;span class="p"&amp;gt;=&amp;lt;/span&amp;gt; &amp;lt;span class="s2"&amp;gt;"t2.micro"&amp;lt;/span&amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;&amp;lt;span class="p"&amp;gt;}&amp;lt;/span&amp;gt;&lt;/span&gt;

&lt;span class="s"&gt;&amp;lt;span class="nx"&amp;gt;resource&amp;lt;/span&amp;gt; &amp;lt;span class="s2"&amp;gt;"aws_ecr_repository"&amp;lt;/span&amp;gt; &amp;lt;span class="s2"&amp;gt;"my_repo"&amp;lt;/span&amp;gt; &amp;lt;span class="p"&amp;gt;{&amp;lt;/span&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;&amp;lt;span class="nx"&amp;gt;name&amp;lt;/span&amp;gt; &amp;lt;span class="p"&amp;gt;=&amp;lt;/span&amp;gt; &amp;lt;span class="s2"&amp;gt;"my-repo"&amp;lt;/span&amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;&amp;lt;span class="p"&amp;gt;}&amp;lt;/span&amp;gt;&lt;/span&gt;


&lt;span class="s"&gt;&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;h2&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;&amp;lt;a name="step-5-applying-terraform-configuration" href="#step-5-applying-terraform-configuration"&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;&amp;lt;/a&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;Step 5&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Applying Terraform Configuration&lt;/span&gt;
&lt;span class="s"&gt;&amp;lt;/h2&amp;gt;&lt;/span&gt;

&lt;span class="s"&gt;&amp;lt;p&amp;gt;Now, initialize Terraform and apply the configuration:&amp;lt;/p&amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;&amp;lt;div class="highlight"&amp;gt;&amp;lt;pre class="highlight shell"&amp;gt;&amp;lt;code&amp;gt;&lt;/span&gt;

&lt;span class="s"&gt;terraform init&lt;/span&gt;
&lt;span class="s"&gt;terraform apply&lt;/span&gt;


&lt;span class="s"&gt;&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;h2&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;&amp;lt;a name="conclusion" href="#conclusion"&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;&amp;lt;/a&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;Conclusion&lt;/span&gt;
&lt;span class="s"&gt;&amp;lt;/h2&amp;gt;&lt;/span&gt;

&lt;span class="s"&gt;&amp;lt;p&amp;gt;This setup has become a part of my daily workflow, enabling me to test and develop cloud applications locally with ease. &amp;lt;/p&amp;gt;&lt;/span&gt;

&lt;span class="s"&gt;&amp;lt;p&amp;gt;The amalgamation of Terraform and LocalStack provides a robust platform to mock AWS resources, significantly smoothing the development and testing phases. &amp;lt;/p&amp;gt;&lt;/span&gt;

&lt;span class="s"&gt;&amp;lt;p&amp;gt;Feel free to adapt this setup to your needs and explore other AWS resources available in LocalStack. Happy coding!&amp;lt;/p&amp;gt;&lt;/span&gt;

&lt;span class="na"&gt;&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;Note&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ensure you replace placeholder values like "ami-abcdef01" and "my-bucket" with your specific values or references.&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;&lt;/span&gt;

&lt;span class="s"&gt;&amp;lt;p&amp;gt;&amp;lt;img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9qgk3c3g5c8c85mwtslb.jpeg" alt="Image description"&amp;gt;&amp;lt;/p&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
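&lt;p&gt;One way to sanity-check the mocked resources afterwards is to point the regular AWS CLI at LocalStack's single edge port. The dry run below only prints the commands; the service checks assume the resources defined in &lt;code&gt;main.tf&lt;/code&gt; above:&lt;/p&gt;

```shell
# Dry run: print the AWS CLI invocations used to inspect LocalStack's
# mocked services, all served on edge port 4566.
endpoint="http://localhost:4566"
for check in "s3 ls" "ec2 describe-instances" "ecr describe-repositories"; do
  echo "aws --endpoint-url=$endpoint $check"
done
```
&lt;p&gt;With LocalStack running, executing those printed commands should list the bucket, instance, and repository that Terraform created.&lt;/p&gt;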

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>localstack</category>
      <category>devops</category>
    </item>
    <item>
      <title>Title: How to Spin Up an AWS EKS Cluster Using Terraform</title>
      <dc:creator>marocz</dc:creator>
      <pubDate>Mon, 11 Sep 2023 10:01:36 +0000</pubDate>
      <link>https://dev.to/marocz/title-how-to-spin-up-an-aws-eks-cluster-using-terraform-2h43</link>
      <guid>https://dev.to/marocz/title-how-to-spin-up-an-aws-eks-cluster-using-terraform-2h43</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w7GndxGE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jhvlw2967id2n45fcsp5.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w7GndxGE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jhvlw2967id2n45fcsp5.jpeg" alt="Image description" width="800" height="466"&gt;&lt;/a&gt;&lt;br&gt;
Introduction:&lt;/p&gt;

&lt;p&gt;Elastic Kubernetes Service (EKS) is Amazon's managed Kubernetes solution that makes it easier to run Kubernetes on AWS without managing the underlying infrastructure. In this post, we'll walk through the process of deploying an EKS cluster using Terraform.&lt;/p&gt;

&lt;p&gt;Prerequisites:&lt;/p&gt;

&lt;p&gt;An AWS account&lt;br&gt;
AWS CLI installed and configured&lt;br&gt;
Terraform installed&lt;br&gt;
kubectl installed&lt;/p&gt;

&lt;p&gt;Step 1: Set up Terraform:&lt;/p&gt;

&lt;p&gt;Before you can use Terraform to create resources in AWS, ensure you've set it up correctly:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ terraform init&lt;/code&gt;&lt;/p&gt;


&lt;p&gt;Step 2: Define Your Infrastructure:&lt;/p&gt;

&lt;p&gt;Create a file named eks-cluster.tf and define your AWS provider and EKS resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = "us-west-2"
}

module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = "my-cluster"
  cluster_version = "1.20"
  subnets         = ["subnet-abcde012", "subnet-bcde012a", "subnet-cde012ab"]

  node_groups = {
    eks_nodes = {
      desired_capacity = 2
      max_capacity     = 3
      min_capacity     = 1

      instance_type = "m5.large"
      key_name      = var.key_name
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
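&lt;p&gt;Note that the module block references &lt;code&gt;var.key_name&lt;/code&gt; without declaring it anywhere. A minimal declaration would look like the sketch below; the description and default value are illustrative, not from the original config:&lt;/p&gt;

```hcl
# Declaration for the key_name referenced above; the default is a placeholder.
variable "key_name" {
  description = "Name of the EC2 key pair attached to the worker nodes"
  type        = string
  default     = "my-keypair"
}
```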



&lt;p&gt;Step 3: Initialize and Apply:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform init
$ terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running the terraform apply command, Terraform will show you the changes it plans to make and ask for confirmation. If everything looks good, type yes.&lt;/p&gt;


&lt;p&gt;Step 4: Configure kubectl:&lt;br&gt;
Once your EKS cluster is up, you need to configure kubectl:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws eks --region us-west-2 update-kubeconfig --name my-cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 5: Verify:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Conclusion:&lt;/p&gt;

&lt;p&gt;Using Terraform, you can easily deploy an EKS cluster and manage its lifecycle. This method is repeatable, version-controlled, and can be extended with more advanced features and configurations.&lt;/p&gt;

&lt;p&gt;Further Reading:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest"&gt;Terraform AWS EKS Introduction&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/eks/index.html"&gt;Amazon EKS documentation&lt;/a&gt;&lt;/p&gt;


</description>
      <category>aws</category>
      <category>eks</category>
      <category>containers</category>
      <category>terraform</category>
    </item>
  </channel>
</rss>
