<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: waqas_ahmed01</title>
    <description>The latest articles on DEV Community by waqas_ahmed01 (@waqasahmed).</description>
    <link>https://dev.to/waqasahmed</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F920613%2Fc65eaa9f-b871-45db-872a-7ac471e0feea.jpeg</url>
      <title>DEV Community: waqas_ahmed01</title>
      <link>https://dev.to/waqasahmed</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/waqasahmed"/>
    <language>en</language>
    <item>
      <title>Securely Encrypting Secrets Using Open Source Tools: SOPS and AGE</title>
      <dc:creator>waqas_ahmed01</dc:creator>
      <pubDate>Sun, 21 May 2023 14:32:52 +0000</pubDate>
      <link>https://dev.to/waqasahmed/securely-encrypting-secrets-using-open-source-tools-sops-and-age-3g46</link>
      <guid>https://dev.to/waqasahmed/securely-encrypting-secrets-using-open-source-tools-sops-and-age-3g46</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt; In today's world, data security is of utmost importance, especially when dealing with sensitive information like passwords, API keys, and other confidential data. Encrypting secrets ensures that even if unauthorized individuals gain access to the data, they won't be able to decipher its contents. In this blog post, we will explore two powerful open-source tools, SOPS and AGE, that enable secure encryption of secrets.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing SOPS and AGE: 
&lt;/h2&gt;

&lt;p&gt;Before we dive into encrypting secrets, let's ensure that we have SOPS and AGE installed on our system. You can download and install SOPS from the official GitHub repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SOPS → https://github.com/mozilla/sops/releases 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similarly, AGE can be installed from its GitHub repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AGE → https://github.com/FiloSottile/age/releases
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Creating the Encryption Key: 
&lt;/h2&gt;

&lt;p&gt;To get started, we need to generate an encryption key using AGE. Open your terminal and execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;age-keygen -o key.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output of this command will be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;age-keygen -o key.txt
Public key: age1rua2rfy0uhzywprgwclavsp39uhfwmrxpanutt4y3zfcjurjs3msa0hnu9
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create an encryption key file named "key.txt". Next, copy this file to ~/.sops/key.txt, creating the directory if it doesn't exist:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p ~/.sops
cp key.txt ~/.sops/key.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configuring SOPS Environment: 
&lt;/h2&gt;

&lt;p&gt;To configure SOPS to use the AGE encryption key, we need to make an entry in our shell configuration file. If you're using the Zsh shell, you can open the configuration file using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano ~/.zshrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following line to the .zshrc file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export SOPS_AGE_KEY_FILE=$HOME/.sops/key.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the file and exit the editor. This configures SOPS to use the AGE encryption key file we generated earlier.&lt;/p&gt;

&lt;h2&gt;
  
  
  Encrypting the Secret.yaml File: 
&lt;/h2&gt;

&lt;p&gt;Now, let's encrypt the "secret.yaml" file that contains the secrets we want to protect. Here is the content of the "secret.yaml" file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
    creationTimestamp: null
    name: dev-db-secret
data:
    username: root
    password: supersecretpassword
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
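&lt;p&gt;Note: in a Kubernetes Secret, values under &lt;code&gt;data&lt;/code&gt; are expected to be base64-encoded (plaintext values belong under &lt;code&gt;stringData&lt;/code&gt;). If you keep the &lt;code&gt;data&lt;/code&gt; key as above, you can encode the values first, for example:&lt;/p&gt;

```shell
# base64-encode the example values before placing them under `data`
echo -n "root" | base64                 # cm9vdA==
echo -n "supersecretpassword" | base64
```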



&lt;p&gt;To encrypt this file, run the following command in your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sops --encrypt --age $(cat $SOPS_AGE_KEY_FILE | grep -oP "public key: \K(.*)") --encrypted-regex '^(data|stringData)$' --in-place ./secret.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command uses SOPS along with the AGE encryption key to encrypt the file in-place. The --encrypted-regex option specifies the fields that should be encrypted (in this case, all fields under data and stringData).&lt;br&gt;
The secret file has now been encrypted as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
    creationTimestamp: null
    name: dev-db-secret
data:
    username: ENC[AES256_GCM,data:hW4VXQ==,iv:nkM9UHvHwTx6oUvjcfq/olO/FcuijHvrVmJZfT2eB6k=,tag:pBV7nvNbGSauOCSy5Bar4Q==,type:str]
    password: ENC[AES256_GCM,data:QX7Bb5Idlyf+0sVsTbRaQd4afQ==,iv:VHu8vnW5vfSW7c4fuBNUAznhH+j2QTfij6iPFd9ww0U=,tag:RSd8e6JNbPhGuYFqOcHNAg==,type:str]
sops:
    kms: []
    gcp_kms: []
    azure_kv: []
    hc_vault: []
    age:
        - recipient: age1w6mnsqrank3f3e9rxv6xz4nnpnvrr9zyed2zsm8jkyya8gq5zazqzt58sm
          enc: |
            -----BEGIN AGE ENCRYPTED FILE-----
            YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBUdzhOQjV0VlBjT0cvSUtk
            VlVBcU11RG5DRkNWTzROdlNrVHo2bUhVK0NnCmNvR3hJSjY5VHFoMm5Va2ViMFho
            b1k1MFhaM0hOa2p1ODh0R25Vb3NsOWsKLS0tIHZSdy93MVhtZGlwL2M5UktpSDds
            UkdHa0VqcTl3TGM0MXpzMXlJeEJrdUEKdVQmdzWWndJQ1V3WZjgIEB5vQXPM5QfZ
            zv7WhnpN0gHMn2G8oZYbSmIPPT0UFI7+JaySZ5EkZeP/vqcK1Qhmow==
            -----END AGE ENCRYPTED FILE-----
    lastmodified: "2023-05-21T14:20:25Z"
    mac: ENC[AES256_GCM,data:kAr8Meo9jeMvfAgiMwhSWIVwaLVd7sU9XHck51hgA67qenE6ORlm2yZcZ75LWo1WkTGoZ+sUdByyYKMFR+zc2SHTT9fnYtLrREtBv9xHz6Kbn/rOEDGDmCNQcBLhQbPdRjzA67rrA8M0V337IJYiIywID2ur8OSXlOSF2M2vW8I=,iv:ZKLCPfpJBcLr8oG/sIVqLbZqd74UMKpS9+YSgQdDsy8=,tag:p6WgJJ3DsCdNbZB9coolhg==,type:str]
    pgp: []
    encrypted_regex: ^(data|stringData)$
    version: 3.7.3

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
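&lt;p&gt;The recipient passed to &lt;code&gt;--age&lt;/code&gt; in the encrypt command is pulled out of the key file with grep: age-keygen stores the public key in a &lt;code&gt;# public key:&lt;/code&gt; comment line inside the file. As a standalone sketch, using a fabricated key file for illustration:&lt;/p&gt;

```shell
# fabricated key file for illustration; a real one is produced by `age-keygen -o key.txt`
printf '%s\n' \
  '# created: 2023-05-21T14:00:00Z' \
  '# public key: age1rua2rfy0uhzywprgwclavsp39uhfwmrxpanutt4y3zfcjurjs3msa0hnu9' \
  'AGE-SECRET-KEY-1EXAMPLEONLY' &gt; /tmp/key.txt

# extract everything after "public key: " (GNU grep -P)
grep -oP "public key: \K(.*)" /tmp/key.txt
```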



&lt;h2&gt;
  
  
  Decrypting the Secret.yaml File:
&lt;/h2&gt;

&lt;p&gt; If you need to access the decrypted contents of the "secret.yaml" file, you can use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sops --decrypt --in-place ./secret.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will decrypt the encrypted fields in the file, allowing you to view and modify the secrets as needed.&lt;/p&gt;
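&lt;p&gt;If you only need the plaintext transiently (for example, to apply the Secret to a cluster), you can decrypt to stdout instead of in-place, so the file on disk stays encrypted. A sketch, assuming kubectl is configured for your cluster:&lt;/p&gt;

```shell
# decrypt to stdout and pipe straight to kubectl; secret.yaml stays encrypted on disk
sops --decrypt ./secret.yaml | kubectl apply -f -
```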

&lt;h2&gt;
  
  
  Conclusion: 
&lt;/h2&gt;

&lt;p&gt;Encrypting secrets is crucial for maintaining data security. In this blog post, we explored the usage of two open-source tools, SOPS and AGE, to encrypt and decrypt secrets. By following the steps outlined, you can effectively protect sensitive information and ensure its confidentiality. Remember to always store your encryption keys securely and follow best practices for secret management to maintain a robust security posture in your projects.&lt;/p&gt;

&lt;p&gt;YouTube: &lt;a href="https://www.youtube.com/watch?v=JOQBVUaKD0g"&gt;watch the video&lt;/a&gt;&lt;/p&gt;

</description>
      <category>sops</category>
      <category>encryption</category>
      <category>age</category>
      <category>secrets</category>
    </item>
    <item>
      <title>Ingest VPC Flow Logs into NewRelic</title>
      <dc:creator>waqas_ahmed01</dc:creator>
      <pubDate>Tue, 11 Oct 2022 18:59:25 +0000</pubDate>
      <link>https://dev.to/waqasahmed/ingest-vpc-flow-logs-into-newrelic-2ob5</link>
      <guid>https://dev.to/waqasahmed/ingest-vpc-flow-logs-into-newrelic-2ob5</guid>
      <description>&lt;p&gt;There are many use cases where we wanted to monitor the VPC Flow Logs to view the data going IN / OUT into our VPC. These network traces helps us to troubleshoot many network-related issues. &lt;/p&gt;

&lt;p&gt;In AWS, we have the choice to save VPC Flow Logs to either &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AWS CloudWatch or &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS S3 Buckets. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, neither of these destinations provides a user-friendly view, and both can become cumbersome when trying to find a specific IP address or port. &lt;/p&gt;

&lt;p&gt;Thankfully, Kinesis Data Firehose gives us a convenient option for these situations. We can ingest data from many AWS services into Kinesis Data Firehose and send it to a third-party monitoring solution to create some AWSome custom dashboards and monitor the logs. &lt;/p&gt;

&lt;p&gt;In this blog, I will walk you through configuring this solution step by step. We can divide it into four parts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create Kinesis Data Firehose&lt;/li&gt;
&lt;li&gt;Create the VPC Flow Logs&lt;/li&gt;
&lt;li&gt;Transform the Log using Lambda function (Optional)&lt;/li&gt;
&lt;li&gt;Send the Logs to NewRelic Monitoring Solution&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Create Kinesis Data Firehose
&lt;/h2&gt;

&lt;p&gt;Create a Kinesis Data Firehose delivery stream and select the Source as &lt;strong&gt;Direct PUT&lt;/strong&gt; and the Destination as &lt;strong&gt;New Relic&lt;/strong&gt;. Please note that Kinesis Data Firehose is a near-real-time solution, not a real-time solution like Kinesis Data Streams.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xexTxHN3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/713sqkco8rvkzxl4ij61.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xexTxHN3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/713sqkco8rvkzxl4ij61.png" alt="Chose Source &amp;amp; Destination" width="880" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under Destination settings, select the HTTP endpoint URL &lt;strong&gt;New Relic logs - US&lt;/strong&gt; and enter the API key (copy the API key from New Relic).&lt;/p&gt;

&lt;p&gt;Click on the following URL; this will take you to the NewRelic API key screen, as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://one.newrelic.com/admin-portal/api-keys/home"&gt;https://one.newrelic.com/admin-portal/api-keys/home&lt;/a&gt;?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Lt-Riuxb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c0yc9aigg501vqk8rpat.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Lt-Riuxb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c0yc9aigg501vqk8rpat.png" alt="NewRelic API Key" width="880" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qUXvTWEY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/97m1p2ul3272fvwlz05w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qUXvTWEY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/97m1p2ul3272fvwlz05w.png" alt="Kinesis Data Firehose Destination Configuration" width="880" height="956"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create VPC Flow Log
&lt;/h2&gt;

&lt;p&gt;Go to VPC --&amp;gt; Actions and click on Create flow log&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--snndA3xL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6tenjdrwf2j8n3za0i4c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--snndA3xL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6tenjdrwf2j8n3za0i4c.png" alt="VPC Flow Log" width="880" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under &lt;strong&gt;Filter&lt;/strong&gt;, select whether you want to monitor only ACCEPTED traffic, REJECTED traffic, or ALL traffic.&lt;/p&gt;

&lt;p&gt;Under &lt;strong&gt;Destination&lt;/strong&gt;, select &lt;strong&gt;Send to Kinesis Data Firehose in same account&lt;/strong&gt; and choose the Kinesis Data Firehose delivery stream created earlier. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--U7Ajf8KN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dbvkonormpw5n2naod1m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--U7Ajf8KN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dbvkonormpw5n2naod1m.png" alt="Filter VPC Log" width="880" height="1652"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will take a few seconds, and then you will start seeing the data in the NewRelic platform. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bnhiMMjL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2nkw43oefg3hlvzownn1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bnhiMMjL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2nkw43oefg3hlvzownn1.png" alt="NewRelic Logs" width="880" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you like this article then don't forget to hit the like button and share it with others :) &lt;/p&gt;

</description>
      <category>aws</category>
      <category>kinesis</category>
      <category>newrelic</category>
      <category>amazonwebservices</category>
    </item>
    <item>
      <title>Secure the S3 Bucket with MFA</title>
      <dc:creator>waqas_ahmed01</dc:creator>
      <pubDate>Fri, 07 Oct 2022 18:24:41 +0000</pubDate>
      <link>https://dev.to/aws-builders/secure-the-s3-bucket-with-mfa-4b62</link>
      <guid>https://dev.to/aws-builders/secure-the-s3-bucket-with-mfa-4b62</guid>
      <description>&lt;p&gt;Do you know that you can secure your S3 Bucket by integrating the MFA to avoid any object deletion accidently?&lt;/p&gt;

&lt;p&gt;The answer is Yes...!!&lt;br&gt;
You can enable MFA on an S3 bucket, but first you will need to enable versioning on the bucket. Also, MFA Delete can't be enabled via the AWS Management Console, so use either the AWS CLI or an AWS SDK to enable it. In this article, I will walk you through the step-by-step instructions to enable MFA Delete.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step - 1: Configure MFA Device
&lt;/h2&gt;

&lt;p&gt;AWS supports multiple types of MFA devices, both physical hardware and virtual. In this blog, we will configure a virtual MFA device.&lt;/p&gt;

&lt;p&gt;Log in to your AWS account, click your avatar at the top right and select &lt;strong&gt;Security Credentials&lt;/strong&gt;, then select the first option, &lt;strong&gt;Authentication App&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AKp3Eg7---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zy86w6m7cjpc4z3cnykt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AKp3Eg7---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zy86w6m7cjpc4z3cnykt.png" alt="AWS Supported MFA Devices" width="880" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I will be using the Twilio Authy app for authentication. Generate the secret key and enter it into the Authy app to configure a new account, as shown in the figures below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ML9snyfP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1fwc5jk1rpi5ob8e1jx0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ML9snyfP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1fwc5jk1rpi5ob8e1jx0.png" alt="Twillo Authy App" width="880" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FHxANgwP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hx25hzwmnyx2r3j3a1mc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FHxANgwP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hx25hzwmnyx2r3j3a1mc.png" alt="Image description" width="496" height="822"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once configured, the virtual device will show under MFA in the AWS Console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mk1tu-vm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6n02xc09wbt9h1hefscg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mk1tu-vm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6n02xc09wbt9h1hefscg.png" alt="Virtual Device Type AWS Console" width="880" height="413"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step - 2: Enable the Versioning on S3 Bucket
&lt;/h2&gt;

&lt;p&gt;If versioning is not enabled on the S3 bucket, make sure to enable it before enabling MFA Delete. We will use the AWS CLI to configure versioning.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3api put-bucket-versioning --bucket &amp;lt;bucket_name&amp;gt; --versioning-configuration Status=Enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--H36wspvp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/07kb0a502ehfmasi1ol0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--H36wspvp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/07kb0a502ehfmasi1ol0.png" alt="Enable Versioning for S3 Bucket" width="880" height="105"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step - 3: Enable the MFA
&lt;/h2&gt;

&lt;p&gt;We will be using the following AWS CLI command to enable MFA Delete.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3api put-bucket-versioning --bucket &amp;lt;bucket_name&amp;gt; --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa "arn:aws:iam::&amp;lt;&amp;gt;:mfa/root-account-mfa-device &amp;lt;passcode&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The MFA device serial number (ARN) can be found in the AWS Console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ptn3VNV7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qbkjdu03djpyulvvbhyb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ptn3VNV7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qbkjdu03djpyulvvbhyb.png" alt="Virtual MFA Device Serial No:" width="880" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dwS9yjXw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mv38lf7ka0v1z7l8yljm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dwS9yjXw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mv38lf7ka0v1z7l8yljm.png" alt="Image description" width="880" height="51"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ta-dah! Congratulations, you have configured MFA Delete for the S3 bucket. &lt;/p&gt;

&lt;p&gt;If you like this article then don't forget to share it with others ;) &lt;/p&gt;

</description>
      <category>s3</category>
      <category>mfa</category>
      <category>aws</category>
    </item>
    <item>
      <title>Create the Jenkins Server in AWS using Terraform</title>
      <dc:creator>waqas_ahmed01</dc:creator>
      <pubDate>Sun, 04 Sep 2022 11:40:24 +0000</pubDate>
      <link>https://dev.to/waqasahmed/create-the-jenkins-server-in-aws-using-terraform-4mhm</link>
      <guid>https://dev.to/waqasahmed/create-the-jenkins-server-in-aws-using-terraform-4mhm</guid>
      <description>&lt;p&gt;In this article, we will learn how to setup the Jenkins Server using the power of Terraform. &lt;br&gt;
There are number of ways to setup the Jenkins Server but that took some efforts and time to setup that. We can use Terraform to setup the infrastructure and configure the Jenkins Server without knowing any code complexity&lt;/p&gt;
&lt;h2&gt;
  
  
  Pre-requisite:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Basic knowledge of Terraform&lt;/li&gt;
&lt;li&gt;Terraform should be installed on your machine&lt;/li&gt;
&lt;li&gt;An Active AWS account to be used in this lab&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Terraform Configuration
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Provider
&lt;/h3&gt;

&lt;p&gt;In the provider block, specify the name of the provider (in our case, AWS) and the region where you want to build the infrastructure.&lt;br&gt;
Optionally, you can provide the ACCESS_KEY &amp;amp; SECRET_KEY inside the provider block, but hardcoding keys in code is not recommended.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = "us-east-1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Networking
&lt;/h3&gt;

&lt;p&gt;In this lab we will be setting up our own VPC, subnets, Internet Gateway, route table, etc.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# VPC for the Jenkins server
resource "aws_vpc" "web_vpc" {
  cidr_block = var.network_cidr
}

# Internet gateway to reach the internet
resource "aws_internet_gateway" "web_igw" {
  vpc_id = aws_vpc.web_vpc.id
}
# Route table with a route to the internet
resource "aws_route_table" "public_rt" {
  vpc_id = aws_vpc.web_vpc.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.web_igw.id
  }
  #   tags {
  #     Name = "Public Subnet Route Table"
  #   }
}
# Subnets with routes to the internet
resource "aws_subnet" "public_subnet" {
  # Use the count meta-parameter to create multiple copies
  count             = 2
  vpc_id            = aws_vpc.web_vpc.id
  cidr_block        = cidrsubnet(var.network_cidr, 2, count.index + 2)
  availability_zone = element(var.availability_zones, count.index)
  #   tags =  {
  #     Name = "Public Subnet ${count.index + 1}"
  #   }
}

# Associate public route table with the public subnets
resource "aws_route_table_association" "public_subnet_rta" {
  count          = 2
  subnet_id      = aws_subnet.public_subnet.*.id[count.index]
  route_table_id = aws_route_table.public_rt.id
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will be using Amazon Linux-2 AMI, instead of hardcoding the code with the AMI id, we will take help of DataSource block to retrieve the value of AMI from AWS&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_ami" "AmazonLinux2" {
  most_recent = true

  filter {
    name   = "name"
    values = ["amzn2-ami-kernel-5.10-hvm-2.0.20220805.0-x86_64-gp2"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["137112412989"] # Amazon
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now it's time to create the EC2 instance and configure the Jenkins server; however, there is one last piece of the puzzle. I want to expose the Jenkins server on port 8080 and serve the initialAdminPassword on port 80, so we can retrieve it without logging in to the EC2 instance. For this, we have to create a security group that allows both of these ports.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "jenkins-sg" {
  name        = "jenkins-sg-12345678"
  description = "Allow incoming HTTP traffic from the internet"
  vpc_id      = aws_vpc.web_vpc.id
  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # open to the world; tighten in production
  }
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  # Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
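&lt;p&gt;As an aside, the &lt;code&gt;service_ports&lt;/code&gt; variable declared later in this post could drive these two ingress rules from a single &lt;code&gt;dynamic&lt;/code&gt; block; a sketch of that alternative:&lt;/p&gt;

```hcl
# one ingress rule per entry in var.service_ports ([8080, 80])
dynamic "ingress" {
  for_each = var.service_ports
  content {
    from_port   = ingress.value
    to_port     = ingress.value
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```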



&lt;p&gt;So now we are ready to create the EC2 instance and configure the Jenkins server. I will use a bash script and pass it as user data using the file("path") function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
sudo yum update -y
sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
sudo amazon-linux-extras install java-openjdk11 -y
sudo yum install -y fontconfig jenkins
sudo systemctl start jenkins
sudo amazon-linux-extras install nginx1 -y
sudo systemctl start nginx
# wait for Jenkins to generate the initial admin password, then publish it via nginx
until [ -f /var/lib/jenkins/secrets/initialAdminPassword ]; do sleep 5; done
sudo cp /var/lib/jenkins/secrets/initialAdminPassword /usr/share/nginx/html/index.html
sudo chown -R root:root /var/lib/jenkins /var/cache/jenkins /var/log/jenkins
sudo sed -i 's/JENKINS_USER="jenkins"/JENKINS_USER="root"/' /etc/sysconfig/jenkins
sudo systemctl restart jenkins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save this Bash script file as install_jenkins.sh&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure Jenkins Server
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "jenkins" {
  ami                         = data.aws_ami.AmazonLinux2.id
  instance_type               = var.ec2_instance_type
  vpc_security_group_ids      = [aws_security_group.jenkins-sg.id]
  subnet_id                   = aws_subnet.public_subnet[0].id
  associate_public_ip_address = true
  user_data                   = file("install_jenkins.sh")
  root_block_device {
    delete_on_termination = true
    volume_size           = "20"
  }
  tags = {
    Name        = "Jenkins_Server"
    Environment = "Dev"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Variables Block
&lt;/h3&gt;

&lt;p&gt;To make our code more dynamic, we used a couple of variables above, so you will have to create a variables.tf file and declare those variables.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "ec2_instance_type" {
  type        = string
  default     = "t2.micro"
  description = "Please enter the instance type, if you want to provision different than T2 Micro"
}

variable "service_ports" {
  type        = list(number)
  description = "list of ingress ports"
  default     = [8080, 80]
}

variable "network_cidr" {
  default     = "192.168.0.0/24"
  description = "Please enter the CIDR"
}

variable "availability_zones" {
  type        = list(any)
  default     = ["us-east-1a", "us-east-1d"]
  description = "Please enter the AZs"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Voila, we have successfully configured the Jenkins server without going to the AWS Console.&lt;/p&gt;

&lt;p&gt;In order to retrieve the IP address of the newly created instance, we will use the output { } block. &lt;/p&gt;

&lt;h3&gt;
  
  
  Output
&lt;/h3&gt;

&lt;p&gt;To make the output more meaningful, I'm using a template file to display the output in a more customized way. Save the following code as outputtemp.tpl.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;%{for ports in port ~}
    backend = ${ip_addr}:${ports}
%{endfor ~}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now save the following as output.tf. The output takes its values from the port list and, using a for loop, prints the Jenkins server address and the nginx address on separate lines.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "IPAddress" {
  value = templatefile("outputtemp.tpl", { port = ["8080", "80"], ip_addr = aws_instance.jenkins.public_ip })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
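&lt;p&gt;With all the files in place, the usual Terraform workflow provisions everything:&lt;/p&gt;

```shell
terraform init                  # download the AWS provider
terraform plan                  # review the planned changes
terraform apply --auto-approve  # create the infrastructure
```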



&lt;p&gt;We have successfully set up the Jenkins server. To destroy the infrastructure created above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy --auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Thank you for reading my article; if you liked it, don't forget to share it with others. If you have any questions or suggestions, please leave a comment below, I would love to answer your queries.&lt;/p&gt;

&lt;p&gt;You can follow me on&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/waqas-ahmed-tech-expert/"&gt;LinkedIn&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/waqasahmed1992"&gt;GitHub&lt;/a&gt;&lt;br&gt;
&lt;a href="https://registry.terraform.io/namespaces/waqasahmed1992"&gt;Terraform&lt;/a&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>iaac</category>
      <category>jenkins</category>
    </item>
  </channel>
</rss>
