<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kuljot Biring</title>
    <description>The latest articles on DEV Community by Kuljot Biring (@kuljotbiring).</description>
    <link>https://dev.to/kuljotbiring</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1141702%2Fd4fcc33f-1cf7-46ea-8c83-041b22ebee76.jpeg</url>
      <title>DEV Community: Kuljot Biring</title>
      <link>https://dev.to/kuljotbiring</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kuljotbiring"/>
    <language>en</language>
    <item>
      <title>Tines - SOAR Tool</title>
      <dc:creator>Kuljot Biring</dc:creator>
      <pubDate>Wed, 12 Nov 2025 21:31:51 +0000</pubDate>
      <link>https://dev.to/kuljotbiring/tines-soar-tool-37fh</link>
      <guid>https://dev.to/kuljotbiring/tines-soar-tool-37fh</guid>
<description>&lt;p&gt;Recently, I have been working in Tines, as we have shifted to this platform for our SOAR needs.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Tines SOAR is a Security Orchestration, Automation, and Response platform that uses a no-code/low-code interface to help security teams automate and manage security workflows, making them faster and more efficient. It integrates with various security tools through an API-centric approach, allowing it to automate tasks like responding to alerts, triaging incidents, and enriching threat intelligence data.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I have been enjoying using this tool a great deal as it allows a lot of flexibility and comes with a large number of built-in integrations.&lt;/p&gt;

&lt;p&gt;Tines also performs well: I have noticed that the platform can run actions in parallel, which reduces run time compared to some other SOAR products.&lt;/p&gt;

&lt;p&gt;Using Tines, I have been able to automate 50 hours (and counting) of security work, freeing security engineers to spend time on more value-added projects.&lt;/p&gt;

&lt;p&gt;As part of my learning on this platform I have completed both the Tines Core Certification as well as the Tines Advanced Certification.&lt;/p&gt;

&lt;p&gt;These certifications contain a number of pre-labs with a lot of good hands-on material and tips for creating robust, efficient stories.&lt;/p&gt;

&lt;p&gt;I am looking forward to furthering my knowledge and skills on the platform in order to increase efficiency via automation!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fislo41nll1ko4xcln7g1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fislo41nll1ko4xcln7g1.png" alt="Tines Core Certification" width="800" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc67mkhbgau952mik8zyn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc67mkhbgau952mik8zyn.png" alt="Tines Advanced Certification" width="800" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>tines</category>
      <category>soar</category>
      <category>security</category>
      <category>automation</category>
    </item>
    <item>
      <title>Cybr - [LAB] Importing AWS resources into Terraform</title>
      <dc:creator>Kuljot Biring</dc:creator>
      <pubDate>Wed, 16 Jul 2025 21:24:05 +0000</pubDate>
      <link>https://dev.to/kuljotbiring/cybr-lab-importing-aws-resources-into-terraform-6ao</link>
      <guid>https://dev.to/kuljotbiring/cybr-lab-importing-aws-resources-into-terraform-6ao</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy67kquy7m339ux6p4ar2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy67kquy7m339ux6p4ar2.png" alt="AWS and Terraform" width="720" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today we are going to solve the challenge lab &lt;a href="https://cybr.com/courses/terraform-on-aws-from-zero-to-cloud-infrastructure/lessons/challengelab-import-an-aws-resource/" rel="noopener noreferrer"&gt;Import an AWS resource&lt;/a&gt; created by &lt;a href="https://cybr.com" rel="noopener noreferrer"&gt;Cybr&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This lab tests whether we can successfully import an already-created S3 bucket into our Terraform state and configuration.&lt;/p&gt;

&lt;p&gt;So let's take a look at the scenario presented:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Scenario 👨‍🔬&lt;/p&gt;

&lt;p&gt;For our scenario, let’s pretend that you’ve already completed this course, and you go to your team to tell them you need to start using Terraform and IaC to manage all of your infrastructure in AWS accounts. You decide to start with one of the easiest accounts that has the fewest resources. That account has an Amazon S3 bucket that was manually created. You’d like to start by importing that resource.&lt;/p&gt;

&lt;p&gt;It’s imperative that you not change any of the bucket’s existing settings/configurations! You are only importing the existing resource, not applying any changes to that bucket.&lt;/p&gt;

&lt;p&gt;You’ve completed this step when you get the following message:&lt;/p&gt;

&lt;p&gt;❯ terraform plan&lt;br&gt;
...&lt;br&gt;
No changes. Your infrastructure matches the configuration.&lt;/p&gt;

&lt;p&gt;Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.&lt;/p&gt;

&lt;p&gt;Once you’ve imported the bucket successfully, go ahead and delete it with terraform destroy!&lt;/p&gt;

&lt;p&gt;You’ve completed this step when you get the following message:&lt;/p&gt;

&lt;p&gt;❯ terraform destroy&lt;br&gt;
aws_s3_bucket.bucket: Refreshing state... &lt;br&gt;
...&lt;/p&gt;

&lt;p&gt;Plan: 0 to add, 0 to change, 1 to destroy.&lt;/p&gt;

&lt;p&gt;Do you really want to destroy all resources?&lt;br&gt;
  Terraform will destroy all your managed infrastructure, as shown above.&lt;br&gt;
  There is no undo. Only 'yes' will be accepted to confirm.&lt;/p&gt;

&lt;p&gt;Enter a value: yes&lt;/p&gt;

&lt;p&gt;aws_s3_bucket.bucket: Destroying... [id=cybrlab-import-bucket-272281913033]&lt;br&gt;
aws_s3_bucket.bucket: Destruction complete after 0s&lt;/p&gt;

&lt;p&gt;Destroy complete! Resources: 1 destroyed.&lt;/p&gt;

&lt;p&gt;And when running this command returns zero buckets:&lt;/p&gt;

&lt;p&gt;❯ aws s3api list-buckets&lt;/p&gt;

&lt;p&gt;Good luck and have fun!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We are also given the following hints:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Tip #1&lt;/p&gt;

&lt;p&gt;Here’s the Amazon S3 CLI documentation and list-buckets will probably be helpful.&lt;/p&gt;

&lt;p&gt;Tip #2&lt;/p&gt;

&lt;p&gt;Here’s a link to the AWS provider documentation for convenience.&lt;/p&gt;

&lt;p&gt;Tip #3&lt;/p&gt;

&lt;p&gt;A good starting point is to create three files: main.tf, provider.tf, variables.tf. Start by configuring those.&lt;/p&gt;

&lt;p&gt;Tip #4&lt;/p&gt;

&lt;p&gt;You are very likely to encounter errors when importing resources with Terraform, especially when running terraform plan after importing. This is normal and part of the troubleshooting process! Read the error codes — they are usually very helpful.&lt;/p&gt;

&lt;p&gt;Bonus Points&lt;/p&gt;

&lt;p&gt;For bonus points, if you get this warning, find a way to get rid of it!&lt;/p&gt;

&lt;p&gt;Warning: Argument is deprecated&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Note: There are several valid methods of completing this lab; I am choosing one of them. Also note that the resource blocks and outputs have been sanitized; you will need to fill these in with the values you get back.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We are first going to set up our profile with the provided credentials. Press &lt;code&gt;Start Lab&lt;/code&gt; on the lab webpage to reveal our Access Key ID and Secret Access Key. In our terminal we enter &lt;code&gt;aws configure --profile cybr&lt;/code&gt; and supply the generated values as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmfruj9nc2xivtdjdp326.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmfruj9nc2xivtdjdp326.png" alt="Profile setup" width="350" height="57"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are good to go. Let's start creating our required files.&lt;/p&gt;

&lt;p&gt;We begin with our &lt;code&gt;provider.tf&lt;/code&gt; file. To get the latest provider version we head over to the &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest" rel="noopener noreferrer"&gt;Terraform AWS Registry&lt;/a&gt; and in the top right of the page select &lt;code&gt;USE PROVIDER&lt;/code&gt; which will drop down the code blocks we need.&lt;/p&gt;

&lt;p&gt;We are going to add some configuration to our AWS provider block to use a variable for the region (which we will set soon) and the profile we want to use (which we set up earlier).&lt;/p&gt;

&lt;p&gt;Our &lt;code&gt;provider.tf&lt;/code&gt; will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "6.3.0"
    }
  }
}

provider "aws" {
  # Configuration options
  region = var.aws_region
  profile = "cybr"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we write our &lt;code&gt;variables.tf&lt;/code&gt; file like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "aws_region" {
  description = "The AWS region to deploy in"
  type        = string
  default     = "us-east-1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here we are defaulting our region to &lt;code&gt;us-east-1&lt;/code&gt;.&lt;/p&gt;
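
&lt;p&gt;Because the region is defined as a variable with a default, it can be overridden at run time without touching the code; for example (hypothetical region shown for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan -var="aws_region=us-west-2"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;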

&lt;p&gt;It's time to implement our &lt;code&gt;main.tf&lt;/code&gt; file. Before we attempt to create any resource blocks, we need to find the existing S3 bucket. In our terminal we run the command &lt;code&gt;aws s3api list-buckets --profile cybr&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Now we get back a JSON object showing the details of the existing bucket, which will look similar to this (sanitized and anonymized):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
{
  "Buckets": [
    {
      "Name": "demo-import-bucket",
      "CreationDate": "2025-07-16T20:26:20+00:00"
    }
  ],
  "Owner": {
    "DisplayName": "example-user",
    "ID":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  },
  "Prefix": null
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once we grab the name we can write our code using a minimal resource block as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import {
  to = aws_s3_bucket.bucket_to_import
  id = "demo-import-bucket"
}

resource "aws_s3_bucket" "bucket_to_import" {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the import block, replace the &lt;code&gt;id&lt;/code&gt; value with the actual name of the bucket you got back from the terminal command above.&lt;/p&gt;

&lt;p&gt;Now, in the terminal run a &lt;code&gt;terraform init&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;It would be a good idea to also run a &lt;code&gt;terraform validate&lt;/code&gt; to ensure our code doesn't have any glaring issues.&lt;/p&gt;

&lt;p&gt;Now we run a &lt;code&gt;terraform plan&lt;/code&gt; followed by a &lt;code&gt;terraform apply&lt;/code&gt;. We will see our resource being imported and the state file being created.&lt;/p&gt;
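
&lt;p&gt;As noted earlier, there are other valid approaches. Before Terraform 1.5 introduced the &lt;code&gt;import&lt;/code&gt; block, the same result was achieved with the CLI-driven workflow, which for our resource address and bucket name would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform import aws_s3_bucket.bucket_to_import demo-import-bucket
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;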

&lt;p&gt;Follow that up with a &lt;code&gt;terraform state list&lt;/code&gt;. We should see &lt;code&gt;aws_s3_bucket.bucket_to_import&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Let's see the details so we can grab them and drop them into our resource block. We run &lt;code&gt;terraform state show aws_s3_bucket.bucket_to_import&lt;/code&gt;.&lt;/p&gt;
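
&lt;p&gt;The output is an HCL-style dump of the bucket's attributes; abbreviated and sanitized, it will look something like this (the exact attributes vary by provider version):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# aws_s3_bucket.bucket_to_import:
resource "aws_s3_bucket" "bucket_to_import" {
    arn           = "arn:aws:s3:::demo-import-bucket"
    bucket        = "demo-import-bucket"
    force_destroy = false
    id            = "demo-import-bucket"
    ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;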

&lt;p&gt;Copy the corresponding values for &lt;code&gt;bucket&lt;/code&gt; and &lt;code&gt;force_destroy&lt;/code&gt; from the output and update the resource block. Additionally, we are going to set &lt;code&gt;force_destroy = true&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Our &lt;code&gt;main.tf&lt;/code&gt; will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import {
  to = aws_s3_bucket.bucket_to_import
  id = "demo-import-bucket"
}

resource "aws_s3_bucket" "bucket_to_import" {
    bucket = "demo-import-bucket"
    force_destroy = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run a &lt;code&gt;terraform plan&lt;/code&gt; again to confirm what changes we are going to make.&lt;/p&gt;

&lt;p&gt;If satisfied, run a &lt;code&gt;terraform apply -auto-approve&lt;/code&gt; which will confirm and apply the changes.&lt;/p&gt;

&lt;p&gt;Great! Now our rogue resource is managed by Terraform.&lt;/p&gt;

&lt;p&gt;It's time to clean up our resources. We can (optionally) remove the import block in our &lt;code&gt;main.tf&lt;/code&gt; file as we no longer need it. Subsequently, we can run a &lt;code&gt;terraform destroy&lt;/code&gt; and confirm our choice.&lt;/p&gt;

&lt;p&gt;This will destroy our S3 bucket. We can confirm this with an &lt;code&gt;aws s3api list-buckets --profile cybr&lt;/code&gt; command run in the terminal. We should see no buckets listed.&lt;/p&gt;

&lt;p&gt;If everything has worked as expected, press the &lt;code&gt;Terminate Lab&lt;/code&gt; button. You have successfully completed the challenge lab.&lt;/p&gt;

&lt;p&gt;To view the files we created in their final form, see below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/cyber-0ps/Cybr/blob/main/Terraform%20on%20AWS/LAB%20import%20an%20AWS%20resource/provider.tf" rel="noopener noreferrer"&gt;provider.tf&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/cyber-0ps/Cybr/blob/main/Terraform%20on%20AWS/LAB%20import%20an%20AWS%20resource/main.tf" rel="noopener noreferrer"&gt;main.tf&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/cyber-0ps/Cybr/blob/main/Terraform%20on%20AWS/LAB%20import%20an%20AWS%20resource/variables.tf" rel="noopener noreferrer"&gt;variables.tf&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>cloud</category>
      <category>s3</category>
    </item>
    <item>
      <title>Cybr - Beginner's Guide to AWS CloudTrail for Security</title>
      <dc:creator>Kuljot Biring</dc:creator>
      <pubDate>Tue, 15 Jul 2025 16:24:40 +0000</pubDate>
      <link>https://dev.to/kuljotbiring/cybr-beginners-guide-to-aws-cloudtrail-for-security-17f4</link>
      <guid>https://dev.to/kuljotbiring/cybr-beginners-guide-to-aws-cloudtrail-for-security-17f4</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytt0jnr7p3p1vnbj6rmt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytt0jnr7p3p1vnbj6rmt.png" alt="CloudTrail Security" width="800" height="592"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A few days ago I completed the course &lt;a href="https://cybr.com/courses/beginners-guide-to-aws-cloudtrail-for-security/" rel="noopener noreferrer"&gt;Beginner's Guide to AWS CloudTrail for Security&lt;/a&gt; by &lt;a href="https://cybr.com" rel="noopener noreferrer"&gt;Cybr&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This course succinctly covers the most important aspects of CloudTrail and its integration with other services. There are several demo lessons that give a detailed walk-through of setting up CloudTrail and the various considerations that need to be taken into account with its different settings.&lt;/p&gt;

&lt;p&gt;Some of the things you will learn are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CloudTrail Essentials&lt;/li&gt;
&lt;li&gt;CloudWatch Logs &amp;amp; SNS notifications&lt;/li&gt;
&lt;li&gt;CloudTrail Insights&lt;/li&gt;
&lt;li&gt;CloudTrail Lake&lt;/li&gt;
&lt;li&gt;Monitoring CloudTrail itself&lt;/li&gt;
&lt;li&gt;IAM&lt;/li&gt;
&lt;li&gt;Log file integrity&lt;/li&gt;
&lt;li&gt;Encryption&lt;/li&gt;
&lt;li&gt;Useful CloudTrail CLI commands&lt;/li&gt;
&lt;li&gt;AWS Security Monitoring Best Practices&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall, this is a great course that gives you real insight into how to use CloudTrail and the various nuances you must consider when enabling the service and using it in combination with other AWS services.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>cloud</category>
      <category>cloudtrail</category>
    </item>
    <item>
      <title>Cybr - [LAB] [Challenge] Configure security groups and NACLs to specific requirements</title>
      <dc:creator>Kuljot Biring</dc:creator>
      <pubDate>Thu, 10 Jul 2025 23:36:02 +0000</pubDate>
      <link>https://dev.to/kuljotbiring/cybr-lab-challenge-configure-security-groups-and-nacls-to-specific-requirements-4kk6</link>
      <guid>https://dev.to/kuljotbiring/cybr-lab-challenge-configure-security-groups-and-nacls-to-specific-requirements-4kk6</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnon11kz9f79pf2pi5qkb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnon11kz9f79pf2pi5qkb.png" alt=" " width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this walk-through we are going to solve the lab &lt;a href="https://cybr.com/courses/introduction-to-aws-security/lessons/lab-configure-security-groups-and-nacls-to-specific-requirements/" rel="noopener noreferrer"&gt;Configure security groups and NACLs to specific requirements&lt;/a&gt; created by &lt;a href="https://cybr.com/" rel="noopener noreferrer"&gt;Cybr&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;However, the twist is we are going to solve this using Terraform!&lt;/p&gt;

&lt;p&gt;Let's have a look at the requirements:&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;&lt;u&gt;Lab Details&lt;/u&gt;&lt;/strong&gt; 👨‍🔬&lt;br&gt;
    Length of time: ~30 minutes&lt;br&gt;
    Cost: $0 &lt;br&gt;
    Difficulty: Moderate&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Scenario&lt;/u&gt;&lt;/strong&gt; 🧪&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create four separate security groups&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Security Group #1&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
    Name it Web Servers&lt;br&gt;
    Provide open access to two commonly used ports for application servers: 80 and 443&lt;br&gt;
    This open access should work for both IPv4 and IPv6&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Security Group #2&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
    Name it App Servers&lt;br&gt;
    Provide open access for instances in the Web Servers SG to be able to communicate with your app servers&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Security Group #3&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
    Name it IT Administration&lt;br&gt;
    Provide open access for your organization’s IT admins to be able to SSH and/or RDP into the cloud instances&lt;/p&gt;

&lt;p&gt;Your IT admins should only ever have access to those instances from the following two IP addresses:&lt;br&gt;
        172.16.0.0&lt;br&gt;
        192.168.0.0&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Security Group #4&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
    Name it Database&lt;br&gt;
    Provide open access for application servers to be able to communicate with your MySQL database&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create two separate NACLs:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;NACL #1&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
    Name it Public Subnets&lt;br&gt;
    Provide open access to allow all traffic that would be allowed by the security groups for resources that would be launched in the public subnets&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;NACL #2&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
    Name it Private Subnets&lt;br&gt;
    Provide open access to allow all traffic that would be allowed by the security groups for resources that would be launched in the private subnets&lt;/p&gt;



&lt;p&gt;Let's get started. We are going to create four files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;terraform.tf&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;variables.tf&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;main.tf&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;outputs.tf&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We start off with our &lt;code&gt;terraform.tf&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;First, we are going to add our Terraform block like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "&amp;gt;= 1.3.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "&amp;gt;= 5.0"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This block performs two functions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Requires that the Terraform version be 1.3.0 or higher.&lt;/li&gt;
&lt;li&gt;Declares the AWS provider by specifying its source (from HashiCorp) and requires that its version be 5.0 or higher.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By specifying versions, we also ensure compatibility with specific Terraform and provider versions.&lt;/p&gt;
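
&lt;p&gt;Terraform supports several version-constraint operators beyond &lt;code&gt;&amp;gt;=&lt;/code&gt;; a few common forms for reference:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version = "5.0.0"     # exactly version 5.0.0 (an "=" constraint is implied)
version = "&amp;gt;= 5.0"    # version 5.0 or newer
version = "~&amp;gt; 5.0"    # any 5.x release, but not 6.0 (pessimistic constraint)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;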

&lt;p&gt;Next, in this same file we add the provider block:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = var.aws_region
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This block configures the AWS provider. It sets the region to use via a variable which we will set soon in a &lt;code&gt;variables.tf&lt;/code&gt; file. By using a variable, we can re-configure our region without hardcoding it, allowing for easy updates.&lt;/p&gt;

&lt;p&gt;Let's now create our &lt;code&gt;variables.tf&lt;/code&gt; file. The purpose of this file is to define input variables in one place, allow easy modification of infrastructure settings, keep our code flexible and reusable, and support dynamic configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "aws_region" {
  description = "The AWS region to deploy resources into"
  type        = string
  default     = "us-east-1"
}

variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "it_admin_ips" {
  description = "List of IPs for IT admin access"
  type        = list(string)
  default     = ["172.16.0.0/16", "192.168.0.0/16"]
}

variable "web_sg_name" {
  type    = string
  default = "web-server"
}

variable "app_sg_name" {
  type    = string
  default = "app-servers"
}

variable "it_sg_name" {
  type    = string
  default = "it-administrator"
}

variable "db_sg_name" {
  type    = string
  default = "database"
}

variable "public_nacl_name" {
  type    = string
  default = "public-subnets"
}

variable "private_nacl_name" {
  type    = string
  default = "private-subnets"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In our code we are primarily setting the names of the various resources as variables per the lab instructions.&lt;/p&gt;

&lt;p&gt;For &lt;code&gt;aws_region&lt;/code&gt;, we set the region to &lt;code&gt;us-east-1&lt;/code&gt;. For &lt;code&gt;vpc_cidr&lt;/code&gt;, we set the VPC CIDR block. We also set the list of IPs for the &lt;code&gt;it_admin_ips&lt;/code&gt; variable as specified by the lab. Lastly, the remaining variable names correspond to the names the lab requires for our resources.&lt;/p&gt;
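
&lt;p&gt;Any of these defaults can be overridden without editing the code, for example through a &lt;code&gt;terraform.tfvars&lt;/code&gt; file (hypothetical values shown for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# terraform.tfvars -- loaded automatically by terraform plan/apply
aws_region   = "us-west-2"
it_admin_ips = ["203.0.113.10/32", "198.51.100.20/32"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;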

&lt;p&gt;Now, we will begin writing our &lt;code&gt;main.tf&lt;/code&gt; file and creating the resource blocks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name = "cybr-lab"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We begin by creating our VPC resource block, giving it the name 'main'. We set the CIDR block configuration to use the variable defined in &lt;code&gt;variables.tf&lt;/code&gt;, which sets the CIDR range of the VPC for our lab. The DNS settings enable DNS resolution and allow public DNS hostnames. Lastly, we tag the resource to allow for tracking and identification.&lt;/p&gt;

&lt;p&gt;Our next resource we will build is the Webserver Security Group. Let's have a look at what this resource block will look like.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "web_servers" {
  name        = var.web_sg_name
  description = "Allow HTTP and HTTPs from anywhere (IPv4 and IPv6)"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port         = 80
    to_port           = 80
    protocol          = "tcp"
    ipv6_cidr_blocks  = ["::/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port         = 443
    to_port           = 443
    protocol          = "tcp"
    ipv6_cidr_blocks  = ["::/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name = var.web_sg_name
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Looking at our resource, we'll start from the top. In the first part, we are assigning the name of the Web Server security group from our variables file which we created earlier. We add a helpful description and associate the security group with the previously created VPC.&lt;/p&gt;

&lt;p&gt;In the next two blocks we add ingress on port &lt;code&gt;80&lt;/code&gt; (HTTP) for IPv4 and IPv6 from any address. We do the same for port &lt;code&gt;443&lt;/code&gt; (HTTPS) in the two blocks that follow.&lt;/p&gt;

&lt;p&gt;Now we have two egress blocks, which technically are not required as security groups are stateful. So why would we add these blocks? We are adding them for two reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We need to allow IPv6 egress &lt;code&gt;(::/0)&lt;/code&gt;, which the default rule does not include.&lt;/li&gt;
&lt;li&gt;We want to document explicitly for clarity or to prepare for future restrictions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note: the &lt;code&gt;-1&lt;/code&gt; for protocol refers to all protocols/network traffic.&lt;/p&gt;

&lt;p&gt;Lastly, we tag our resource for identification and tracking purposes.&lt;/p&gt;

&lt;p&gt;Let's now build the web app security group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "app_servers" {
  name        = var.app_sg_name
  description = "Allow traffic from web servers"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port       = 0
    to_port         = 65535
    protocol        = "tcp"
    security_groups = [aws_security_group.web_servers.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = var.app_sg_name
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We start by assigning the name for the resource from our variables file. We also associate the resource with our VPC.&lt;/p&gt;

&lt;p&gt;In the ingress block we allow the TCP protocol across the full port range &lt;code&gt;0-65535&lt;/code&gt;. We also reference the web server security group, as we want to allow access from the web servers per the lab requirements. For the egress block we allow all outbound access.&lt;/p&gt;

&lt;p&gt;Finally we tag the resource for tracking and identification.&lt;/p&gt;
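
&lt;p&gt;As an aside, newer versions of the AWS provider also offer standalone rule resources. The same SG-to-SG ingress could be sketched with &lt;code&gt;aws_vpc_security_group_ingress_rule&lt;/code&gt;, which manages one rule per resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# One rule per resource; the web servers SG is referenced as the traffic source
resource "aws_vpc_security_group_ingress_rule" "app_from_web" {
  security_group_id            = aws_security_group.app_servers.id
  referenced_security_group_id = aws_security_group.web_servers.id
  ip_protocol                  = "tcp"
  from_port                    = 0
  to_port                      = 65535
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;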

&lt;p&gt;Moving on to the next resource, we build our IT administration security group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "it_admin" {
  name        = var.it_sg_name
  description = "Allow SSH and RDP from IT admin IPs"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = var.it_admin_ips
  }

  ingress {
    from_port   = 3389
    to_port     = 3389
    protocol    = "tcp"
    cidr_blocks = var.it_admin_ips
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = var.it_sg_name
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similar to our other resources created thus far, we assign the name from the relevant variable block in our &lt;code&gt;variables.tf&lt;/code&gt; file and associate the security group with our VPC.&lt;/p&gt;

&lt;p&gt;We create ingress blocks for both port 22 (SSH) and port 3389 (RDP). For the CIDR blocks we reference the allowable IPs &lt;code&gt;[172.16.0.0, 192.168.0.0]&lt;/code&gt; from the lab specs using the variables we set in &lt;code&gt;variables.tf&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Again, we add an egress block for this resource as explained earlier.&lt;/p&gt;

&lt;p&gt;As usual we tag our resource at the end.&lt;/p&gt;

&lt;p&gt;Now, we create the Database security group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "database" {
  name        = var.db_sg_name
  description = "Allow MySQL access from app servers"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.app_servers.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = var.db_sg_name
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We set the name as usual and associate it with our VPC. Our ingress rule limits access on port &lt;code&gt;3306&lt;/code&gt; (MySQL) to resources that belong to the app servers security group.&lt;/p&gt;

&lt;p&gt;Similar to our other resources, we have our egress and tags blocks.&lt;/p&gt;

&lt;p&gt;Next we are going to create our public NACL and the associated rules. Keep in mind that since NACLs are stateless, we need matching rules for inbound and outbound traffic. We also space the rule numbers 10 apart to leave room for additional rules in the future if needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_network_acl" "public" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = var.public_nacl_name
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this block, we create the NACL, associate it with our VPC, and tag it with the relevant variable name.&lt;/p&gt;

&lt;p&gt;Now we set up the public ingress rules.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_network_acl_rule" "public_ingress" {
  network_acl_id = aws_network_acl.public.id
  rule_number    = 100
  egress         = false
  protocol       = "-1"
  rule_action    = "allow"
  cidr_block     = "0.0.0.0/0"
  from_port      = 0
  to_port        = 0
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We set up our IPv4 ingress rule by associating it with our public NACL, allowing all inbound traffic, and assigning it a rule number.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_network_acl_rule" "public_ingress_ipv6" {
  network_acl_id = aws_network_acl.public.id
  rule_number    = 110
  egress         = false
  protocol       = "-1"
  rule_action    = "allow"
  ipv6_cidr_block = "::/0"
  from_port      = 0
  to_port        = 0
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We do the same for IPv6. Note that &lt;code&gt;egress = false&lt;/code&gt; means these are ingress rules.&lt;/p&gt;

&lt;p&gt;Since NACLs are stateless, we need matching egress rules. We set these up just as we did the ingress rules.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_network_acl_rule" "public_egress" {
  network_acl_id = aws_network_acl.public.id
  rule_number    = 120
  egress         = true
  protocol       = "-1"
  rule_action    = "allow"
  cidr_block     = "0.0.0.0/0"
  from_port      = 0
  to_port        = 0
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For IPv4 egress.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_network_acl_rule" "public_egress_ipv6" {
  network_acl_id = aws_network_acl.public.id
  rule_number    = 130
  egress         = true
  protocol       = "-1"
  rule_action    = "allow"
  ipv6_cidr_block = "::/0"
  from_port      = 0
  to_port        = 0
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For IPv6 egress.&lt;/p&gt;

&lt;p&gt;If you're following along, you'll note that the lab requires a private NACL as well. We are going to do this very similarly to the public NACL, with the obvious exceptions of creating a private NACL and associating the rules we create with it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_network_acl" "private" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = var.private_nacl_name
  }
}

resource "aws_network_acl_rule" "private_ingress" {
  network_acl_id = aws_network_acl.private.id
  rule_number    = 100
  egress         = false
  protocol       = "-1"
  rule_action    = "allow"
  cidr_block     = "0.0.0.0/0"
  from_port      = 0
  to_port        = 0
}

resource "aws_network_acl_rule" "private_egress" {
  network_acl_id = aws_network_acl.private.id
  rule_number    = 110
  egress         = true
  protocol       = "-1"
  rule_action    = "allow"
  cidr_block     = "0.0.0.0/0"
  from_port      = 0
  to_port        = 0
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. That is our &lt;code&gt;main.tf&lt;/code&gt; file. We are now ready to create our &lt;code&gt;outputs.tf&lt;/code&gt; file, in which we simply output the ID of each resource for our reference.&lt;/p&gt;

&lt;p&gt;The file will be set up like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "vpc_id" {
  value = aws_vpc.main.id
}

output "web_sg_id" {
  value = aws_security_group.web_servers.id
}

output "app_sg_id" {
  value = aws_security_group.app_servers.id
}

output "it_admin_sg_id" {
  value = aws_security_group.it_admin.id
}

output "database_sg_id" {
  value = aws_security_group.database.id
}

output "public_nacl_id" {
  value = aws_network_acl.public.id
}

output "private_nacl_id" {
  value = aws_network_acl.private.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have all the files we need. We run &lt;code&gt;terraform fmt -recursive&lt;/code&gt; to format our code properly. Next we run &lt;code&gt;terraform init&lt;/code&gt; to initialize our providers, followed by &lt;code&gt;terraform validate&lt;/code&gt; to ensure there are no errors.&lt;/p&gt;

&lt;p&gt;Let's run &lt;code&gt;terraform plan&lt;/code&gt; and review the resources that will be created. If you are satisfied, feel free to run &lt;code&gt;terraform apply -auto-approve&lt;/code&gt; to build our resources.&lt;/p&gt;
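&lt;p&gt;For reference, the full command sequence, run from the project directory, looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform fmt -recursive      # format all .tf files in place
terraform init                # download and initialize the AWS provider
terraform validate            # catch syntax and reference errors
terraform plan                # preview the changes
terraform apply -auto-approve # build the resources
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;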

&lt;p&gt;Congratulations, we have successfully completed the lab. Although these resources will cost us no money (at least at the time of this write-up), you may want to run &lt;code&gt;terraform destroy&lt;/code&gt; at some point to clean up the resources.&lt;/p&gt;

&lt;p&gt;You can find the complete files here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/cyber-0ps/Cybr/blob/main/Introduction%20to%20AWS%20Security/Infrastructure_Security/VPC_Lab/terraform.tf" rel="noopener noreferrer"&gt;terraform.tf&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/cyber-0ps/Cybr/blob/main/Introduction%20to%20AWS%20Security/Infrastructure_Security/VPC_Lab/variables.tf" rel="noopener noreferrer"&gt;variables.tf&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/cyber-0ps/Cybr/blob/main/Introduction%20to%20AWS%20Security/Infrastructure_Security/VPC_Lab/main.tf" rel="noopener noreferrer"&gt;main.tf&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/cyber-0ps/Cybr/blob/main/Introduction%20to%20AWS%20Security/Infrastructure_Security/VPC_Lab/outputs.tf" rel="noopener noreferrer"&gt;outputs.tf&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>cloud</category>
      <category>security</category>
    </item>
    <item>
      <title>Cybr - Introduction to AWS Security</title>
      <dc:creator>Kuljot Biring</dc:creator>
      <pubDate>Mon, 07 Jul 2025 03:00:48 +0000</pubDate>
      <link>https://dev.to/kuljotbiring/cybr-introduction-to-aws-security-535n</link>
      <guid>https://dev.to/kuljotbiring/cybr-introduction-to-aws-security-535n</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft3qng9xrwnfxw9c2ucta.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft3qng9xrwnfxw9c2ucta.png" alt="Image description" width="800" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I recently finished the &lt;a href="https://cybr.com/courses/introduction-to-aws-security/" rel="noopener noreferrer"&gt;Introduction to AWS Security&lt;/a&gt; course by &lt;a href="https://cybr.com/" rel="noopener noreferrer"&gt;Cybr&lt;/a&gt;. The course is part of the platform's &lt;a href="https://cybr.com/learning-path/aws-blue-team-learning-path/" rel="noopener noreferrer"&gt;Blue Team Learning Path&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This is a great course, offering content that highlights the key security features of AWS.&lt;/p&gt;

&lt;p&gt;The course has over 100 lessons covering the following topics:&lt;/p&gt;

&lt;p&gt;🚧 Infrastructure Security&lt;br&gt;
👤 Identity and Access Management (IAM)&lt;br&gt;
⛁ Data Protection&lt;br&gt;
🪣 Amazon S3 Bucket Protection&lt;br&gt;
🕵🏼‍♂️ Logging, Monitoring, and Incident Response&lt;br&gt;
👥 Multi-Account Security&lt;br&gt;
&amp;lt;/&amp;gt; Infrastructure as Code (IaC)&lt;/p&gt;

&lt;p&gt;The course contains several labs, graphical cheat sheets with detailed explanations, author-hosted sandbox labs, and demo videos giving detailed walk-throughs of services and implementations. There is a good blend of theory and hands-on material. If that's not enough, you get nearly six credit hours for completing the course!&lt;/p&gt;

&lt;p&gt;Since this is an introductory course, the author is careful to give sufficient but not overly detailed information via concise, digestible videos covering the important aspects.&lt;/p&gt;

&lt;p&gt;Overall, this course packs in some great information and the author's teaching style is solid. As a security engineer holding several AWS certifications, I found this course a great buy. I highly recommend giving it a go.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>cloud</category>
      <category>learning</category>
    </item>
    <item>
      <title>Cybr - [LAB] [Challenge] Create a VPC with public and private subnets</title>
      <dc:creator>Kuljot Biring</dc:creator>
      <pubDate>Thu, 19 Jun 2025 21:02:21 +0000</pubDate>
      <link>https://dev.to/kuljotbiring/cybr-lab-challenge-create-a-vpc-with-public-and-private-subnets-49nd</link>
      <guid>https://dev.to/kuljotbiring/cybr-lab-challenge-create-a-vpc-with-public-and-private-subnets-49nd</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi94wcx2fxe3fq1pjkpz5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi94wcx2fxe3fq1pjkpz5.png" alt="AWS VPC" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today, we are going to tackle the lab &lt;a href="https://cybr.com/courses/introduction-to-aws-security/lessons/lab-challenge-create-a-vpc-with-public-and-private-subnets/" rel="noopener noreferrer"&gt;Cybr - [LAB] [Challenge] Create a VPC with public and private subnets&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, there's a slight twist! We are going to do the lab using Terraform. Why? Because clicking around in a console is great when you're initially learning, but not realistic when you are working in a more professional environment. Therefore, we are going to use IaC via Terraform to complete the lab.&lt;/p&gt;

&lt;p&gt;Before we get started, this walk-through assumes you know how to set up Terraform and have a very basic understanding of it. &lt;/p&gt;

&lt;p&gt;Let's get started.&lt;/p&gt;

&lt;p&gt;The lab gives the following prompt:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Lab Details 👨‍🔬&lt;br&gt;
Length of time: 20 minutes&lt;/p&gt;

&lt;p&gt;Difficulty: Easy&lt;/p&gt;

&lt;p&gt;We did something very similar in the demo lesson titled “Creating VPCs and Subnets” but I want you to try and complete this scenario as much as possible without looking back at that lesson. Of course, if you’re stuck and you can’t find answers by searching online, I do recommend using the course lesson material to break through. Pretend like you’ve been asked to do this on the job and troubleshoot to the best of your ability. That will help you build practical skills.&lt;/p&gt;

&lt;p&gt;Scenario 🧪&lt;br&gt;
Create a VPC named cybr-vpc-lab that contains 2 public subnets and 2 private subnets. Each of the public subnets should reside in different availability zones, with a private subnet in each of those zones as well.&lt;br&gt;
Use a CIDR block of /16 for the VPC and CIDRs of /24 for the subnets.&lt;br&gt;
Create an S3 Gateway VPC Endpoint that is connected to both of the private subnets.&lt;br&gt;
While you can use the “VPC and more” option to automate a lot of this, I challenge you to manually create these resources instead to really apply what you’ve learned so far.&lt;/p&gt;

&lt;p&gt;Tips:&lt;/p&gt;

&lt;p&gt;Remember what makes a public subnet versus a private one&lt;br&gt;
Before you launch a resource, it’s a great idea to verify its pricing first. For example, you should not be launching a NAT Gateway if you want to keep the cost at $0.00 since NAT Gateways cost money — all resources needed for this lab don’t cost anything so that’s a hint you don’t need a NAT Gateway&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Before we start coding, keep in mind we want our code to be modularized and follow DRY principles. We are going to create a few files which will look like the directory structure below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cybr-vpc-lab/
├── main.tf             
├── variables.tf       
├── outputs.tf         
└── terraform.tf 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's start by creating a &lt;code&gt;terraform.tf&lt;/code&gt; file. This file is going to contain our Terraform provider and version restrictions. &lt;/p&gt;

&lt;p&gt;Providing versions is a good practice when writing Terraform as it prevents breaking changes from occurring due to compatibility and stability when using different features and versions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "&amp;gt;= 1.3.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "&amp;gt;= 5.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this file we are ensuring that the Terraform version is 1.3.0 or greater. &lt;/p&gt;

&lt;p&gt;We are also sourcing the AWS provider from HashiCorp and requiring it to be version 5.0 or greater.&lt;/p&gt;

&lt;p&gt;Lastly, we are configuring the provider to use the AWS Region specified in the variables file (which we will create next).&lt;/p&gt;

&lt;p&gt;Now that we have our providers taken care of, we are going to create &lt;code&gt;variables.tf&lt;/code&gt;. This file will house all of our input variables we will use throughout our code.&lt;/p&gt;

&lt;p&gt;Our first variable block will be for the AWS Region we are going to deploy our infrastructure into. Typically, we like to use &lt;code&gt;us-east-1&lt;/code&gt; for this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "aws_region" {
  description = "The AWS region to deploy resources"
  type        = string
  default     = "us-east-1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we were told that our VPC should be created with a CIDR block of /16, so let's create a variable for it accordingly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  type        = string
  default     = "10.0.0.0/16"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are also to create 2 public subnets and 2 private subnets, with each pair spread across two different Availability Zones. All subnets are expected to have /24 CIDRs. We create variables for these as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "public_subnet_cidrs" {
  description = "CIDR blocks for public subnets"
  type        = list(string)
  default     = ["10.0.1.0/24", "10.0.2.0/24"]
}

variable "private_subnet_cidrs" {
  description = "CIDR blocks for private subnets"
  type        = list(string)
  default     = ["10.0.101.0/24", "10.0.102.0/24"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Note that the default values for both the public and private subnets are defined as lists of strings.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;At this point we are ready to start creating the resource blocks to build the infrastructure required.&lt;/p&gt;

&lt;p&gt;Let's write our &lt;code&gt;main.tf&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;As a quick background, resource blocks in Terraform are generally configured as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "&amp;lt;PROVIDER&amp;gt;_&amp;lt;RESOURCE_TYPE&amp;gt;" "&amp;lt;RESOURCE_NAME&amp;gt;" {
  argument = some_value

  tags = {
    Name = "some_name"
    Environment = "some_environment"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have the block labelled as a &lt;code&gt;resource&lt;/code&gt; type letting Terraform know we are creating a resource. &lt;/p&gt;

&lt;p&gt;Next, the provider and resource type signify which provider we are using and what type of resource to create. &lt;/p&gt;

&lt;p&gt;This is followed by the managed resource name which is how we can refer to the resource with our configuration. &lt;/p&gt;

&lt;p&gt;Within the body of the block we have required arguments, as well as optional tags which help us identify and manage resources.&lt;/p&gt;

&lt;p&gt;We begin by first creating the resource block for our VPC.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name = "cybr-vpc-labs"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that our resource type is &lt;code&gt;aws_vpc&lt;/code&gt; indicating to Terraform what type of resource we want to build. &lt;/p&gt;

&lt;p&gt;We have the managed resource name of &lt;code&gt;main&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We also have the cidr_block value set to the value of the variable &lt;code&gt;var.vpc_cidr&lt;/code&gt; which is being referenced from our variable named &lt;code&gt;vpc_cidr&lt;/code&gt; in the &lt;code&gt;variables.tf&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;Both &lt;code&gt;enable_dns_support&lt;/code&gt; and &lt;code&gt;enable_dns_hostnames&lt;/code&gt; are set to true.&lt;/p&gt;

&lt;p&gt;Lastly, we tag our resource with the value &lt;code&gt;cybr-vpc-labs&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Next, to ensure that our subnets are spread across multiple Availability Zones (AZs) in the selected region, we use a data source to dynamically fetch the available zones.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_availability_zones" "available" {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we create the Internet Gateway which would let resources in our VPC reach the public internet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "cybr-igw"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we will create the public subnets like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.public_subnet_cidrs[count.index]
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name = "cybr-public-subnet-${count.index + 1}"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Looking at our code, we can see we set &lt;code&gt;count&lt;/code&gt; to 2, meaning that two subnets will be created. The &lt;code&gt;vpc_id&lt;/code&gt; is set to the ID of our main VPC.&lt;/p&gt;

&lt;p&gt;You can also note that &lt;code&gt;cidr_block&lt;/code&gt; is assigned from the &lt;code&gt;public_subnet_cidrs&lt;/code&gt; variable, using the current index from &lt;code&gt;count&lt;/code&gt; to select the appropriate CIDR block for each subnet.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;availability_zone&lt;/code&gt; is drawn from the &lt;code&gt;aws_availability_zones&lt;/code&gt; data source for our selected region (us-east-1), using the current index to place each subnet in a different zone.&lt;/p&gt;

&lt;p&gt;Furthermore, &lt;code&gt;map_public_ip_on_launch&lt;/code&gt; is set to true, which allows instances launched in these subnets to receive public IP addresses.&lt;/p&gt;

&lt;p&gt;Lastly, each subnet is tagged with a name utilizing its index: &lt;code&gt;cybr-public-subnet-1&lt;/code&gt; and &lt;code&gt;cybr-public-subnet-2&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now that we have code for our public subnets, we need to create a public route table for them. We associate the route table with our VPC using &lt;code&gt;vpc_id = aws_vpc.main.id&lt;/code&gt;. Next, we create the route definition that directs all outbound traffic to the internet gateway we specified earlier via &lt;code&gt;aws_internet_gateway.gw.id&lt;/code&gt;, which enables internet access for resources using this route table. Lastly, we add a tag to more easily track and manage the resource. Here is what our resource block looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.gw.id
  }

  tags = {
    Name = "cybr-public-rt"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Great, we have created our route table and now we just need to associate our public subnets with it. We do this by using &lt;code&gt;aws_route_table_association&lt;/code&gt; as the resource type. We also set the &lt;code&gt;count&lt;/code&gt; parameter to 2 so that two associations are created. The &lt;code&gt;subnet_id&lt;/code&gt; is set to the ID of each public subnet defined earlier using &lt;code&gt;aws_subnet.public[count.index].id&lt;/code&gt;, which references each subnet by its current index. Finally, we link to the public route table. Our resource block is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route_table_association" "public" {
  count          = 2
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have coded our public subnets, public route table, and associated them, we need to do the same for the private subnets.&lt;/p&gt;

&lt;p&gt;Let's begin by first creating the private subnets, which is very similar to how we created the public subnets, except that resources in the private subnets will not be assigned public IP addresses:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_subnet" "private" {
  count             = 2
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.private_subnet_cidrs[count.index]
  availability_zone = data.aws_availability_zones.available.names[count.index]

  tags = {
    Name = "cybr-private-subnet-${count.index + 1}"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similar to before, we will now create the route table for our private subnets. The key difference here will be that the route table will be used to direct traffic to the S3 endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route_table" "private" {
  count  = 2
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "cybr-private-rt-${count.index + 1}"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next step is to now associate these private subnets with the private route table we just created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route_table_association" "private" {
  count          = 2
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private[count.index].id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our last resource block creates the S3 Gateway Endpoint, which will allow resources in our private subnets to connect to S3 without traversing the internet. We will need to do the following: indicate the appropriate resource type, set the VPC ID to that of the main VPC, dynamically construct the service name to reference the S3 service in the region we have chosen, specify the endpoint type, associate the resource with the correct route tables, and finally, tag our resource for identification. The configuration will look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.main.id
  service_name      = "com.amazonaws.${var.aws_region}.s3"
  vpc_endpoint_type = "Gateway"

  route_table_ids = [for rt in aws_route_table.private : rt.id]

  tags = {
    Name = "cybr-s3-endpoint"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that for our route table association, we are using a Terraform &lt;code&gt;for&lt;/code&gt; expression to gather the private route table IDs from &lt;code&gt;aws_route_table.private&lt;/code&gt;.&lt;/p&gt;
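&lt;p&gt;An equivalent, slightly more compact way to express this is Terraform's splat expression, which also collects one attribute from every instance of a counted resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# same result as the for expression above
route_table_ids = aws_route_table.private[*].id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;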

&lt;p&gt;For the final stretch of this lab we are going to create an &lt;code&gt;outputs.tf&lt;/code&gt; file. The purpose of this file is to define several output values for our Terraform configuration, which can be useful for retrieving information about the resources that were created. &lt;/p&gt;

&lt;p&gt;We are going to create outputs for the VPC ID, public subnet IDs, private subnet IDs, and the S3 VPC Endpoint ID. The code will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "vpc_id" {
  description = "The ID of the VPC"
  value       = aws_vpc.main.id
}

output "public_subnet_ids" {
  description = "IDs of the public subnets"
  value       = [for subnet in aws_subnet.public : subnet.id]
}

output "private_subnet_ids" {
  description = "IDs of the private subnets"
  value       = [for subnet in aws_subnet.private : subnet.id]
}

output "s3_vpc_endpoint_id" {
  description = "ID of the S3 Gateway VPC Endpoint"
  value       = aws_vpc_endpoint.s3.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have written all our code, we should format it properly. Let's use &lt;code&gt;terraform fmt -recursive&lt;/code&gt; to make our code more readable.&lt;/p&gt;

&lt;p&gt;We also need to initialize our modules with the command: &lt;code&gt;terraform init&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Next, validate the syntax using &lt;code&gt;terraform validate&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We can do a dry run of our code and infrastructure changes using &lt;code&gt;terraform plan&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;If we are satisfied with the resources that will be created and how everything looks, we can deploy them using &lt;code&gt;terraform apply -auto-approve&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Although the resources we have created should not incur any charges, we are going to remove them using the command &lt;code&gt;terraform destroy&lt;/code&gt; and confirm our choice once we ascertain that we are destroying all the resources we have created.&lt;/p&gt;
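&lt;p&gt;Putting the whole workflow together, the commands we ran in order are:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform fmt -recursive      # format the code
terraform init                # initialize providers and modules
terraform validate            # check the syntax
terraform plan                # dry run of the changes
terraform apply -auto-approve # deploy the resources
terraform destroy             # clean up when finished
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;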

&lt;p&gt;Recap:&lt;/p&gt;

&lt;p&gt;We have created the following resources:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Resource Type&lt;/th&gt;
&lt;th&gt;Count&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;VPC&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Main container for networking&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Public Subnets&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Spread across AZs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Private Subnets&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Spread across AZs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Route Tables&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;1 public, 2 private&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Subnet Associations&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;2 public, 2 private&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;S3 VPC Endpoint&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Connects private subnets to S3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Outputs&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;VPC ID, subnet IDs, endpoint ID&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;We are using Terraform rather than clicking around in the console because managing infrastructure with Terraform allows for repeatability and automation. By defining our infrastructure as code, we can version, share, and reproduce environments consistently, while manual configuration can lead to errors and inconsistencies.&lt;/p&gt;

&lt;p&gt;For the complete version of all the files, see the links list below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/cyber-0ps/Cybr/blob/main/Introduction%20to%20AWS%20Security/Infrastructure_Security/VPC_Lab/terraform.tf" rel="noopener noreferrer"&gt;terraform.tf&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/cyber-0ps/Cybr/blob/main/Introduction%20to%20AWS%20Security/Infrastructure_Security/VPC_Lab/variables.tf" rel="noopener noreferrer"&gt;variables.tf&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/cyber-0ps/Cybr/blob/main/Introduction%20to%20AWS%20Security/Infrastructure_Security/VPC_Lab/main.tf" rel="noopener noreferrer"&gt;main.tf&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/cyber-0ps/Cybr/blob/main/Introduction%20to%20AWS%20Security/Infrastructure_Security/VPC_Lab/outputs.tf" rel="noopener noreferrer"&gt;outputs.tf&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>vpc</category>
      <category>security</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>Hashicorp: Terraform Associate</title>
      <dc:creator>Kuljot Biring</dc:creator>
      <pubDate>Wed, 18 Jun 2025 00:17:51 +0000</pubDate>
      <link>https://dev.to/kuljotbiring/hashicorp-terraform-associate-16kd</link>
      <guid>https://dev.to/kuljotbiring/hashicorp-terraform-associate-16kd</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24bl7prvg3gymoj8cw63.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24bl7prvg3gymoj8cw63.png" alt="Hashicorp: Terraform Associate" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have recently passed the HashiCorp Terraform Associate Exam. To help prepare, I supplemented my existing knowledge base with the fantastic course &lt;a href="https://www.udemy.com/course/terraform-hands-on-labs" rel="noopener noreferrer"&gt;HashiCorp Certified: Terraform Associate - Hands-On Labs by: Bryan Krausen&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The course has dozens of labs that reinforce the concepts and provide a practical way to gain hands-on experience with Terraform.&lt;/p&gt;

&lt;p&gt;As for the exam itself, I was able to breeze through it with plenty of time to spare.&lt;/p&gt;

&lt;p&gt;The exam tests for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Infrastructure as code core concepts&lt;/li&gt;
&lt;li&gt;Terraform workflow (Write, Plan, Apply)&lt;/li&gt;
&lt;li&gt;State management techniques&lt;/li&gt;
&lt;li&gt;Module usage and development&lt;/li&gt;
&lt;li&gt;Terraform Cloud and Enterprise features&lt;/li&gt;
&lt;/ul&gt;
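&lt;p&gt;To give a feel for the Write, Plan, Apply workflow, here is a minimal illustrative configuration; the provider, region, and bucket name are placeholders rather than anything from a real project:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# main.tf - Write: declare the desired infrastructure
provider "aws" {
  region = "us-east-1"
}

# Illustrative resource; the bucket name is a placeholder
resource "aws_s3_bucket" "example" {
  bucket = "example-bucket-placeholder"
}

# Plan:  terraform plan   (preview the changes Terraform would make)
# Apply: terraform apply  (create or update the resources)
&lt;/code&gt;&lt;/pre&gt;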

&lt;p&gt;At the end of the exam, I received the passing screen. Two days later I received my Credly badge and certification.&lt;/p&gt;

&lt;p&gt;I am continuing to use Terraform on all my AWS projects to further sharpen my skills. &lt;/p&gt;

</description>
      <category>terraform</category>
      <category>iac</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>AWS Solutions Architect Associate</title>
      <dc:creator>Kuljot Biring</dc:creator>
      <pubDate>Mon, 26 May 2025 17:52:19 +0000</pubDate>
      <link>https://dev.to/kuljotbiring/aws-solutions-architect-associate-22m7</link>
      <guid>https://dev.to/kuljotbiring/aws-solutions-architect-associate-22m7</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodztuyvxnzugxl6ttize.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodztuyvxnzugxl6ttize.png" alt="Image description" width="600" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Last April (2024), I passed the AWS Solutions Architect Associate Exam. I took this exam to validate my AWS knowledge as I dive deeper into the cloud focusing on building and designing:&lt;/p&gt;

&lt;p&gt;🔒 Secure Architectures&lt;br&gt;
💪 Resilient Architectures&lt;br&gt;
🚀 High-Performing Architectures&lt;br&gt;
💰 Cost-Optimized Architectures&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>architecture</category>
      <category>awschallenge</category>
    </item>
    <item>
      <title>How to Pass AWS Certifications</title>
      <dc:creator>Kuljot Biring</dc:creator>
      <pubDate>Tue, 13 May 2025 05:57:01 +0000</pubDate>
      <link>https://dev.to/kuljotbiring/how-to-pass-aws-certifications-64c</link>
      <guid>https://dev.to/kuljotbiring/how-to-pass-aws-certifications-64c</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flj6hr3g4nmtb1mg9m085.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flj6hr3g4nmtb1mg9m085.png" alt="AWS Badges" width="587" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I want to talk about the strategy I used to pass four AWS certifications on the first attempt, and the path I would recommend for doing the same.&lt;/p&gt;

&lt;p&gt;Firstly, this write-up assumes you have basic computer literacy and a foundational understanding of computing.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;Course Recommendation&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;For a course to learn the topics and materials, I strongly recommend &lt;a href="https://learn.cantrill.io/" rel="noopener noreferrer"&gt;Adrian Cantrill&lt;/a&gt;. Adrian’s courses are very detailed. He goes over the actual services and how they work, their limitations, their use cases, and how they interact with other services to architect solutions for business needs.&lt;/p&gt;

&lt;p&gt;Furthermore, the courses offered by Adrian contain ample hands-on labs that help solidify the topics you have just learned. Additionally, the courses often mention “exam power-ups”, which are key items to pay attention to when considering how to answer exam questions.&lt;/p&gt;

&lt;p&gt;Lastly, almost every section contains short quizzes to reinforce learning. There is also a section at the end of each course that covers exam strategy and how to approach exam questions, as well as some practice exams.&lt;/p&gt;

&lt;p&gt;The courses are very thorough! I would strongly advise taking notes, as there is a lot of material and writing notes will help you recall a good portion of the content.&lt;/p&gt;

&lt;p&gt;There are other courses that may be shorter and more narrowly exam-focused; however, if your goal is not only to pass the exam but also to understand AWS, then Adrian’s courses will give you both.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;Practical/Hands-On Labs&lt;/strong&gt;&lt;/u&gt;&lt;br&gt;
Some of the best preparation you can do for the exams is hands-on labs. There are many platforms offering author-hosted or self-hosted labs. Additionally, several resources are available from AWS itself, including the Well-Architected Labs and, more recently, the &lt;a href="https://skillbuilder.aws/learning-plan/31WSD993AF/aws-security-champion--knowledge-badge-readiness-path/QGY41BBUDQ" rel="noopener noreferrer"&gt;AWS Security Champion - Knowledge Badge Readiness Path&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;Practice Exam Recommendation&lt;/strong&gt;&lt;/u&gt;&lt;br&gt;
Studying for AWS certifications is not complete with a study course alone; you also need some good practice exams. Enter &lt;a href="https://portal.tutorialsdojo.com/" rel="noopener noreferrer"&gt;Tutorials Dojo&lt;/a&gt;. Aside from the &lt;a href="https://aws.amazon.com/certification/certification-prep/" rel="noopener noreferrer"&gt;AWS Official Practice Exams&lt;/a&gt;, Tutorials Dojo gives the best simulation of what the exams are like.&lt;/p&gt;

&lt;p&gt;Tutorials Dojo contains several sets of practice exams usually broken up as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Section based exams (x4)&lt;/li&gt;
&lt;li&gt;Review Mode exams (x4)&lt;/li&gt;
&lt;li&gt;Timed Exams (x4)&lt;/li&gt;
&lt;li&gt;Final Exam (x1)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I recommend doing two iterations of the exams in the order listed above. After you take an exam, you are given a chance to review the results. Use this opportunity to re-read the questions along with the right and wrong answers. Tutorials Dojo does an excellent job of explaining why the right answer is correct and why the wrong answers are wrong. Really try to understand the reasons behind these and don’t rely on memorization.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;White Papers&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;AWS &lt;a href="https://aws.amazon.com/whitepapers/?whitepapers-main.sort-by=item.additionalFields.sortDate&amp;amp;whitepapers-main.sort-order=desc&amp;amp;awsf.whitepapers-content-type=*all&amp;amp;awsf.whitepapers-global-methodology=*all&amp;amp;awsf.whitepapers-tech-category=*all&amp;amp;awsf.whitepapers-industries=*all&amp;amp;awsf.whitepapers-business-category=*all" rel="noopener noreferrer"&gt;White Papers&lt;/a&gt; contain a lot of good knowledge and should not be ignored.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;AWS FAQs&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;AWS &lt;a href="https://aws.amazon.com/faqs/" rel="noopener noreferrer"&gt;FAQS&lt;/a&gt; are a really great resource for getting quick officials answers that you may have regarding AWS services.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;Taking the Exam&lt;/strong&gt;&lt;/u&gt;&lt;br&gt;
AWS exams can be challenging due to their length and complexity. I’ve always chosen to take them in person so I would not run into technical issues. If you’ve gone through the above preparation, you should be able to pass the exams with confidence. Good luck!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>certification</category>
      <category>cloud</category>
      <category>learning</category>
    </item>
    <item>
      <title>AWS Security Specialty</title>
      <dc:creator>Kuljot Biring</dc:creator>
      <pubDate>Sat, 19 Apr 2025 20:10:41 +0000</pubDate>
      <link>https://dev.to/kuljotbiring/aws-security-specialty-43pa</link>
      <guid>https://dev.to/kuljotbiring/aws-security-specialty-43pa</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawzc7fc1g3wyox0639fl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawzc7fc1g3wyox0639fl.png" alt="Image description" width="600" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have recently passed the Amazon Web Services (AWS) Security Specialty Exam. I took this exam to show mastery of AWS in the following competencies:&lt;/p&gt;

&lt;p&gt;👮‍♂️ Threat Detection and Incident Response.&lt;br&gt;
📝 Security Logging and Monitoring.&lt;br&gt;
🚧 Infrastructure Security.&lt;br&gt;
🪪 Identity and Access Management.&lt;br&gt;
🗄️ Data Protection.&lt;br&gt;
🚦 Management and Security Governance.&lt;/p&gt;

&lt;p&gt;I'm looking to continue to grow my knowledge and expertise in the cloud and security space.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>cybersecurity</category>
      <category>cloud</category>
    </item>
    <item>
      <title>AWS Solutions Architect Professional</title>
      <dc:creator>Kuljot Biring</dc:creator>
      <pubDate>Sun, 14 Jul 2024 21:52:06 +0000</pubDate>
      <link>https://dev.to/kuljotbiring/aws-solutions-architect-professional-20e</link>
      <guid>https://dev.to/kuljotbiring/aws-solutions-architect-professional-20e</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvj35bai607phxtqzzf5e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvj35bai607phxtqzzf5e.png" alt="Image description" width="600" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have recently passed the Amazon Web Services (AWS) Solutions Architect Professional Exam. I took this exam to show mastery of AWS in the following competencies:&lt;/p&gt;

&lt;p&gt;🏢 Design for organizational complexity.&lt;br&gt;
💡 Design for new solutions.&lt;br&gt;
🛠️ Continuously improve existing solutions.&lt;br&gt;
🚀 Accelerate workload migration and modernization.&lt;/p&gt;

&lt;p&gt;I'm looking to continue to grow my knowledge and expertise in the cloud and security space.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>certification</category>
    </item>
    <item>
      <title>AWS Solutions Architect Associate</title>
      <dc:creator>Kuljot Biring</dc:creator>
      <pubDate>Mon, 01 Apr 2024 19:52:16 +0000</pubDate>
      <link>https://dev.to/kuljotbiring/aws-solutions-architect-associate-2ook</link>
      <guid>https://dev.to/kuljotbiring/aws-solutions-architect-associate-2ook</guid>
      <description>&lt;p&gt;I've passed the Amazon Web Services (AWS) Solution Architect Associate Exam. I took this exam to validate my AWS knowledge as I dive deeper into the cloud and building/designing:&lt;/p&gt;

&lt;p&gt;🔒 Secure Architectures&lt;br&gt;
💪 Resilient Architectures&lt;br&gt;
🚀 High-Performing Architectures&lt;br&gt;
💰 Cost-Optimized Architectures&lt;/p&gt;

&lt;p&gt;I am planning to do a lot of awesome things within the cloud security space!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>certification</category>
    </item>
  </channel>
</rss>
