<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Charbel El Kahwaji</title>
    <description>The latest articles on DEV Community by Charbel El Kahwaji (@ck9801).</description>
    <link>https://dev.to/ck9801</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F610516%2F98be70fd-b97c-47a4-9885-b61a2a5f3164.jpg</url>
      <title>DEV Community: Charbel El Kahwaji</title>
      <link>https://dev.to/ck9801</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ck9801"/>
    <language>en</language>
    <item>
      <title>Deployment Automation Of A Static Website ⚙️</title>
      <dc:creator>Charbel El Kahwaji</dc:creator>
      <pubDate>Wed, 24 Jan 2024 00:09:35 +0000</pubDate>
      <link>https://dev.to/ck9801/deployment-automation-of-a-static-website-41am</link>
      <guid>https://dev.to/ck9801/deployment-automation-of-a-static-website-41am</guid>
      <description>&lt;p&gt;Static websites used to be very disregarded (and still are) especially when WEB2 took its place in the tech world. When we talk about static websites, we usually mean HTML, CSS, and JS at the core.&lt;br&gt;
Fortunately for static website users, with the rise of serverless concepts, your static websites are more than enough to run a business securely and efficiently. Read more about it &lt;a href="https://dev.to/ck9801/serverless-technology-static-to-dynamic-fai"&gt;on my other blog&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this blog, I will talk about how to automate the deployment of such websites and which AWS services helped me achieve this.&lt;/p&gt;

&lt;p&gt;To boost development productivity and keep the architecture simple, we developed our website at Zero&amp;amp;One with a library that compiles chunks of code down to basic HTML, CSS, and JS. &lt;a href="https://sergey.cool/" rel="noopener noreferrer"&gt;Sergey&lt;/a&gt; is a cool static website generator that let me extract redundant chunks of markup into a single file and include that file wherever it's needed.&lt;/p&gt;

&lt;p&gt;So why didn't we use React.js or Next.js? As I said, I want to keep the architecture very simple and don't want any functionality beyond what a static website needs, which rules out Next.js. React is out because its client-side rendering works against the SEO I want for this project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4izlh6fdrvf2hieu7vj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4izlh6fdrvf2hieu7vj.png" alt="vs code" width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this image, I have an _imports folder where I write my components&lt;br&gt;
and then call them anywhere with the &lt;code&gt;&amp;lt;sergey-import /&amp;gt;&lt;/code&gt; tag like so.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fylehfklitneumd5qzlik.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fylehfklitneumd5qzlik.png" alt="vs code" width="385" height="305"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I then run a build command that compiles my files into a public folder, where each import is replaced by the actual compiled HTML. You can preview your changes by starting a server with the start command. This public folder holds everything on my website: images, scripts, CSS, etc. I could take this folder and manually deploy it to a static-website-hosting S3 bucket each time I make a change... OR, we can automate this entire procedure!&lt;/p&gt;
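&lt;p&gt;For context, the start and build commands map to package.json scripts along these lines (a minimal sketch; the exact script wiring and Sergey version are assumptions, not our actual file):&lt;/p&gt;

```json
{
  "scripts": {
    "start": "sergey --watch",
    "build": "sergey"
  },
  "devDependencies": {
    "sergey": "^1.0.0"
  }
}
```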

&lt;ol&gt;
&lt;li&gt;In your AWS console, go to CodePipeline and create a new pipeline.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsr3alhkpc03iy41wphgx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsr3alhkpc03iy41wphgx.png" alt="aws code pipeline step 1" width="800" height="852"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Name your pipeline, choose a service role if you already have one for it, or just let CodePipeline create one for you. Then choose where you want the artifact to be stored.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click next and choose your source stage, i.e. the event source that will trigger this pipeline. In my case, I used GitHub and its webhooks. Connect GitHub, and your repos and branches will show up. Pick the repo and branch you want to watch for push events. Now, each time you commit and push to this branch in this repository, the pipeline will execute automatically.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqshi0aohp8lgp2r7r29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqshi0aohp8lgp2r7r29.png" alt="github as source" width="800" height="831"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click next. Now comes the fun part, stick with me.
In this Build stage, we are going to run Sergey so it spits out the public folder. Note that node_modules and the generated public folder should be added to your project's .gitignore.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Our build provider is going to be CodeBuild. Choose the region that suits you, then click on Create project. A new window will open in CodeBuild. Name and describe your project.&lt;br&gt;
Choose Managed Image and pick Amazon Linux 2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmpccups4a4qe06hknrf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmpccups4a4qe06hknrf.png" alt="container" width="785" height="749"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pick whatever suits you as options. You can explore additional configurations for your container.&lt;/p&gt;

&lt;p&gt;In the Buildspec section, choose "Insert build commands" and paste the following code (you can always go with the buildspec file option if you prefer):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.2

phases:
  build:
    commands:
      - npm install &amp;amp;&amp;amp; npm run build
artifacts:
  files:
    - 'public/**/*'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What happens is that npm reads the package.json in your project and installs Sergey, then the build command runs, generating the public folder. Once it's generated, CodeBuild creates an artifact by recursively collecting everything inside the public folder. Remember that node_modules and the public folder are not in our repo; only our uncompiled HTML files and package.json are.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Deploy your project.
Choose S3 as your Deploy provider and select your region. Pick the bucket that is configured to host a static website.
By the time the pipeline reaches this stage, it will have handed the previously generated artifact over for deployment.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Make sure to check "Extract file before deploy" so the artifact gets unzipped; otherwise, your files will land in S3 as a single zipped object and will not be served.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7f10zpcfhtw1klhzk7uz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7f10zpcfhtw1klhzk7uz.png" alt="deploy" width="800" height="548"&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Review your changes and create your pipeline.&lt;/p&gt;

&lt;p&gt;If you've made it this far and followed the steps correctly, this is the architecture you will have built.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8cha7o2umo9ys5ain8d.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8cha7o2umo9ys5ain8d.jpeg" alt="static deployment automation" width="800" height="552"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can configure CloudFront and Route 53 if you want to add a CDN and a custom domain to your project. And that's it! Congratulations on automating the deployment of your static website! :)&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>devops</category>
      <category>automation</category>
      <category>aws</category>
    </item>
    <item>
      <title>Serverless Technology: Static to Dynamic</title>
      <dc:creator>Charbel El Kahwaji</dc:creator>
      <pubDate>Wed, 24 Jan 2024 00:01:09 +0000</pubDate>
      <link>https://dev.to/ck9801/serverless-technology-static-to-dynamic-fai</link>
      <guid>https://dev.to/ck9801/serverless-technology-static-to-dynamic-fai</guid>
      <description>&lt;p&gt;In today's fast-paced digital era, companies require websites that have not only an attractive UI but also functionality and dynamism. A static website could be decent at first, but it may become restrictive in no time. This is where serverless technology comes into power.&lt;/p&gt;

&lt;p&gt;So what is serverless? Simply put, it's a computing model that operates on the cloud. Servers are obviously still running at its core, but it's called "serverless" because you do not have to worry about provisioning, maintaining, or giving any of the other attention that's required to get a server up and running.&lt;/p&gt;

&lt;p&gt;In other words, you can develop and execute apps without having to provision or supervise servers. You can expect the infrastructure to scale smoothly, as this approach handles big loads of traffic effortlessly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkmiezwbkvznzvofmkw0v.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkmiezwbkvznzvofmkw0v.jpeg" alt="Image description" width="800" height="596"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The AWS architecture illustrated in the image above shows an example of serverless technology that I applied to turn a static website into a dynamic one. It integrates Amazon CloudFront, AWS Lambda, and Amazon API Gateway to complement the static hosting on Amazon S3.&lt;/p&gt;

&lt;p&gt;Why was this solution required? The website was purely static, and as mentioned previously, it can become extremely limiting. We needed to create a careers page where anonymous users/visitors could apply for jobs. Note that we did not have a Sign-Up/Sign-In feature as it was not a requirement for a commercial static website.&lt;/p&gt;

&lt;p&gt;Upon a visit to the careers page on our website, Amazon CloudFront receives a request from an unauthenticated user and serves the HTML, CSS, and JavaScript files from S3. This time, though, the visitor gets extra abilities: via Amazon Cognito unauthenticated identities, we assign them a temporary role that allows them to put objects into an S3 bucket and invoke an API call. We need these permissions for steps 2 and 3.&lt;/p&gt;
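&lt;p&gt;The unauthenticated role's permissions boil down to a policy along these lines (a sketch; the bucket name and API ARN are placeholders I made up, not the actual resources):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::resumes-bucket-placeholder/*"
    },
    {
      "Effect": "Allow",
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:region:account-id:api-id/*/POST/apply"
    }
  ]
}
```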

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8vgqc85182mvq6hfh7jt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8vgqc85182mvq6hfh7jt.png" alt="Image description" width="800" height="183"&gt;&lt;/a&gt;&lt;br&gt;
This is how we're now tracking our anonymous users.&lt;/p&gt;

&lt;p&gt;The form on our careers page asks the applicant for text inputs like name, phone number, and email, plus a file input to upload their resume. The second the user clicks submit, step 2 kicks in: the file is uploaded to a secure, private S3 bucket dedicated to resumes, and on success we retrieve the key of the uploaded file. The anonymous user is able to put objects in this bucket thanks to the role assigned the moment they landed on the page.&lt;/p&gt;

&lt;p&gt;After we retrieve the key, we assemble our data into a JSON object and proceed to step 3, where we invoke our Lambda with this object through the API Gateway, protected by an AWS WAF rate-based rule.&lt;br&gt;
The Lambda then dispatches an email to the recruitment team via SES; the email contains the applicant's info, the job they applied for, and a link to their resume.&lt;br&gt;
The resume link points into the resumes S3 bucket, fronted by another CloudFront distribution and another Route 53 record.&lt;/p&gt;
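&lt;p&gt;Step 3 can be sketched as a small helper that assembles the JSON object once the upload returns the S3 key (the field names below are illustrative assumptions, not our exact schema):&lt;/p&gt;

```javascript
// Hypothetical helper: build the payload sent to the Lambda through
// API Gateway after the resume upload succeeds.
function buildApplicationPayload(form, resumeKey) {
  return {
    name: form.name,
    email: form.email,
    phone: form.phone,
    job: form.job,
    resumeKey: resumeKey, // S3 object key returned by the upload
  };
}

// Example: payload for a submitted careers form
const payload = buildApplicationPayload(
  { name: "Jane Doe", email: "jane@example.com", phone: "+971-50-0000000", job: "DevOps Engineer" },
  "resumes/jane-doe.pdf"
);
console.log(JSON.stringify(payload));
```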

&lt;p&gt;To recap: by leveraging this serverless approach, the fully static website became dynamic, and that's just one of many potential applications. The possibilities are unlimited.&lt;/p&gt;

&lt;p&gt;Here are several advantages that come with utilizing serverless approaches:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Cost reduction through pay-as-you-go pricing.&lt;/li&gt;
&lt;li&gt;Increased scalability that adjusts automatically to meet demand.&lt;/li&gt;
&lt;li&gt;Global deployment for improved performance with low latency.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So, in general, serverless technology helped transform a static site into a dynamic one, all while enhancing its functionality and performance and remaining cost-effective.&lt;/p&gt;

&lt;p&gt;If you're interested in knowing how you can automate the deployment of a static website, you can read more about it &lt;a href="https://dev.to/ck9801/deployment-automation-of-a-static-website-41am"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>frontend</category>
      <category>aws</category>
      <category>security</category>
    </item>
    <item>
      <title>Domain Redirection In A Snap 🔃</title>
      <dc:creator>Charbel El Kahwaji</dc:creator>
      <pubDate>Tue, 25 Oct 2022 10:08:29 +0000</pubDate>
      <link>https://dev.to/ck9801/domain-redirection-in-a-snap-5j2</link>
      <guid>https://dev.to/ck9801/domain-redirection-in-a-snap-5j2</guid>
      <description>&lt;p&gt;At our company, we have two domain names, exampleCorp.ae and exampleCorp.me. I was given a task to redirect users based on where they are geographically located when they visit our website.&lt;br&gt;
Both of these domains lead to the same S3 bucket hosting our website which happens to be fronted by CloudFront.&lt;br&gt;
If the users are from the UAE, they must be redirected to exampleCorp.ae. Everyone else around the globe must be redirected to exampleCorp.me.&lt;/p&gt;

&lt;p&gt;I found several ways to achieve this. The first thing that might come to mind is Route 53 geolocation routing, which is a very good solution. But what if you don't have Route 53 implemented and only have CloudFront?&lt;/p&gt;
&lt;h2&gt;
  
  
  CloudFront Functions And Lambda@Edge:
&lt;/h2&gt;

&lt;p&gt;According to the &lt;a href="https://aws.amazon.com/blogs/aws/introducing-cloudfront-functions-run-your-code-at-the-edge-with-low-latency-at-any-scale/" rel="noopener noreferrer"&gt;aws docs&lt;/a&gt;: our CDN can now run code directly at the edge! Take a look at Image 1; it's going to be the foundation of all the upcoming logic. It means our requests can be manipulated on the spot, without any middleware or any actual server... Another success story for serverless concepts 🏆.&lt;/p&gt;

&lt;p&gt;Image 1:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz671x6rgr5k93dmzylma.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz671x6rgr5k93dmzylma.png" alt="cloudfront global architecture" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cool, but what do we want to compute or manipulate? What are the inputs we want to give? And what is the difference between these two computing services?&lt;/p&gt;

&lt;p&gt;Let's start with the differences:&lt;/p&gt;

&lt;p&gt;Image 2:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flijud7qod4cvcrpnmxpq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flijud7qod4cvcrpnmxpq.png" alt="cloudfront-functions-and-lambda-edge-comparison" width="684" height="626"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Lambda@Edge is extremely powerful, but if we can do the job at a lower cost, why not use CloudFront Functions? At the end of the day, either way we get a computing system that will run our algorithm.&lt;/p&gt;

&lt;p&gt;I want to know which domain the user asked to visit in the request: is it exampleCorp.me or exampleCorp.ae?&lt;br&gt;
I also want to know from which country the request originated.&lt;/p&gt;

&lt;p&gt;Amazing, so now I have 2 variables, each with 2 relevant values.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Host:&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;exampleCorp.me&lt;/li&gt;
&lt;li&gt;exampleCorp.ae&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Country:&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;UAE&lt;/li&gt;
&lt;li&gt;everything else but UAE&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Seems like our algorithm is starting to take shape. But how will I pick up these variables?&lt;/p&gt;

&lt;p&gt;Go to: CloudFront &amp;gt; Distributions and select the distribution whose origin is your S3 bucket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73rw9qwhyd8g3m1c1l2h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73rw9qwhyd8g3m1c1l2h.png" alt="cloudfront behavior" width="800" height="81"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on create behavior.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set your path pattern; when it matches, this cache behavior will apply (e.g. /* or /index.html).&lt;/li&gt;
&lt;li&gt;Select your correct origin.&lt;/li&gt;
&lt;li&gt;Scroll down to this section:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fggmym88budv8n04q39mw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fggmym88budv8n04q39mw.png" alt="cache key and origin request" width="690" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Keep the Cache policy as is; in the Origin request policy section, click on Create policy and configure it as in this image:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiir38r5uwlml5x8qe2o9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiir38r5uwlml5x8qe2o9.png" alt="Create origin request policy" width="800" height="843"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, every time our path pattern is matched, a new header is appended to our request, carrying as its value the country where the request actually originated.&lt;/p&gt;

&lt;p&gt;All we have to do is intercept the request and write our algorithm!&lt;/p&gt;

&lt;p&gt;Go Back to the cache behavior you just created and scroll down to the last section. You should see something like this.&lt;/p&gt;

&lt;p&gt;Image 3:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fizz0xvo3ezj89q0gpzgg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fizz0xvo3ezj89q0gpzgg.png" alt="cloudfront events" width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here we are going to add the function that will redirect users based on the variables we now receive in the request. The function we'll write is triggered by one of the four CloudFront events; I'll go with Viewer Request.&lt;/p&gt;

&lt;p&gt;Duplicate the tab you are in and go to CloudFront functions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feas4jdvit7nw0u2thssk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feas4jdvit7nw0u2thssk.png" alt="Cloudfront functions" width="262" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a new one, name it, describe it (optional), and click on Create. Watch out: you cannot rename it afterward.&lt;br&gt;
Now paste this code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function handler(event) {
  console.log("Inside our cloudfront function !!");
  var request = event.request;
  var requestURI = request.uri;
  var referer = request.headers.host.value;
  // this is added thanks to the cache behavior we created
  var country = request.headers["cloudfront-viewer-country"].value;

  if (country === "AE" &amp;amp;&amp;amp; !referer.includes("exampleCorp.ae")) {
    /*this condition means that if the country of origin is UAE but the users are trying to access something other than "exampleCorp.ae" */
    console.log("inside 1st condition");
    var redirectResponse = {
      statusCode: 302,
      statusDescription: "Found",
      headers: {
        location: { value: `https://exampleCorp.ae${requestURI}` },
      },
    };
    return redirectResponse;
  } else if (country !== "AE" &amp;amp;&amp;amp; referer.includes("exampleCorp.ae")) {
    /*this condition means that if the country of origin is not UAE but the users are trying to access "exampleCorp.ae" */
    console.log("inside 2nd condition");
    var redirectResponse = {
      statusCode: 302,
      statusDescription: "Found",
      headers: {
        location: { value: `https://exampleCorp.me${requestURI}` },
      },
    };
    return redirectResponse;
  } else {
    /*this means that the user has actually the correct request and nothing needs to be modified so we return the initial request */
    return request;
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the possible cloudfront-viewer-country values &lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-cloudfront-headers.html#:~:text=ISO%203166%2D1,." rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Save the changes made to your function and publish it. If you don't, you won't be able to associate it with CloudFront.&lt;/p&gt;

&lt;p&gt;Go back to the previous tab and associate the function you just published to the CloudFront event Viewer Request:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0hdpu6rewccokcn6k2q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0hdpu6rewccokcn6k2q.png" alt="viewer request cloudfront" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Save your changes and you are done! Congratulations!!&lt;br&gt;
You can see your logs in CloudWatch by following these &lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html" rel="noopener noreferrer"&gt;steps&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Feel free to use either for whatever suits your scenario. Just keep in mind that CloudFront Functions can only be triggered on &lt;strong&gt;Viewer Request&lt;/strong&gt; or &lt;strong&gt;Viewer Response&lt;/strong&gt;, whereas Lambda@Edge supports any of the 4 triggers (look at Image 2).&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>aws</category>
      <category>lambda</category>
      <category>cloudfront</category>
    </item>
    <item>
      <title>AWS GuardDuty In A Nutshell 🌰</title>
      <dc:creator>Charbel El Kahwaji</dc:creator>
      <pubDate>Tue, 04 Oct 2022 13:56:26 +0000</pubDate>
      <link>https://dev.to/ck9801/aws-guardduty-in-a-nutshell-p9o</link>
      <guid>https://dev.to/ck9801/aws-guardduty-in-a-nutshell-p9o</guid>
      <description>&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;In this blog, I will be talking briefly about GuardDuty and its importance. This quick overview will help you understand the overall functionality of GuardDuty and will help you answer questions in the AWS Security Specialty Exam.&lt;/p&gt;

&lt;p&gt;GuardDuty is an intelligent threat detection service that continuously monitors your AWS accounts, Amazon Elastic Compute Cloud (EC2) instances, Amazon Elastic Kubernetes Service (EKS) clusters, and data stored in Amazon Simple Storage Service (S3) for malicious activity without the use of security software or agents. If potential malicious activity, such as anomalous behavior, credential exfiltration, or command and control infrastructure (C2) communication is detected, GuardDuty generates detailed &lt;strong&gt;security findings&lt;/strong&gt; that can be used for security visibility and assisting in remediation. Additionally, using the Amazon GuardDuty Malware Protection feature helps to detect malicious files on Amazon Elastic Block Store (EBS) volumes attached to an EC2 instance and container workloads.&lt;br&gt;
I took this paragraph straight from the &lt;a href="https://aws.amazon.com/guardduty/faqs/" rel="noopener noreferrer"&gt;Amazon GuardDuty FAQs&lt;/a&gt; which I highly recommend reading as it answers a lot of questions about this service.&lt;br&gt;
As a matter of fact, everything stated in this blog comes from &lt;a href="https://docs.aws.amazon.com/guardduty/latest/ug/guardduty-ug.pdf" rel="noopener noreferrer"&gt;AWS references&lt;/a&gt; that I went through to make the material easier to understand and to shorten your reading time.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works:
&lt;/h2&gt;

&lt;p&gt;As mentioned, GuardDuty generates security findings. But how? By using the power of ML to analyze events such as AWS CloudTrail event logs, Amazon VPC Flow Logs, and DNS logs against a threat list.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;threat&lt;/strong&gt; list (not to be confused with a trust list) is a list of IP addresses that GuardDuty will consider as harmful and generate its findings based on it. Adding IP addresses to a &lt;strong&gt;trust&lt;/strong&gt; list tells GuardDuty not to investigate events generated from this IP and therefore not raise findings from this trusted IP.&lt;/p&gt;

&lt;p&gt;GuardDuty integrates directly with CloudWatch Events, which is extremely helpful when you want to respond to a detected threat rapidly.&lt;/p&gt;
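&lt;p&gt;For example, a CloudWatch Events rule can match every GuardDuty finding with an event pattern like this (a minimal sketch following the documented GuardDuty event format):&lt;/p&gt;

```json
{
  "source": ["aws.guardduty"],
  "detail-type": ["GuardDuty Finding"]
}
```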

&lt;p&gt;The first step to using GuardDuty is to enable it in your account. Once enabled, GuardDuty will&lt;br&gt;
immediately begin to monitor for security threats in the current region.&lt;/p&gt;

&lt;p&gt;To manage GuardDuty findings for other accounts within your organization as a GuardDuty&lt;br&gt;
administrator, you must add member accounts and enable GuardDuty for them as well:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You can invite other AWS accounts to enable GuardDuty and become associated with your account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once accepted, the inviting account becomes the master account, and the accounts that accepted the invitation become the member accounts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An AWS account cannot be both a master and a member account at the same time, and a membership invitation can only be accepted by one AWS account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A master account can have up to 1,000 member accounts per region.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Comparison between users in Master account and users in Member accounts.
&lt;/h2&gt;

&lt;p&gt;Going into the Specialty exam, you have to know the different actions users in the master account can take, either in their own account or in the associated member accounts. The same goes for users in a member account; obviously, they can't do anything in the master account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbowapn1i9a7trwoiuvcy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbowapn1i9a7trwoiuvcy.png" alt="Image description" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this Image, I simplified the actions that can be done respectively. Click &lt;a href="https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_accounts.html#:~:text=Understanding%20the%20relationship%20between%20GuardDuty%20administrator%20and%20member%20accounts" rel="noopener noreferrer"&gt;here&lt;/a&gt; if you want to read more about the relationship between the GuardDuty administrator and member account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Note:&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
Lists uploaded by the master account are imposed on GuardDuty's functionality in all member accounts. Therefore, findings will be generated based on the master account's threat list. (The 4th row in the table should make sense now, doesn't it?)&lt;/p&gt;

&lt;p&gt;The exam might test you on these actions and expects you to know the difference! Good luck!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>guardduty</category>
      <category>specialty</category>
    </item>
  </channel>
</rss>
