<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kerisnarendra</title>
    <description>The latest articles on DEV Community by Kerisnarendra (@kerisnarendra).</description>
    <link>https://dev.to/kerisnarendra</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1008904%2F4ea3d220-1401-41af-9531-0d6c44f4702d.png</url>
      <title>DEV Community: Kerisnarendra</title>
      <link>https://dev.to/kerisnarendra</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kerisnarendra"/>
    <language>en</language>
    <item>
      <title>AWS Application Load Balancer Setup</title>
      <dc:creator>Kerisnarendra</dc:creator>
      <pubDate>Fri, 21 Jul 2023 22:42:51 +0000</pubDate>
      <link>https://dev.to/kerisnarendra/aws-application-load-balancer-setup-40k8</link>
      <guid>https://dev.to/kerisnarendra/aws-application-load-balancer-setup-40k8</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftlqfzcffllf40yer8w8c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftlqfzcffllf40yer8w8c.png" alt="Image description" width="706" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this blog post, I am going to remind myself of the step-by-step process of setting up an Application Load Balancer (ALB) in AWS using the AWS Command Line Interface (CLI). An ALB distributes incoming traffic across multiple EC2 instances, ensuring high availability, fault tolerance, and efficient scaling of web applications. &lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we get started, make sure we have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with permissions to create security groups, launch templates, auto scaling groups, target groups, and load balancers.&lt;/li&gt;
&lt;li&gt;Familiarity with AWS services and the AWS CLI.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step-by-Step Guide
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create a Security Group:
A security group acts as a virtual firewall for instances, controlling inbound and outbound traffic. Run the following command to create a security group named "my-security-group", which will hold the ingress rules allowing incoming traffic on ports 22 (SSH) and 80 (HTTP) from any source:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-security-group --group-name my-security-group --description "my-security-group" --vpc-id OUR_VPC_ID --region OUR_REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
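
&lt;p&gt;Note that &lt;strong&gt;create-security-group&lt;/strong&gt; only creates the (empty) group; the ingress rules for ports 22 and 80 still have to be added. One way to do this, assuming OUR_SECURITY_GROUP_ID is the GroupId returned by the previous command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 authorize-security-group-ingress --group-id OUR_SECURITY_GROUP_ID --protocol tcp --port 22 --cidr 0.0.0.0/0 --region OUR_REGION
aws ec2 authorize-security-group-ingress --group-id OUR_SECURITY_GROUP_ID --protocol tcp --port 80 --cidr 0.0.0.0/0 --region OUR_REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;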



&lt;ul&gt;
&lt;li&gt;Create a Launch Template:
Launch templates define the launch configuration for instances. Create a launch template named "my-launch-template" using the Amazon Linux 2 AMI and T2 Micro instance type, and specify the "my-security-group" security group and user data for instance initialization:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-launch-template --launch-template-name my-launch-template --image-id OUR_AMI_ID --instance-type t2.micro --security-group-ids OUR_SECURITY_GROUP_ID --user-data file://user-data-script.txt --region OUR_REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;user-data-script.txt contains the commands below, which install and start Apache HTTPD and deploy a web page that reports which Availability Zone the EC2 instance is located in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
echo '&amp;lt;center&amp;gt;&amp;lt;h1&amp;gt;This EC2 instance is located in Availability Zone: AZ&amp;lt;/h1&amp;gt;&amp;lt;/center&amp;gt;' &amp;gt; /var/www/html/index.txt
sed "s/AZ/$AZ/" /var/www/html/index.txt &amp;gt; /var/www/html/index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
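
&lt;p&gt;One caveat: on AMIs that enforce IMDSv2 (for example, Amazon Linux 2023 by default), the plain metadata request above is rejected and the AZ lookup needs a session token. A hedged variant of that one line:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
AZ=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/placement/availability-zone)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;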



&lt;ul&gt;
&lt;li&gt;Create an Auto Scaling Group (ASG):
An ASG manages the desired number of instances, scaling them based on demand. Create an ASG named "my-auto-scaling-group" with a minimum size of 1, maximum size of 3, and desired capacity of 2 instances. Specify the subnets we want to use for instances:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws autoscaling create-auto-scaling-group --auto-scaling-group-name my-auto-scaling-group --launch-template LaunchTemplateName=my-launch-template,Version=1 --min-size 1 --max-size 3 --desired-capacity 2 --vpc-zone-identifier "OUR_SUBNET_IDS" --region OUR_REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create a Target Group:
A target group defines a set of instances that handle incoming traffic. Create a target group named "my-target-group" for HTTP traffic on port 80 in our default VPC:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws elbv2 create-target-group --name my-target-group --protocol HTTP --port 80 --vpc-id OUR_VPC_ID --region OUR_REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create an Application Load Balancer (ALB):
Now, create an internet-facing ALB named "my-application-load-balancer" in our subnets, associated with the previously created security group (the "my-target-group" target group is connected to it in the listener step below):
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws elbv2 create-load-balancer --name my-application-load-balancer --subnets OUR_SUBNET_IDS --security-groups OUR_SECURITY_GROUP_ID --scheme internet-facing --type application --region OUR_REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
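
&lt;p&gt;The next two steps need the target group and load balancer ARNs. If they were not captured from the output of the create commands, one way to look them up is:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OUR_TARGET_GROUP_ARN=$(aws elbv2 describe-target-groups --names my-target-group --query 'TargetGroups[0].TargetGroupArn' --output text --region OUR_REGION)
OUR_ALB_ARN=$(aws elbv2 describe-load-balancers --names my-application-load-balancer --query 'LoadBalancers[0].LoadBalancerArn' --output text --region OUR_REGION)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;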



&lt;ul&gt;
&lt;li&gt;Create a Listener:
A listener configures the ALB to forward incoming requests to the target group. Create a listener for HTTP traffic on port 80, forwarding it to "my-target-group":
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws elbv2 create-listener --load-balancer-arn OUR_ALB_ARN --protocol HTTP --port 80 --default-actions Type=forward,TargetGroupArn=OUR_TARGET_GROUP_ARN --region OUR_REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Attach the Auto Scaling Group to the Load Balancer:
Finally, attach the ASG "my-auto-scaling-group" to the ALB using the "my-target-group" target group:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws autoscaling attach-load-balancer-target-groups --auto-scaling-group-name my-auto-scaling-group --target-group-arns OUR_TARGET_GROUP_ARN --region OUR_REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
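
&lt;p&gt;To verify the setup, one approach is to resolve the ALB's DNS name and request the page a few times; the responses should alternate between Availability Zones as the ALB spreads requests across the instances:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ALB_DNS=$(aws elbv2 describe-load-balancers --names my-application-load-balancer --query 'LoadBalancers[0].DNSName' --output text --region OUR_REGION)
curl http://$ALB_DNS/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;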



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;ALB plays a vital role in ensuring our application can handle varying loads while maintaining high availability. By following this guide, you can create a robust infrastructure that scales seamlessly with demand. Happy load balancing!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>alb</category>
      <category>autoscaling</category>
      <category>loadbalancer</category>
    </item>
    <item>
      <title>NestJS: Making Server-Side Development Easier - My Thoughts</title>
      <dc:creator>Kerisnarendra</dc:creator>
      <pubDate>Fri, 21 Jul 2023 06:03:23 +0000</pubDate>
      <link>https://dev.to/kerisnarendra/nestjs-making-server-side-development-easier-my-thoughts-2983</link>
      <guid>https://dev.to/kerisnarendra/nestjs-making-server-side-development-easier-my-thoughts-2983</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz06351uon1cejdtfp10h.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz06351uon1cejdtfp10h.jpeg" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;In this blog post, I want to remind myself why I decided to learn NestJS. As a developer who wants to create robust, reliable, and scalable server-side applications using TypeScript, NestJS caught my attention as a strong recommendation. Let's explore the reasons that make NestJS worth the time and effort.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding NestJS Compared to React and Angular:
&lt;/h2&gt;

&lt;p&gt;To understand what NestJS is all about, it's important to compare it with React and Angular, which are frameworks used for front-end development. React is known for its flexibility and freedom to create user interfaces, mainly for the client-side. On the other hand, Angular takes a comprehensive approach, providing stability and reliability for building web applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Role of NestJS on the Server-Side:
&lt;/h2&gt;

&lt;p&gt;While React is great for the front-end, the server-side requires a different way of doing things. NestJS comes in as the perfect solution, offering a framework that meets the server's need for security and strength.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is NestJS, and Why Should I Learn It?
&lt;/h2&gt;

&lt;p&gt;NestJS is a powerful framework designed for building efficient and scalable server-side applications using TypeScript. Inspired by Angular, NestJS follows the same principles of organized, clean, structured code and dependency management. TypeScript's strong typing and robust feature set make it the primary language for NestJS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exploring the Advantages of NestJS:
&lt;/h2&gt;

&lt;p&gt;When it comes to back-end development, the focus is more on data and logic rather than user experience. NestJS excels in this area with its structured approach, influenced by Angular. This strong foundation allows NestJS to offer many features for organizing code effectively, including modular architecture, data manipulation through pipes, and the use of controllers.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Wide Range of Built-in Features:
&lt;/h2&gt;

&lt;p&gt;NestJS doesn't just offer a structured approach; it goes above and beyond by providing a wide range of built-in features. These include task scheduling, queue management, logging, cookie handling, validation, and much more. Additionally, the framework seamlessly integrates WebSockets, which allows real-time communication in applications like chat or multiplayer games. Another great advantage is that NestJS supports both REST and GraphQL APIs, giving developers the flexibility to choose the best option for their project.&lt;/p&gt;

&lt;h2&gt;
  
  
  NestJS vs. Other Node.js Frameworks:
&lt;/h2&gt;

&lt;p&gt;Compared to other Node.js frameworks like Express.js, NestJS stands out with its comprehensive set of features, structured approach, and robustness. Unlike Express, which often requires additional libraries for common features, NestJS offers a complete solution out of the box, reducing the need to hunt for external dependencies. However, it's important to note that NestJS's extensive capabilities come with a steeper learning curve and may feel overwhelming for smaller projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  The True Value of Learning NestJS:
&lt;/h2&gt;

&lt;p&gt;So, why should I invest my time in learning NestJS? The steep learning curve is actually one of the main reasons why NestJS is worth the effort. It's not just about learning a specific syntax, but also understanding the best practices of back-end development integrated into a framework. This knowledge becomes a valuable asset, even if I decide to work with a different framework or programming language later on. When I invest time in learning NestJS, I am ultimately investing in my own growth as a developer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;NestJS is a powerful and valuable choice for building server-side applications. Its Angular-inspired foundation, extensive features, and structured approach make it an excellent option for developers looking for robust and scalable solutions. As I continue my journey in the ever-changing world of development, learning NestJS proves to be a valuable investment in advancing my skills and expertise. So, let's embrace NestJS and unlock its full potential to shape a brighter future in server-side development!&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>nestjs</category>
      <category>node</category>
      <category>backend</category>
    </item>
    <item>
      <title>Unraveling the Tech Trinity: Monoliths, Microservices, and Serverless - A Comparative Analysis</title>
      <dc:creator>Kerisnarendra</dc:creator>
      <pubDate>Tue, 23 May 2023 12:39:04 +0000</pubDate>
      <link>https://dev.to/kerisnarendra/unraveling-the-tech-trinity-monoliths-microservices-and-serverless-a-comparative-analysis-12hi</link>
      <guid>https://dev.to/kerisnarendra/unraveling-the-tech-trinity-monoliths-microservices-and-serverless-a-comparative-analysis-12hi</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the world of software architecture, three prominent paradigms have emerged: monoliths, microservices, and serverless. Each approach has its own strengths and trade-offs. In this blog post, I will delve into a comparative analysis of these three architectures based on key criteria such as development experience, scalability, response time, reliability, and cost. Let's explore which option suits your needs best.&lt;/p&gt;

&lt;h2&gt;
  
  
  Criteria Boundary
&lt;/h2&gt;

&lt;p&gt;Before diving into the comparison, let's set the boundaries for each criterion to provide a fair evaluation.&lt;/p&gt;

&lt;h5&gt;
  
  
  Development Experience:
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Refactoring: Assessing the ease of modifying existing code.&lt;/li&gt;
&lt;li&gt;Bug Fixing: Evaluating the simplicity of addressing and resolving bugs.&lt;/li&gt;
&lt;li&gt;Feature Building: Analyzing the agility in developing new features.&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Scalability:
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Active Users/Requests: Measuring the ability to handle increasing user loads.&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Response Time:
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Gauging the system's speed in responding to user requests.&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Reliability:
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Error Handling: Assessing how the architecture deals with fatal errors.&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Cost:
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Daily Expenses: Comparing the overall cost implications of each architecture.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Comparison
&lt;/h2&gt;

&lt;h5&gt;
  
  
  Development Experience:
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Monoliths: Relatively simple to refactor, fix bugs, and build features due to the consolidated codebase.&lt;/li&gt;
&lt;li&gt;Microservices: Refactoring and fixing bugs might require coordination among multiple services, but building new features is more modular and scalable.&lt;/li&gt;
&lt;li&gt;Serverless: Offers ease of refactoring and building features with minimal server management, while bug fixing may require navigating third-party services.&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Scalability:
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Monoliths: Scaling can be challenging due to the need to scale the entire application, potentially resulting in overprovisioning.&lt;/li&gt;
&lt;li&gt;Microservices: Scalability is improved by independently scaling specific services to handle varying loads.&lt;/li&gt;
&lt;li&gt;Serverless: Offers automatic scaling, allowing granular scaling based on demand, ensuring optimal resource utilization.&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Response Time:
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Monoliths: Typically faster due to direct access to the application's resources.&lt;/li&gt;
&lt;li&gt;Microservices: Response time might vary depending on inter-service communication and network latency.&lt;/li&gt;
&lt;li&gt;Serverless: Response time can be slightly slower due to the overhead of function invocation and initialization (cold start).&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Reliability:
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Monoliths: A fatal error in one component can bring down the entire system.&lt;/li&gt;
&lt;li&gt;Microservices: Isolated services limit the impact of failures, improving overall system reliability.&lt;/li&gt;
&lt;li&gt;Serverless: Third-party dependencies and serverless platform limitations can impact reliability, but individual failures are often contained.&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Cost:
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Monoliths: Usually more cost-effective for smaller applications, as fewer resources and infrastructure are required.&lt;/li&gt;
&lt;li&gt;Microservices: Higher infrastructure costs due to managing multiple services, but potential cost savings can be achieved through optimized resource allocation.&lt;/li&gt;
&lt;li&gt;Serverless: Cost depends on usage, but serverless architectures can provide cost efficiencies for low to moderate workloads.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  FAQs
&lt;/h2&gt;

&lt;p&gt;Q: How do I decide which architecture is best for my project?&lt;br&gt;
A: Choosing the right architecture depends on various factors specific to your project. Consider the following guidelines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monoliths: Opt for a monolithic architecture if your application is relatively small, has a simple domain, and requires a quick development cycle with less emphasis on scalability.&lt;/li&gt;
&lt;li&gt;Microservices: Choose microservices if your application is complex, with multiple teams working on different components, and requires independent scalability and deployment of services.&lt;/li&gt;
&lt;li&gt;Serverless: Consider serverless if you have sporadic workloads, want to focus more on application logic rather than infrastructure management, and require automatic scaling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Q: Can microservices be combined with serverless?&lt;br&gt;
A: Yes, microservices can be combined with serverless architecture. In fact, it is common to see microservices implemented using serverless functions. You can leverage the benefits of both approaches by building each microservice as a serverless function, enabling independent scalability and deployment, while leveraging the serverless infrastructure for reduced operational overhead.&lt;/p&gt;

&lt;p&gt;Q: Are there any specific performance considerations for each architecture?&lt;br&gt;
A: Performance considerations vary across architectures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monoliths: Direct access to resources often results in faster response times. However, as the application grows, scalability might become a challenge.&lt;/li&gt;
&lt;li&gt;Microservices: Inter-service communication and network latency can impact response times. Ensuring efficient communication protocols and minimizing network overhead is crucial for optimal performance.&lt;/li&gt;
&lt;li&gt;Serverless: The overhead of function invocation and initialization might lead to slightly slower response times. Fine-tuning function cold start times and optimizing resource allocation can help mitigate this impact.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl99nfdt4rxdln0vdvcnt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl99nfdt4rxdln0vdvcnt.png" alt="Monolith, microservices and serverless" width="800" height="442"&gt;&lt;/a&gt;&lt;br&gt;
Choosing the right architecture depends on your project's specific requirements. Monoliths provide simplicity, microservices offer scalability, while serverless offers flexibility and cost efficiencies. Consider the development experience, scalability needs, response time expectations, reliability concerns, and cost implications to make an informed decision. Remember, there's no one-size-fits-all solution. Evaluate your project's unique needs and constraints before adopting an architecture that aligns with your goals.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Cross-Account Access to Amazon S3 using STS:AssumeRole</title>
      <dc:creator>Kerisnarendra</dc:creator>
      <pubDate>Fri, 19 May 2023 04:53:34 +0000</pubDate>
      <link>https://dev.to/kerisnarendra/cross-account-access-to-amazon-s3-using-stsassumerole-h5p</link>
      <guid>https://dev.to/kerisnarendra/cross-account-access-to-amazon-s3-using-stsassumerole-h5p</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Amazon Simple Storage Service (S3) is a widely-used object storage service offered by Amazon Web Services (AWS). It provides secure, durable, and scalable storage for various types of data. In some scenarios, we may need to grant access to our S3 buckets to AWS accounts that are different from the one where the bucket resides. This is where cross-account access comes into play, and AWS Security Token Service (STS) with the AssumeRole API becomes the key mechanism to securely share data across accounts. In this guide, we will explore the steps to set up cross-account access to S3 using the sts:AssumeRole mechanism.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step-by-Step Guide
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Creating the IAM Role in the Destination Account:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Sign in to the AWS Management Console of the destination account.&lt;/li&gt;
&lt;li&gt;Navigate to the IAM (Identity and Access Management) service and create a new IAM role with the necessary permissions for accessing S3.&lt;/li&gt;
&lt;li&gt;Define a trust policy that specifies the source AWS account(s) allowed to assume this role using the sts:AssumeRole API.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
"Version": "2012-10-17",
"Statement": {
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT_SOURCE_ID:root"
    }
    "Action": "sts:AssumeRole",
    "Condition": {
        "StringEquals": {
            "sts:ExternalId": "EXTERNAL_ID"
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Attach the desired permissions policy that grants access to the specific S3 bucket(s).
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
"Version": "2012-10-17",
"Statement": {
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": "*"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Assuming the IAM Role in the Source Account:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Sign in to the AWS Management Console of the source account and navigate to the IAM service.&lt;/li&gt;
&lt;li&gt;Create a new IAM user or use an existing one to assume the role in the destination account.&lt;/li&gt;
&lt;li&gt;Attach the necessary permissions to the IAM user or group to allow assuming the role.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
"Version": "2012-10-17",
"Statement": {
    "Effect": "Allow",
    "Action": [
        "iam:ListRoles",
        "sts:AssumeRole",
    ],
    "Resource": "*"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Assuming the Role Programmatically:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Use the AWS SDK or AWS CLI in the source account to assume the IAM role in the destination account.&lt;/li&gt;
&lt;li&gt;Specify the &lt;strong&gt;RoleArn&lt;/strong&gt; of the IAM role, the &lt;strong&gt;RoleSessionName&lt;/strong&gt; to identify the session, and optionally, an &lt;strong&gt;ExternalId&lt;/strong&gt; for additional security.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;AssumeRole&lt;/strong&gt; operation returns temporary security credentials consisting of an &lt;strong&gt;AccessKeyId&lt;/strong&gt;, &lt;strong&gt;SecretAccessKey&lt;/strong&gt;, &lt;strong&gt;SessionToken&lt;/strong&gt;, and &lt;strong&gt;Expiration&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws sts assume-role --role-arn "arn:aws:iam::ACCOUNT_DESTINATION_ID:role/TRUST_POLICY_ROLE_NAME_IN_ACCOUNT_DESTINATION" --role-session-name SESSION_NAME --external-id EXTERNAL_ID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
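
&lt;p&gt;One way to put the returned temporary credentials to use is to export them as environment variables, so that subsequent CLI calls run under the assumed role (the --query expression below extracts all three values from a single AssumeRole call; BUCKET_NAME is a placeholder for the destination bucket):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREDS=$(aws sts assume-role --role-arn "arn:aws:iam::ACCOUNT_DESTINATION_ID:role/TRUST_POLICY_ROLE_NAME_IN_ACCOUNT_DESTINATION" --role-session-name SESSION_NAME --external-id EXTERNAL_ID --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text)
read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN &amp;lt;&amp;lt;&amp;lt; "$CREDS"
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
aws s3 ls s3://BUCKET_NAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;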



&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Accessing the S3 Bucket in the Destination Account:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Use the temporary security credentials obtained after assuming the role to access the S3 bucket in the destination account.&lt;/li&gt;
&lt;li&gt;Configure our AWS SDK or AWS CLI to use the temporary credentials when making S3 API requests.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;These credentials have the necessary permissions as defined in the IAM role's permissions policy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Testing and Verifying Access:&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Perform tests to ensure that the cross-account access is working as expected.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use the AWS CLI or SDKs to list, upload, or download objects from the S3 bucket.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Verify that the access control policies on the S3 bucket are correctly configured to allow the assumed role.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;Q: What is the benefit of using sts:AssumeRole for cross-account access?&lt;/p&gt;

&lt;p&gt;A: The sts:AssumeRole mechanism allows us to grant temporary access to another AWS account without sharing long-term credentials, enhancing security and reducing the attack surface.&lt;br&gt;
Q: Can I restrict the duration of the assumed role's access?&lt;/p&gt;

&lt;p&gt;A: Yes, we can define an expiration time for the temporary credentials obtained through &lt;strong&gt;sts:AssumeRole&lt;/strong&gt;, ensuring limited access to the destination account.&lt;br&gt;
Q: Are there any additional costs associated with cross-account access to S3?&lt;/p&gt;

&lt;p&gt;A: No, there is no additional charge for using sts:AssumeRole itself; the standard S3 request, storage, and data transfer charges still apply.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Cross-account access to Amazon S3 using the sts:AssumeRole mechanism provides a secure and efficient way to share data between AWS accounts. By leveraging IAM roles and the Security Token Service, we can grant temporary access to S3 buckets without sharing long-term credentials. This approach enhances security and allows for fine-grained control over access permissions. Whether we need to collaborate with external partners, consolidate data from multiple accounts, or implement multi-tier architectures, the &lt;strong&gt;sts:AssumeRole&lt;/strong&gt; mechanism simplifies cross-account access and empowers us to utilize the full potential of AWS cloud storage.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>role</category>
      <category>policy</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Securing Our AWS Environment: Preventing Privilege Escalation with Permission Boundaries</title>
      <dc:creator>Kerisnarendra</dc:creator>
      <pubDate>Sun, 14 May 2023 15:21:09 +0000</pubDate>
      <link>https://dev.to/kerisnarendra/securing-our-aws-environment-preventing-privilege-escalation-with-permission-boundaries-l06</link>
      <guid>https://dev.to/kerisnarendra/securing-our-aws-environment-preventing-privilege-escalation-with-permission-boundaries-l06</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the world of AWS, security is paramount. A core principle is least privilege: users and roles should have only the access they need to do their jobs. But sometimes users or roles end up with more access than they should have, and that opens the door to privilege escalation.&lt;/p&gt;

&lt;p&gt;Privilege escalation is when someone gains access to more resources or permissions than they're supposed to have. This can happen if there's a security vulnerability that lets someone exploit the system. If someone manages to escalate their privileges, they can access things they shouldn't be able to, and that can be really bad for a company.&lt;/p&gt;

&lt;p&gt;In this blog post, I am going to remind myself how to prevent privilege escalation using a permission boundary in AWS IAM. &lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we begin, make sure we have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with IAM access&lt;/li&gt;
&lt;li&gt;Basic knowledge of AWS IAM and policies&lt;/li&gt;
&lt;li&gt;The permission boundary JSON file below
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Action": [
                "iam:*"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Sid": "DenyPermBoundaryIAMPolicyAlteration",
            "Effect": "Deny",
            "Action": [
                "iam:DeletePolicy",
                "iam:DeletePolicyVersion",
                "iam:CreatePolicyVersion",
                "iam:SetDefaultPolicyVersion"
            ],
            "Resource": [
                "arn:aws:iam::Account_ID:policy/PermissionBoundary"
            ]
        },
        {
            "Sid": "DenyRemovalOfPermBoundaryFromAnyUserOrRole",
            "Effect": "Deny",
            "Action": [
                "iam:DeleteUserPermissionsBoundary",
                "iam:DeleteRolePermissionsBoundary"
            ],
            "Resource": [
                "arn:aws:iam::Account_ID:user/*",
                "arn:aws:iam::Account_ID:role/*"
            ],
            "Condition": {
                "StringEquals": {
                    "iam:PermissionsBoundary": [
                        "arn:aws:iam::Account_ID:policy/PermissionBoundary"
                    ]
                }
            }
        },
        {
            "Sid": "DenyUserAndRoleCreationWithoutPermBoundary",
            "Effect": "Deny",
            "Action": [
                "iam:CreateUser",
                "iam:CreateRole"
            ],
            "Resource": [
                "arn:aws:iam::Account_ID:role/*",
                "arn:aws:iam::Account_ID:user/*"
            ],
            "Condition": {
                "StringNotEquals": {
                    "iam:PermissionsBoundary": [
                        "arn:aws:iam::Account_ID:policy/PermissionBoundary"
                    ]
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step-by-Step Guide
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create an IAM policy that allows users and roles to perform their intended actions, but also includes a permission boundary to limit their privileges. This policy should be applied to the users and roles in our AWS account.&lt;br&gt;
&lt;strong&gt;Note&lt;/strong&gt;: The sample permission boundary JSON file above can be used as a starting point for our own permission boundary policy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an IAM policy that denies the creation of new users and roles without a permission boundary. This policy should be applied to all users and roles in our AWS account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an IAM policy that denies the removal of permission boundaries from any user or role. This policy should be applied to all users and roles in our AWS account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an IAM policy that denies the alteration of the permission boundary policy. This policy should be applied to the permission boundary policy in our AWS account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the policies have been created, apply them to the appropriate users and roles in our AWS account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test the policies by attempting to create a user or role without a permission boundary. The policy should deny the creation and provide an error message.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test the policies by attempting to remove the permission boundary from a user or role. The policy should deny the removal and provide an error message.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test the policies by attempting to alter the permission boundary policy. The policy should deny the alteration and provide an error message.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
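&lt;p&gt;The deny logic in the policy above can be sketched as a small model. This is only an illustration of how the &lt;code&gt;StringNotEquals&lt;/code&gt; condition behaves (an absent boundary key also triggers the deny), not AWS's actual policy engine; the account ID and function name below are made up:&lt;/p&gt;

```typescript
// Simplified model of the "DenyUserAndRoleCreationWithoutPermBoundary"
// statement. Illustration only; the account ID is a placeholder.
const BOUNDARY_ARN = "arn:aws:iam::123456789012:policy/PermissionBoundary";

interface CreateRequest {
  action: string;
  permissionsBoundary?: string; // the iam:PermissionsBoundary condition key
}

function isDenied(request: CreateRequest): boolean {
  const guarded = ["iam:CreateUser", "iam:CreateRole"];
  if (guarded.indexOf(request.action) === -1) {
    return false; // the statement only guards user/role creation
  }
  // StringNotEquals matches when the key is absent or different,
  // so creation without the exact boundary ARN is denied.
  return request.permissionsBoundary !== BOUNDARY_ARN;
}

console.log(isDenied({ action: "iam:CreateUser" })); // true: no boundary attached
console.log(isDenied({ action: "iam:CreateUser", permissionsBoundary: BOUNDARY_ARN })); // false
```

&lt;p&gt;In a real account, the equivalent check is performed by IAM itself whenever &lt;code&gt;iam:CreateUser&lt;/code&gt; or &lt;code&gt;iam:CreateRole&lt;/code&gt; is called.&lt;/p&gt;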

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;Q: What is a permission boundary in AWS IAM?&lt;br&gt;
A: A permission boundary is an advanced feature in AWS IAM that allows us to set the maximum permissions for a user or role. It limits what actions a user or role can perform on AWS resources.&lt;/p&gt;

&lt;p&gt;Q: Why is a permission boundary important?&lt;br&gt;
A: A permission boundary is important because it can help prevent privilege escalation. By setting a maximum level of permissions, we can ensure that users and roles only have the access they need to perform their intended actions.&lt;/p&gt;

&lt;p&gt;Q: How do I know if I am vulnerable to privilege escalation?&lt;br&gt;
A: We can assess our AWS environment for privilege escalation vulnerabilities using tools like AWS IAM Access Analyzer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Privilege escalation can be a serious security issue in an AWS environment. By using a permission boundary, we can help prevent this issue from occurring. In this blog post, we went through the steps needed to handle privilege escalation using a permission boundary in AWS IAM. It is important to regularly review and update our IAM policies to ensure that they are up-to-date and providing the appropriate level of access to users and roles in our AWS account.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Automating EC2 Instance Start/Stop using Serverless Code and CloudWatch Rule</title>
      <dc:creator>Kerisnarendra</dc:creator>
      <pubDate>Fri, 12 May 2023 04:47:00 +0000</pubDate>
      <link>https://dev.to/kerisnarendra/automating-ec2-instance-startstop-using-serverless-code-and-cloudwatch-rule-429p</link>
      <guid>https://dev.to/kerisnarendra/automating-ec2-instance-startstop-using-serverless-code-and-cloudwatch-rule-429p</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Focebqhcuhsn6hteslyi9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Focebqhcuhsn6hteslyi9.png" alt="Image description" width="770" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this blog post, I am going to remind myself how to use the Serverless Framework (&lt;a href="https://www.serverless.com/"&gt;https://www.serverless.com/&lt;/a&gt;) and a CloudWatch rule to automate starting and stopping EC2 instances in AWS. This is particularly useful for teams that need to frequently scale their infrastructure up and down. We'll also cover the cron syntax used to specify the schedule for the CloudWatch rule, including an important limitation to be aware of. By automating this process, we can save time and reduce costs by only running EC2 instances when we need them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we get started, make sure we have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with permissions to create EC2 instances, CloudWatch rules, and Lambda functions.&lt;/li&gt;
&lt;li&gt;An EC2 instance that we want to start and stop based on a schedule.&lt;/li&gt;
&lt;li&gt;A basic understanding of the Serverless Framework and the AWS CloudWatch service.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step-by-Step Guide
&lt;/h2&gt;

&lt;p&gt;To create a CloudWatch rule that invokes a Lambda function to start and stop an EC2 instance using serverless code, follow these steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up a new serverless project in AWS by running the command "serverless create --template aws-nodejs --path ec2-scheduler".&lt;/li&gt;
&lt;li&gt;Navigate to the "ec2-scheduler" folder and open the "serverless.yml" file.&lt;/li&gt;
&lt;li&gt;Under "provider", add the following lines to allow the Lambda function to start and stop EC2 instances:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service: ec2-scheduler

provider:
  name: aws
  runtime: nodejs16.x
  stage: dev
  region: ap-southeast-1
  stackName: ${self:service}
  stackTags:
    Name: ${self:service}
  iam:
    role:
      statements:
        - Effect: "Allow"
          Action:
            - "ec2:StartInstances"
            - "ec2:StopInstances"
          Resource: "*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Under "custom", add the instance ID of the EC2 instance we want to start and stop, as well as the cron expressions for starting and stopping the instance. For example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;custom:
  instanceId: i-0f00
  start: cron(0 0 ? * MON-FRI *)
  stop: cron(15 9 ? * MON-FRI *)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will start the instance at 00:00 UTC every Monday through Friday and stop it at 09:15 UTC every Monday through Friday.&lt;br&gt;
The cron() expression syntax should conform to the following, where all six fields are required and must be separated by a white space:&lt;br&gt;
&lt;code&gt;cron(Minutes Hours Day-of-Month Month Day-of-Week Year)&lt;/code&gt;&lt;br&gt;
Each field can have the following values/wildcards:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa2fta3mgy60kdk70xsr9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa2fta3mgy60kdk70xsr9.png" alt="Image description" width="578" height="286"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Note&lt;/strong&gt;: we cannot supply a value (or &lt;code&gt;*&lt;/code&gt;) in both the Day-of-Month and Day-of-Week fields; one of them must be &lt;code&gt;?&lt;/code&gt;.&lt;/p&gt;
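&lt;p&gt;That rule can be checked programmatically before deploying. A minimal sketch (the function name is made up, and this is not a full cron validator):&lt;/p&gt;

```typescript
// Check the EventBridge-specific day-field rule: Day-of-Month and
// Day-of-Week cannot both carry a value (or "*"); one must be "?".
// Minimal illustration only, not a complete cron validator.
function hasValidDayFields(expression: string): boolean {
  const fields = expression.trim().split(/\s+/);
  if (fields.length !== 6) {
    return false; // all six fields are required
  }
  const dayOfMonth = fields[2];
  const dayOfWeek = fields[4];
  return dayOfMonth === "?" || dayOfWeek === "?";
}

console.log(hasValidDayFields("0 0 ? * MON-FRI *")); // true: the start schedule above
console.log(hasValidDayFields("0 12 * * * *"));      // false: both day fields set
```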

&lt;ul&gt;
&lt;li&gt;Under "functions", create two functions: "start" and "stop". These functions will call the "startInstances" and "stopInstances" methods of the EC2 service, respectively. For example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;functions:
  start:
    handler: handler.start
    events:
      - schedule:
          rate: ${self:custom.start}
          input:
            id: ${self:custom.instanceId}
            region: ${self:provider.region}

  stop:
    handler: handler.stop
    events:
      - schedule:
          rate: ${self:custom.stop}
          input:
            id: ${self:custom.instanceId}
            region: ${self:provider.region}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create the "handler.js" file and define the "start" and "stop" functions that will be called by the "start" and "stop" functions defined in the "serverless.yml" file. For example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;'use strict'

const AWS = require('aws-sdk');

module.exports.start = (event, context, callback) =&amp;gt; {
  const ec2 = new AWS.EC2({region: event.region});

  ec2.startInstances({InstanceIds: [event.id]}).promise()
    .then(() =&amp;gt; callback(null, `Successfully started ${event.id}`))
    .catch(err =&amp;gt; callback(err))
};

module.exports.stop = (event, context, callback) =&amp;gt; {
  const ec2 = new AWS.EC2({region: event.region});

  ec2.stopInstances({InstanceIds: [event.id]}).promise()
    .then(() =&amp;gt; callback(null, `Successfully stopped ${event.id}`))
    .catch(err =&amp;gt; callback(err))
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Deploy the project by running the command "serverless deploy --aws-profile aws-profile-in-dot-aws-folder". This will create the CloudWatch rule and Lambda function in our AWS account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the deployment is complete, verify that the CloudWatch rule and Lambda function have been created in the AWS console.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test the solution by waiting for the scheduled time to start and stop the EC2 instance. We should see the instance status change in the AWS console.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;Q: Can I schedule different start and stop times for different instances?&lt;br&gt;
A: Yes, we can modify the "custom" section of the serverless.yml file to specify different instance IDs and schedules for each instance.&lt;/p&gt;

&lt;p&gt;Q: What if the Lambda function fails to start or stop the instance?&lt;br&gt;
A: The Lambda function will return an error message if it fails to start or stop the instance. We can view the error message in the AWS console or in the CloudWatch logs.&lt;/p&gt;

&lt;p&gt;Q: Can I use this solution with other AWS services besides EC2 instances?&lt;br&gt;
A: Yes, we can modify the Lambda function code to interact with other AWS services.&lt;/p&gt;

&lt;h1&gt;
  
  
  References
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/"&gt;https://aws.amazon.com/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.serverless.com/framework/docs"&gt;https://www.serverless.com/framework/docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;If our aws account is using mfa - &lt;a href="https://repost.aws/knowledge-center/authenticate-mfa-cli"&gt;https://repost.aws/knowledge-center/authenticate-mfa-cli&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.serverless.com/examples/aws-node-scheduled-cron"&gt;https://www.serverless.com/examples/aws-node-scheduled-cron&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.designcise.com/web/tutorial/how-to-fix-parameter-scheduleexpression-is-not-valid-serverless-error#check-if-the-cron-expression-syntax-and-values-are-correct"&gt;https://www.designcise.com/web/tutorial/how-to-fix-parameter-scheduleexpression-is-not-valid-serverless-error#check-if-the-cron-expression-syntax-and-values-are-correct&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Automating the start and stop of EC2 instances based on a schedule can help us save time and reduce costs by only running instances when they are needed. Using serverless code and a CloudWatch rule makes it easy to implement this solution in our AWS account. With this solution in place, we can focus on other important tasks while our infrastructure scales up and down automatically.&lt;/p&gt;

</description>
      <category>serveless</category>
      <category>ec2</category>
      <category>aws</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Memoization in JavaScript: The need for speed</title>
      <dc:creator>Kerisnarendra</dc:creator>
      <pubDate>Wed, 22 Mar 2023 08:44:03 +0000</pubDate>
      <link>https://dev.to/kerisnarendra/memoization-the-need-for-speed-4l36</link>
      <guid>https://dev.to/kerisnarendra/memoization-the-need-for-speed-4l36</guid>
      <description>&lt;p&gt;Memoization is a technique used in programming to optimize the performance of frequently executed functions. It involves caching the results of a function call, so that the same results can be returned for subsequent calls with the same arguments, rather than recalculating them each time.&lt;/p&gt;

&lt;p&gt;Memoization is based on the idea that if a function is called with the same arguments multiple times, it will return the same result each time. Therefore, it makes sense to cache the result of the first function call and return it for all subsequent calls with the same arguments.&lt;/p&gt;

&lt;p&gt;The process of memoization involves creating a cache object, which is used to store the results of function calls. When a function is called with a set of arguments, the cache object is checked to see if the result for those arguments already exists. If it does, the cached result is returned; otherwise, the function is executed, and the result is stored in the cache object for future use.&lt;/p&gt;

&lt;p&gt;Memoization can be particularly useful for functions that are computationally expensive or have complex logic. By caching the result of the first call, subsequent calls with the same arguments can be executed quickly and without the need to repeat the same calculations.&lt;/p&gt;

&lt;p&gt;In Node.js, memoization can be implemented using a variety of techniques. One common approach is to use a JavaScript object to store the cached results. Here's an example of a simple memoization function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function memoize(func) {
  const cache = {};
  return function(...args) {
    const key = JSON.stringify(args);
    if (key in cache) {
      return cache[key];
    }
    const result = func.apply(this, args);
    cache[key] = result;
    return result;
  };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This function takes a function as an argument and returns a new function that wraps the original function with memoization. The cache object is created as an empty JavaScript object, and a new function is returned that will be used to execute the memoized function.&lt;/p&gt;

&lt;p&gt;The memoized function accepts a variable number of arguments via the rest parameter syntax &lt;code&gt;...args&lt;/code&gt;. These arguments are serialized with the JSON.stringify() method, and the resulting string is used as the key for the cache object. If the key exists in the cache, the cached result is returned. If it doesn't, the original function is executed using the apply() method, and the result is stored in the cache object for future use.&lt;/p&gt;

&lt;p&gt;Here's an example of how you can use the memoization function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function fibonacci(n) {
  if (n &amp;lt;= 1) {
    return n;
  }
  return fibonacci(n - 1) + fibonacci(n - 2);
}

const memoizedFibonacci = memoize(fibonacci);

console.log(memoizedFibonacci(10)); // Output: 55
console.log(memoizedFibonacci(10)); // Output: 55 (cached result)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the fibonacci() function computes the nth Fibonacci number. The memoize() function is used to create a new function memoizedFibonacci that wraps the fibonacci() function with memoization. The memoizedFibonacci() function is then called twice with the same argument, and on the second call the result is returned from the cache object.&lt;/p&gt;

&lt;p&gt;Memoization is a powerful technique that can greatly improve the performance of your Node.js applications. By caching the results of frequently executed functions, you can avoid repeating expensive calculations and reduce the overall processing time.&lt;/p&gt;

&lt;p&gt;Memoization is an effective technique for optimizing the performance of functions that execute SQL queries. By caching the results of previous queries, subsequent calls with the same arguments can be served quickly without re-executing the query. Another approach for improving performance is to use materialized tables, which store the results of a query in a physical table, so that subsequent queries can retrieve the pre-calculated results directly from the table. Interested in materialized tables? I discussed them in another post.&lt;/p&gt;
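&lt;p&gt;The same idea extends to asynchronous functions such as query runners: cache the Promise itself, so that concurrent calls with the same arguments share a single execution. A sketch, where &lt;code&gt;runQuery&lt;/code&gt; is a hypothetical stand-in for a real database call:&lt;/p&gt;

```typescript
let calls = 0;

// Hypothetical stand-in for an expensive SQL query.
async function runQuery(id: number) {
  calls += 1;
  return { id, name: `row-${id}` };
}

// Cache the pending Promise, not just the resolved value, so
// overlapping calls with the same key share one in-flight execution.
function memoizeAsync(func: Function) {
  const cache: any = {};
  return function (...args: any[]) {
    const key = JSON.stringify(args);
    if (!(key in cache)) {
      cache[key] = func(...args);
    }
    return cache[key];
  };
}

const cachedQuery = memoizeAsync(runQuery);

async function demo() {
  await cachedQuery(1);
  await cachedQuery(1); // served from cache; runQuery runs only once
  console.log(calls);   // 1
}
demo();
```

&lt;p&gt;Note that caching the Promise also caches rejections; a production version would evict failed entries from the cache.&lt;/p&gt;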

</description>
    </item>
    <item>
      <title>Visitor Pattern in Typescript</title>
      <dc:creator>Kerisnarendra</dc:creator>
      <pubDate>Mon, 16 Jan 2023 03:21:50 +0000</pubDate>
      <link>https://dev.to/kerisnarendra/visitor-pattern-in-typescript-2jeg</link>
      <guid>https://dev.to/kerisnarendra/visitor-pattern-in-typescript-2jeg</guid>
      <description>&lt;p&gt;Visitor design pattern is a type of behavioral pattern, a way to separate an algorithm or an action from the objects it operates on. It allows new functionality to be added to a set of classes without modifying the classes themselves, by encapsulating the new functionality in a separate class called the Visitor. This makes it more flexible, maintainable and reusable.&lt;/p&gt;

&lt;p&gt;This pattern can be used in a real-life vehicle service scenario to perform different inspections on a variety of vehicles.&lt;/p&gt;

&lt;p&gt;This pattern consists of the following components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Visitor interface: it defines a visit method for each class in the object structure.
Sample:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;interface VehicleInspector {
    visit(car: Car): void;
    visit(van: Van): void;
    visit(motorbike: Motorbike): void;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Concrete visitor classes: these implement the visitor interface. Each concrete visitor class defines a specific behavior for the visit method.
Sample:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class CarInspector implements VehicleInspector {
    visit(car: Car): void {
        console.log(`Visiting ${car.constructor.name} with CarInspector`);
    }
}

class VanInspector implements VehicleInspector {
    visit(van: Van): void {
        console.log(`Visiting ${van.constructor.name} with VanInspector`);
    }
}

class MotorbikeInspector implements VehicleInspector {
    visit(motorbike: Motorbike): void {
        console.log(`Visiting ${motorbike.constructor.name} with MotorbikeInspector`);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Element interface: this defines an accept method that takes a visitor as a parameter.
Sample:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;interface Vehicle {
  accept(vehicleInspector: VehicleInspector): void;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Concrete element classes: these implement the element interface. Each concrete element class has an accept method that calls the appropriate visit method on the visitor for that class.
Sample:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class Car implements Vehicle {
    accept(vehicleInspector: VehicleInspector): void {
        return vehicleInspector.visit(this)
    }
}

class Van implements Vehicle {
    accept(vehicleInspector: VehicleInspector): void {
        return vehicleInspector.visit(this)
    }
}

class Motorbike implements Vehicle {
    accept(vehicleInspector: VehicleInspector): void {
        return vehicleInspector.visit(this)
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sample to use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let car = new Car();
let van = new Van();
let motorbike = new Motorbike();

let carInspector = new CarInspector();
car.accept(carInspector);

let vanInspector = new VanInspector();
van.accept(vanInspector);

let motorbikeInspector = new MotorbikeInspector();
motorbike.accept(motorbikeInspector);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result will be like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Visiting Car with CarInspector
Visiting Van with VanInspector
Visiting Motorbike with MotorbikeInspector
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
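&lt;p&gt;To make the opening claim concrete, a new operation can be added later without touching the element classes. A self-contained sketch (the &lt;code&gt;TollCalculator&lt;/code&gt; visitor and its rates are invented for illustration):&lt;/p&gt;

```typescript
// Minimal self-contained visitor setup, mirroring the post's shapes.
interface Inspector {
  visit(vehicle: Vehicle): string;
}

interface Vehicle {
  accept(inspector: Inspector): string;
}

class Car implements Vehicle {
  accept(inspector: Inspector): string {
    return inspector.visit(this);
  }
}

class Van implements Vehicle {
  accept(inspector: Inspector): string {
    return inspector.visit(this);
  }
}

// New functionality added later, with no change to Car or Van:
class TollCalculator implements Inspector {
  visit(vehicle: Vehicle): string {
    const rate = vehicle instanceof Van ? 5 : 2; // hypothetical toll rates
    return `${vehicle.constructor.name} pays $${rate}`;
  }
}

const toll = new TollCalculator();
console.log(new Car().accept(toll)); // Car pays $2
console.log(new Van().accept(toll)); // Van pays $5
```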



</description>
    </item>
  </channel>
</rss>
