<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: pedchenkoroman</title>
    <description>The latest articles on DEV Community by pedchenkoroman (@pedchenkoroman).</description>
    <link>https://dev.to/pedchenkoroman</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F505297%2F876b82e2-4ed4-4259-a2bc-325d42f82a9e.jpg</url>
      <title>DEV Community: pedchenkoroman</title>
      <link>https://dev.to/pedchenkoroman</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/pedchenkoroman"/>
    <language>en</language>
    <item>
      <title>Pattern Actor in AWS environment</title>
      <dc:creator>pedchenkoroman</dc:creator>
      <pubDate>Tue, 15 Apr 2025 22:11:47 +0000</pubDate>
      <link>https://dev.to/pedchenkoroman/pattern-actor-in-aws-environment-2kpc</link>
      <guid>https://dev.to/pedchenkoroman/pattern-actor-in-aws-environment-2kpc</guid>
      <description>&lt;p&gt;Hi everyone,&lt;br&gt;
In this article, I’d like to show you how to use the Actor pattern in an AWS environment. It’s possible that you’re already using it without realizing that this approach actually has a name—the Actor Model.&lt;/p&gt;
&lt;h2&gt;
  
  
  What is the Actor?
&lt;/h2&gt;

&lt;p&gt;I’m absolutely sure I couldn’t explain it better than the person who invented and formalized the Actor Model, which is why I highly recommend watching this video.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/7erJ1DV_Tlo"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Let me just add a quick recap and show you how to implement the Actor Model in AWS.&lt;/p&gt;

&lt;p&gt;The Actor Model is a conceptual model used in computer science to handle concurrent computation. In this model, an actor is a computational entity that, in response to receiving a message, can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Perform a task (e.g., process data or make a decision),&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Send messages to other actors,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create new actors, and&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Update its internal state.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each actor operates independently and communicates only by message passing, which helps avoid issues related to shared memory and makes systems easier to scale and reason about. Let me rephrase it in simpler words.&lt;/p&gt;

&lt;p&gt;An actor:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Is lightweight, so it is easy to create thousands of them&lt;/li&gt;
&lt;li&gt;Has its own state&lt;/li&gt;
&lt;li&gt;Has its own mailbox (queue)&lt;/li&gt;
&lt;li&gt;Communicates with other actors only through messages&lt;/li&gt;
&lt;li&gt;Processes messages in FIFO order&lt;/li&gt;
&lt;li&gt;Processes only one message at a time&lt;/li&gt;
&lt;li&gt;Is decoupled from other actors&lt;/li&gt;
&lt;/ol&gt;
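&lt;p&gt;These characteristics can be sketched as a tiny in-memory actor in JavaScript. This is only an illustration of the model, not a production framework: each instance owns its state and a FIFO mailbox, and drains that mailbox one message at a time.&lt;/p&gt;

```javascript
// Minimal in-memory actor: own state, own FIFO mailbox,
// and a behavior (task) applied to one message at a time.
class Actor {
  constructor(initialState, behavior) {
    this.state = initialState; // own state, never shared
    this.mailbox = [];         // own FIFO mailbox
    this.behavior = behavior;  // the task performed per message
    this.draining = false;
  }

  // Actors communicate only through messages.
  send(message) {
    this.mailbox.push(message);
    this.drain();
  }

  // Process messages strictly in FIFO order, one at a time.
  drain() {
    if (this.draining) return;
    this.draining = true;
    while (this.mailbox.length > 0) {
      const message = this.mailbox.shift();
      this.state = this.behavior(this.state, message);
    }
    this.draining = false;
  }
}

// Example: an account actor whose state is its balance.
const account = new Actor(0, (balance, tx) =>
  tx.type === "deposit" ? balance + tx.amount : balance - tx.amount
);

account.send({ type: "deposit", amount: 100 });
account.send({ type: "transfer", amount: 30 });
// account.state is now 70
```

&lt;p&gt;In the AWS implementation that follows, the mailbox role is played by an SQS FIFO message group, the behavior by a Lambda function, and the state by DynamoDB.&lt;/p&gt;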

&lt;p&gt;In AWS, you can implement the Actor pattern using services like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Lambda (as individual actors)&lt;/li&gt;
&lt;li&gt;Amazon SQS or SNS (for message passing)&lt;/li&gt;
&lt;li&gt;Step Functions or EventBridge (for orchestration)&lt;/li&gt;
&lt;li&gt;DynamoDB (for storing state)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It might seem like the Actor Model is simply a combination of a queue service, a Lambda function triggered by that queue, and a DynamoDB table — and you'd be mostly right. However, there are some subtle nuances, and I’d like to walk you through them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;p&gt;A typical use case well-suited for the Actor Model is a shopping cart in an e-commerce application. You'll easily come across this example in many articles, which is why I'll provide some slightly different ones instead.&lt;/p&gt;

&lt;p&gt;Let's imagine a fintech application where a user can deposit money into their own account or transfer it to another account. In the fintech world, these operations are referred to as transactions, and they typically fall into two categories: &lt;em&gt;deposit&lt;/em&gt; and &lt;em&gt;transfer&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Another example is a store where products can arrive and depart at any time. To prevent the inventory from dropping below zero, we must ensure that all arrivals are processed before any departures.&lt;/p&gt;

&lt;p&gt;🎯 I’ll use these two scenarios to demonstrate how to implement a proper, scalable solution that keeps balances and inventory from ever going below zero, using AWS resources and the Actor Model.&lt;/p&gt;

&lt;p&gt;We have a lot to cover, so let's jump in and get started straight away.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Initial Approach: Limited Scalability&lt;/strong&gt;&lt;br&gt;
We can split our transactions into two batches: the first containing only deposit transactions, and the second containing transfer transactions. These batches will then be sent to a FIFO queue for processing. We use the same &lt;em&gt;messageGroupId&lt;/em&gt; for all messages, which ensures the transactions are processed in the correct order and covers the scenario where a user deposits money before transferring it to another user.&lt;/p&gt;

&lt;p&gt;The same idea applies to products: first we process all arrivals, and only then the departures.&lt;/p&gt;
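&lt;p&gt;This batching step can be sketched with a small helper. The function below (a hypothetical &lt;em&gt;toFifoEntries&lt;/em&gt;, not code from the article) orders deposits before transfers and gives every entry the same &lt;em&gt;MessageGroupId&lt;/em&gt;; the actual &lt;em&gt;SendMessageBatch&lt;/em&gt; call to SQS is omitted.&lt;/p&gt;

```javascript
// Build SQS FIFO batch entries: deposits first, then transfers,
// all sharing one MessageGroupId so they are consumed in this order.
function toFifoEntries(transactions, groupId) {
  const deposits = transactions.filter((tx) => tx.type === "deposit");
  const transfers = transactions.filter((tx) => tx.type === "transfer");
  return deposits.concat(transfers).map((tx, index) => ({
    Id: String(index),
    MessageBody: JSON.stringify(tx),
    MessageGroupId: groupId,       // same group: strict FIFO ordering
    MessageDeduplicationId: tx.id, // FIFO queues require deduplication ids
  }));
}

const entries = toFifoEntries(
  [
    { id: "t1", type: "transfer", amount: 30, clientId: "client-1" },
    { id: "t2", type: "deposit", amount: 100, clientId: "client-1" },
  ],
  "all-transactions"
);
// The deposit t2 now precedes the transfer t1 in the batch.
```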

&lt;p&gt;Let’s take a visual look at how our flow works:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ek63wg4xqtzzuv0dkuf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ek63wg4xqtzzuv0dkuf.png" alt="Diagram of a non-scalable implementation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Imagine that Lambda 1 has already sorted and filtered the products/transactions and is sending them one by one to a queue. Lambda 2 is triggered by the queue and processes each message individually, storing useful information—such as the quantity of products or the cash balance of clients—in a DynamoDB table. The bottleneck of this approach is the processing Lambda, because there is always only one instance of it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalable Solution Using the Actor Model&lt;/strong&gt;&lt;br&gt;
To solve this issue, the Actor pattern comes in handy. Before I describe the solution, I’d like to stress again that every Actor must have its own state and its own queue, and must perform a task. In the non-scalable solution, the processing Lambda function has all these characteristics and may look like an Actor—but what is the state of this Actor? You’ll probably say it's DynamoDB, and you’d be right—but that's a shared state for the entire application. What I’d like to do instead is break it down into the smallest possible pieces. In this case, it could be either the clientId or the productId, couldn’t it?&lt;/p&gt;

&lt;p&gt;So, this will help us solve our task—let’s take a look at it visually: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1xryhcinxrs78suyn3d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1xryhcinxrs78suyn3d.png" alt="Actor implementation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, let's break down the actor implementation:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The first Lambda function already has the sorted list of products or transactions and sends them one by one to a FIFO queue, using messageGroupId as either the clientId or productId.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The queue triggers multiple instances of the processing Lambda function, creating them as needed. Each instance processes messages grouped by messageGroupId, effectively isolating the flow per client or product.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;MessageGroupId in FIFO SQS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Guarantees strict ordering within the same MessageGroupId.&lt;/li&gt;
&lt;li&gt;Only one Lambda invocation processes a given message group at a time.&lt;/li&gt;
&lt;li&gt;If two messages have the same MessageGroupId, they are processed one-by-one, in order.&lt;/li&gt;
&lt;li&gt;If two messages have different MessageGroupIds (e.g., client-1, client-2), they can be processed concurrently by separate Lambda invocations.&lt;/li&gt;
&lt;/ul&gt;
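&lt;p&gt;The processing side can be sketched as follows. To keep the logic visible, DynamoDB is swapped for an in-memory map, and the names (&lt;em&gt;applyTransaction&lt;/em&gt;, the transaction fields) are illustrative assumptions rather than the article's actual code; only the &lt;em&gt;Records&lt;/em&gt; array is the standard SQS event shape.&lt;/p&gt;

```javascript
// In-memory stand-in for the DynamoDB table (keyed by clientId or productId).
const store = new Map();

// Apply one message; refuse anything that would drive the value below zero.
function applyTransaction(key, tx) {
  const current = store.get(key) ?? 0;
  const delta = tx.type === "deposit" ? tx.amount : -tx.amount;
  if (current + delta >= 0) {
    store.set(key, current + delta);
    return { ok: true, balance: current + delta };
  }
  return { ok: false, balance: current };
}

// Shape of the SQS-triggered Lambda: records that share a messageGroupId
// arrive strictly in order, so deposits land before the transfers that spend them.
const handler = async (event) => {
  for (const record of event.Records) {
    const tx = JSON.parse(record.body);
    applyTransaction(tx.clientId, tx);
  }
};
```

&lt;p&gt;Because SQS reveals the next message of a group only after the previous one succeeds, each clientId or productId behaves like an actor draining its own mailbox, and the balance check above can never race with itself.&lt;/p&gt;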

&lt;p&gt;As a result, we have a scalable solution in which each processing Lambda function exhibits the three core characteristics of an Actor: its own state, its own queue, and a task to perform.&lt;/p&gt;

&lt;p&gt;💡 &lt;em&gt;As a bonus, if a message fails processing (a Lambda error), SQS will not deliver the next message in that group until the failed one is successfully handled or moved to a DLQ.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Recap
&lt;/h2&gt;

&lt;p&gt;🚀 The Actor Model is a powerful pattern for building scalable, concurrent systems—and AWS gives us all the tools to implement it effectively. By assigning state and queues per actor (e.g., per client or product), and combining Lambda with FIFO SQS queues, we can create robust, fault-tolerant systems that scale effortlessly.&lt;/p&gt;

&lt;p&gt;Also, don’t forget that every AWS account has Lambda quotas—especially regarding concurrent executions. I recommend properly configuring these limits before going to production.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you’d like to support my work, you can subscribe, give me kudos, or share your valuable feedback. Your support and insights mean a lot!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>webdev</category>
      <category>programming</category>
      <category>node</category>
    </item>
    <item>
      <title>Send reports safely via email (mailbox)</title>
      <dc:creator>pedchenkoroman</dc:creator>
      <pubDate>Mon, 07 Apr 2025 10:34:53 +0000</pubDate>
      <link>https://dev.to/pedchenkoroman/send-reports-safely-via-emailmailbox-3bao</link>
      <guid>https://dev.to/pedchenkoroman/send-reports-safely-via-emailmailbox-3bao</guid>
      <description>&lt;p&gt;Hi everyone.&lt;br&gt;
In this article, I will guide you on how to securely send reports via email using AWS. Before we dive in, I’ll provide more details about the AWS services involved and outline the problem we’re solving.&lt;/p&gt;
&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;p&gt;Let’s explore why attaching a report directly to an email is generally unsafe.&lt;/p&gt;
&lt;h3&gt;
  
  
  Email Interception &amp;amp; Lack of Encryption
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Standard email protocols (SMTP, POP3, IMAP) do not encrypt emails end-to-end.&lt;/li&gt;
&lt;li&gt;If an attacker sniffs network traffic (e.g., on public Wi-Fi or an unsecured network), they can capture emails, including attachments.&lt;/li&gt;
&lt;li&gt;Even TLS (used by Gmail, Outlook, etc.) encrypts emails only in transit; once delivered, they are stored unencrypted on mail servers.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Email Account Compromise
&lt;/h3&gt;

&lt;p&gt;If a hacker gains access to your email account (phishing, credential stuffing, malware, etc.), they can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download all email attachments you’ve sent or received.&lt;/li&gt;
&lt;li&gt;Forward emails to other accounts without you knowing.&lt;/li&gt;
&lt;li&gt;Expose the report permanently—it stays in inboxes (yours and the recipient’s) indefinitely.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Forwarding &amp;amp; Unintended Access
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A recipient might accidentally forward the email.&lt;/li&gt;
&lt;li&gt;Emails can be archived and stored indefinitely, making them a long-term risk.&lt;/li&gt;
&lt;li&gt;If their email account is hacked, your sensitive report is exposed to attackers.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Malware &amp;amp; Attachment Exploits
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Hackers often embed malicious payloads in email attachments.&lt;/li&gt;
&lt;li&gt;If your report is attached as a Word or Excel file, an attacker could inject macro-based malware and resend it.&lt;/li&gt;
&lt;li&gt;Some email providers scan attachments and store copies, potentially exposing sensitive content.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Please be aware that this is not a perfect solution. The main drawback is that the report link is shared via email. If a malicious actor becomes aware of this approach, it could be relatively easy to trick users by phishing them with a fake link and a spoofed authorization page.&lt;/p&gt;

&lt;p&gt;The most secure and recommended approach would be to notify users that a new report is ready, and make it accessible only through your application’s interface and a secure, custom mailbox.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  AWS Services
&lt;/h2&gt;

&lt;p&gt;To achieve the goal of this article, I will use the following AWS services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Cognito&lt;/li&gt;
&lt;li&gt;AWS Identity and Access Management (IAM)&lt;/li&gt;
&lt;li&gt;AWS S3&lt;/li&gt;
&lt;li&gt;AWS Lambda&lt;/li&gt;
&lt;li&gt;AWS Api Gateway&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before we begin, I would like to mention that each of these services is self-sufficient, and covering all of their features in a single article is nearly impossible. That's why I will go into more depth on only some of them. By the end of this article, I hope these concepts will be deeply engraved in your mind, like wisdom, empowering you to use them productively.&lt;br&gt;
Let's break down every service with a feature that we will use and also the steps of our flow.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Cognito&lt;/strong&gt;&lt;br&gt;
There are two services that we will use, and the first one is Identity Pools. An identity pool exchanges an external identity type for a set of temporary AWS credentials, allowing access to AWS resources. The external identity could be from Google, Facebook, Twitter, SAML 2.0, or even User Pools identities.&lt;br&gt;
From an Identity Pools perspective, User Pools are just another form of identity. As you might have guessed, we will use User Pools for authentication. Additionally, I’d like to clarify that User Pools are used for sign-in and sign-up (with a customizable web UI), MFA, and other security features.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Identity and Access Management&lt;/strong&gt;&lt;br&gt;
We will use this service to create an IAM role, which will be attached to the Identity Pool. When the JWT token is passed to the configured Identity Pool, Cognito assumes the IAM role and returns temporary AWS credentials.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3&lt;/strong&gt;&lt;br&gt;
Amazon S3 (Simple Storage Service) is a scalable, durable, and secure object storage service that allows you to store and retrieve any amount of data from anywhere. It’s used for storing files, images, videos, backups, logs, reports, and more.&lt;br&gt;
We will use Amazon S3 to store the report in a private bucket.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lambda&lt;/strong&gt;&lt;br&gt;
AWS Lambda is a serverless computing service that lets you run code without managing servers. You just upload your function, and AWS takes care of the rest—scaling, infrastructure, and execution. &lt;br&gt;
We will use it to exchange a JWT token, generated by the user pool service, for temporary AWS credentials with an IAM role that grants access to the private S3 bucket.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API Gateway&lt;/strong&gt;&lt;br&gt;
AWS API Gateway is a fully managed service that helps you create, publish, and manage APIs at scale. &lt;br&gt;
We will use it to create a simple GET endpoint and attach a Lambda function. This endpoint will serve as a callback URL for the User Pool.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  The solution
&lt;/h2&gt;

&lt;p&gt;Before we implement our solution, let's have a look visually at how our flow architecture looks at this point.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7yp5sfkk8gvn2zfcmlk2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7yp5sfkk8gvn2zfcmlk2.png" alt="Solution diagram"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  S3
&lt;/h3&gt;

&lt;p&gt;First, we need to create a private S3 bucket. To create a bucket, you only need to provide a globally unique bucket name. All other settings, including Block Public Access settings for this bucket, can be kept as default. &lt;strong&gt;And by default, all public access is blocked&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Lambda
&lt;/h3&gt;

&lt;p&gt;Then, we need to create a Lambda function. I'll keep all the logic within a single Lambda, but you can split it into two functions or even three to follow the Single Responsibility Principle. &lt;br&gt;
The Lambda function will be attached to an API Gateway endpoint. The event will contain a fileName and a Cognito authorization code. For simplicity, I’m omitting event validation and error handling in the Lambda code, but you should include both in production.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
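&lt;p&gt;As a rough sketch of the function described above: only the small request-parsing helper is runnable code, while the rest of the flow is outlined in comments, and every name in that outline is an assumption rather than a copy of the gist.&lt;/p&gt;

```javascript
// Pull the Cognito authorization code and the requested file name
// (carried in the OAuth "state" parameter) out of the API Gateway proxy event.
function parseReportRequest(event) {
  const params = event.queryStringParameters ?? {};
  return { code: params.code, fileName: params.state };
}

// Outline of the full handler (AWS calls omitted, names assumed):
// const handler = async (event) => {
//   const { code, fileName } = parseReportRequest(event);
//   // 1. POST the code to the user pool's /oauth2/token endpoint
//   //    (grant_type authorization_code) to receive an id_token.
//   // 2. Call GetId, then GetCredentialsForIdentity on the Identity Pool,
//   //    passing the id_token as a login, to obtain temporary AWS credentials.
//   // 3. With those credentials, create a pre-signed GET URL for
//   //    fileName in the private bucket.
//   // 4. Return a 302 redirect to that pre-signed URL.
// };
```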


&lt;p&gt;For all intents and purposes, we haven’t finished with the Lambda function yet. We still need to set all environment variables, but from a code perspective, we won’t be making any more changes. Since we haven’t created the necessary resources yet, we’re unable to do so.&lt;/p&gt;

&lt;p&gt;Now is the time to do that.&lt;/p&gt;

&lt;h3&gt;
  
  
  Api Gateway
&lt;/h3&gt;

&lt;p&gt;We need to create an API Gateway with a GET endpoint and attach our Lambda function to it. You should be able to handle this on your own. However, there is one important detail I’d like to highlight—please don’t forget to enable the &lt;strong&gt;Lambda Proxy Integration&lt;/strong&gt; option. Without it, the event argument will be empty.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkuobktclb9e2u468cqww.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkuobktclb9e2u468cqww.png" alt="Lambda proxy integration"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  User pools
&lt;/h3&gt;

&lt;p&gt;We need to create a User Pool. As I mentioned earlier, we use the User Pool as an identity store for Identity Pools. If your integrators prefer not to create another user account (with a separate login and password), they can use an alternative identity provider such as Google or Facebook.&lt;br&gt;
To create a User Pool, simply fill out the required form. Additionally, don’t forget to provide the API Gateway endpoint link that was created in the second step.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fusly4fsqraxt77dumix8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fusly4fsqraxt77dumix8.png" alt="The form of creating a user pool"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Identity pools
&lt;/h3&gt;

&lt;p&gt;We need to create an Identity Pool to enable swapping the JWT token issued by the User Pool for temporary AWS credentials. I want to emphasize once again that access to AWS resources is only possible through AWS credentials.&lt;br&gt;
To create an Identity Pool, you’ll need to go through a few steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the first step, choose Authenticated access and select Amazon Cognito User Pool as the authentication provider.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4uprcddf2shmzrhclexs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4uprcddf2shmzrhclexs.png" alt="The first step of the Identity Pool form."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The second step is about configuring the default IAM role. You need to choose Create a new role and provide a meaningful name for it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After creating the Identity Pool, we will assign the required permissions to this role.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88ojvc54r1hjc5sby27r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88ojvc54r1hjc5sby27r.png" alt="The second step of the Identity Pool form."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The third step is about configuring your Cognito identity pool to accept users that sign in with a Cognito user pool. Simply choose the user pool created earlier.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fswnz5xz2yhdlc0nn9ln4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fswnz5xz2yhdlc0nn9ln4.png" alt="The third step of the Identity Pool form."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can easily skip the fourth step and just click the Next button.&lt;/li&gt;
&lt;li&gt;In the final step, after reviewing, you can click the Create Identity Pool button.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  IAM
&lt;/h3&gt;

&lt;p&gt;In the previous step, when we created the Identity Pool, we only provided a name for the role that will be assumed with temporary credentials. We need to update its policy to grant read access to our private bucket.&lt;/p&gt;

&lt;p&gt;Please open the permissions policies for this role and paste the following two actions. Don’t forget to replace BUCKET_NAME with the actual bucket name.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
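&lt;p&gt;For read access of this kind, the two statements typically grant &lt;em&gt;s3:GetObject&lt;/em&gt; on the bucket's objects and &lt;em&gt;s3:ListBucket&lt;/em&gt; on the bucket itself. The policy below is a hedged sketch of that shape (with BUCKET_NAME as the placeholder to replace), not a copy of the gist, so verify it against the original.&lt;/p&gt;

```javascript
// Hedged sketch of a minimal read-only policy for the Identity Pool role.
// BUCKET_NAME is a placeholder, exactly as in the article.
const reportBucketReadPolicy = {
  Version: "2012-10-17",
  Statement: [
    {
      Effect: "Allow",
      Action: ["s3:GetObject"],
      Resource: ["arn:aws:s3:::BUCKET_NAME/*"], // the objects in the bucket
    },
    {
      Effect: "Allow",
      Action: ["s3:ListBucket"],
      Resource: ["arn:aws:s3:::BUCKET_NAME"], // the bucket itself
    },
  ],
};

console.log(JSON.stringify(reportBucketReadPolicy, null, 2));
```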


&lt;h3&gt;
  
  
  Set environment variables
&lt;/h3&gt;

&lt;p&gt;Finally, navigate to the Lambda service, select the Lambda function, and open the Configuration tab. Here, we need to set all the required environment variables. If you're using the code from my gist, you’ll need to define six variables.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frt9gavca3ezb9poxm6fk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frt9gavca3ezb9poxm6fk.png" alt="Env Variables form for the lambda function"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s it — we’re almost done. Due to the hardcoded file format in the Lambda function code, I would recommend uploading a &lt;em&gt;txt&lt;/em&gt; file into the S3 bucket. Once you've done that, we’re ready to test.&lt;/p&gt;

&lt;p&gt;To test, navigate to &lt;strong&gt;User Pools&lt;/strong&gt; &amp;gt; &lt;strong&gt;App clients&lt;/strong&gt; &amp;gt; &lt;strong&gt;Login pages&lt;/strong&gt;. There, you can easily find the &lt;em&gt;View login page&lt;/em&gt; link.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4wto8o2w5amnr2zvg3n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4wto8o2w5amnr2zvg3n.png" alt="View login page button"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on this button, and it will take you to a new page with the sign-in form. Before you proceed with the sign-in process, you need to modify the URL in the browser’s address bar: add &amp;amp;state=FILE_NAME_INTO_S3 at the end of the URL. The state parameter allows us to pass the file name into the Lambda function. You can find more details about this parameter &lt;a href="https://docs.aws.amazon.com/cognito/latest/developerguide/token-endpoint.html#post-token" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 Benefits of This Approach
&lt;/h2&gt;

&lt;p&gt;✅ No Custom Frontend Needed – Everything is API-driven.&lt;br&gt;
✅ Fully Secure – Users must authenticate before accessing S3.&lt;br&gt;
✅ Short-Lived Credentials – Uses temporary AWS credentials via Cognito Identity Pool.&lt;br&gt;
✅ Custom File Access – Users only get pre-signed URLs for their requested files.&lt;br&gt;
✅ Serverless &amp;amp; Scalable – Uses Cognito, API Gateway, Lambda, and S3.&lt;/p&gt;

&lt;h2&gt;
  
  
  🎯 Next Steps
&lt;/h2&gt;

&lt;p&gt;Would you like help with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🔐 Adding RBAC (Role-Based Access Control) to limit file access?&lt;/li&gt;
&lt;li&gt;🛠 Deploying everything in AWS using CloudFormation?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’ll find answers to these questions in the extended version of this article on &lt;a href="https://patreon.com/roman_pedchenko" rel="noopener noreferrer"&gt;patreon&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>webdev</category>
      <category>programming</category>
      <category>security</category>
    </item>
    <item>
      <title>10 Essential Questions to Improve One-on-One Meetings for Developers</title>
      <dc:creator>pedchenkoroman</dc:creator>
      <pubDate>Mon, 10 Feb 2025 14:45:57 +0000</pubDate>
      <link>https://dev.to/pedchenkoroman/10-essential-questions-to-improve-one-on-one-meetings-for-developers-1b6h</link>
      <guid>https://dev.to/pedchenkoroman/10-essential-questions-to-improve-one-on-one-meetings-for-developers-1b6h</guid>
      <description>&lt;p&gt;Hi folks,&lt;/p&gt;

&lt;p&gt;As promised, this article is about soft skills. Many developers, myself included, have made the mistake of focusing only on hard skills, assuming that soft skills aren’t as important. But that’s not true—soft skills are just as essential as technical expertise.&lt;/p&gt;

&lt;p&gt;In this article, I’ll share 10 powerful questions—five for managers and five for developers—to help improve team communication, motivation, and transparency. These questions can foster a strong and engaged team while preventing surprises like, "I want to switch teams" or "We’re not going to continue working with you."&lt;/p&gt;

&lt;h2&gt;
  
  
  5 Questions from a Manager/Lead's Perspective
&lt;/h2&gt;

&lt;h3&gt;
  
  
  First question
&lt;/h3&gt;

&lt;p&gt;&lt;del&gt;How is it going?&lt;/del&gt; The question itself isn’t bad, but one-on-one meetings typically have limited time. Rephrasing it as, &lt;strong&gt;'What has been your biggest success this week or since our last meeting?'&lt;/strong&gt; helps start the conversation in a more meaningful way, and when someone highlights something, you can build follow-up questions based on that information.&lt;/p&gt;

&lt;h3&gt;
  
  
  Second question
&lt;/h3&gt;

&lt;p&gt;&lt;del&gt;How can I help you?&lt;/del&gt; The question itself isn’t bad, but some developers might not answer honestly, since it implies they are struggling and need help, or that they cannot do something on their own. Instead, you could ask something like, &lt;strong&gt;'What is the most complicated part of your daily tasks?'&lt;/strong&gt; The answer to this question can help address two issues at once.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;For example,&lt;/strong&gt; &lt;em&gt;I once received an answer like, 'The most challenging part is doing the markup.' The solution was quite simple: we split the tasks into two parts, assigned the markup work to another developer, and had the original person act as a reviewer. Additionally, he attended some markup courses to improve his knowledge. As a result, his daily tasks became less challenging, reducing frustration and burnout. After four months, we no longer needed to split the tasks&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Third question
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Is there anything in your tasks that seems unnecessary or meaningless?&lt;/strong&gt; Asking this question might reveal areas of miscommunication or highlight where the team lacks sufficient information. As a manager or tech lead, your role is to align your goals with the team's goals. If someone doesn’t understand a task or sees it as meaningless, achieving it becomes nearly impossible.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fourth question
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;"Is there any work that’s been undervalued?"&lt;/strong&gt; From a lead or manager’s point of view, it might seem like just a ticket moving from one column to another on the board. But for a developer, it could represent hours of hard work. If the manager or lead doesn’t acknowledge this for any reason, the developer might feel upset or become demotivated. Asking this question can help you understand the developer's perspective and counteract a tendency known as "negativity bias." I personally rephrase it as "the empty space always fills with bad thoughts."&lt;/p&gt;

&lt;h3&gt;
  
  
  Fifth question
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;"Could you please rate the complexity/interest level of your tasks on a scale from 1 to 10?"&lt;/strong&gt; These types of questions allow you to track a person's progress. I wouldn’t recommend asking them in every session, but it’s important to take notes to monitor changes over time. Tracking is important, but acting on the results is what truly matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  5 Questions from a Developer's Perspective
&lt;/h2&gt;

&lt;h3&gt;
  
  
  First question
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;"Is there anything I should be doing that I'm not doing?"&lt;/strong&gt; Asking this question can lead to new insights. You might discover unmet expectations or growth opportunities. For example, I once learned I should focus more on mentoring junior developers, which led me to improve my code reviews and task breakdowns.&lt;/p&gt;

&lt;h3&gt;
  
  
  Second question
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;"In what case should I ask for your help?"&lt;/strong&gt; Leads and managers are often busy with meetings. This question, along with the next one, helps establish a clear guideline for when it's appropriate to ask for help. In many teams, there are no defined patterns for when someone should seek assistance versus when they should try to solve a problem on their own. Additionally, it's important to consider that a new team member might be hesitant to ask for help or, on the other hand, might seek guidance on every small issue.&lt;/p&gt;

&lt;h3&gt;
  
  
  Third question
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;"What situations do I need to notify you of?"&lt;/strong&gt; Similar to the previous question, this one helps clarify communication boundaries. As a manager or tech lead, I don’t need to be informed about everything—such as the installation of an npm package or minor code refactoring. However, I do need to be notified about critical issues like production incidents, blockers affecting the team, unexpected delays, architectural decisions, or dependencies that could impact delivery. Defining these boundaries ensures smooth communication and helps avoid unnecessary disruptions while keeping everyone aligned on what truly matters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fourth question
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;"What growth points do you see now?"&lt;/strong&gt; It is a great question because it encourages proactive thinking about personal and professional development. It fosters a culture of continuous improvement and helps tailor development plans, such as training, mentorship, or new responsibilities, based on the person's aspirations. Sometimes, a developer feels stuck or unsure of their next steps. Discussing growth opportunities ensures you stay engaged, challenged, and motivated.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fifth question
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;"Is there something I haven't asked or told you?"&lt;/strong&gt; is a great question because it creates an open space for discussion and helps uncover important topics that might have been missed. Also it signals that you're genuinely interested in hearing anything that might be on the other person’s mind, even if it wasn’t covered in the conversation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;There are many great questions beyond the ones I’ve covered here. These aren’t my own inventions—they come from books, conferences, and videos that have shaped my perspective. Some excellent resources include &lt;em&gt;Herding Cats&lt;/em&gt; by J. Hank Rainwater and &lt;em&gt;Radical Respect&lt;/em&gt; by Kim Scott.&lt;/p&gt;

&lt;p&gt;By incorporating these questions into your one-on-ones, you can build stronger relationships, improve communication, and create a more engaged team. Let me know what you think or if you have any other great questions to add!&lt;/p&gt;

&lt;p&gt;Cheers!&lt;/p&gt;

</description>
      <category>softskills</category>
      <category>webdev</category>
      <category>career</category>
      <category>leadership</category>
    </item>
    <item>
      <title>Kaizen or how to start/stop ec2.</title>
      <dc:creator>pedchenkoroman</dc:creator>
      <pubDate>Sun, 26 Jan 2025 10:59:57 +0000</pubDate>
      <link>https://dev.to/pedchenkoroman/kaizen-or-how-to-startstop-ec2-18aa</link>
      <guid>https://dev.to/pedchenkoroman/kaizen-or-how-to-startstop-ec2-18aa</guid>
      <description>&lt;p&gt;Hi guys.&lt;/p&gt;

&lt;p&gt;Today, I’d like to share a technique I regularly use in my work. This technique is called Kaizen.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;So, what is Kaizen, and why should you care? Great question. Kaizen is a continuous improvement method that originated in manufacturing. You might be familiar with similar approaches, such as Lean in project management or the Toyota Production System (TPS). Kaizen rose to prominence in the 1960s through the work of engineer Taiichi Ohno, who created TPS. It was designed to achieve objectives like improving quality, increasing profitability, and reducing costs.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Normally, each of my PRs consists of a dedicated task and a small improvement. This could be anything, such as adding logging, encapsulating code and covering it with unit tests, or reducing unnecessary HTTP requests. Some might call this following the "Boy Scout Rule," and they would be absolutely right. In my humble opinion, the two are simply adjacent ideas.&lt;/p&gt;

&lt;p&gt;So, I’d like to share a story about how I used this technique to reduce our AWS monthly bill. &lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;p&gt;The application I’ve been working on includes some third-party integrations, and to test the entire workflow, I came up with the idea of spinning up an EC2 instance with primitive logic to simulate those integrations. Everything worked perfectly until I noticed that, most of the time, the instance was idle. So, I did a quick calculation and realized that I could save $1.46 per environment per month by optimizing this setup. I then estimated the implementation effort and got a green light from the PM.&lt;/p&gt;

&lt;h2&gt;
  
  
  Calculations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Before
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;1 instance x 0.006 USD On Demand hourly cost x 730 hours in a month = 4.38 USD&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  After
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;1 instance x 0.006 USD On Demand hourly cost x 486.67 hours in a month = 2.92 USD&lt;/p&gt;
&lt;/blockquote&gt;
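&lt;p&gt;The "after" figure corresponds to keeping the instance up 16 of every 24 hours (730 × 16/24 ≈ 486.67). Here is the arithmetic as a quick sketch — the 16-hour window is my reading of the numbers above, not something stated explicitly:&lt;/p&gt;

```typescript
// Reproduces the calculator output above: a $0.006/hour on-demand rate,
// a 730-hour month, and a schedule that runs the instance 16 of 24 hours.
const HOURLY_RATE = 0.006; // USD
const HOURS_PER_MONTH = 730;

const costAlwaysOn = HOURLY_RATE * HOURS_PER_MONTH; // 4.38 USD
const costScheduled = HOURLY_RATE * HOURS_PER_MONTH * (16 / 24); // 2.92 USD
const monthlySavings = costAlwaysOn - costScheduled; // 1.46 USD per environment
```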

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;The solution is straightforward and consists of three steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a New AWS Role: Set up a new AWS role with permissions to start and stop EC2 instances.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbzrrpbw4o2hcxa2g66yq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbzrrpbw4o2hcxa2g66yq.png" alt="Create a new role" width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set Up CloudWatch Rules: Create two CloudWatch rules—one to start the EC2 instance and another to stop it at predefined times.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4y6f4l0rz1pvkdm3w6nh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4y6f4l0rz1pvkdm3w6nh.png" alt="Set up rules" width="800" height="585"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implement a Lambda Function: Use AWS SDK within a Lambda function to send start and stop commands to the EC2 instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7uzyxv7z9yy2b24lyvev.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7uzyxv7z9yy2b24lyvev.png" alt="Lambda logic" width="800" height="607"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s pretty much it! The next step is to get rid of the EC2 instance entirely and use only AWS Lambda, with input and output stored in S3. Considering AWS Lambda’s generous free tier, I anticipate the cost will be $0.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It’s the little details that are vital. Little things make big things happen. ~ John Wooden&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; You can find the source code &lt;a href="https://github.com/pedchenkoroman/aws-kaizen" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awscdk</category>
      <category>webdev</category>
      <category>lambda</category>
    </item>
    <item>
      <title>AWS Cognito + GraphQL Directive = ACL with minimal effort</title>
      <dc:creator>pedchenkoroman</dc:creator>
      <pubDate>Mon, 13 Jan 2025 12:58:34 +0000</pubDate>
      <link>https://dev.to/pedchenkoroman/aws-cognito-graphql-directive-acl-with-minimal-effort-4dk8</link>
      <guid>https://dev.to/pedchenkoroman/aws-cognito-graphql-directive-acl-with-minimal-effort-4dk8</guid>
      <description>&lt;p&gt;Hi, everyone,&lt;br&gt;
I would like to share an idea/prototype for implementing an access control layer with minimal effort. As the title suggests, I will be using the AWS Cognito service and a GraphQL directive. AWS Cognito provides excellent functionality for controlling access to REST APIs. However, it doesn't fully meet our needs because, with GraphQL, there is typically only a single POST endpoint that handles numerous queries and mutations.&lt;/p&gt;

&lt;p&gt;Before we begin, I want to clarify that this is not a step-by-step guide on how to use AWS Cognito or an introduction to GraphQL, including how to create a GraphQL server or a lambda function from scratch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;p&gt;Let’s imagine we have a front-end application and an Article GraphQL service. This service provides simple CRUD mutations, such as &lt;code&gt;publish&lt;/code&gt;, &lt;code&gt;delete&lt;/code&gt;, and &lt;code&gt;saveDraft&lt;/code&gt;. Our task is to implement, with minimal effort, a way to manage access for our Cognito users to these endpoints. For example:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The first user can only save a draft.&lt;/li&gt;
&lt;li&gt;The second user can save a draft and publish.&lt;/li&gt;
&lt;li&gt;The third user has full access, including delete operations.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Implementation
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Cognito&lt;/strong&gt;&lt;br&gt;
As mentioned earlier, I will not cover how to set up the AWS Cognito service (user pools and users) in this article. I assume you already have it configured. However, I want to highlight that Cognito users support custom attributes. For more details, you can refer to this &lt;a href="https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-attributes.html#user-pool-settings-custom-attributes" rel="noopener noreferrer"&gt;link&lt;/a&gt;. Before creating our first custom attribute, we will develop our own language. This technique is known as a DSL.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A DSL (Domain-Specific Language) is a programming or scripting language designed to solve problems within a specific domain. Unlike general-purpose programming languages (such as Python, Java, or TypeScript), which are built to handle a wide range of applications, DSLs are tailored for specialized tasks. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;First and foremost, let's come up with a syntax.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;*&lt;/code&gt; - means allow all actions&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;|&lt;/code&gt; - the separator between the namespaces&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/&lt;/code&gt; - the separator between a namespace and an action(s)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s agree on the following rules:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;article/*&lt;/code&gt; - allows all actions.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;article/saveDraft&lt;/code&gt; - allows only the saveDraft action.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;article/saveDraft,publish&lt;/code&gt; - allows both the publish and saveDraft actions.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;article/*|user/*&lt;/code&gt; - allows everything for the article and user namespaces.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As you can see, creating your own DSL is straightforward, and this technique is widely used in various areas, such as SQL, configuration files, and Infrastructure as Code.&lt;/p&gt;

&lt;p&gt;Now, let’s create a custom attribute that will include a list of actions we want to allow or deny.&lt;/p&gt;

&lt;p&gt;Create custom attribute &lt;code&gt;action&lt;/code&gt;.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frt7ic4ag86223no3aves.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frt7ic4ag86223no3aves.png" alt="Image description" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to create the users with the &lt;code&gt;action&lt;/code&gt; custom attribute.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F739k0prt35igqnv216ln.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F739k0prt35igqnv216ln.png" alt="The user with full access" width="800" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdf60dka9r5v8pvz45r6p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdf60dka9r5v8pvz45r6p.png" alt="Another user with saveDraft access" width="800" height="179"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code&lt;/strong&gt;&lt;br&gt;
That’s pretty much it for the AWS Console setup; now we can dive into the code. You can easily find the complete code on &lt;a href="https://github.com/pedchenkoroman/acl-dev.to"&gt;GitHub&lt;/a&gt;. Here, I’ll focus on the most essential parts.&lt;/p&gt;

&lt;p&gt;You can find a tutorial on creating and setting up an Apollo Lambda handler &lt;a href="https://www.apollographql.com/docs/apollo-server/deployment/lambda#setting-up-your-project" rel="noopener noreferrer"&gt;here.&lt;/a&gt; The key difference in my approach is that I parse the JWT token from the event and set the &lt;code&gt;custom:action&lt;/code&gt; property in the context. As a result, the context includes the &lt;code&gt;action&lt;/code&gt; property containing all the rules we added in AWS Cognito.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8uhc56m12lka7edqypt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8uhc56m12lka7edqypt.png" alt="GrahpQL handler code" width="800" height="613"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to parse the &lt;code&gt;action&lt;/code&gt; property from the context and determine whether to allow or deny the action. For this task, we’ll use a GraphQL directive. You can find a guide on how to build it from scratch by following this &lt;a href="https://the-guild.dev/graphql/tools/docs/schema-directives" rel="noopener noreferrer"&gt;link.&lt;/a&gt; Here, I’ll provide the code and an explanation of how to check access using it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv922zf93a30oy7ywcrxm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv922zf93a30oy7ywcrxm.png" alt="Image description" width="800" height="1004"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The logic of the directive is straightforward and involves three steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Locate the Namespace&lt;/strong&gt;: Split the entire string using the namespace separator (we agreed earlier that it is |) and find the specific namespace by name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Separate Namespace and Actions&lt;/strong&gt;: Use the / separator to split the namespace from the actions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Check Actions&lt;/strong&gt;: Map all actions and verify whether the action is allowed or denied.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
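&lt;p&gt;The three steps can be sketched as a single pure function (the names here are mine; the actual directive in the repository wraps this kind of check around each resolver):&lt;/p&gt;

```typescript
// rules: a DSL string such as "article/saveDraft,publish|user/*".
export function isAllowed(rules: string, namespace: string, action: string): boolean {
  // 1. Locate the namespace: split on the `|` separator.
  const entry = rules.split("|").find((part) => part.startsWith(namespace + "/"));
  if (!entry) return false;

  // 2. Separate the namespace from its actions with the `/` separator.
  const [, actions = ""] = entry.split("/");

  // 3. Check the actions: `*` allows everything, otherwise match the list.
  return actions === "*" || actions.split(",").includes(action);
}
```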

&lt;p&gt;The final piece of our setup is the GraphQL schema.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1dmgzh7m4so9ehs2oyk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1dmgzh7m4so9ehs2oyk.png" alt="Graphql Schema" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, you simply add the directive in front of the mutation and provide the namespace name as an argument.&lt;/p&gt;

&lt;p&gt;That’s pretty much it! This approach seems not only useful and easy to implement but also flexible, cost-effective, and one of the fastest ways to add an Access Control Layer to your application.&lt;/p&gt;

&lt;h2&gt;
  
  
  To try it out:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Fork the &lt;a href="https://github.com/pedchenkoroman/acl-dev.to"&gt;repository&lt;/a&gt; and deploy it using your own AWS account. All resources are defined using AWS CDK. You can find instructions on how to bootstrap and deploy an AWS stack in this &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/hello_world.html#hello_world_prerequisites" rel="noopener noreferrer"&gt;guide&lt;/a&gt;. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I’ve created a &lt;a href="https://www.postman.com/pedchenko07/dev-to/collection/997p7i2/acl-dev-to?action=share&amp;amp;creator=1507710" rel="noopener noreferrer"&gt;public Postman collection&lt;/a&gt; for your convenience. Please follow the guide provided within the collection.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;If you’d like to support my work, you can subscribe, give me kudos, buy me a &lt;a href="https://ko-fi.com/r_pedchenko" rel="noopener noreferrer"&gt;Ko-Fi&lt;/a&gt;, or share your valuable feedback. Your support and insights mean a lot!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>graphql</category>
      <category>webdev</category>
      <category>cognito</category>
    </item>
    <item>
      <title>Why you might want to build your own cli?</title>
      <dc:creator>pedchenkoroman</dc:creator>
      <pubDate>Thu, 05 Dec 2024 09:00:00 +0000</pubDate>
      <link>https://dev.to/pedchenkoroman/why-you-might-want-to-build-your-own-cli-apd</link>
      <guid>https://dev.to/pedchenkoroman/why-you-might-want-to-build-your-own-cli-apd</guid>
      <description>&lt;p&gt;Hi everyone,&lt;br&gt;
In my last article, "Not only dynamoDb migration or seed scripting", I presented a solution for managing migration and seed scripts and proposed running it via a CLI command. If you haven't read it yet, you can find it &lt;a href="https://dev.to/pedchenkoroman/not-only-dynamodb-migration-or-seed-scripting-4h54"&gt;here&lt;/a&gt;. In this article, I’d like to share why you might want to build your own CLI and what types of commands it could contain. However, I will not teach you how to build a CLI from scratch here, and I assume you are already familiar with NodeJS and TypeScript. You can find the CLI's source code &lt;a href="https://github.com/pedchenkoroman/dev-to-proto-cli" rel="noopener noreferrer"&gt;here&lt;/a&gt;, and you can run it with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx dev-to-proto-cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configure
&lt;/h2&gt;

&lt;p&gt;The first command I’d like to share is &lt;code&gt;configure&lt;/code&gt;. Imagine you are working on a project that follows a multi-service architecture, with at least three services plus a front-end. Each service has its own configuration file for each of four environments and might have the following structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service/environments
├── environment.development.yml/.env.dev
├── environment.tst.yml/.env.tst
├── environment.acc.yml/.env.acc
└── environment.yml/.env.prd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;The file extension does not matter.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I see some disadvantages with this approach:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If you have repeatable configuration properties for multiple services, you need to copy and paste them into each service.&lt;/li&gt;
&lt;li&gt;Maintenance becomes too complicated. Some properties may have the same value but different names across services.&lt;/li&gt;
&lt;li&gt;The release tag depends not only on the codebase but also on the configuration.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is where the &lt;code&gt;configure&lt;/code&gt; command comes into play.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Instead of keeping the configuration in each service, you can store it elsewhere, such as in another repository, a secret manager, an S3 bucket, or any other source. I’ve created two examples of configuration: one stored in a &lt;a href="https://gist.githubusercontent.com/pedchenkoroman/70fcdb6d1383a840d6d5dfb123cf68da/raw/aae6d1e437a1b6c3a0a956ead632118de0a1d2b3/configuration.yaml" rel="noopener noreferrer"&gt;GitHub Gist&lt;/a&gt; and another in a &lt;a href="https://bitbucket.org/!api/2.0/snippets/p7o/g78KX8/658d081497d9770ca4d1586b304631d5722bbbff/files/configuration.yaml" rel="noopener noreferrer"&gt;Bitbucket Snippet&lt;/a&gt;. Imagine that the GitHub Gist is the &lt;strong&gt;dev&lt;/strong&gt; environment and the Bitbucket Snippet is &lt;strong&gt;tst&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The configuration structure is a simple nested format. The top-level keys represent global settings for all services, while the nested sections contain specific properties for each service. The subsections can override or extend the global settings. All properties are in one file, and maintaining it is not difficult.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How you version the configuration depends on where you decide to keep it. In my team, we store it in a separate repository and use tags or branches for different scenarios.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;log_level&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;INFO&lt;/span&gt;
&lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-east-1&lt;/span&gt;

&lt;span class="na"&gt;foo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;log_level&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DEBUG&lt;/span&gt;
  &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-west-1&lt;/span&gt;
  &lt;span class="na"&gt;foo_prop_1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;foo_prop_2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;

&lt;span class="na"&gt;bar&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;log_level&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;WARN&lt;/span&gt;
  &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-west-2&lt;/span&gt;

&lt;span class="na"&gt;baz&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;log_level&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ERROR&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How to use
&lt;/h2&gt;

&lt;p&gt;I assume you have a similar line of code in your GitHub Actions or Bitbucket Pipeline to create an environment file depending on the environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cp &lt;/span&gt;evnironment.&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;ENV&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;.yml environment.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using your own CLI, you would have something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx dev-to-proto-cli configure &lt;span class="nt"&gt;--extension&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt; &lt;span class="nt"&gt;--service&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;foo &lt;span class="nt"&gt;--env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will generate either a &lt;code&gt;config.yml&lt;/code&gt; or &lt;code&gt;.env&lt;/code&gt; file containing all root properties plus the properties specific to the chosen service.&lt;/p&gt;
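&lt;p&gt;As a rough sketch of the merge the &lt;code&gt;configure&lt;/code&gt; command performs (my own simplification: top-level scalar keys are treated as globals, and the nested section named after the service overrides or extends them):&lt;/p&gt;

```typescript
type RawConfig = { [key: string]: unknown };

// Builds the effective configuration for one service: global scalars
// first, then the service's own section overriding/extending them.
export function resolveServiceConfig(all: RawConfig, service: string): RawConfig {
  const globals: RawConfig = {};
  for (const [key, value] of Object.entries(all)) {
    if (typeof value !== "object" || value === null) {
      globals[key] = value; // top-level scalar = global setting
    }
  }
  const section = all[service];
  if (typeof section !== "object" || section === null) return globals;
  return { ...globals, ...section };
}
```

&lt;p&gt;With the YAML above, resolving &lt;code&gt;foo&lt;/code&gt; would yield &lt;code&gt;log_level: DEBUG&lt;/code&gt; and &lt;code&gt;region: us-west-1&lt;/code&gt; plus the two &lt;code&gt;foo&lt;/code&gt; properties, while &lt;code&gt;baz&lt;/code&gt; keeps the global &lt;code&gt;region: us-east-1&lt;/code&gt;.&lt;/p&gt;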

&lt;h2&gt;
  
  
  Other commands
&lt;/h2&gt;

&lt;p&gt;In our team, we aim to create commands for a wide range of scenarios. Here is a brief overview of some of them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clean the database: To clean a DynamoDB table or all tables, we use the &lt;code&gt;clean-db&lt;/code&gt; command. The command looks like this:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx dev-to-proto-cli &lt;span class="nt"&gt;--profile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dev &lt;span class="nt"&gt;--region&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-west-1 &lt;span class="nt"&gt;--provider&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;sso
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;If you run it and provide all credentials, it will clean the chosen tables. Please use it carefully.&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;NOTE:&lt;/strong&gt; The &lt;code&gt;clean-db&lt;/code&gt; command is useful only if your DynamoDB table is NOT attached to a custom VPC.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To get an overview of our tests, we use a Bitbucket pipeline to run daily tests. While you can view the results directly in the pipeline, I find this approach inconvenient for collecting statistics over a longer period, such as the last 30 days. For instance, comparing all failed tests or identifying flaky tests becomes impossible without manual effort or paid add-ons.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To address this, I created another CLI command that uses the public Bitbucket REST API to collect and analyze all test statistics. The command looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx @own-cli statistic &lt;span class="nt"&gt;--start&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2024-11-11 &lt;span class="nt"&gt;--end&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2024-11-18
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I’ve described only three of the commands we’ve implemented. Please feel free to share your ideas in the comments. Any feedback or kudos would be greatly appreciated!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In my next article, I'll show you how to build an ACL with minimal effort. Subscribe to stay tuned!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>cli</category>
      <category>development</category>
      <category>typescript</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Not only dynamoDb migration or seed scripting</title>
      <dc:creator>pedchenkoroman</dc:creator>
      <pubDate>Wed, 06 Nov 2024 07:54:50 +0000</pubDate>
      <link>https://dev.to/pedchenkoroman/not-only-dynamodb-migration-or-seed-scripting-4h54</link>
      <guid>https://dev.to/pedchenkoroman/not-only-dynamodb-migration-or-seed-scripting-4h54</guid>
      <description>&lt;p&gt;Hi folks.&lt;/p&gt;

&lt;p&gt;I would like to share with you the idea/prototype for managing and running migration or seed scripts for DynamoDB. &lt;br&gt;
&lt;strong&gt;First&lt;/strong&gt;, let's clarify what a migration and a seed script are.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Migration&lt;/strong&gt; refers to the process of transferring data from one database to another, or making structural changes to a database, such as upgrading, transforming, or migrating to a new database management system (DBMS). Because the &lt;em&gt;AWS CDK&lt;/em&gt; handles DynamoDB structural changes for us, let's agree that in this article &lt;strong&gt;migration&lt;/strong&gt; &lt;em&gt;refers only to transforming data&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Seed&lt;/strong&gt; refers to the process of populating a database with initial data, often for development, testing, or setup purposes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Second&lt;/strong&gt;, let's outline the required functionality.&lt;br&gt;
In the company where I work, the product follows a multi-service architecture. I identified five key requirements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;em&gt;TypeScript support&lt;/em&gt;: The tool should support TypeScript.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Minimal migration/seed script logic&lt;/em&gt;: The migration and seeding processes should require minimal scripting.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Ability to run specific scripts&lt;/em&gt;: The tool should allow developers to execute particular scripts.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Service independence&lt;/em&gt;: The tool should function independently of any specific services.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Leverage the latest AWS SDK version&lt;/em&gt;: The tool should utilize the most recent version of the AWS SDK.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Third&lt;/strong&gt;, before diving into my own implementation, I explored existing packages. One option was the &lt;a href="https://github.com/floydspace/dynamodb-migrations-tool" rel="noopener noreferrer"&gt;Dynamit CLI&lt;/a&gt;, created by my friend Victor Korzunin. While Dynamit CLI offers basic functionality and handles most common tasks, it doesn't fully meet all of the requirements outlined above. Therefore, I decided to implement my own solution.&lt;/p&gt;

&lt;p&gt;To present my solution, I've created a separate repository dedicated to migration scripts. You can find the source code &lt;a href="https://github.com/pedchenkoroman/dynamodb-seed-migration-scripts" rel="noopener noreferrer"&gt;here&lt;/a&gt;. The repository structure is straightforward, consisting of three primary folders.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The first folder, named &lt;code&gt;templates&lt;/code&gt;, contains files for the &lt;a href="https://www.npmjs.com/package/plop" rel="noopener noreferrer"&gt;Plop package&lt;/a&gt;. Plop, a micro-generator framework, allows us to generate standardized templates for migration and seed scripts. &lt;/li&gt;
&lt;li&gt;The second folder, &lt;code&gt;scripts&lt;/code&gt;, can contain subdirectories, each named after a specific service, to organize service-specific migration/seed scripts.&lt;/li&gt;
&lt;li&gt;The final folder, &lt;code&gt;framework&lt;/code&gt;, houses the core migration logic and interfaces.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  First script
&lt;/h2&gt;

&lt;p&gt;Let's run the &lt;code&gt;npm run plop&lt;/code&gt; command to generate our first script. This will prompt you to provide a name for the script and the desired folder location.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fq0dpfksk01z87uqst0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fq0dpfksk01z87uqst0.png" alt="Image description" width="800" height="124"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's open the file and examine the code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxly1h65wv7hqhtvz1m0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxly1h65wv7hqhtvz1m0.png" alt="Generated script" width="800" height="775"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First&lt;/strong&gt;, I'd like to draw your attention to the &lt;code&gt;DynamoDBScriptTracker&lt;/code&gt; instance. The first argument in the constructor is a client, and the second is an object with two properties. The first property, &lt;code&gt;scriptName&lt;/code&gt;, is automatically generated when I provide the name using the plop console command. The second property, &lt;code&gt;scriptStore&lt;/code&gt;, refers to the name of the DynamoDB table where I store all previously executed script names. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;NOTE&lt;/em&gt;: If you choose a different store for tracking, you only need to implement the &lt;code&gt;ScriptTracker&lt;/code&gt; interface. This store could be anything: a relational database, a file, or another storage solution.&lt;/p&gt;
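&lt;p&gt;For illustration, a minimal &lt;code&gt;ScriptTracker&lt;/code&gt;-style contract with an in-memory implementation might look like the sketch below. The method names here are mine, not necessarily the ones in the repository:&lt;/p&gt;

```typescript
// Illustrative sketch of a ScriptTracker-style contract; the real
// interface in the repository may use different names and signatures.
interface ScriptTracker {
  hasBeenExecuted(scriptName: string): boolean;
  markExecuted(scriptName: string): void;
}

// An in-memory implementation, handy for local runs and tests.
// A DynamoDB-backed tracker would query/put into the scriptStore table.
class InMemoryScriptTracker implements ScriptTracker {
  private executed = new Set();

  hasBeenExecuted(scriptName: string): boolean {
    return this.executed.has(scriptName);
  }

  markExecuted(scriptName: string): void {
    this.executed.add(scriptName);
  }
}

const tracker = new InMemoryScriptTracker();
console.log(tracker.hasBeenExecuted('migration-foo')); // false before the run
tracker.markExecuted('migration-foo');
console.log(tracker.hasBeenExecuted('migration-foo')); // true afterwards
```

&lt;p&gt;Any store that can answer "has this script already run?" and record a completed run satisfies the contract.&lt;/p&gt;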

&lt;p&gt;&lt;strong&gt;Then&lt;/strong&gt;, as you can see, you need to implement three functions: &lt;code&gt;read&lt;/code&gt;, &lt;code&gt;write&lt;/code&gt;, and &lt;code&gt;transform&lt;/code&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;read&lt;/code&gt;: This required function retrieves items in chunks. The data source could be anything—an API, file, S3 bucket, or DynamoDB table.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;transform&lt;/code&gt;: This optional function applies transformations to an array of items. Additionally, you can cover this function with unit tests.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;write&lt;/code&gt;: This required function writes a batch of items to a target resource. You don't need to worry about the array's size, as it is automatically split into chunks under the hood. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Finally&lt;/strong&gt;, you need to export an instance of the &lt;code&gt;Migrator&lt;/code&gt; class, passing the arguments &lt;code&gt;scriptTracker&lt;/code&gt; and &lt;code&gt;operations&lt;/code&gt;. There is also one more optional argument, &lt;code&gt;force&lt;/code&gt;, which allows you to skip the execution check and run the script directly. &lt;/p&gt;

&lt;p&gt;If you open the migrator file, you will not find any complicated logic; there is only one public method, &lt;code&gt;run&lt;/code&gt;, and that is pretty much it. The other private methods provide logging and recursively execute the functions from the &lt;code&gt;operations&lt;/code&gt; dependency. You can find the implementation of the &lt;code&gt;Migrator&lt;/code&gt; class below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fym8zxf65dv3ums25zl4s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fym8zxf65dv3ums25zl4s.png" alt="Migrator class" width="800" height="1048"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This implementation is straightforward and flexible, allowing it to be used not only for migrations but also as a seeding tool. Additionally, the data source and target can be entirely different from each other.&lt;/p&gt;
&lt;h2&gt;
  
  
  The only question left is: &lt;strong&gt;How do you run it&lt;/strong&gt;?
&lt;/h2&gt;

&lt;p&gt;Before we dive into running the solution, we first need to decide how to compile and store it. There are several options available, and I’ll outline a few:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We can build artifacts on a pre-push hook and push them to the repository (&lt;code&gt;dist&lt;/code&gt; folder as an example).&lt;/li&gt;
&lt;li&gt;We can build artifacts on a pre-push hook and store them in an S3 bucket.&lt;/li&gt;
&lt;li&gt;We can set up a Bitbucket pipeline or GitHub Actions workflow to handle builds and storage.&lt;br&gt;
&lt;strong&gt;Note&lt;/strong&gt;: Bitbucket Pipelines only stores artifacts for 14 days, so you'd need to build them daily or on demand if you require longer retention.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There are plenty of other options as well, and you can choose the one that best fits your requirements.&lt;/p&gt;

&lt;p&gt;To run it, I created my own CLI and published it in our private npm registry. There are several commands available, one of which is &lt;code&gt;migration&lt;/code&gt;. In my next article, I will discuss how to build your own CLI. For now, I will provide the code and the command. &lt;/p&gt;

&lt;p&gt;The command looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx @company/cli migration --list="migration-foo, migration-bar"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and the code:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp24c92vr5edva6honsly.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp24c92vr5edva6honsly.png" alt="cli command implementation" width="800" height="775"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope the idea is clear: you simply provide a list of migration or seed script names, and the command looks up these files in a loop and then executes them. I’ve also omitted some details for clarity, but keep in mind that you’ll also need to provide additional arguments, such as the AWS token and environment.&lt;/p&gt;

&lt;p&gt;Last but not least, if you appreciate my work, please subscribe.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dynamodb</category>
      <category>migration</category>
      <category>development</category>
    </item>
    <item>
      <title>A year-long journey</title>
      <dc:creator>pedchenkoroman</dc:creator>
      <pubDate>Thu, 19 Sep 2024 12:17:28 +0000</pubDate>
      <link>https://dev.to/pedchenkoroman/a-year-long-journey-84h</link>
      <guid>https://dev.to/pedchenkoroman/a-year-long-journey-84h</guid>
      <description>&lt;p&gt;Hi folks. &lt;/p&gt;

&lt;p&gt;Long time no see! Frankly speaking, I've had good reasons for my absence. There have been a lot of changes in my life, but let's start at the beginning. &lt;/p&gt;

&lt;h2&gt;
  
  
  A new opportunity
&lt;/h2&gt;

&lt;p&gt;First and foremost, I've taken on a new role at &lt;a href="https://www.mylette.nl" rel="noopener noreferrer"&gt;MyLette&lt;/a&gt; as a software engineer. The company recently acquired a new FinTech application and was in search of developers for a new team. Not only was I the first person to join the team, but I also brought my experience with AWS.&lt;/p&gt;

&lt;p&gt;My initial significant task involved migrating the application from one customer to another, setting up all the necessary environments and connecting third parties. I'd be lying if I said the developers who worked on this application before didn't assist me. They were quite helpful, especially with the DEV environment, and to a lesser extent, the TST environment.&lt;/p&gt;

&lt;p&gt;When all environments were set up and the team was complete, we had a couple of brainstorming sessions. Not only to evaluate the application but also to outline the next steps.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The first step was to create some integration tests and cover the core functionality. We chose the Playwright framework for this, and I was responsible for setting it up. I'll share the results in my next articles.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The second was to improve the deployment process; the pain points surfaced during application deployment. A few words about the application: it follows a multi-service architecture with around 10 services. Most of the AWS resources were created with AWS CDK, but some were created manually. In addition, every service has its own configuration folder with files for each environment (dev, tst, acc, and prd). As part of this task, I created a config-gen CLI, which I'll share in my next articles.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The third task was to address the findings from the AWS Well-Architected Framework. This framework consists of six pillars: operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability. Each pillar includes several recommendations. One of the key recommendations was to eliminate direct lambda function invocations. Cross-service requests were previously implemented using the direct invocation approach, and I'll share it in my next articles.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There were many other interesting decisions and features implemented. I'll also share the most intriguing ones in my next articles. &lt;/p&gt;

&lt;p&gt;Stay tuned.&lt;/p&gt;

</description>
      <category>job</category>
      <category>company</category>
      <category>opportunity</category>
      <category>linkedin</category>
    </item>
    <item>
      <title>The bot for BNG bank</title>
      <dc:creator>pedchenkoroman</dc:creator>
      <pubDate>Wed, 17 Aug 2022 07:58:08 +0000</pubDate>
      <link>https://dev.to/pedchenkoroman/the-bot-for-bng-bank-129a</link>
      <guid>https://dev.to/pedchenkoroman/the-bot-for-bng-bank-129a</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Hi folks. My name is Roman Pedchenko and I am a full-stack developer. I always thought that my job as a developer was just to write code, but I was mistaken. I've realized that I can not only create applications but also make some people's lives easier. I bet everyone knows the situation in the world. On top of that, many Ukrainians have faced new challenges every day since February. Unfortunately, I could not solve all their problems, but I've tried to solve one of them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Problem
&lt;/h2&gt;

&lt;p&gt;People who have moved to the Netherlands and registered there receive some help and money from the government. The problem is that the money comes on a &lt;a href="https://www.bngbank.nl/Online-bankieren/betalingsverkeer/BNG-Prepaid-pinkaart" rel="noopener noreferrer"&gt;card from BNG bank&lt;/a&gt;, and a plain operation such as checking the current balance becomes challenging.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;I've created a Telegram bot that, after a two-step registration, has just one button: check your balance. When you click it, the bot sends you the current balance. There are two options. The first is to simply click the &lt;a href="https://t.me/prepaid_saldo_bot" rel="noopener noreferrer"&gt;link&lt;/a&gt; and go through the registration steps. The second is to set up the project in your own &lt;code&gt;aws profile&lt;/code&gt;. If you choose this way, please open the &lt;a href="https://github.com/pedchenkoroman/presaldo-telegram-bot" rel="noopener noreferrer"&gt;repository&lt;/a&gt; and follow the steps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stack
&lt;/h2&gt;

&lt;p&gt;If you are a developer, you probably want to know what I use. First and foremost, I use &lt;code&gt;aws cdk&lt;/code&gt; to define resources. If you do not know what it is, please open this &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/home.html" rel="noopener noreferrer"&gt;guide&lt;/a&gt;. Even though it is a really small bot that can only parse the site and send a message with the current balance, it uses at least four resources: &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/welcome.html" rel="noopener noreferrer"&gt;Lambda&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html" rel="noopener noreferrer"&gt;DynamoDB&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html" rel="noopener noreferrer"&gt;API Gateway&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html" rel="noopener noreferrer"&gt;EventBridge&lt;/a&gt;. The first Lambda is responsible for registration and for handling the check-balance event. The second Lambda uses the Puppeteer layer, retrieves the current balance, and stores it. The last Lambda is invoked when the record with the balance changes and sends out the new balance. In addition, there is one more Lambda that acts as a guard: it checks one specific header.&lt;/p&gt;
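&lt;p&gt;As a hypothetical sketch of the approach (not the bot's actual stack; the construct names and settings here are made up), wiring one Lambda to a DynamoDB table with &lt;code&gt;aws cdk&lt;/code&gt; looks roughly like this:&lt;/p&gt;

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';

// Hypothetical sketch only: resource names, keys, and runtime settings
// are illustrative, not taken from the bot's repository.
export class BotStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Table holding registered users and their latest balance.
    const table = new dynamodb.Table(this, 'UsersTable', {
      partitionKey: { name: 'chatId', type: dynamodb.AttributeType.STRING },
    });

    // Lambda handling registration and the check-balance event.
    const handler = new lambda.Function(this, 'RegisterHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'),
      environment: { TABLE_NAME: table.tableName },
    });

    table.grantReadWriteData(handler);
  }
}
```

&lt;p&gt;The other resources (API Gateway for the Telegram webhook, EventBridge for scheduling) are attached in the same declarative style.&lt;/p&gt;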

&lt;h2&gt;
  
  
  Architecture diagram
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwwieb3g6uo6mk60vcpq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwwieb3g6uo6mk60vcpq.png" alt="Architecture diagram" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;So, I published the bot to some Telegram channels, and I was surprised that more than 50 people are using it. Frankly speaking, I am happy that my knowledge and experience help someone. Thank you and take care.&lt;/p&gt;

</description>
      <category>telegram</category>
      <category>aws</category>
      <category>typescript</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Interface segregation principle (SOLID)</title>
      <dc:creator>pedchenkoroman</dc:creator>
      <pubDate>Fri, 12 Aug 2022 22:00:00 +0000</pubDate>
      <link>https://dev.to/pedchenkoroman/interface-segregation-principle-solid-4cid</link>
      <guid>https://dev.to/pedchenkoroman/interface-segregation-principle-solid-4cid</guid>
      <description>&lt;h2&gt;
  
  
  Motivation
&lt;/h2&gt;

&lt;p&gt;Hi guys. My name is Roman Pedchenko and I am a full-stack developer. Pleased to make your acquaintance. This is my first article, so please don't judge it too harshly. The idea for this article came after a conversation with my friend &lt;a href="https://www.youtube.com/channel/UClDDVLu0Cj_o9Y5D2ilCtdQ" rel="noopener noreferrer"&gt;Max Grom&lt;/a&gt;, and I want to thank him.&lt;/p&gt;

&lt;h2&gt;
  
  
  Story
&lt;/h2&gt;

&lt;p&gt;Lots of developers have a technical interview every day. Some want a new job, some their first one. The problem is that you have to show your knowledge in a limited period of time, which is why every answer really matters. In my humble opinion, there are three types of answers. The first is pure academic knowledge: you have read about something but do not use it. The second is that you can describe it or give a real-world example, but you cannot say whether it is a principle, a paradigm, or a pattern. The third, last but not least, combines the first two: you know not only how to use something but also what it is you are using. As you can probably guess, the third type strengthens your position in an interview as a really good developer.&lt;/p&gt;

&lt;p&gt;I bet everyone, no matter whether you are a candidate or an interviewer, revisits the &lt;strong&gt;SOLID&lt;/strong&gt; principles when preparing for an interview. I also believe that everyone tries to apply them every day, yet when someone asks you to explain them and give some examples, it is always so difficult. In this article I will touch on only one letter of the abbreviation, but I hope it helps you to be more convincing.&lt;/p&gt;
&lt;h2&gt;
  
  
  Letter I
&lt;/h2&gt;

&lt;p&gt;If you open Wikipedia, you will easily find that &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The interface segregation principle (ISP) states no code should be forced to depend on methods it does not use. ISP splits interfaces that are very large into smaller and more specific ones so that clients will only have to know about the methods that are of interest to them.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It sounds easy to understand, but as I wrote above, what matters is not only the theoretical knowledge but also examples of where we use it, and this is where lots of people get stuck. Here's a hint: it is easier than learning the definition itself. If you are an Angular developer, you are a lucky person. Every day, whenever you create a component and add lifecycle hooks to it, you use this principle.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Component, OnInit, OnDestroy } from '@angular/core';

@Component({ selector: 'app-root', template: '' })
export class AppComponent implements OnInit, OnDestroy {
  ngOnInit() {
  // some logic
  }

  ngOnDestroy() {
  // some logic
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, we implement two interfaces so that the hooks start working, and that's all. Oddly enough, I believe this answer will show that you at least know the letter &lt;strong&gt;I&lt;/strong&gt; in SOLID.&lt;/p&gt;
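&lt;p&gt;Outside of Angular, the same idea can be shown directly with a made-up example: split a fat interface into smaller ones so that a client implements only what it actually uses.&lt;/p&gt;

```typescript
// Illustrative example: instead of one fat interface...
interface Machine {
  print(doc: string): string;
  scan(doc: string): string;
}

// ...ISP suggests smaller, client-specific interfaces.
interface Printer {
  print(doc: string): string;
}

interface Scanner {
  scan(doc: string): string;
}

// A simple printer implements only what it supports, instead of
// stubbing out scan() from the fat Machine interface.
class SimplePrinter implements Printer {
  print(doc: string): string {
    return 'printed: ' + doc;
  }
}

const printer = new SimplePrinter();
console.log(printer.print('report'));
```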

&lt;p&gt;Thank you and break a leg at a job interview. &lt;/p&gt;

</description>
      <category>solidjs</category>
      <category>interview</category>
      <category>angular</category>
      <category>career</category>
    </item>
  </channel>
</rss>
