<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Karthik</title>
    <description>The latest articles on DEV Community by Karthik (@gkrthk).</description>
    <link>https://dev.to/gkrthk</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1193020%2F2449e9c4-25bc-4247-b7c0-9a9ac5dfe67a.jpeg</url>
      <title>DEV Community: Karthik</title>
      <link>https://dev.to/gkrthk</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gkrthk"/>
    <language>en</language>
    <item>
      <title>First build and publish Nx expo app</title>
      <dc:creator>Karthik</dc:creator>
      <pubDate>Sat, 10 Feb 2024 16:48:01 +0000</pubDate>
      <link>https://dev.to/gkrthk/first-build-and-publish-nx-expo-app-acc</link>
      <guid>https://dev.to/gkrthk/first-build-and-publish-nx-expo-app-acc</guid>
      <description>&lt;p&gt;As you embark on your Expo journey, ensuring a smooth build process is crucial. Here's a step-by-step guide to performing the initial build:&lt;/p&gt;

&lt;h3&gt;Step 1: First Build and Publish&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Open VSCode and Navigate to Project Folder:&lt;/strong&gt;&lt;br&gt;
Open Visual Studio Code (VSCode) and navigate to the project folder containing your Expo application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Run &lt;code&gt;eas login&lt;/code&gt; as Admin User:&lt;/strong&gt;&lt;br&gt;
Open the terminal in VSCode and execute the &lt;code&gt;eas login&lt;/code&gt; command. Log in using the admin user credentials. This will authenticate your Expo account for the build process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Run &lt;code&gt;npx nx affected -t build&lt;/code&gt;:&lt;/strong&gt;&lt;br&gt;
Execute the command &lt;code&gt;npx nx affected -t build&lt;/code&gt; in the terminal. Follow the interactive screen prompts and choose all default values unless specific configurations are required for your project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Complete the Build Process:&lt;/strong&gt;&lt;br&gt;
Allow the build process to complete successfully. This step ensures that your Expo project is compiled and ready for publishing.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After this step, a project with a successful build will be created in your Expo account, and an APK/AAB file ready for submission to the app store will be generated.&lt;/p&gt;

</description>
      <category>expo</category>
      <category>beginners</category>
      <category>mobile</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Streamlining Expo Build Pipeline with Bitbucket: An Admin and Developer Guide</title>
      <dc:creator>Karthik</dc:creator>
      <pubDate>Sat, 10 Feb 2024 16:36:52 +0000</pubDate>
      <link>https://dev.to/gkrthk/streamlining-expo-build-pipeline-with-bitbucket-an-admin-and-developer-guide-3iml</link>
      <guid>https://dev.to/gkrthk/streamlining-expo-build-pipeline-with-bitbucket-an-admin-and-developer-guide-3iml</guid>
      <description>&lt;h2&gt;Admin Setup:&lt;/h2&gt;

&lt;h3&gt;Step 1: Login Using Admin Account&lt;/h3&gt;

&lt;p&gt;Begin by logging into Expo using the administrator account that will manage the organization.&lt;/p&gt;

&lt;h3&gt;Step 2: Create an Organization in Expo&lt;/h3&gt;

&lt;p&gt;Navigate to the Expo dashboard and create a new organization for your project. This organization will serve as the central hub for collaboration and development.&lt;/p&gt;

&lt;h3&gt;Step 3: Create a Robot User and Access Token&lt;/h3&gt;

&lt;p&gt;In Expo, generate a robot user to handle the automated build processes. Create an access token for this user, which will be utilized in the Bitbucket pipeline.&lt;/p&gt;

&lt;h3&gt;Step 4: Save Access Token in Bitbucket&lt;/h3&gt;

&lt;p&gt;For secure integration, save the generated access token as an environment variable (&lt;code&gt;EXPO_TOKEN&lt;/code&gt;) in Bitbucket. This token will authenticate Expo during the build process.&lt;/p&gt;

&lt;h3&gt;Step 5: Bitbucket Pipeline YAML File&lt;/h3&gt;

&lt;p&gt;Create a Bitbucket pipeline YAML file in the root of your project. This file will define the steps for testing and building your Expo project.&lt;/p&gt;

&lt;p&gt;Below is a sample Bitbucket pipeline file that builds an Expo application with the production profile:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;image: node:18.14.0

pipelines:
  branches:
    main:
      - step:
          name: Build and Deploy to Production
          deployment: production
          script:
            - npm install -g @nrwl/cli
            - npm install -g nx
            - npm install -g expo-cli
            - npm install
            - npx nx affected:build --base=main --platform all --non-interactive --no-wait
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h3&gt;Step 6: Invite Users to Organization&lt;/h3&gt;

&lt;p&gt;Invite all relevant users, including developers, to the Expo organization with read access. This ensures that they can join the organization and contribute to the project seamlessly.&lt;/p&gt;

&lt;h2&gt;Developer Onboarding:&lt;/h2&gt;

&lt;h3&gt;Step 1: Sign Up to Expo&lt;/h3&gt;

&lt;p&gt;For developers, the process begins with signing up for an Expo account. This account will be linked to the organization created by the admin.&lt;/p&gt;

&lt;h3&gt;Step 2: Join the Organization&lt;/h3&gt;

&lt;p&gt;Upon signing up, developers will receive an email invitation to join the organization on Expo. Click on the provided link to join the organization and gain access to the project.&lt;/p&gt;

&lt;p&gt;By following these steps, both administrators and developers can collaborate efficiently using Expo and Bitbucket, establishing a robust build pipeline for Expo projects.&lt;/p&gt;

</description>
      <category>expo</category>
      <category>bitbucket</category>
      <category>cicd</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Event-Driven Microservice – Software Architecture Pattern</title>
      <dc:creator>Karthik</dc:creator>
      <pubDate>Thu, 26 Oct 2023 17:14:56 +0000</pubDate>
      <link>https://dev.to/gkrthk/event-driven-microservice-software-architecture-pattern-8d4</link>
      <guid>https://dev.to/gkrthk/event-driven-microservice-software-architecture-pattern-8d4</guid>
      <description>&lt;p&gt;In this post I will go over Event-Driven Microservice Architecture, its benefits and a Case Study implementation of an E-Commerce application. Most modern applications are run as micro services as it provides scalability and decoupled structure. One of the core principles of a micro service architecture is Database per Service Pattern where each database will be accessed by only one service, which provides a way of atomicity.&lt;/p&gt;

&lt;h2&gt;Difficulties in the Microservices Pattern:&lt;/h2&gt;

&lt;p&gt;One major difficulty with microservices is that, in real-world applications, a single request often cannot be served by a single microservice, because it may need data from multiple databases. The immediate option is inter-process communication between the microservices, but this has some consequences.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lihN8Hk_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/347zyoenujtqehtrrph5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lihN8Hk_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/347zyoenujtqehtrrph5.png" alt="representation of simple microservice interaction" width="429" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Pitfalls in Inter-Process Communication:&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Module A blocks while waiting for the response.&lt;/li&gt;
&lt;li&gt;The dependent module can be down.&lt;/li&gt;
&lt;li&gt;Network delays can decrease performance.&lt;/li&gt;
&lt;li&gt;Large result sets require pagination, which means more REST calls between processes.&lt;/li&gt;
&lt;li&gt;Any module can be under heavy load and introduce latency.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Event-Driven Approach:&lt;/h2&gt;

&lt;p&gt;To overcome the above shortcomings, we can use an event-driven approach. An event-driven microservice architecture uses events to trigger and communicate between services. An event is a state change or an action to which other services react as the business process requires. This takes us to the reactive programming model.&lt;/p&gt;

&lt;p&gt;I will run through a real-world example of an event-driven architecture implementation for a fairly simple e-commerce application. The workflow is as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The customer orders a product through the website.&lt;/li&gt;
&lt;li&gt;A temporary order is created with a transaction attached to it.&lt;/li&gt;
&lt;li&gt;The payment is confirmed.&lt;/li&gt;
&lt;li&gt;The order is created.&lt;/li&gt;
&lt;li&gt;The order is dispatched from the warehouse.&lt;/li&gt;
&lt;li&gt;The site is updated with the product quantity (in real time).&lt;/li&gt;
&lt;li&gt;The order is shipped.&lt;/li&gt;
&lt;li&gt;The order is delivered.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lGej5kc5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d2zpzvelunxjdx00p7cb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lGej5kc5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d2zpzvelunxjdx00p7cb.png" alt="Service Architecture of an eCommerce website" width="800" height="634"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each microservice performs a local transaction and publishes an event, which dependent microservices consume and act upon accordingly. The email service consumes every event from the order channel, so each status change in the order has a corresponding email notification. There is no direct inter-process communication; all communication happens through events.&lt;/p&gt;

&lt;p&gt;With this approach we can even show real-time data changes to users. The warehouse service sends an event after the order is dispatched; the UI consumes this event and adjusts the product stock based on the data it carries.&lt;/p&gt;

&lt;h2&gt;Fat Events:&lt;/h2&gt;

&lt;p&gt;Using fat events makes this architecture purely asynchronous. Fat events are events whose messages, as posted to the queue, contain the details of the items processed or a snapshot of the latest data. This ensures the consumer does not need an additional API call to fetch the latest data from the database.&lt;/p&gt;
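As a concrete illustration, here is what a fat event for the order-dispatch step might look like. All field names and values are hypothetical, not from a real schema:

```javascript
// A "fat" order-dispatched event: the message carries a snapshot of the
// data consumers need, so no call back to the producer is required.
// All field names and values here are illustrative.
const orderDispatchedEvent = {
  eventId: "evt-0001",            // unique key (also usable for idempotency checks)
  type: "ORDER_DISPATCHED",
  occurredAt: "2024-01-01T10:15:00Z",
  payload: {
    orderId: "ord-1001",
    warehouseId: "wh-07",
    items: [{ sku: "SKU-42", quantityShipped: 2, stockRemaining: 57 }],
  },
};

// A consumer such as the UI stock service can update its view directly
// from the snapshot, with no extra API call:
const stockUpdates = orderDispatchedEvent.payload.items.map((i) => ({
  sku: i.sku,
  stock: i.stockRemaining,
}));
```

The trade-off is larger messages on the queue in exchange for fully self-contained consumers.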

&lt;h2&gt;Idempotency:&lt;/h2&gt;

&lt;p&gt;No technology is infallible, and this goes for the message queue as well. There may be occasions where the message queue runs into issues and redelivers the same message multiple times. So it is very important for all consuming services to check events for idempotency before processing them, to avoid duplicates.&lt;/p&gt;

&lt;p&gt;One quick solution is to add a unique key to each event message and have every consumer service check that key before processing.&lt;/p&gt;
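A minimal sketch of that idempotency check, using an in-memory set of processed event IDs. A real service would persist these in its own database, ideally in the same transaction as the state change; the names here are illustrative:

```javascript
// Idempotent consumer sketch: remember which eventIds were already
// processed and skip redelivered duplicates. In-memory for illustration.
const processedEventIds = new Set();
let stock = 60;

function handleOrderDispatched(event) {
  if (processedEventIds.has(event.eventId)) {
    return false; // duplicate delivery from the queue: ignore
  }
  processedEventIds.add(event.eventId);
  stock -= event.quantityShipped; // the actual business effect
  return true;
}

const event = { eventId: "evt-0001", quantityShipped: 2 };
handleOrderDispatched(event); // first delivery: stock becomes 58
handleOrderDispatched(event); // queue resends the same message: no effect
```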

&lt;h2&gt;Outbox Pattern:&lt;/h2&gt;

&lt;p&gt;Another failure scenario in this architecture is messages getting lost due to queue downtime; such losses are irreversible. To mitigate this, we can follow the outbox pattern, where every message is first stored in the database and then picked up by an event relay service and sent to consumers. This way, even if the queue is down, the messages are still recorded and can be replayed to the queue once it is back up.&lt;/p&gt;
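A sketch of the outbox pattern with in-memory stand-ins for the database and the queue. The names are illustrative; in practice the order row and the outbox row are written in one real database transaction:

```javascript
// Outbox pattern sketch: the event is stored alongside the order, and a
// relay publishes outbox entries whenever the queue is reachable, so a
// queue outage never loses a message.
const db = { orders: [], outbox: [] };
const queue = {
  up: true,
  messages: [],
  publish(message) {
    if (!this.up) throw new Error("queue down");
    this.messages.push(message);
  },
};

function placeOrder(order) {
  // conceptually one transaction: both writes succeed or neither does
  db.orders.push(order);
  db.outbox.push({ sent: false, event: { type: "ORDER_CREATED", order } });
}

function relayOutbox() {
  for (const entry of db.outbox) {
    if (entry.sent) continue;
    try {
      queue.publish(entry.event);
      entry.sent = true;
    } catch {
      // queue down: keep the entry and retry on the next run
    }
  }
}

queue.up = false;
placeOrder({ id: "ord-1001" });
relayOutbox(); // queue is down: nothing published, nothing lost
queue.up = true;
relayOutbox(); // queue is back: the recorded event is replayed
```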

&lt;h2&gt;Conclusion:&lt;/h2&gt;

&lt;p&gt;In a microservice architecture we have to keep coupling low, which means focusing on the connections between modules. We also want scalability and flexibility on demand, and this can be achieved with the event-driven approach. We can further combine this architecture with the power of serverless computing.&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>architecture</category>
      <category>serverless</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Generate Pre-Signed URL for S3 with AWS Lambda</title>
      <dc:creator>Karthik</dc:creator>
      <pubDate>Tue, 24 Oct 2023 17:33:39 +0000</pubDate>
      <link>https://dev.to/gkrthk/generate-pre-signed-url-for-s3-with-aws-lambda-1oin</link>
      <guid>https://dev.to/gkrthk/generate-pre-signed-url-for-s3-with-aws-lambda-1oin</guid>
      <description>&lt;h2&gt;Step 1: Create an AWS Lambda function&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Go to the &lt;a href="https://console.aws.amazon.com/lambda/"&gt;AWS Lambda console&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create function&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Choose &lt;strong&gt;Author from scratch&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Configure your function:
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Function Name:&lt;/strong&gt; Choose a name for your Lambda function.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Runtime:&lt;/strong&gt; Select a runtime; we will use Node.js (the latest version) in this tutorial.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Execution Role:&lt;/strong&gt; Create a new role for the Lambda service with permissions to access S3. The role should have an IAM policy allowing &lt;code&gt;PutObject&lt;/code&gt; and &lt;code&gt;GetObject&lt;/code&gt; access to the S3 bucket. For simplicity we will choose AmazonS3FullAccess; do not use this in a production environment.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Advanced settings&lt;/strong&gt;, select &lt;strong&gt;Enable function URL&lt;/strong&gt; and set &lt;strong&gt;Auth type&lt;/strong&gt; to &lt;strong&gt;NONE&lt;/strong&gt;. Leave the rest as default.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create function&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Step 2: Write the Lambda function code&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="cm"&gt;/* We will pass the file name in the request to the lambda 
 * function and that will be used to create the object key. If no 
 * file name is provided we will fallback to a hardcoded filename
 * for simplicity. You can throw a validation error if no filename 
 *is passed in request as query param
*/&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;PutObjectCommand&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;S3Client&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aws-sdk/client-s3&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;getSignedUrl&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aws-sdk/s3-request-presigner&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;your-region&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;your-bucket&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;queryStringParameters&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;queryStringParameters&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;test&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;queryStringParameters&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="nx"&gt;key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;queryStringParameters&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;createPresignedUrlWithClient&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="nx"&gt;region&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nx"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;createPresignedUrlWithClient&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;region&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;key&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;S3Client&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;PutObjectCommand&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;Bucket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;Key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;key&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;getSignedUrl&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;expiresIn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3600&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Step 3: Create an S3 bucket&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Go to the &lt;a href="https://console.aws.amazon.com/s3/"&gt;AWS S3 console&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create bucket&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Enter a name for your bucket.&lt;/li&gt;
&lt;li&gt;Select a region for your bucket.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Step 4: Update the CORS policy for the S3 bucket&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Go to the &lt;strong&gt;Properties&lt;/strong&gt; tab for your S3 bucket.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Edit CORS configuration&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Add the following CORS policy and click &lt;strong&gt;Save&lt;/strong&gt;.
Again, for simplicity we allow all origins; in production this must be revisited to allow only the required origins.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"AllowedHeaders"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"AllowedMethods"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="s2"&gt;"PUT"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"AllowedOrigins"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"ExposeHeaders"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Step 5: Test the Lambda function&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Send a GET request to the Function URL from Postman or cURL, with the filename as a query parameter.&lt;/li&gt;
&lt;li&gt;The response will be a presigned URL that you can use to upload a file to the S3 bucket.&lt;/li&gt;
&lt;li&gt;Upload a file using that URL with a PUT request in Postman or cURL.&lt;/li&gt;
&lt;/ol&gt;
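The same test flow can also be scripted. The sketch below (Node 18+, where fetch is built in) shows the two-step client flow against a hypothetical Function URL; the stubbed fetch stands in for the real network calls so the shape of the flow is clear without an AWS account:

```javascript
// Client-side flow: GET a presigned URL from the Function URL, then PUT
// the file body straight to S3 using that URL. FUNCTION_URL is a
// placeholder, not a real endpoint.
const FUNCTION_URL = "https://your-function-id.lambda-url.your-region.on.aws";

async function uploadViaPresignedUrl(functionUrl, filename, body, fetchImpl) {
  // Step 1: ask the Lambda for a presigned URL for this filename
  const res = await fetchImpl(functionUrl + "?filename=" + encodeURIComponent(filename));
  const presignedUrl = await res.text();
  // Step 2: upload the file body directly to S3 with a PUT request
  const put = await fetchImpl(presignedUrl, { method: "PUT", body });
  return put.status;
}

// Stubbed fetch for illustration: records each call and returns a
// canned presigned-URL response instead of hitting the network.
const calls = [];
const stubbedFetch = async (url, opts) => {
  calls.push({ url, method: opts?.method || "GET" });
  return {
    status: 200,
    text: async () => "https://your-bucket.s3.amazonaws.com/report.pdf?X-Amz-Signature=stub",
  };
};
```

Against a real deployment you would pass the built-in fetch instead of stubbedFetch and use the returned status to confirm the upload.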

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;In this blog post we showed how to create a Lambda function that generates a presigned URL for S3 uploads, how to create an S3 bucket, and how to update the bucket's CORS policy.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>s3</category>
      <category>lambda</category>
      <category>node</category>
    </item>
  </channel>
</rss>
