<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Bhavin Babariya</title>
    <description>The latest articles on DEV Community by Bhavin Babariya (@bhavin03).</description>
    <link>https://dev.to/bhavin03</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1402784%2F6d13c1fe-a5c3-45a3-9f54-424bfdbcf4b4.png</url>
      <title>DEV Community: Bhavin Babariya</title>
      <link>https://dev.to/bhavin03</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bhavin03"/>
    <language>en</language>
    <item>
      <title>Event-Driven Architecture: Overview and Comparison of AWS Messaging Services</title>
      <dc:creator>Bhavin Babariya</dc:creator>
      <pubDate>Wed, 03 Jul 2024 10:39:19 +0000</pubDate>
      <link>https://dev.to/distinction-dev/event-driven-architecture-overview-and-comparison-of-aws-messaging-service-18lb</link>
      <guid>https://dev.to/distinction-dev/event-driven-architecture-overview-and-comparison-of-aws-messaging-service-18lb</guid>
      <description>&lt;h3&gt;
  
  
  In this Article
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Overview of Event Driven Architecture&lt;/li&gt;
&lt;li&gt;Event Driven Architecture Common Model&lt;/li&gt;
&lt;li&gt;AWS messaging services (use case, model, throughput, pricing)

&lt;ul&gt;
&lt;li&gt;SQS&lt;/li&gt;
&lt;li&gt;SNS

&lt;ul&gt;
&lt;li&gt;Combined Use Case: SNS and SQS Integration&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;EventBridge&lt;/li&gt;

&lt;li&gt;Kinesis Data Streams&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;Kafka Overview&lt;/li&gt;

&lt;li&gt;Very important: when to choose which service&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Overview of Event Driven Architecture ✨
&lt;/h2&gt;

&lt;p&gt;Event-Driven Architecture (EDA) is a design paradigm where systems communicate and respond to events in real-time.&lt;/p&gt;

&lt;p&gt;This architecture promotes loose coupling, scalability, and flexibility, as components are only connected through the events they produce and consume.&lt;/p&gt;

&lt;p&gt;Event-Driven Architecture is widely used in systems requiring high responsiveness and real-time processing, such as financial trading platforms, IoT networks, and customer service applications.&lt;/p&gt;

&lt;p&gt;In Event-Driven Architecture (EDA), the main components are the Producer, the Event Broker, and the Consumer:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Producer&lt;/strong&gt;: This is the source that generates and emits events. Producers can be anything from applications, services, or devices that detect a change in state or trigger an event.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event Broker&lt;/strong&gt;: This intermediary handles the transmission of events from producers to consumers. It ensures decoupling by managing the distribution and routing of events, often providing features like event filtering, persistence, and scalability. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consumer&lt;/strong&gt;: Listens for and processes events received from the event broker. Consumers act on the event data, performing tasks such as updating systems, triggering workflows, or generating responses.&lt;/li&gt;
&lt;/ol&gt;
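&lt;p&gt;A minimal in-memory sketch (plain Python, no AWS involved) of how these three roles interact; the event names and handlers are illustrative assumptions:&lt;/p&gt;

```python
# Minimal sketch of the three EDA roles: producers emit events, the broker
# routes them, and consumers react. Event names are made up for illustration.
from collections import defaultdict
from typing import Callable


class EventBroker:
    """Decouples producers from consumers by routing events by type."""

    def __init__(self) -> None:
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type: str, consumer: Callable[[dict], None]) -> None:
        self._subscribers[event_type].append(consumer)

    def publish(self, event_type: str, payload: dict) -> None:
        # Every consumer subscribed to this event type receives the event.
        for consumer in self._subscribers[event_type]:
            consumer(payload)


received = []
broker = EventBroker()
broker.subscribe("order.placed", lambda e: received.append(("inventory", e)))
broker.subscribe("order.placed", lambda e: received.append(("billing", e)))
broker.publish("order.placed", {"order_id": 42})  # the producer side
```

&lt;p&gt;The producer only knows the event type it publishes, and each consumer only knows the event types it subscribes to; neither references the other directly.&lt;/p&gt;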

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Event Driven Architecture Common Model
&lt;/h2&gt;

&lt;p&gt;There are many models in EDA, but the Point-to-Point and Pub/Sub models are the most commonly used.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Point to Point Model
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3y1qcf8jmmerhtkxe04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3y1qcf8jmmerhtkxe04.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Point-to-Point model ensures reliable and direct communication between a single producer and a single consumer, enhancing transactional processing and delivery guarantees.&lt;/p&gt;

&lt;p&gt;In this model, a producer sends messages to a specific queue. A consumer retrieves and processes these messages from the queue. The message broker manages the queue and ensures each message is delivered to only one consumer.&lt;/p&gt;

&lt;p&gt;This model is particularly helpful for scenarios where each message needs to be processed by a single recipient, ensuring reliable message delivery and simplifying message routing.&lt;/p&gt;

&lt;p&gt;Example: Amazon SQS&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Pub/Sub Model in EDA
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35txezjem0h5y1pz6go1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35txezjem0h5y1pz6go1.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Pub/Sub model is used in EDA to decouple producers and consumers, enhancing scalability and flexibility. It allows efficient, real-time communication by distributing messages through topics managed by an event broker.&lt;/p&gt;

&lt;p&gt;In the Pub/Sub (Publish/Subscribe) model, a publisher sends messages to a specific topic. Consumers subscribe to this topic to receive and process the messages. The event broker manages the topics and ensures that messages from publishers are delivered to all subscribed consumers. &lt;/p&gt;

&lt;p&gt;In this way, the Pub/Sub model makes systems flexible and scalable by decoupling producers and consumers, allowing efficient real-time communication through managed topics.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h1&gt;
  
  
  🚀 &lt;strong&gt;AWS messaging services&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;AWS services useful in EDA include Amazon SQS, Amazon SNS, Amazon EventBridge, Amazon Kinesis, Amazon MSK (managed Kafka), AWS Lambda, Amazon MQ (for Apache ActiveMQ and RabbitMQ), and many more.&lt;/p&gt;




&lt;h2&gt;
  
  
  Simple Queue Service (SQS)
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Amazon SQS is a queuing service that enables communication between applications, microservices, and distributed systems.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Use Cases of SQS ✴️
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Process asynchronous tasks&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Queues enable the processing of asynchronous tasks effectively. By using a queue, we can poll messages at any time, allowing for flexible task management and execution.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Decouple microservices&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;When two services communicate via a queue, they are decoupled, eliminating direct dependencies. This allows each service to operate independently, enhancing system scalability and resilience.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Batch processing&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;SQS supports batch operations, so we can process queue messages in batches and optimise resource utilization.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Job scheduling&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;If we add messages to a queue throughout the day and want to process them all at one time, we can schedule an event and, using the polling mechanism, batch-process all the data in the queue.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
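&lt;p&gt;The use cases above boil down to producing to and polling a queue. A hedged boto3 sketch; the queue URL is a placeholder assumption, and the call into AWS is left uninvoked:&lt;/p&gt;

```python
# Hedged boto3 sketch of SQS producing and polling; the queue URL below is a
# placeholder assumption, not a real resource.
import json


def build_batch_entries(messages):
    """SQS batch APIs accept up to 10 entries; each needs a unique Id."""
    return [
        {"Id": str(i), "MessageBody": json.dumps(m)}
        for i, m in enumerate(messages)
    ]


def main():
    # Requires AWS credentials; call main() yourself to run against real AWS.
    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # placeholder

    # Producer side: send tasks in a batch to optimise request usage.
    sqs.send_message_batch(
        QueueUrl=queue_url,
        Entries=build_batch_entries([{"task": "resize"}, {"task": "email"}]),
    )

    # Consumer side (pull model): long polling reduces empty receives.
    resp = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        # ... process the message, then delete it so it is not redelivered
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```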

&lt;h4&gt;
  
  
  Model  / Mechanism
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Works on a pull mechanism.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Consumers
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Each message is delivered to only one consumer.&lt;/li&gt;
&lt;li&gt;Messages can be consumed via the AWS SDK or AWS Lambda.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Supports Ordering Mechanism
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Yes, FIFO queues process items in order.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Conditional Message Filtering
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;SQS doesn’t support a conditional filtering mechanism for messages.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Encryption
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Supports message encryption using KMS.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Throughput
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;SQS standard queues have unlimited throughput.&lt;/li&gt;
&lt;li&gt;SQS FIFO queues support 3,000 messages per second with batching, or 300 messages per second without batching.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Dead Letter Queue
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Yes, it supports dead-letter queues.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Pricing 🤑
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;$0.40 per 1 million requests (standard queue)&lt;/li&gt;
&lt;li&gt;$0.50 per 1 million requests (FIFO queue)&lt;/li&gt;
&lt;li&gt;Data Outbound Charge

&lt;ul&gt;
&lt;li&gt;$0.09 per GB&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Simple Notification Service (SNS)&lt;/strong&gt;
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;SNS (Simple Notification Service) is a fully managed pub/sub service offered by AWS. With SNS, users can create multiple topics and subscribers, and each topic can be connected to multiple subscribers.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Use Cases of SNS ✴️
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fan Out System&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Distribute a single message to multiple recipients efficiently.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Mobile Push Notifications&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Send real-time updates to mobile devices across various platforms.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;System Monitoring Alerts&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Trigger alerts from monitoring tools based on specific events or thresholds.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Trigger Different Workflows&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Initiate diverse workflows by sending messages to various endpoints based on events.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  Model  / Mechanism
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Works on a push mechanism.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Consumers
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;It supports multiple consumers per message.&lt;/li&gt;
&lt;li&gt;It supports Kinesis Data Firehose, Lambda, SQS, email, HTTP/S, application notifications, and SMS as consumers.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Supports Ordering Mechanism
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Yes, FIFO topics process items in order.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Conditional Message Filtering
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;SNS supports a conditional filtering mechanism for messages via subscription filter policies.&lt;/li&gt;
&lt;/ul&gt;
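&lt;p&gt;As a hedged sketch of how filtering works in practice (the topic ARN and attribute names are placeholder assumptions): the publisher attaches message attributes, and each subscription&#x2019;s filter policy decides whether that subscriber receives the message.&lt;/p&gt;

```python
# Sketch of SNS message filtering. The topic ARN and attribute names are
# placeholder assumptions; a real deployment would use its own.
import json


def build_message_attributes(attrs):
    """Convert a plain dict into the SNS MessageAttributes wire format."""
    return {
        key: {"DataType": "String", "StringValue": value}
        for key, value in attrs.items()
    }


def publish_order_event(order_type):
    # Requires AWS credentials; not executed here.
    import boto3

    sns = boto3.client("sns")
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:orders",  # placeholder
        Message=json.dumps({"type": order_type}),
        MessageAttributes=build_message_attributes({"order_type": order_type}),
    )


# A subscription whose filter policy is {"order_type": ["refund"]} would then
# receive only messages published with order_type == "refund".
```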

&lt;h4&gt;
  
  
  Encryption
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Supports message encryption using KMS.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Dead Letter Queue
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Yes, it supports dead-letter queues.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Throughput
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;SNS standard topics have nearly unlimited throughput.&lt;/li&gt;
&lt;li&gt;SNS FIFO topics support 300 messages per second or 10 MB per second per topic.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Pricing 🤑
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Standard Topic

&lt;ul&gt;
&lt;li&gt;$0.50 per 1 million (mobile push notifications)&lt;/li&gt;
&lt;li&gt;$0.60 per 1 million (HTTP/S requests)&lt;/li&gt;
&lt;li&gt;$2.00 per 100,000 notifications (email)&lt;/li&gt;
&lt;li&gt;No charge for deliveries to SQS and Lambda&lt;/li&gt;
&lt;li&gt;$0.19 per 1 million notifications (Amazon Kinesis Data Firehose)&lt;/li&gt;
&lt;li&gt;Data Outbound Charge

&lt;ul&gt;
&lt;li&gt;$0.09 per GB&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;FIFO Topic

&lt;ul&gt;
&lt;li&gt;Publish and PublishBatch API requests are $0.30 per 1 million, plus $0.017 per GB of payload data.&lt;/li&gt;
&lt;li&gt;Subscription messages are $0.01 per 1 million, plus $0.001 per GB of payload data.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Combined Use Case: SNS and SQS Integration 🤝
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Fan-out with SQS:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Use Case:&lt;/strong&gt; Distributing a message to multiple queues for parallel processing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Example:&lt;/strong&gt; An e-commerce platform needs to update inventory, process billing, and send a confirmation email when an order is placed. The order service publishes a message to an SNS topic, which then fans out to multiple SQS queues. Each queue is processed by different services responsible for inventory, billing, and email notifications, ensuring that the tasks are handled independently and concurrently.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
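&lt;p&gt;The fan-out wiring described above can be sketched with boto3 as follows; all ARNs are placeholder assumptions:&lt;/p&gt;

```python
# Hedged sketch of the fan-out wiring: one SNS topic, several SQS queues.
# All ARNs are placeholder assumptions.


def fan_out_subscriptions(topic_arn, queue_arns):
    """Describe the subscribe calls that fan one topic out to many queues."""
    return [
        {"TopicArn": topic_arn, "Protocol": "sqs", "Endpoint": arn}
        for arn in queue_arns
    ]


def wire_up():
    # Requires AWS credentials; not executed here.
    import boto3

    sns = boto3.client("sns")
    queues = [
        "arn:aws:sqs:us-east-1:123456789012:inventory",
        "arn:aws:sqs:us-east-1:123456789012:billing",
        "arn:aws:sqs:us-east-1:123456789012:email",
    ]
    for sub in fan_out_subscriptions(
        "arn:aws:sns:us-east-1:123456789012:orders", queues
    ):
        # RawMessageDelivery strips the SNS envelope so consumers see the raw body.
        sns.subscribe(**sub, Attributes={"RawMessageDelivery": "true"})
```

&lt;p&gt;Each queue also needs an access policy allowing the topic to send to it, omitted here for brevity.&lt;/p&gt;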

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhax2c6dwwaa5mt8520y9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhax2c6dwwaa5mt8520y9.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  EventBridge
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;AWS EventBridge is a service that connects multiple AWS services based on events, facilitating event-driven architecture management. With EventBridge, you can send custom events from SaaS applications to an event bus, schedule tasks, and monitor AWS services. This enables seamless integration, automation, and real-time monitoring within your cloud environment.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Use Cases of EventBridge ✴️
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Building Serverless Event-Driven Architecture:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;AWS EventBridge allows the setup of a full event-driven infrastructure when using AWS services as both producers and consumers. AWS service events can act as sources, while AWS services can also be targets for event processing.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;SaaS Integration with AWS Services:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;EventBridge supports custom events, enabling seamless SaaS integration. You can send custom events to an EventBridge event bus, facilitating communication between SaaS applications and AWS services.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Real-Time Monitoring and Alerting:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;EventBridge can monitor actions or events in real-time across various services. Based on these events, you can generate alerts or create CloudWatch logs, enhancing your system's observability and responsiveness.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Scheduling Tasks:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;EventBridge allows the scheduling of tasks using cron jobs or rate expressions. This enables you to automate the invocation of AWS services at specified times or intervals, ensuring timely execution of routine tasks.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  Model  / Mechanism
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Works on an event bus model.&lt;/li&gt;
&lt;li&gt;Uses a push mechanism to invoke targets.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Consumers
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;It supports multiple consumers (targets) per rule.&lt;/li&gt;
&lt;li&gt;We can set many AWS services as targets, and also HTTP/S endpoints if we want to call an external API.

&lt;ul&gt;
&lt;li&gt;e.g., invoke a Step Functions state machine or start a Glue workflow.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  Supports Ordering Mechanism
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;No, there are no ordering guarantees.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Conditional Message Filtering
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;EventBridge supports event filtering and transformation.

&lt;ul&gt;
&lt;li&gt;We can define a schema in the schema registry and filter messages based on it.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;We can use EventBridge Pipes to filter and transform data.&lt;/li&gt;

&lt;/ul&gt;
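&lt;p&gt;A hedged sketch of publishing a custom event and the kind of event pattern a rule could use to filter it; the bus, source, and detail-type names are assumptions:&lt;/p&gt;

```python
# Sketch of a custom EventBridge event plus the rule pattern that would match
# it. The source and detail-type names are placeholder assumptions.
import json


def build_entry(source, detail_type, detail, bus="default"):
    """Shape one PutEvents entry; Detail must be a JSON string."""
    return {
        "Source": source,
        "DetailType": detail_type,
        "Detail": json.dumps(detail),
        "EventBusName": bus,
    }


# A rule configured with this event pattern would match the entry built below
# and forward it to the rule's targets.
ORDER_PATTERN = {"source": ["my.saas.app"], "detail-type": ["OrderPlaced"]}


def send_order_event():
    # Requires AWS credentials; not executed here.
    import boto3

    events = boto3.client("events")
    events.put_events(
        Entries=[build_entry("my.saas.app", "OrderPlaced", {"order_id": 42})]
    )
```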

&lt;h4&gt;
  
  
  Encryption
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Doesn’t support message encryption using KMS.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Archive and Event Replay
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;We can archive events and replay them later when needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Throughput
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;EventBridge has nearly unlimited throughput for AWS service events.&lt;/li&gt;
&lt;li&gt;In all regions, PutPartnerEvents (used by SaaS providers to write events to an event bus) has a default soft limit of 1,400 requests per second, with bursts up to 3,600 requests per second.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Pricing 🤑
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;EventBus

&lt;ul&gt;
&lt;li&gt;Free for AWS service events&lt;/li&gt;
&lt;li&gt;$1.00 per 1 million custom, SaaS, or cross-account events.

&lt;ul&gt;
&lt;li&gt;Each 64 KB chunk is billed as one event (a 150 KB payload counts as 3 events).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;EventPipe

&lt;ul&gt;
&lt;li&gt;$0.40 per 1 million requests (event count after filter)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Event Replay

&lt;ul&gt;
&lt;li&gt;$0.023 per GB for event storage.&lt;/li&gt;
&lt;li&gt;$0.10 per GB for archive processing.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Schema Registry

&lt;ul&gt;
&lt;li&gt;Using the schema registry for AWS schemas and creating custom schemas is free.&lt;/li&gt;
&lt;li&gt;$0.10 per 1 million events (schema discovery only)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Kinesis Data Streams
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Amazon Kinesis Data Streams (KDS) is used for real-time processing of streaming data at massive scale. When we need real-time data processing &amp;amp; analytics at scale, Kinesis is useful.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Use cases of Kinesis ✴️
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Real-Time Analytics:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;An e-commerce platform can use Kinesis Data Streams to capture and analyse clickstream data to understand user behaviour and personalise recommendations in real-time.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Log and Event Data Collection:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;We can use KDS to ingest and monitor application logs and system events to detect anomalies and react quickly to potential issues.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;IoT Data Ingestion:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Manufacturing companies can stream sensor data from IoT devices to monitor equipment health, predict maintenance needs, and optimise operations.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Financial Market Data Processing:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Financial services can use KDS to process market data in real-time to detect trading opportunities and risks.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
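&lt;p&gt;A hedged boto3 sketch of the clickstream ingestion case above; the stream name is a placeholder assumption. The partition key determines which shard a record lands on, so records sharing a key retain their order:&lt;/p&gt;

```python
# Sketch of writing clickstream records to a Kinesis data stream. The stream
# name is a placeholder assumption.
import json


def build_record(event, partition_key):
    """Shape one PutRecords entry; Data must be bytes."""
    return {
        "Data": json.dumps(event).encode("utf-8"),
        "PartitionKey": partition_key,
    }


def send_click(event, user_id):
    # Requires AWS credentials; not executed here.
    import boto3

    kinesis = boto3.client("kinesis")
    # Keying by user keeps each user's events ordered within one shard.
    kinesis.put_records(
        StreamName="clickstream",  # placeholder
        Records=[build_record(event, partition_key=user_id)],
    )
```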

&lt;h4&gt;
  
  
  Model  / Mechanism
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Works on a data stream model.&lt;/li&gt;
&lt;li&gt;Works on a pull mechanism.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Consumers
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;AWS Lambda, AWS Kinesis DataStream, AWS Kinesis Data Analytics, AWS Kinesis Data Firehose, KCL (Kinesis Client Library)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Archive and Event Replay
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Amazon Kinesis Data Streams' extended retention (configurable up to 365 days) and replay capability enable fault recovery, compliance, and easy reprocessing of records.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Throughput
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;On Demand Mode

&lt;ul&gt;
&lt;li&gt;Read capacity: up to 400 MB per second&lt;/li&gt;
&lt;li&gt;Write capacity: up to 200 MB per second and 200,000 records per second&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Provisioned Mode (per shard)

&lt;ul&gt;
&lt;li&gt;Read capacity: maximum 2 MB per second&lt;/li&gt;
&lt;li&gt;Write capacity: 1 MB per second and 1,000 records per second&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  Drawbacks
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Shards need to be managed manually in provisioned mode.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Pricing 🤑
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;$0.015 per shard-hour (provisioned mode)&lt;/li&gt;
&lt;li&gt;$0.04 per stream-hour (on-demand mode)&lt;/li&gt;
&lt;li&gt;$0.014 per 1 million PUT payload units&lt;/li&gt;
&lt;li&gt;Note: there are other charges as well, such as data retention and enhanced fan-out.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Apache Kafka (Amazon MSK)
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Apache Kafka is a distributed, fault-tolerant, reliable, and durable streaming platform used for real-time data pipelines. Initially developed by LinkedIn, it later became open-source.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Kafka boasts high throughput and low latency, making it especially useful when data consistency and availability are crucial. It efficiently handles large volumes of data, enabling organizations to build robust, real-time data processing and analytics systems. Kafka's architecture supports horizontal scalability, ensuring that it can grow with the needs of the application.&lt;/p&gt;


&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ylpto8jeqayamodgumw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ylpto8jeqayamodgumw.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Cases of Apache Kafka ✴️
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Real-time analytics and monitoring&lt;/li&gt;
&lt;li&gt;Event sourcing and event-driven architectures&lt;/li&gt;
&lt;li&gt;Log aggregation and processing&lt;/li&gt;
&lt;li&gt;Stream processing and transformation&lt;/li&gt;
&lt;li&gt;Data integration and ETL (Extract, Transform, Load) pipelines&lt;/li&gt;
&lt;/ul&gt;
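&lt;p&gt;A hedged producer sketch using the kafka-python client (one of several standard Kafka clients that work with MSK); the broker address and topic name are placeholder assumptions:&lt;/p&gt;

```python
# Hedged sketch of a Kafka producer using the kafka-python client (an
# assumption; any standard Kafka client works with MSK). Broker address and
# topic name are placeholders.
import json


def serialize(event):
    """Kafka messages are raw bytes, so JSON-encode application events."""
    return json.dumps(event).encode("utf-8")


def produce_click():
    # Requires a reachable Kafka broker; not executed here.
    from kafka import KafkaProducer  # pip install kafka-python

    producer = KafkaProducer(
        bootstrap_servers="b-1.example.kafka.us-east-1.amazonaws.com:9092",  # placeholder
        value_serializer=serialize,
    )
    # Messages sharing a key land on the same partition, preserving their order.
    producer.send("clickstream", key=b"user-1", value={"page": "/cart"})
    producer.flush()
```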

&lt;h3&gt;
  
  
  Model  / Mechanism
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Pub/sub Model&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Consumers
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;AWS Lambda, Kinesis Data Analytics, Kinesis Data Firehose, EMR, and Glue connect to MSK directly, while S3, DynamoDB, and Redshift can be connected via Kafka Connect.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Supports Ordering Mechanism
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Yes, ordering is guaranteed within a partition.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conditional Message Filtering
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;It doesn’t support message filtering at the broker level; filtering must be handled at the consumer level.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Encryption
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;supports message encryption using KMS&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Archive and Event Replay
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Amazon MSK retains all published messages for a configurable retention period.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Throughput
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Kafka provides high throughput and can handle large volumes of streaming data with low latency.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Pricing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Broker Instance charges

&lt;ul&gt;
&lt;li&gt;$0.204 per hour for a kafka.m7g.large instance&lt;/li&gt;
&lt;li&gt;$0.21 per hour for a kafka.m5.large instance&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Storage charge

&lt;ul&gt;
&lt;li&gt;$0.10 per GB-month (US East region)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  📗 Very important: when to choose which service?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Asynchronous job processing&lt;/strong&gt;: Use &lt;strong&gt;Amazon SQS&lt;/strong&gt;. Ideal for decoupling microservices and buffering requests.

&lt;ul&gt;
&lt;li&gt;If the number of events per second is low or medium, SQS is always a recommendable service.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Sending notifications or invoking services&lt;/strong&gt;: Use &lt;strong&gt;Amazon SNS&lt;/strong&gt;. Perfect for sending notifications to multiple recipients or invoking services with pub/sub messaging.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Triggering services based on events&lt;/strong&gt;: Use &lt;strong&gt;Amazon EventBridge&lt;/strong&gt;. Best for integrating AWS services and custom applications through event-driven architectures.

&lt;ul&gt;
&lt;li&gt;EventBridge is highly recommended when SaaS integration with AWS services is a requirement.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Handling high request rates and event-driven data ingestion&lt;/strong&gt;: Use &lt;strong&gt;Amazon Kinesis&lt;/strong&gt;. Suitable for real-time data streaming and analytics with the ability to scale by adding shards.

&lt;ul&gt;
&lt;li&gt;Kinesis is a costly service compared to the others, as its cost depends on the number of active shards.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Live streaming and scalable, low-latency data pipelines&lt;/strong&gt;: Use &lt;strong&gt;Amazon MSK (Managed Streaming for Apache Kafka)&lt;/strong&gt;. Excellent for building scalable, real-time data streaming applications with Apache Kafka.

&lt;ul&gt;
&lt;li&gt;Apache Kafka is highly recommended when millions or billions of requests occur at a time.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  ➕ Additional Considerations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Message Ordering and Deduplication&lt;/strong&gt;: If you require strict message ordering and deduplication, consider using &lt;strong&gt;SQS FIFO Queues, Kinesis&lt;/strong&gt; or &lt;strong&gt;Kafka&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multiple Consumer Support&lt;/strong&gt;: For scenarios where multiple consumers need to process the same stream of data, &lt;strong&gt;SNS&lt;/strong&gt;, &lt;strong&gt;Kinesis&lt;/strong&gt;, or &lt;strong&gt;Kafka&lt;/strong&gt; is preferred.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex Event Processing&lt;/strong&gt;: For applications needing complex event processing and routing, &lt;strong&gt;EventBridge&lt;/strong&gt; provides advanced capabilities for rule-based event handling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Summary Table
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;SQS&lt;/th&gt;
&lt;th&gt;SNS&lt;/th&gt;
&lt;th&gt;EventBridge&lt;/th&gt;
&lt;th&gt;Kinesis&lt;/th&gt;
&lt;th&gt;Kafka (MSK)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Message Filtering&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Order&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Yes (FIFO Queue)&lt;/td&gt;
&lt;td&gt;Yes(FIFO Topic)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Throughput&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Low (FIFO)&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Latency&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Durability&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Integration&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;AWS Services&lt;/td&gt;
&lt;td&gt;AWS Services&lt;/td&gt;
&lt;td&gt;AWS Services&lt;/td&gt;
&lt;td&gt;Custom Applications &amp;amp; AWS Services&lt;/td&gt;
&lt;td&gt;Custom Applications &amp;amp; AWS Lambda&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;SaaS support&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Data Persistence&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Pricing&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

</description>
      <category>aws</category>
      <category>eventdriven</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Move AWS resources from one CloudFormation stack to another</title>
      <dc:creator>Bhavin Babariya</dc:creator>
      <pubDate>Tue, 02 Jul 2024 06:18:00 +0000</pubDate>
      <link>https://dev.to/distinction-dev/move-aws-resources-from-one-stack-to-another-cloudformation-stack-5d1m</link>
      <guid>https://dev.to/distinction-dev/move-aws-resources-from-one-stack-to-another-cloudformation-stack-5d1m</guid>
      <description>&lt;h2&gt;
  
  
  Why do we need this?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS CloudFormation currently limits a stack to 500 resources, and an application can approach this limit as new features are introduced.&lt;/li&gt;
&lt;li&gt;To accommodate this limitation, we must distribute all resources across various stacks.&lt;/li&gt;
&lt;li&gt;Our approach involves isolating Lambda functions into a separate stack, while other resources such as S3 buckets and DynamoDB tables reside in an infra stack.&lt;/li&gt;
&lt;li&gt;This is the reason why we need to import resources from the main stack into the infra stack.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Steps to move resources from one stack to another stack
&lt;/h2&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Apply 'DeletionPolicy: Retain' to all resources of the main stack&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Applying 'DeletionPolicy: Retain' to all resources in the main stack ensures that when these resources are removed during stack updates or deletions, they are retained rather than deleted permanently.&lt;/li&gt;
&lt;li&gt;This is particularly useful for resources that contain valuable data or configurations that need to be preserved even if they are no longer actively used.&lt;/li&gt;
&lt;/ul&gt;
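&lt;p&gt;For example, a minimal retained resource in the main stack template might look like this (the logical name and bucket name below are illustrative, not from the actual stacks):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;
{
  "Resources": {
    "MyBucket": {
      "Type": "AWS::S3::Bucket",
      "DeletionPolicy": "Retain",
      "Properties": {
        "BucketName": "my-example-bucket"
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note that DeletionPolicy is a sibling of Type and Properties, not a property of the resource itself.&lt;/p&gt;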




&lt;p&gt;Consider that you have two CloudFormation stacks (generated by the Serverless Framework): main and destination, and you want to import some resources from main into destination. Here are the steps to move resources from one stack to another without deleting the underlying resources.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Copy AWS resources from the main CloudFormation stack and paste them into the destination CloudFormation stack.&lt;/li&gt;
&lt;li&gt;Remove those resources from the main stack and redeploy it.&lt;/li&gt;
&lt;li&gt;Prepare a file named "resourcesToImport.txt" containing each resource's AWS resource type, logical ID, and resource identifier.&lt;/li&gt;
&lt;li&gt;Run a command to create an IMPORT changeset.&lt;/li&gt;
&lt;li&gt;Execute a command to apply the changeset created in the previous step.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  1. Copy AWS resources from the main CloudFormation stack and paste them into the destination CloudFormation stack.
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Copy the destination stack's CloudFormation template into a file (templateToImport.json)&lt;/li&gt;
&lt;li&gt;Copy the CloudFormation code of the main stack resources you want to import and append it to the destination stack template (templateToImport.json)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Remove resources from the main stack and deploy the main stack.
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Now, remove from the main stack all the resources that we added to the destination stack in step 1.&lt;/li&gt;
&lt;li&gt;Redeploy the main stack.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;At this point the resources no longer belong to any stack, but they have not been deleted, because their DeletionPolicy is set to Retain.&lt;/p&gt;
&lt;/blockquote&gt;
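&lt;p&gt;As an optional sanity check, you can confirm with the AWS CLI that a retained resource still exists and is no longer tracked by the main stack (placeholders as elsewhere in this article):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
# exits with code 0 if the retained bucket still exists
aws s3api head-bucket --bucket &amp;lt;ACTUAL_NAME_OF_BUCKET&amp;gt; --profile &amp;lt;AWS_PROFILE&amp;gt;

# the removed resources should no longer be listed here
aws cloudformation describe-stack-resources --stack-name &amp;lt;MAIN_STACK_NAME&amp;gt; --profile &amp;lt;AWS_PROFILE&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;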

&lt;h3&gt;
  
  
  3. Prepare another file named "resourcesToImport.txt" containing the AWS resource type, logical ID, and resource identifier.
&lt;/h3&gt;

&lt;p&gt;Now, create a file named ‘resourcesToImport.txt’ and add the ResourceType, LogicalResourceId, and ResourceIdentifier of each resource we want to import.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ResourceType is the CloudFormation resource type (for example, AWS::S3::Bucket)&lt;/li&gt;
&lt;li&gt;LogicalResourceId is the logical name of the resource in the template&lt;/li&gt;
&lt;li&gt;ResourceIdentifier contains the actual identifier of the resource

&lt;ul&gt;
&lt;li&gt;If the resource is an S3 bucket, the value will be { "BucketName": "ACTUAL_BUCKET_NAME" }&lt;/li&gt;
&lt;li&gt;If the resource is a DynamoDB table, the value will be { "TableName": "ACTUAL_DYNAMODB_TABLE_NAME" }&lt;/li&gt;
&lt;li&gt;If the resource is a REST API, the value will be { "RestApiId": "REST_API_ID" }&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Example file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ResourceType"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AWS::S3::Bucket"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"LogicalResourceId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;LOGICAL_NAME_OF_BUCKET&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ResourceIdentifier"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"BucketName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;ACTUAL_NAME_OF_BUCKET&amp;gt;"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ResourceType"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AWS::DynamoDB::Table"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"LogicalResourceId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;LOGICAL_NAME_OF_DYNAMODB_TABLE&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ResourceIdentifier"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"TableName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ACTUAL_NAME_OF_DYNAMODB_TABLE"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ResourceType"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AWS::ApiGateway::RestApi"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"LogicalResourceId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;LOGICAL_NAME_OF_RESTAPI&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ResourceIdentifier"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"RestApiId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"REST_API_ID"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  4. Run a command to create an IMPORT changeset.
&lt;/h3&gt;

&lt;p&gt;The command below creates an IMPORT changeset for the resources:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws cloudformation create-change-set &lt;span class="nt"&gt;--stack-name&lt;/span&gt; &amp;lt;YOUR_STACK_NAME&amp;gt; &lt;span class="nt"&gt;--change-set-name&lt;/span&gt; &amp;lt;CHANGE_SET_NAME&amp;gt; &lt;span class="nt"&gt;--change-set-type&lt;/span&gt; IMPORT &lt;span class="nt"&gt;--resources-to-import&lt;/span&gt; file://resourcesToImport.txt &lt;span class="nt"&gt;--template-body&lt;/span&gt; file://templateToImport.json &lt;span class="nt"&gt;--capabilities&lt;/span&gt; CAPABILITY_NAMED_IAM &lt;span class="nt"&gt;--description&lt;/span&gt; &lt;span class="s2"&gt;"write here description"&lt;/span&gt; &lt;span class="nt"&gt;--profile&lt;/span&gt; &amp;lt;AWS_PROFILE&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
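&lt;p&gt;Before executing it, you can optionally review the changeset to verify that each resource shows an Import action and that the changeset status is CREATE_COMPLETE (same placeholders as above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
# lists the proposed changes; each imported resource should have "Action": "Import"
aws cloudformation describe-change-set --change-set-name &amp;lt;CHANGE_SET_NAME&amp;gt; --stack-name &amp;lt;YOUR_STACK_NAME&amp;gt; --profile &amp;lt;AWS_PROFILE&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;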
&lt;h3&gt;
  
  
  5. Execute a command to apply the changeset.
&lt;/h3&gt;

&lt;p&gt;The command below executes the import changeset, and the resources will be moved from the main stack to the destination stack 🥳&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws cloudformation execute-change-set &lt;span class="nt"&gt;--change-set-name&lt;/span&gt; &amp;lt;CHANGE_SET_NAME&amp;gt; &lt;span class="nt"&gt;--stack-name&lt;/span&gt; &amp;lt;YOUR_STACK_NAME&amp;gt; &lt;span class="nt"&gt;--profile&lt;/span&gt; &amp;lt;AWS_PROFILE&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcp35ip0g3ifhyzrgl7s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcp35ip0g3ifhyzrgl7s.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;👉 NOTE&lt;/strong&gt; : CloudFormation does not allow importing every resource type; some resources are not supported for import.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  The link below lists all the resource types that can be imported into a CloudFormation stack
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import-supported-resources.html" rel="noopener noreferrer"&gt;Resource type support - AWS CloudFormation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Reference
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import-existing-stack.html" rel="noopener noreferrer"&gt;Importing existing resources into a stack - AWS CloudFormation&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>guide</category>
      <category>serverless</category>
      <category>cloudformation</category>
    </item>
  </channel>
</rss>
