<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Irene Aguilar</title>
    <description>The latest articles on DEV Community by Irene Aguilar (@ysyzygy).</description>
    <link>https://dev.to/ysyzygy</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F916970%2Fecc3b94b-3c1a-4cc4-818d-cb82af66bcc2.jpeg</url>
      <title>DEV Community: Irene Aguilar</title>
      <link>https://dev.to/ysyzygy</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ysyzygy"/>
    <language>en</language>
    <item>
      <title>IoT for Dummies: Building a Basic IoT Platform with AWS</title>
      <dc:creator>Irene Aguilar</dc:creator>
      <pubDate>Tue, 16 Jul 2024 09:16:38 +0000</pubDate>
      <link>https://dev.to/aws-builders/iot-for-dummies-building-a-basic-iot-platform-with-aws-3a4o</link>
      <guid>https://dev.to/aws-builders/iot-for-dummies-building-a-basic-iot-platform-with-aws-3a4o</guid>
      <description>&lt;p&gt;This article will guide you through creating the fundamental functionalities of an IoT platform using AWS, with a practical use case focused on monitoring electrical grid parameters to enhance efficiency and contribute to reducing the carbon footprint.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the Use Case
&lt;/h2&gt;

&lt;p&gt;In the quest for net-zero emissions, electric distribution companies face immense pressure to reduce their carbon footprint and transition towards sustainable energy solutions. Efficient management of the electrical grid is critical in this endeavor, as it optimizes energy use, integrates renewable energy sources, and reduces emissions associated with electricity generation and distribution.&lt;/p&gt;

&lt;p&gt;The use case involves deploying sensors and network monitoring devices within the electrical infrastructure. These devices collect real-time data on energy consumption, renewable energy generation, energy demand, and other relevant parameters. The data will be processed and analyzed to identify opportunities for improving energy efficiency and reducing emissions. By leveraging AWS's advanced analytics capabilities, proactive measures can be taken to optimize grid operation and move towards a sustainable future.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9d4pfyr9earvn2zawf9m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9d4pfyr9earvn2zawf9m.png" alt="goal" width="328" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Project Objectives
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Integrate the New IoT Platform within Corporate Landing Zone Standards and Regulations&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The company has a pre-configured landing zone that meets the standards and regulations for a large enterprise with multiple subsidiaries. Each subsidiary has its own organizational unit to enable agile development while adhering to global standards in networking, security, and shared components. The new IoT platform must comply with these requirements.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Design and Create a Scalable IoT Platform&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The company plans to deploy over 10,000 devices from three hardware providers. The initial version aims to provide a secure Fleet Provisioning capability to simplify the installation of these devices in each substation. Network data must be collected from each device and stored for future analysis and processing.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Ensure Automation from Device Provisioning to Data Collection&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automation is key, from provisioning IoT devices to collecting data. This follows infrastructure-as-code principles using Terraform for automation.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step-by-Step Guide to Building the IoT Platform
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Connecting Devices
&lt;/h3&gt;

&lt;p&gt;The devices are on-premises, and the first task is to connect them to AWS IoT Core.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS IoT Core&lt;/strong&gt;: This managed cloud service allows connected devices to interact securely with cloud applications and other devices. It can support billions of devices and trillions of messages, reliably processing and routing those messages to AWS endpoints and other devices.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fzipkieer65o7cbus0v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fzipkieer65o7cbus0v.png" alt="aws_iot_core" width="800" height="215"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Fleet Provisioning
&lt;/h3&gt;

&lt;p&gt;AWS offers several methods to provision devices and install unique client certificates:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnw866s0tzfm5xjwinqm3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnw866s0tzfm5xjwinqm3.png" alt="Fleet Provisioning" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Devices can be connected using three types of provisioning methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Just-in-time provisioning (JITP)&lt;/strong&gt;: If you can securely install unique client certificates on your IoT devices before delivering them to the end user, you should opt for just-in-time provisioning (JITP) or just-in-time registration (JITR).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Provisioning by trusted user&lt;/strong&gt;: If it's not feasible to securely install unique client certificates on your IoT devices prior to delivery, but the end user or an installer can use an app to register the devices and install the unique device certificates, the provisioning by trusted user process is suitable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Provisioning by claim&lt;/strong&gt;: If end users cannot use an app to install certificates on their IoT devices, the provisioning by claim process can be used. With this method, your IoT devices share a claim certificate with other devices in the fleet. When a device connects for the first time using the claim certificate, AWS IoT registers it using your provisioning template and issues it a unique client certificate for future access to AWS IoT. This allows automatic device provisioning upon connection to AWS IoT, but poses a higher risk if a claim certificate is compromised. A compromised claim certificate can be deactivated to prevent future registrations, though devices that are already provisioned are not affected.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Provisioning by Claim
&lt;/h4&gt;

&lt;p&gt;This method uses a claim certificate, issued by AWS Private Certificate Authority (PCA), that can be shared across accounts with AWS Resource Access Manager (RAM). It is effective for mass provisioning and for managing device credentials securely.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Private Certificate Authority (PCA)&lt;/strong&gt;: Best practices include regular rotation of certificates and minimizing their scope to reduce the risk if compromised. Isolate your PCA in its own AWS account to minimize unauthorized access risk. Share certificates across AWS accounts securely using AWS RAM.
Terraform is used as infrastructure as code (IaC); the screenshots below show how a PCA can be set up:&lt;/li&gt;
&lt;/ul&gt;
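&lt;p&gt;To complement the Terraform screenshots, here is a minimal, hedged sketch of the API parameters behind a private CA; the CA name, CRL bucket, and algorithm choices are illustrative assumptions, not the project's actual values:&lt;/p&gt;

```python
import json

# Hedged sketch: request parameters for creating a private CA with ACM PCA,
# mirroring what the aws_acmpca_certificate_authority Terraform resource
# configures. The subject name and CRL bucket are assumptions.
pca_request = {
    "CertificateAuthorityType": "ROOT",
    "CertificateAuthorityConfiguration": {
        "KeyAlgorithm": "RSA_2048",
        "SigningAlgorithm": "SHA256WITHRSA",
        "Subject": {"CommonName": "iot-platform-root-ca"},  # assumed name
    },
    # Revocation config matching the screenshot: publish a CRL to S3 so
    # compromised claim certificates can be rejected.
    "RevocationConfiguration": {
        "CrlConfiguration": {
            "Enabled": True,
            "ExpirationInDays": 7,
            "S3BucketName": "example-crl-bucket",  # assumed bucket
        }
    },
}

print(json.dumps(pca_request, indent=2))
```

&lt;p&gt;Keeping the root CA in its own account and sharing only a subordinate CA through RAM limits the blast radius if a certificate is compromised.&lt;/p&gt;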

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0m1ywz6muz0zvuhyq2z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0m1ywz6muz0zvuhyq2z.png" alt="terraform_aws" width="265" height="182"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9q7466e17z8k3dxmxx9r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9q7466e17z8k3dxmxx9r.png" alt="pca_type" width="800" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxx12e66ot8muz9eta70.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxx12e66ot8muz9eta70.png" alt="revocation_config" width="783" height="567"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70weq2db3misbbpm0n5t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70weq2db3misbbpm0n5t.png" alt="aws_acmpca_certificate_authority" width="800" height="653"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Provisioning Template&lt;/strong&gt;: Create a template that defines policies and configurations for the devices to ensure consistent security standards.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Provisioning Flow&lt;/strong&gt;: The device uses the shared certificate to connect to AWS IoT Core. AWS IoT Core validates the certificate and applies the provisioning template to register and configure the device in the cloud.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
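&lt;p&gt;As an illustration of what such a provisioning template can contain, here is a hedged sketch of a template body following the AWS IoT Parameters/Resources format; the parameter set, thing-name prefix, and policy name are assumptions:&lt;/p&gt;

```python
import json

# Hedged sketch of a fleet provisioning template body. The structure
# (Parameters / Resources with Certificate, Thing, and Policy entries)
# follows the AWS IoT template format; names are illustrative.
template_body = {
    "Parameters": {
        "SerialNumber": {"Type": "String"},
        "AWS::IoT::Certificate::Id": {"Type": "String"},
    },
    "Resources": {
        "certificate": {
            "Type": "AWS::IoT::Certificate",
            "Properties": {
                "CertificateId": {"Ref": "AWS::IoT::Certificate::Id"},
                "Status": "Active",
            },
        },
        "thing": {
            "Type": "AWS::IoT::Thing",
            "Properties": {
                # Derive the thing name from the device serial number.
                "ThingName": {"Fn::Join": ["", ["grid-sensor-", {"Ref": "SerialNumber"}]]}
            },
        },
        "policy": {
            "Type": "AWS::IoT::Policy",
            "Properties": {"PolicyName": "GridSensorPolicy"},  # assumed policy
        },
    },
}

print(json.dumps(template_body, indent=2))
```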

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzq6gifmftv5ah7h0b9hn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzq6gifmftv5ah7h0b9hn.png" alt="Provisioning Flow" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Present Bootstrap Certificate&lt;/strong&gt;: Edge devices initially connect to AWS IoT Core using a bootstrap/claim certificate.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Birth Policy Execution&lt;/strong&gt;: The birth policy is executed, which includes a Certificate Signing Request (CSR) that is signed and returned.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Official Certificate Payload&lt;/strong&gt;: The device receives its official certificate payload for secure communications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Send Ownership Token and Specify Provisioning Template&lt;/strong&gt;: The device sends an ownership token and provisioning template to AWS IoT Core.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Execute Provisioning Template&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Custom Provisioning Validation&lt;/strong&gt;: Validates the provisioning request.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Activate Certificate&lt;/strong&gt;: Activates the device's official certificate.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create Thing/Group&lt;/strong&gt;: Creates the device entity (Thing) or associates it with a group.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Assign Policy&lt;/strong&gt;: Assigns the necessary security policies to the device.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Respond with Outcome of Provisioning Transaction&lt;/strong&gt;: AWS IoT Core confirms the outcome of the provisioning transaction.&lt;/li&gt;
&lt;/ol&gt;
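&lt;p&gt;The numbered flow above runs over reserved MQTT topics in AWS IoT Core. A small sketch of the topic names involved (the template name is an assumption):&lt;/p&gt;

```python
# Hedged sketch: the reserved fleet-provisioning MQTT topics used in the
# claim-based flow above. "GridSensorTemplate" is an assumed template name.
def provisioning_topics(template_name):
    return {
        # Steps 2-3: publish the CSR and receive the signed certificate.
        "create_from_csr": "$aws/certificates/create-from-csr/json",
        # Step 4: send the ownership token against a named template.
        "provision": "$aws/provisioning-templates/{0}/provision/json".format(template_name),
    }

topics = provisioning_topics("GridSensorTemplate")
print(topics)
```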

&lt;h3&gt;
  
  
  3. Data Ingestion and Processing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Set Up AWS IoT Rules&lt;/strong&gt;: Create rules to process incoming data, routing it to other AWS services for further processing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sending Data to a Data Lake&lt;/strong&gt;: Use IoT rules to send data to an Amazon S3 data lake and an analytics platform in another AWS account. This involves setting up a Lambda function to enrich the data and using Amazon SQS to decouple the systems for efficient processing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33kqm6haohv6qmoaayek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33kqm6haohv6qmoaayek.png" alt="iot_rules" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The diagram below shows the MQTT connections to AWS IoT Core, its subordinate CA in the same account, and the isolated root CA in another account. In addition, an IoT rule sends the information to the data lake (S3) and to another account that processes it (Lambda + SQS).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm94oyw3j2suv9txg4to8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm94oyw3j2suv9txg4to8.png" alt="mqtt" width="800" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Testing the Setup
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Script for Device Testing&lt;/strong&gt;: Develop a script to simulate data transmission from the device to AWS IoT Core. This verifies communication and data ingestion functionality. There is an example in the AWS repository: &lt;a href="https://github.com/aws/aws-iot-device-sdk-python-v2/blob/main/samples/pubsub.py" rel="noopener noreferrer"&gt;https://github.com/aws/aws-iot-device-sdk-python-v2/blob/main/samples/pubsub.py&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MQTT Test&lt;/strong&gt;: Use the MQTT test client in AWS IoT Core to publish and subscribe to topics, verifying data flow between the device and AWS IoT Core.&lt;/li&gt;
&lt;/ul&gt;
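&lt;p&gt;A hedged sketch of the kind of payload such a test script could publish; the field names and value ranges are assumptions for the grid use case, and the actual publish call would use the AWS IoT Device SDK as in the pubsub.py sample:&lt;/p&gt;

```python
import json
import random
import time

# Hedged sketch: generate a telemetry payload like the one a substation
# device would publish. Field names and ranges are assumptions.
def sample_payload(device_id):
    return json.dumps({
        "device_id": device_id,
        "timestamp": int(time.time()),
        "voltage_v": round(random.uniform(225.0, 235.0), 1),
        "current_a": round(random.uniform(0.0, 50.0), 2),
        "frequency_hz": round(random.uniform(49.9, 50.1), 3),
    })

print(sample_payload("dev-001"))
```

&lt;p&gt;Publishing a payload like this to the device's topic and watching it arrive in the MQTT test client confirms the end-to-end flow.&lt;/p&gt;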

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhg2osztmc7oalpb2aw5c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhg2osztmc7oalpb2aw5c.png" alt="mqtt_test" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Execute Actions on the Devices: AWS Jobs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Firmware Updates&lt;/strong&gt;: AWS IoT Jobs facilitate communication from the cloud to devices for tasks such as firmware updates, ensuring all devices remain up-to-date and secure. Use AWS IoT Jobs to manage remote operations for one or multiple devices connected to AWS IoT. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To create jobs, start by defining a job document containing instructions for the remote operations the device should perform. Then, specify the targets for these operations, which can be individual things, thing groups, or both. The combination of the job document and the specified targets constitutes a deployment.&lt;/p&gt;

&lt;p&gt;AWS IoT Jobs notifies the targets that a job is available. The target devices then download the job document, execute the specified operations, and report their progress back to AWS IoT. You can track the job's progress for specific targets or for all targets using AWS IoT Jobs commands. Once a job starts, it has an "In progress" status, and devices will report incremental updates until the job is completed, fails, or times out.&lt;/p&gt;
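&lt;p&gt;Job documents are free-form JSON. Here is a hypothetical firmware-update job document; the keys, version, and S3 location are assumptions:&lt;/p&gt;

```python
import json

# Hedged sketch of a firmware-update job document. Job documents are
# free-form JSON; the keys and the S3 object here are illustrative.
job_document = {
    "operation": "firmware_update",
    "firmware": {
        "version": "1.2.0",
        # AWS IoT can replace this placeholder with a presigned URL when
        # the device downloads the job document.
        "url": "${aws:iot:s3-presigned-url:https://s3.amazonaws.com/example-firmware/v1.2.0.bin}",
    },
}

print(json.dumps(job_document, indent=2))
```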

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fff8ihsp17jmgxqa357.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fff8ihsp17jmgxqa357.png" alt="iot_job" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Advanced Steps
&lt;/h2&gt;

&lt;h3&gt;
  
  
  6. Analyzing Data with AWS IoT Analytics
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create Data Sets&lt;/strong&gt;: Define data sets in AWS IoT Analytics to process and transform the raw data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run Analyses&lt;/strong&gt;: Utilize built-in analytics capabilities to run SQL queries and perform machine learning on the data to derive insights.&lt;/li&gt;
&lt;/ul&gt;
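&lt;p&gt;As a hedged sketch of what defining such a data set can look like, here are illustrative parameters for a SQL data set; the data set, action, and datastore names are assumptions:&lt;/p&gt;

```python
import json

# Hedged sketch: parameters for an AWS IoT Analytics SQL data set that
# summarizes voltage per device. All names here are assumptions.
dataset_params = {
    "datasetName": "grid_daily_summary",
    "actions": [
        {
            "actionName": "daily_query",
            "queryAction": {
                "sqlQuery": (
                    "SELECT device_id, AVG(voltage_v) AS avg_voltage "
                    "FROM grid_datastore GROUP BY device_id"
                ),
            },
        }
    ],
}

print(json.dumps(dataset_params, indent=2))
```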

&lt;h3&gt;
  
  
  7. Visualizing Data
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Amazon QuickSight&lt;/strong&gt;: Create dashboards and visualize the data to understand patterns and trends in energy consumption and generation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-Time Alerts&lt;/strong&gt;: Set up real-time alerts using AWS IoT Events to notify operators of anomalies or inefficiencies in the grid.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  8. Integrating Advanced Machine Learning with Amazon SageMaker and GenAI with Amazon Bedrock
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Amazon SageMaker&lt;/strong&gt;: Use Amazon SageMaker to build, train, and deploy machine learning models with the data collected from IoT devices.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon Bedrock&lt;/strong&gt;: Leverage Amazon Bedrock to build and deploy generative AI applications on top of managed foundation models.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Main Benefits of the AWS IoT Platform
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: AWS IoT services can scale to handle increasing amounts of data and devices.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;: Robust security features ensure the data collected and transmitted is secure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility&lt;/strong&gt;: AWS offers a range of services that can be tailored to specific needs, allowing for a flexible and customizable IoT platform.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By following these steps, you can build a basic IoT platform with AWS that not only monitors electrical grid parameters but also contributes to the path towards net-zero emissions. Leveraging IoT technology and AWS's comprehensive suite of services, electric distribution companies can optimize energy use, integrate renewable sources more effectively, and significantly reduce their carbon footprint. This approach is not limited to this specific use case; it can be adapted and applied to any IoT scenario, demonstrating AWS's versatility in enabling sustainable solutions across diverse industries.&lt;/p&gt;

</description>
      <category>awsiotcore</category>
      <category>iot</category>
      <category>fleetprovision</category>
      <category>iotplatform</category>
    </item>
    <item>
      <title>My experience re-certifying in AWS Certified DevOps Engineer - Professional Exam and learning something new</title>
      <dc:creator>Irene Aguilar</dc:creator>
      <pubDate>Mon, 08 Jul 2024 15:20:27 +0000</pubDate>
      <link>https://dev.to/aws-builders/my-experience-re-certifying-in-aws-certified-devops-engineer-professional-exam-and-learning-something-new-2m3o</link>
      <guid>https://dev.to/aws-builders/my-experience-re-certifying-in-aws-certified-devops-engineer-professional-exam-and-learning-something-new-2m3o</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the fast-paced world of cloud computing and DevOps, staying abreast of the latest certifications is paramount. Recently, I undertook the challenge of recertifying for the AWS Certified DevOps Engineer - Professional exam. This certification is tailored for seasoned professionals with extensive experience in managing AWS environments, affirming proficiency in deploying and operating distributed applications on the AWS platform.&lt;br&gt;
Articles on how to pass the AWS Certified DevOps Engineer - Professional exam are plentiful, but I always make a point of reviewing the latest ones for any new insights or updates. This year, I'll share my experience recertifying, including new things I didn’t recall from the certification and the aspects I found most interesting this time around.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview of the Certification
&lt;/h2&gt;

&lt;p&gt;The AWS Certified DevOps Engineer - Professional exam focuses on various aspects of DevOps engineering, including continuous delivery (CD) methodologies, automation of security controls, governance processes, and monitoring and logging practices. It is recommended to have prior certifications such as the AWS Certified Developer – Associate and AWS Certified SysOps Administrator – Associate, particularly the SysOps certification as it covers a significant part of the content at a different level.&lt;/p&gt;

&lt;h3&gt;
  
  
  Official Resources:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The exam content outline and passing score are in the &lt;a href="https://d1.awsstatic.com/training-and-certification/docs-devops-pro/AWS-Certified-DevOps-Engineer-Professional_Exam-Guide.pdf" rel="noopener noreferrer"&gt;Exam Guide&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;AWS Skill Builder Resources

&lt;ul&gt;
&lt;li&gt;The &lt;a href="https://explore.skillbuilder.aws/learn/course/external/view/elearning/14673/aws-certified-devops-engineer-professional-official-practice-question-set-dop-c02-english" rel="noopener noreferrer"&gt;Sample Questions&lt;/a&gt; are 20 questions developed by AWS to demonstrate the style of its certification exams.&lt;/li&gt;
&lt;li&gt;AWS offers various resources on their Skill Builder platform to help you prepare for the exam. There is a free course called &lt;a href="https://explore.skillbuilder.aws/learn/course/external/view/elearning/16352/exam-prep-standard-course-aws-certified-devops-engineer-professional-dop-c02-english" rel="noopener noreferrer"&gt;Exam Prep Standard Course&lt;/a&gt; and for those with a subscription, there are additional exam questions and an enhanced version of the preparation course.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Exam Content Domains
&lt;/h3&gt;

&lt;p&gt;The exam covers six content domains, each with a specific weighting. Below is a breakdown of each domain along with key topics and important points to review:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Domain 1: SDLC Automation (22%)&lt;br&gt;
Domain 2: Configuration Management and IaC (17%)&lt;br&gt;
Domain 3: Resilient Cloud Solutions (15%)&lt;br&gt;
Domain 4: Monitoring and Logging (15%)&lt;br&gt;
Domain 5: Incident and Event Response (14%)&lt;br&gt;
Domain 6: Security and Compliance (17%)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Detailed Exploration of my Key Learnings
&lt;/h3&gt;

&lt;p&gt;The domains are useful for understanding the percentage of questions on each topic, but in this case the difference between the maximum and minimum weighting is 8%, so all domains carry roughly the same weight. I tend to review the services and how they integrate with each other rather than focusing on the domains. Here are some of the notes I took for review or learning, though what matters will depend a lot on your experience and background in AWS. &lt;br&gt;
Apart from taking notes, it is very useful to study diagrams of integrations or solutions and to practice with real scenarios (hands-on experience is always the best). I try to complement this with diagrams from the AWS documentation or create my own.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Developer Tools:&lt;/strong&gt; Extensive exploration of AWS CodePipeline, AWS CodeBuild, and AWS CodeCommit.

&lt;ul&gt;
&lt;li&gt;AWS CodeArtifact: understanding how it works, integration with external repositories, and configuration in a multi-account organization.&lt;/li&gt;
&lt;li&gt;AWS CodeDeploy: Understand the hooks and their appropriate use cases (BeforeInstall, AfterInstall,…). Familiarize yourself with the different deployment types and their impacts. Understand the different deployment strategies.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xhhm44zkkccdhh5gqhg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xhhm44zkkccdhh5gqhg.png" alt="AWS Developer tools architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Serverless architectures:&lt;/strong&gt; Deployment options, and when and how to use canary releases. Differences between provisioned concurrency and reserved concurrency with AWS Lambda. Use of the AWS Serverless Application Model (AWS SAM).&lt;/li&gt;
&lt;/ul&gt;
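&lt;p&gt;A small sketch contrasting the two Lambda concurrency settings mentioned above; the function name, alias, and numbers are assumptions:&lt;/p&gt;

```python
# Hedged sketch: the parameter shapes behind the two boto3 calls that
# configure Lambda concurrency. Function name and values are assumptions.
reserved = {
    "FunctionName": "grid-ingest",
    # Caps total concurrent executions for the function and carves that
    # capacity out of the account pool. Set with put_function_concurrency.
    "ReservedConcurrentExecutions": 100,
}

provisioned = {
    "FunctionName": "grid-ingest",
    "Qualifier": "prod",  # an alias or version is required
    # Keeps this many execution environments initialized to avoid cold
    # starts. Set with put_provisioned_concurrency_config.
    "ProvisionedConcurrentExecutions": 10,
}

print(reserved, provisioned)
```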

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fquvktxuz02sy8y4lq5qg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fquvktxuz02sy8y4lq5qg.png" alt="Serverless architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyd6u80p47jo1g2umxfmz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyd6u80p47jo1g2umxfmz.png" alt="What is AWS SAM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure managed EC2 instances have the correct application version and patches installed using &lt;strong&gt;SSM&lt;/strong&gt; (Patch Manager, Maintenance Windows, State Manager, Inventory).&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;CloudFormation&lt;/strong&gt; drift detection to manage configuration changes. Know how to combine different stacks, the difference between StackSets and nested stacks, how to deploy instances and update them through their user data, and understand the lifecycle hooks of EC2, ASG, and ALB and when to use them.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fote1n1hwknlccyq7fsg1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fote1n1hwknlccyq7fsg1.png" alt="Nested stack"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptehq80do6in9u0zrhl8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptehq80do6in9u0zrhl8.png" alt="Stack Set"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;strong&gt;Auto Scaling with warm pools&lt;/strong&gt; for better instance state management.&lt;/li&gt;
&lt;li&gt;Use of Amazon &lt;strong&gt;EventBridge rules for detecting events&lt;/strong&gt;, for example with AWS Health Service.&lt;/li&gt;
&lt;li&gt;Know well how &lt;strong&gt;AWS Organizations&lt;/strong&gt; works, how it is integrated with other services, how you can delegate the administration of these services to other accounts, how they are defined and what SCPs are used for, and the differences with permission boundaries.&lt;/li&gt;
&lt;/ul&gt;
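&lt;p&gt;As a hedged illustration of an SCP, here is a sketch of a common pattern, denying actions outside approved regions; the region list and exempted services are assumptions, and note that an SCP limits the maximum available permissions but grants none itself:&lt;/p&gt;

```python
import json

# Hedged sketch of a service control policy (SCP) denying activity
# outside approved regions. Regions and exempted services are assumptions.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            # Global services are exempted so the deny does not break them.
            "NotAction": ["iam:*", "organizations:*", "sts:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ["eu-west-1", "eu-south-2"]}
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```

&lt;p&gt;The effective permissions of an identity are the intersection of its IAM policies with the SCPs on its account, which is exactly the difference from permission boundaries worth reviewing for the exam.&lt;/p&gt;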

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbw8n9yarq7t56a34188c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbw8n9yarq7t56a34188c.png" alt="SCPs scope"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8dazcwelg4r45tol27dm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8dazcwelg4r45tol27dm.png" alt="Effective permisions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up automatic &lt;strong&gt;remediation&lt;/strong&gt; actions using &lt;strong&gt;AWS Config&lt;/strong&gt; and AWS Systems Manager Automation runbooks.&lt;/li&gt;
&lt;li&gt;Track &lt;strong&gt;service limits&lt;/strong&gt; with Trusted Advisor and set up CloudWatch alarms for notifications.&lt;/li&gt;
&lt;li&gt;Additional services: learn what each service is used for and how it differs from the rest.&lt;/li&gt;
&lt;li&gt;Amazon &lt;strong&gt;Inspector&lt;/strong&gt;: Continuously scan workloads for vulnerabilities.&lt;/li&gt;
&lt;li&gt;Amazon &lt;strong&gt;GuardDuty&lt;/strong&gt;: Detect threats and unauthorized activities.&lt;/li&gt;
&lt;li&gt;AWS &lt;strong&gt;Trusted Advisor&lt;/strong&gt;: Get recommendations when opportunities exist to save money, improve system availability and performance, or close security gaps.&lt;/li&gt;
&lt;li&gt;Amazon &lt;strong&gt;Macie&lt;/strong&gt;: Automatically discover, classify, and protect sensitive data.&lt;/li&gt;
&lt;/ul&gt;
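&lt;p&gt;The automatic remediation flow mentioned above (AWS Config rule evaluates a resource, a NON_COMPLIANT result triggers a Systems Manager Automation runbook) can be modeled as a tiny loop in plain Python. The rule and runbook here are toy stand-ins with invented names, not real AWS APIs, purely to illustrate the detect-then-remediate cycle.&lt;/p&gt;

```python
# Toy model of the Config -> SSM remediation loop, for illustration only.
def evaluate_s3_public_access(bucket):
    """Toy Config rule: buckets must block public access."""
    return "COMPLIANT" if bucket.get("BlockPublicAccess") else "NON_COMPLIANT"


def remediate_block_public_access(bucket):
    """Toy stand-in for an SSM Automation runbook that fixes the resource."""
    bucket["BlockPublicAccess"] = True
    return bucket


def run_config_rule(buckets):
    """Evaluate every resource and remediate the non-compliant ones."""
    remediated = []
    for bucket in buckets:
        if evaluate_s3_public_access(bucket) == "NON_COMPLIANT":
            remediate_block_public_access(bucket)
            remediated.append(bucket["Name"])
    return remediated


buckets = [
    {"Name": "logs", "BlockPublicAccess": True},
    {"Name": "public-site", "BlockPublicAccess": False},
]
print(run_config_rule(buckets))  # ['public-site']
```

&lt;p&gt;In real AWS, the same shape appears as a Config rule with a remediation action attached, where the "function" is an SSM Automation document run against the offending resource.&lt;/p&gt;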

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frq4p55bjer7rsdled6bx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frq4p55bjer7rsdled6bx.png" alt="AWS Security Hub integrations"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS &lt;strong&gt;Compute Optimizer&lt;/strong&gt;: Identify optimal AWS resource configurations.&lt;/li&gt;
&lt;li&gt;AWS &lt;strong&gt;EC2 Image Builder&lt;/strong&gt;: Simplify the building, testing, and deployment of virtual machine and container images for use on AWS or on premises.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkqvmw3m12ws3czvri4x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkqvmw3m12ws3czvri4x.png" alt="Image Builder pipeline &amp;amp; recipe"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS &lt;strong&gt;Elastic Disaster Recovery&lt;/strong&gt;: Minimize downtime and data loss with fast recovery.&lt;/li&gt;
&lt;li&gt;AWS &lt;strong&gt;Resilience Hub&lt;/strong&gt;: Define, validate, and track application resilience on AWS.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ghw7pkp2vc0kdnmnp83.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ghw7pkp2vc0kdnmnp83.png" alt="What is AWS Resilience Hub"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Although certifications alone do not validate your knowledge, I always &lt;strong&gt;learn something new&lt;/strong&gt; about a service I have not used or about a feature I had overlooked. For example, using Warm Pools in Amazon EC2 Auto Scaling to decrease latency for applications with long boot times (nothing new, it has been around since 2021), or using an AWS CodeArtifact domain to manage multiple repositories across multiple accounts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvzguy78zu1m5kghshx49.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvzguy78zu1m5kghshx49.png" alt="How AWS CodeArtifact works"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In conclusion&lt;/strong&gt;, the AWS Certified DevOps Engineer - Professional exam not only reinforced my existing skills but also broadened my understanding of AWS services and their real-world applications. Continuous learning is indispensable in navigating the ever-evolving landscape of cloud technology.&lt;/p&gt;

&lt;p&gt;And you, what is the latest thing you have learned?&lt;br&gt;
Keep learning!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>certification</category>
      <category>learning</category>
    </item>
    <item>
<title>Decomposing Monoliths: Architecture Patterns Applied on AWS. Part One.</title>
      <dc:creator>Irene Aguilar</dc:creator>
      <pubDate>Tue, 03 Oct 2023 16:18:15 +0000</pubDate>
      <link>https://dev.to/aws-espanol/descomposicion-de-monolitos-patrones-de-arquitectura-aplicados-en-aws-primera-parte-24h3</link>
      <guid>https://dev.to/aws-espanol/descomposicion-de-monolitos-patrones-de-arquitectura-aplicados-en-aws-primera-parte-24h3</guid>
<description>&lt;p&gt;In the world of software development, monoliths have been a widely used traditional architecture. A monolith is an application in which all functionality lives in a single code base. As applications grow, monoliths can become hard to maintain and scale. In this article we will explore what a monolith is, its advantages and drawbacks compared with microservices, and then review the available monolith decomposition patterns and how to apply them in the Amazon Web Services (AWS) environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Context
&lt;/h2&gt;

&lt;p&gt;With the rise of the cloud comes the possibility of developing cloud-native applications that take full advantage of it. But what about applications that are already built? How do we migrate and modernize them to make them cloud native?&lt;/p&gt;

&lt;p&gt;First, there are different strategies for migrating to the cloud, but in this article we will focus on modernizing monolithic applications that need to split the monolith so that its modules can scale independently.&lt;/p&gt;

&lt;p&gt;The choice between keeping a monolith or adopting microservices depends on several factors, including the application's requirements, the characteristics of the development team, and the future needs of the business.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Monolith?
&lt;/h2&gt;

&lt;p&gt;A monolith is a software architecture in which all of an application's functionality is packaged and executed within a single process. This means the code, the database, and any other dependencies are interconnected in a single component. Although monoliths are easy to develop and deploy at first, they can become complex and hard to scale as the application grows.&lt;/p&gt;

&lt;h3&gt;
  
  
  Advantages of Monoliths:
&lt;/h3&gt;

&lt;p&gt;Simplicity: monoliths are easier to develop, test, and maintain, especially for small and medium-sized applications.&lt;/p&gt;

&lt;p&gt;Less deployment complexity: since everything is packaged in a single unit, deployments can be simpler, as long as there are no conflicts between features.&lt;/p&gt;

&lt;p&gt;Easy debugging: debugging a monolith is relatively simpler, since all the code is in one place.&lt;/p&gt;

&lt;p&gt;Lower scalability requirements: if no significant growth in users or performance demands is expected, a monolith may be enough. In these cases, the complexity of microservices may not be justified.&lt;/p&gt;

&lt;h3&gt;
  
  
  Drawbacks of Monoliths:
&lt;/h3&gt;

&lt;p&gt;Difficulty scaling: as the application grows, it can be hard to scale specific components of the monolith without affecting the whole application.&lt;/p&gt;

&lt;p&gt;Coupling: the monolithic nature can lead to high coupling between features, which makes it difficult to modify or add functionality without affecting other areas.&lt;/p&gt;

&lt;p&gt;Longer delivery time: as the monolith grows, the time required to ship new features can increase significantly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hybrid:
&lt;/h3&gt;

&lt;p&gt;It is important to keep in mind that this is not an all-or-nothing choice. Some applications can benefit from a hybrid approach, where parts of the system remain a monolith while others are decomposed into microservices. This combination can offer the simplicity and stability of a monolith together with the scalability and flexibility of microservices in specific areas of the application.&lt;/p&gt;

&lt;p&gt;From this very first moment, AWS can help us plan and organize the migration process using AWS Migration Hub.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Migration Hub&lt;/strong&gt; provides a central location to collect server and application inventory data for assessing, planning, and tracking migrations to AWS. Migration Hub can also streamline application modernization after migration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AosYOtV---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1oqsmsw6fldcx2yox66j.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AosYOtV---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1oqsmsw6fldcx2yox66j.jpg" alt="Image description" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To illustrate the patterns we are going to discuss for breaking up a monolith, we will assume our workloads are already on AWS; a service that helps us lift and shift our application is AWS Application Migration Service.&lt;/p&gt;

&lt;p&gt;AWS Application Migration Service minimizes time-consuming, error-prone manual processes by automating the conversion of your source servers to run natively on AWS. It also simplifies application modernization with built-in and custom optimization options.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s904lGg8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/785o7sxq7a0tp6jgzbmt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s904lGg8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/785o7sxq7a0tp6jgzbmt.jpg" alt="Image description" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS Migration Hub is integrated with AWS Application Migration Service, and you can use the AWS Migration Hub console to monitor the servers you are migrating with AWS Application Migration Service.&lt;/p&gt;

&lt;p&gt;Architecture of our current monolith:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tGwuwK5h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tkpeodkr2jw16qmbq255.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tGwuwK5h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tkpeodkr2jw16qmbq255.jpg" alt="Image description" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Monolith Decomposition Patterns on AWS
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Strangler Fig Pattern:
&lt;/h3&gt;

&lt;p&gt;The Strangler Fig pattern is an architectural approach used to gradually modernize monolithic systems, migrating them towards a microservices-based architecture. The term and the analogy come from Martin Fowler, a leading expert in software development and a renowned author in the field.&lt;/p&gt;

&lt;p&gt;Martin Fowler introduced the Strangler Fig pattern in an article on his website. The analogy is inspired by the way the strangler fig vine grows by gradually wrapping its roots around a host tree, eventually replacing the original tree entirely.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2nmpcIny--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2wcctjfc1hmfadxrfurs.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2nmpcIny--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2wcctjfc1hmfadxrfurs.jpg" alt="Image description" width="800" height="532"&gt;&lt;/a&gt;&lt;br&gt;
In software development terms, the Strangler Fig pattern involves gradually replacing parts of an existing monolithic application with microservices. As new microservices are developed and deployed, they progressively "strangle" and replace the functionality of the original monolith.&lt;/p&gt;

&lt;p&gt;As mentioned, you can use this pattern if you want to migrate your monolith gradually without impacting application availability, so end users should not be affected during the migration. It may also happen that a new feature is needed, and instead of being added to the monolith, it is created from the start as an independent microservice.&lt;/p&gt;

&lt;h3&gt;
  
  
  Steps to Implement the Strangler Fig Pattern:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Add the proxy layer, or strangler facade&lt;br&gt;
Add the component that will be in charge of routing requests either to the monolith or to the corresponding microservice. Initially, this proxy does nothing more than pass all traffic through, unmodified, to the monolithic application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Identify the components to modernize&lt;br&gt;
Identify the specific components of the monolith you want to modernize. They may be the modules that most need to scale, critical features that require improvements, or any other area that would benefit from being turned into a microservice. Domain-driven design (DDD) can be used for this purpose.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create the new microservice&lt;br&gt;
Develop a new microservice that replaces the functionality of the component identified in the previous step. Design the microservice and extract or develop the functionality so that it meets the application's requirements.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure the routing, proxy, or adaptation layer&lt;br&gt;
Implement an adaptation, proxy, or routing layer (it goes by many names) whose goal is to channel requests for the component being modernized to the new microservice instead of the existing monolith. This may involve configuring routing rules on a load balancer or deploying an API Gateway to redirect requests. With this strategy, response times and infrastructure costs can increase, since we have to maintain the monolith, the microservices created along the way, and the facade that routes the requests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Roll out gradually&lt;br&gt;
Start gradually redirecting requests for the component being modernized to the new microservice through the adaptation or routing layer. This must be done in a controlled manner to minimize the impact on the overall operation of the system.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
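&lt;p&gt;The routing behavior of the strangler facade described in the steps above can be sketched in a few lines of Python: a routing table sends already-migrated path prefixes to the new microservices, and everything else passes through to the monolith. The service names and paths are illustrative, not from any real application.&lt;/p&gt;

```python
# Minimal strangler-facade routing sketch (illustrative names only).
def route(path, routes, default="monolith"):
    """Return the target for `path`: the most specific migrated prefix
    wins; with no match, fall through to the monolith (the initial
    pass-through behavior of step 1)."""
    matches = [prefix for prefix in routes if path.startswith(prefix)]
    return routes[max(matches, key=len)] if matches else default


# Step 5: requests for an already-extracted billing component are redirected.
routes = {"/billing": "billing-microservice"}
print(route("/billing/invoice/42", routes))  # billing-microservice
print(route("/orders/7", routes))            # monolith
```

&lt;p&gt;As more components are extracted, entries are added to the table; when every prefix points at a microservice, the default target (the monolith) no longer receives traffic and can be retired.&lt;/p&gt;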

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5I3iA_6W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/20ftir1x4yxp27hogzi8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5I3iA_6W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/20ftir1x4yxp27hogzi8.jpg" alt="Image description" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Repeat the process&lt;/p&gt;

&lt;p&gt;Repeat the previous steps for the other components you want to modernize. Each new microservice should follow the same cycle of development, testing, and gradual migration. As more components are replaced by microservices, the original monolith is gradually "strangled".&lt;/p&gt;

&lt;p&gt;Remove the monolith&lt;/p&gt;

&lt;p&gt;Once all the components have been modernized and replaced by microservices, the original monolith can be safely removed. Your application will then have evolved into a microservices-based architecture, enabling greater scalability, flexibility, and maintainability.&lt;/p&gt;

&lt;p&gt;A capability of the AWS service mentioned earlier, AWS Migration Hub, can help us carry out this monolith migration: it is called AWS Migration Hub Refactor Spaces.&lt;/p&gt;

&lt;p&gt;AWS Migration Hub Refactor Spaces&lt;br&gt;
AWS Migration Hub Refactor Spaces helps you build and operate the infrastructure you will need during your migration process. This service is especially useful when you work with a more complex setup spanning multiple AWS accounts.&lt;/p&gt;

&lt;p&gt;Refactor Spaces helps you follow best practices by providing account-level isolation (including for Refactor Spaces itself).&lt;/p&gt;

&lt;p&gt;So the implementation of our example using the multi-account strategy with Refactor Spaces would look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aUWwfpg7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e539jzjx5azvvzp53tn4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aUWwfpg7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e539jzjx5azvvzp53tn4.jpg" alt="Image description" width="800" height="532"&gt;&lt;/a&gt;&lt;br&gt;
As you can see, this architecture has three accounts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Refactor Spaces management account, where Refactor Spaces configures the cross-account networking and traffic routing. (The migration of the frontend to an S3 bucket has been included.)&lt;/li&gt;
&lt;li&gt;The account of the monolithic application.&lt;/li&gt;
&lt;li&gt;The new account for the microservices.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;How it works:&lt;/p&gt;

&lt;p&gt;First, create a Refactor Spaces environment in the account chosen as the environment owner. Next, share the environment with the other two accounts. After you share the environment with another account, Refactor Spaces automatically shares the resources it creates within the environment with the other accounts.&lt;/p&gt;

&lt;p&gt;The refactor environment provides unified networking for all the accounts. To do so, the following services are configured:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Transit Gateway&lt;/li&gt;
&lt;li&gt;AWS Resource Access Manager&lt;/li&gt;
&lt;li&gt;Network Load Balancer&lt;/li&gt;
&lt;li&gt;Amazon API Gateway&lt;/li&gt;
&lt;li&gt;VPCs and security groups&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DsQKJq0c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c3h2zi5s1diors6hk5b4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DsQKJq0c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c3h2zi5s1diors6hk5b4.jpg" alt="Image description" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The refactor environment contains the existing application and the new microservices. After creating a refactor environment, you create a Refactor Spaces application inside it. The Refactor Spaces application contains services and routes and provides a single endpoint to expose the application to the outside world.&lt;/p&gt;

&lt;p&gt;An application supports routing to services running in containers, serverless, and Amazon Elastic Compute Cloud (Amazon EC2), with public or private visibility. The services in an application can have one of two endpoint types: a URL (HTTP or HTTPS) in a VPC, or an AWS Lambda function.&lt;/p&gt;

&lt;p&gt;Diagram with containers instead of Lambdas:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--op6iMhbr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d4dshc7g9y38rr2j7j9w.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--op6iMhbr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d4dshc7g9y38rr2j7j9w.jpg" alt="Image description" width="800" height="532"&gt;&lt;/a&gt;&lt;br&gt;
Once an application contains a service, add a default route to direct all traffic from the application proxy to the service representing the existing application. As you develop or add new capabilities in containers or serverless, add new services and routes to redirect traffic to them. As you may have noticed, this is an implementation of the router/proxy component, or strangler facade, and this service manages all the infrastructure needed to implement the pattern.&lt;/p&gt;

&lt;p&gt;During the migration process, we may find that the monolith also calls the service we have just migrated. In that case we need to expose functionality inside the monolith so that it can be called from the new microservice. But how do we prevent changes to the domain model or the data from impacting the design of the microservice?&lt;/p&gt;

&lt;p&gt;For this problem we can use another architecture pattern called the Anticorruption Layer.&lt;/p&gt;

&lt;p&gt;We will talk about this pattern and others in the second part of this article: Decomposing Monoliths: Architecture Patterns Applied on AWS (Part Two).&lt;/p&gt;

&lt;p&gt;Links and references:&lt;br&gt;
&lt;a href="https://martinfowler.com/bliki/StranglerFigApplication.html"&gt;https://martinfowler.com/bliki/StranglerFigApplication.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/es_es/prescriptive-guidance/latest/cloud-design-patterns/strangler-fig.html"&gt;https://docs.aws.amazon.com/es_es/prescriptive-guidance/latest/cloud-design-patterns/strangler-fig.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/migrationhub/index.html"&gt;https://docs.aws.amazon.com/migrationhub/index.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/migrationhub-refactor-spaces/latest/userguide/what-is-mhub-refactor-spaces.html"&gt;https://docs.aws.amazon.com/migrationhub-refactor-spaces/latest/userguide/what-is-mhub-refactor-spaces.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/f2c0706c-7192-495f-853c-fd3341db265a/en-US"&gt;https://catalog.us-east-1.prod.workshops.aws/workshops/f2c0706c-7192-495f-853c-fd3341db265a/en-US&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/aeb7ce2c-b1de-470e-8371-0268f6f21b79/en-US"&gt;https://catalog.us-east-1.prod.workshops.aws/workshops/aeb7ce2c-b1de-470e-8371-0268f6f21b79/en-US&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.oreilly.com/library/view/monolith-to-microservices/9781492047834/"&gt;https://www.oreilly.com/library/view/monolith-to-microservices/9781492047834/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>microservices</category>
      <category>pattern</category>
      <category>architecture</category>
    </item>
    <item>
      <title>How to integrate AWS Step Functions with ECS</title>
      <dc:creator>Irene Aguilar</dc:creator>
      <pubDate>Thu, 01 Jun 2023 16:56:40 +0000</pubDate>
      <link>https://dev.to/ysyzygy/how-to-integrate-aws-step-functions-with-ecs-58oi</link>
      <guid>https://dev.to/ysyzygy/how-to-integrate-aws-step-functions-with-ecs-58oi</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this article we will explain how to integrate AWS Step Functions with ECS and how, by adding other services (AWS Lambda functions), you can achieve a completely serverless solution for orchestrating data and services running in containers.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS Step Functions?
&lt;/h2&gt;

&lt;p&gt;AWS Step Functions is a serverless orchestration service that allows you to define a series of event-driven steps to create a workflow. You can manage AWS Lambda functions and other AWS services to create a distributed application as if it were a state machine.&lt;/p&gt;

&lt;p&gt;This time we are going to talk about integrating Step Functions with Amazon Elastic Container Service (Amazon ECS) in its serverless mode, Fargate, to define a flow in which, given the input data, we decide which task to execute and which command to pass to the container, wait for it to finish, and save the execution logs to an S3 folder.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting started: creating step functions
&lt;/h2&gt;

&lt;p&gt;The first thing we have to do is create our state machine; we will do it through the console to make it more visual. Speaking of visualising, AWS offers a completely visual (drag-and-drop) way to design a workflow, called Workflow Studio, but that is the subject of another post.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fioxo9mmobdwvel6zdhrc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fioxo9mmobdwvel6zdhrc.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
We select the option "Write your workflow in code", and the first decision we have to make is the type of state machine we want: Standard or Express:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhvw21ue34hw3pmnx83br.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhvw21ue34hw3pmnx83br.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Looking at the characteristics of each type, on this occasion we select the Standard type, as our containers can run for more than 5 minutes, which rules out the Express type.&lt;/p&gt;

&lt;p&gt;To define our workflow we have to use the Amazon States Language, and to test and better understand how data flows between the different steps we can use the data flow simulator that AWS provides:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdzkzvnzyeyum888tcf3o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdzkzvnzyeyum888tcf3o.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Our example would look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68w6uug6jhz74q9ylm7i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68w6uug6jhz74q9ylm7i.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
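&lt;p&gt;As a rough sketch, the workflow pictured above can be expressed as an Amazon States Language definition built from a Python dict: a Lambda decides the next stage, a Choice state dispatches to the matching ECS task, the task runs synchronously, and a final step stores the logs. All state names and ARNs here are illustrative placeholders, not values from a real account.&lt;/p&gt;

```python
# Hedged sketch of the state machine; ARNs are placeholders.
import json

definition = {
    "StartAt": "plan_next_step",
    "States": {
        # A Lambda inspects the input and decides the next stage.
        "plan_next_step": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:region:account:function:plan_next_step",
            "ResultPath": "$.lambda_result",
            "Next": "task_choice",
        },
        # A Choice state dispatches to the matching ECS task.
        "task_choice": {
            "Type": "Choice",
            "Choices": [{
                "Variable": "$.lambda_result.next_stage.image",
                "StringEquals": "image_1",
                "Next": "image_1",
            }],
            "Default": "save_logs",
        },
        # .sync makes Step Functions wait for the ECS task to finish.
        "image_1": {
            "Type": "Task",
            "Resource": "arn:aws:states:::ecs:runTask.sync",
            "Next": "save_logs",
        },
        # A final Lambda copies the execution logs to S3.
        "save_logs": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:region:account:function:save_logs",
            "End": True,
        },
    },
}

asl_json = json.dumps(definition)  # the string you would paste into the console
```

&lt;p&gt;The Standard-type choice made earlier matters here: the &lt;code&gt;.sync&lt;/code&gt; integration can keep the execution waiting far longer than the 5-minute Express limit.&lt;/p&gt;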

&lt;h2&gt;
  
  
  What is AWS ECS?
&lt;/h2&gt;

&lt;p&gt;Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service. Amazon ECS leverages AWS Fargate serverless technology to provide autonomous container operations, reducing configuration, security, and patching time. It integrates easily with the rest of the AWS platform services to build secure, easy-to-use solutions.&lt;/p&gt;

&lt;h2&gt;
  
  
  ECS: cluster and task definitions
&lt;/h2&gt;

&lt;p&gt;The ECS cluster and task definition have to be created before the integration.&lt;/p&gt;

&lt;p&gt;The cluster is created at the region level and is needed to group the container instances on which tasks run.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy14p4pc7uksckmwg3zdp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy14p4pc7uksckmwg3zdp.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Task definitions specify the application container information. You can have one or more containers (for example, you can add the X-Ray daemon for traceability), and you can select whether to run in Fargate mode (AWS-managed infrastructure) or EC2 mode; it is even possible to use on-premises infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5k9tltv0if4t5l231yx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5k9tltv0if4t5l231yx.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
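&lt;p&gt;As an illustration of the fields involved, here is a minimal Fargate task definition sketched as a Python dict, reusing the names that appear later in the state machine ("image_1-task-name", "image_1-container"). The image URI and the CPU/memory sizes are assumptions for the example.&lt;/p&gt;

```python
# Minimal illustrative Fargate task definition (placeholder values).
task_definition = {
    "family": "image_1-task-name",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",   # the only network mode Fargate supports
    "cpu": "256",              # 0.25 vCPU
    "memory": "512",           # MiB
    "containerDefinitions": [
        {
            "name": "image_1-container",
            "image": "my-registry/image_1:latest",  # placeholder image URI
            "essential": True,
        }
    ],
}
```

&lt;p&gt;Saved as JSON, a definition like this could be registered with &lt;code&gt;aws ecs register-task-definition --cli-input-json file://taskdef.json&lt;/code&gt;.&lt;/p&gt;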

&lt;h2&gt;
  
  
  Integration with ECS
&lt;/h2&gt;

&lt;p&gt;With our ECS resources already created, we focus on the integration with ECS, which looks like this within the definition of our state machine:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

"image_1": {
      "Next": "task_finished_choice_step",
      "Catch": [
        {
          "ErrorEquals": [
            "States.ALL"
          ],
          "ResultPath": "$.error_result",
          "Next": "handle_error_step"
        }
      ],
      "Type": "Task",
      "Comment": "It runs a ECS task with scenarios mode image_1  image",
      "InputPath": "$.lambda_result.next_stage",
      "ResultPath": "$.image_1_result",
      "Resource": "arn:aws:states:::ecs:runTask.sync",
      "Parameters": {
        "Cluster": "arn:aws:ecs:{region}:{account_id}:cluster/ifgeek-ecs-cluster",
        "TaskDefinition": "image_1-task-name",
        "NetworkConfiguration": {
          "AwsvpcConfiguration": {
            "Subnets": [
              "{subnet_1}",
              "{subnet_2}",
              "{subnet_3}"
            ],
            "SecurityGroups": [
              "{sg-1}"
            ]
          }
        },
        "Overrides": {
          "ContainerOverrides": [
            {
              "Name": "image_1-container",
              "Command.$": "$.command",
              "Environment": [{
                  "Name": "image_2_USER",
                  "Value": "$.user"
                }]
            }
          ]
        },
        "LaunchType": "FARGATE",
        "PlatformVersion": "LATEST"
      }
    }


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The most important fields in this integration are the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Define a state of type "Task", which represents a single unit of work.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;InputPath selects the input data passed on from the Lambda (plan_next_step), which decides which ECS task has to be executed, what its execution command will be, and which environment variables are configured. Note that in the InputPath field value ("InputPath": "$.lambda_result.next_stage") we have used JsonPath to transfer the values to the ECS task input:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
  "name": "image_1",
  "input": {
    "input_file": "input/uploads/example.jpg",
    "lambda_result": {
      "total_stages_count": 1,
      "next_stage": {
        "image": "image_1",
        "command": ["echo", "hello", "world"],
        "user": "ifgeek"
      },
      "processed_stages_count": 1,
      "config": []
    }
  },
  "inputDetails": {
    "truncated": false
  }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
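&lt;p&gt;For intuition, the JsonPath child selection used in InputPath can be mimicked with a small helper. This is a simplified sketch of dotted child access only, not the full JsonPath language:&lt;/p&gt;

```python
def resolve_json_path(document, path):
    # Simplified sketch of JsonPath child access such as "$.lambda_result.next_stage";
    # it only supports dotted keys, unlike the full JsonPath language.
    current = document
    for key in path.split(".")[1:]:  # skip the leading "$"
        current = current[key]
    return current

# The execution input from the example above.
execution_input = {
    "input_file": "input/uploads/example.jpg",
    "lambda_result": {
        "next_stage": {
            "image": "image_1",
            "command": ["echo", "hello", "world"],
            "user": "ifgeek",
        }
    },
}

next_stage = resolve_json_path(execution_input, "$.lambda_result.next_stage")
```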

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;"Resource": "arn:aws:states:::ecs:runTask.sync": Indicates that the integration is with ECS and that a runTask is executed when it reaches this step and waits for it to finish. There is another resource type: "arn:aws:states:::ecs:runTask.waitForTaskToken" which executes the ECS task and then waits for the task token to be returned.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In Parameters we define the configuration of our ECS cluster, the TaskDefinition we want to run, and the network configuration. In production environments it is advisable to configure several subnets so the task runs multi-AZ.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;However, the most interesting field in terms of configuration is "Overrides", which allows us to overwrite the task configuration, and more specifically "ContainerOverrides", which overwrites the command the container was defined with in the TaskDefinition. It can also be used to modify the values of environment variables, which offers a quick and flexible way to change the configuration.&lt;/p&gt;
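&lt;p&gt;As a sketch, the Overrides block from the definition above could be assembled programmatically like this (the function name and signature are illustrative, not part of the project's code):&lt;/p&gt;

```python
def build_container_overrides(container_name, command, environment):
    # Illustrative helper: assembles the "Overrides" parameter used by the
    # ecs:runTask integration, mirroring the state machine definition above.
    return {
        "ContainerOverrides": [
            {
                "Name": container_name,
                "Command": command,
                "Environment": [
                    {"Name": key, "Value": value} for key, value in environment.items()
                ],
            }
        ]
    }

overrides = build_container_overrides(
    "image_1-container", ["echo", "hello", "world"], {"image_2_USER": "ifgeek"}
)
```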

&lt;ol&gt;
&lt;li&gt;"LaunchType" can be of type "FARGATE" or of type EC2 for our solution we don't need to have a container running continuously so we opted for the serverless solution with Fargate.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In our case we developed an API Gateway to invoke our state machine through an API, but you can also start an execution from the Step Functions console itself.&lt;/p&gt;
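&lt;p&gt;Behind such an API, the request to Step Functions boils down to a StartExecution call. Here is a minimal sketch of the request payload that a boto3 Step Functions client's start_execution would receive (the ARN below is a placeholder):&lt;/p&gt;

```python
import json

def build_start_execution_request(state_machine_arn, input_payload, name=None):
    # Sketch of the kwargs a boto3 Step Functions client's start_execution
    # call receives; "input" must be a JSON string, not a dict.
    request = {
        "stateMachineArn": state_machine_arn,
        "input": json.dumps(input_payload),
    }
    if name:
        request["name"] = name
    return request

request = build_start_execution_request(
    "arn:aws:states:{region}:{account_id}:stateMachine:ifgeek-state-machine",  # placeholder ARN
    {"input_file": "input/uploads/example.jpg"},
)
```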

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcday2cy13q32wlajvt3j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcday2cy13q32wlajvt3j.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
During execution, you can check which step of the state machine you are in, the input and output of the previous steps and whether it finished successfully or failed.&lt;/p&gt;

&lt;p&gt;Examples of successful and failed execution:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6kj5hxk14ykvazxeut6z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6kj5hxk14ykvazxeut6z.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwradm0pw9s9gvx26i8yj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwradm0pw9s9gvx26i8yj.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
In addition to the visual view, a table is displayed with all the statuses and elapsed times, and every integration shows links to the services to make traceability easier. Checking whether the container being executed finished correctly (that is, whether its exit code was different from 0) is delegated to Step Functions, which makes error management and control easier.&lt;/p&gt;
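&lt;p&gt;The success check delegated to Step Functions can be pictured as follows. This is a conceptual sketch over a DescribeTasks-style container list, not actual service code:&lt;/p&gt;

```python
def task_succeeded(containers):
    # Conceptual sketch: Step Functions marks the task as failed when any
    # container exits with a code different from 0 (a missing code counts as failure).
    return all(container.get("exitCode", 1) == 0 for container in containers)

ok = task_succeeded([{"name": "image_1-container", "exitCode": 0}])
failed = task_succeeded([{"name": "image_1-container", "exitCode": 1}])
```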

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdaonultywolu1ig23d4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdaonultywolu1ig23d4.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
AWS ECS Console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4tbvy72suqoq3scchflv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4tbvy72suqoq3scchflv.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Detail of the command we are executing:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovszmqsgc3zt3j5hryxf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovszmqsgc3zt3j5hryxf.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Execution completed with exit code 1:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fez1llvu1xgbfndt4itm5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fez1llvu1xgbfndt4itm5.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As we have seen, integrating services with AWS Step Functions, and more specifically with ECS Fargate, is quick and easy to start using. From this basic design you can enrich it with more services, such as sending notifications with Amazon SNS. You can also exploit the Step Functions and ECS APIs and develop endpoints to track the steps of the state machine. All that is left is to adapt it to each use case: imagination is the limit!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>containers</category>
      <category>stepfunctions</category>
      <category>ecs</category>
    </item>
    <item>
      <title>How to design and create an AWS Serverless API Builder with CDK Python</title>
      <dc:creator>Irene Aguilar</dc:creator>
      <pubDate>Sun, 28 Aug 2022 20:10:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-design-and-create-an-aws-serverless-api-builder-with-cdk-python-4g32</link>
      <guid>https://dev.to/aws-builders/how-to-design-and-create-an-aws-serverless-api-builder-with-cdk-python-4g32</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj86dbydmnduyxzx1r7pi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj86dbydmnduyxzx1r7pi.png" alt="AWS Serverless API Builder: Amazon API Gateway with AWS Lambdas integrations"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless logic tier
&lt;/h2&gt;

&lt;p&gt;One of the best-known serverless patterns is combining Amazon API Gateway and AWS Lambda to create a microservice, which might represent the logic tier of a three-tier architecture. The union of these two services builds a serverless application that is secure, highly available, and scalable; with the serverless approach you are not responsible for server management in any capacity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scale project-architecture
&lt;/h2&gt;

&lt;p&gt;For larger-scale projects, you might think about migrating web applications by associating one API Gateway with one Lambda function if they use frameworks like Flask (for Python). These web frameworks support routing and separate user contexts, which are well suited when the application runs on a web server.&lt;br&gt;
You could take the same approach with a new cloud-native application and add more functionality to one single Lambda.&lt;br&gt;
With this approach, an Amazon API Gateway proxies all requests to the same Lambda function, which handles routing. As the application develops more routes, the Lambda function grows in size (the physical file size of its packages), and deployments of new versions replace the entire function. It also becomes harder for multiple developers to work on the same project.&lt;br&gt;
Moreover, AWS enforces limits on deployment package sizes. It therefore makes sense to break down the parts of the program that are independent by domain into separate serverless functions.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Deployment package (.zip file archive) size&lt;br&gt;
50 MB (zipped, for direct upload)&lt;br&gt;
250 MB (unzipped)&lt;br&gt;
This quota applies to all the files you upload, including layers and custom runtimes.&lt;br&gt;
3 MB (console editor)&lt;/p&gt;
&lt;/blockquote&gt;
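&lt;p&gt;These quotas can be checked before deploying; a small sketch using the numbers quoted above:&lt;/p&gt;

```python
import operator

# AWS Lambda deployment package quotas quoted above, in MB.
DIRECT_UPLOAD_ZIP_LIMIT_MB = 50
UNZIPPED_LIMIT_MB = 250

def package_within_limits(zipped_mb, unzipped_mb):
    # operator.le(a, b) tests that a does not exceed b.
    return operator.le(zipped_mb, DIRECT_UPLOAD_ZIP_LIMIT_MB) and operator.le(
        unzipped_mb, UNZIPPED_LIMIT_MB
    )

fits = package_within_limits(45, 200)
too_big = package_within_limits(60, 200)
```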

&lt;p&gt;Apart from that, if you design every AWS Lambda to deploy with its own Amazon API Gateway endpoint, you will end up with many different URL endpoints, and keeping track of them and updating the different clients becomes a massive hassle at scale.&lt;br&gt;
A better architecture is to take advantage of the native routing functionality available in API Gateway. In many cases you do not need a web framework, which only increases the size of the package. API Gateway validates parameters, reducing the need to check them inside the Lambda code. In addition, you can configure mechanisms for authentication and authorization and other features to lighten the code.&lt;br&gt;
The new architecture looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fah2yi3fxcyyu2fdusd7g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fah2yi3fxcyyu2fdusd7g.png" alt="Amazon API Gateway with AWS Lambdas integrations"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When the approach is clear it is time to develop. The first thing that we need to create is the infrastructure.&lt;/p&gt;
&lt;h2&gt;
  
  
  Automate infrastructure creation
&lt;/h2&gt;

&lt;p&gt;We use infrastructure as code (IaC) to create this architecture. A few of the many benefits of infrastructure as code, as opposed to building applications in the console, are flexibility, repeatability, and reusability.&lt;br&gt;
You have the flexibility to manage all the resources in one place and deploy a single template to any environment if needed. You can consistently deploy the same application in multiple Regions with one command. These characteristics avoid the environment-drift problem. By using infrastructure as code, you can manage your applications more easily in one place.&lt;br&gt;
We will not only use IaC but also the AWS CDK, which lets you build applications in the cloud with the considerable expressive power of a programming language.&lt;/p&gt;
&lt;h2&gt;
  
  
  Serverless API builder
&lt;/h2&gt;

&lt;p&gt;So, we take advantage of a programming language like Python to develop a builder whose function is to automate the creation of the Lambdas and their integration with API Gateway, avoiding boilerplate code.&lt;br&gt;
With this builder, we just need to specify the method and the necessary API resources and configure the Lambda values in our property file (with the power of property files we can set different values for different environments, such as development and production), and our CDK code takes care of creating the Lambda and its integration with API Gateway.&lt;/p&gt;
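&lt;p&gt;The shape of such a property file could look like this. This is a hypothetical sketch; the key names are illustrative and the real ones live in the project's repository:&lt;/p&gt;

```python
# Hypothetical sketch of the per-environment properties the builder consumes;
# key names are illustrative, not the project's exact schema.
properties = {
    "api_gateway": {
        "stack_name": "${env}-api-gateway-stack",
        "integrations": [
            {
                "name": "create-item",
                "method": "POST",
                "base_resource": "items",
                "lambda": {"name": "${env}-create-item", "handler": "create.handler"},
            },
            {
                "name": "delete-item",
                "method": "DELETE",
                "base_resource": "items",
                "lambda": {"name": "${env}-delete-item", "handler": "delete.handler"},
            },
        ],
    }
}

# The builder then only has to loop over the integrations:
methods = [(i["method"], i["base_resource"]) for i in properties["api_gateway"]["integrations"]]
```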

&lt;p&gt;Let's see how!&lt;/p&gt;

&lt;p&gt;Firstly, we will create a Python AWS CDK project, and in the main class app.py, where the stacks are defined, we will create an API Gateway stack along with its properties:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;api_gateway_props = ApiGatewayProps(
    props=properties,
    template=env_template
)

api_gateway_stack = ApiGatewayDPPStack(
    app,
    api_gateway_props=api_gateway_props,
    env=cdk_env,
    tags=tags)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
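&lt;p&gt;The env_template above is presumably a string.Template used to prefix resource names per environment. For intuition, a guess at the convention, not the project's exact code:&lt;/p&gt;

```python
from string import Template

# Guessed convention: the template prefixes every resource name with the environment,
# which is what the template.substitute(name=...) calls in the stack rely on.
env_template = Template("dev-${name}")

stack_id = env_template.substitute(name="api-gateway-stack")
```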



&lt;p&gt;Secondly, we will create an API Gateway stack; a private method, __get_build_integrations, which contains the serverless builder, is invoked:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class ApiGatewayDPPStack(core.Stack):


    def __init__(self, scope: core.Construct, api_gateway_props: ApiGatewayProps, **kwargs) -&amp;gt; None:
        props = api_gateway_props.props
        template = api_gateway_props.template
        id = template.substitute(name=props["api_gateway"]["stack_name"])
        super().__init__(scope, id, stack_name=id, description=props["api_gateway"]["description"], **kwargs)
        self.__create_api_gateway_role(props, template)
        api = api_gateway.RestApi(self, template.substitute(name=props["api_gateway"]["api_rest_name"]),
                                  description=template.substitute(name=props["api_gateway"]["api_rest_description"]),
                                  deploy_options=api_gateway.StageOptions(
                                      stage_name=template.substitute(name=props["api_gateway"]["stage_name"]),
                                      data_trace_enabled=True,
                                      logging_level=api_gateway.MethodLoggingLevel.INFO,
                                      access_log_destination=log_group_destination)
                                  )

        self.__get_build_integrations(
            props,
            api,
            template,
            api_gateway_props
        )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we will develop the __get_build_integrations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def __get_build_integrations(self, props: dict,
                             api: api_gateway.RestApi, template: Template,
                             api_gateway_props: api_gateway_props):
    for integration in props["api_gateway"]["integrations"]:
        name = integration["name"]
        environment, id_prefix, request_schema, response_schema, role =  
    self.build_api_method(api_gateway_props,integration, name, template)
        lambda_integration = lambda_builder.create_lambda(self,
                                                          template,
                                                          integration["lambda"],
                                                          role,
                                                          environment)
method_builder_props = method_props.MethodBuilderProps(response_schema, lambda_integration,name, api, self, integration["base_resource"],integration["method"], id_prefix, request_schema)
api_method.MethodBuilder(method_builder_props)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This method has a loop that iterates over the integrations and needs three functions:&lt;br&gt;
1.- Obtain the properties and resources of the API definition, such as the request and response schemas to be validated.&lt;br&gt;
2.- Create and configure a Lambda with its properties and layers. To make the example more complete, we have created two Lambdas that create and delete an item in Amazon DynamoDB.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def create_lambda(self, template, props, execution_role, environment = None ):

    lambda_name = template.substitute(name=props['name'])
    lambda_function = _lambda.Function(
        self,
        id=lambda_name,
        function_name=lambda_name,
        description=props['description'],
        runtime=_lambda.Runtime.PYTHON_3_8,
        code=_lambda.Code.from_asset(props['code_from_asset']),
        handler=props['handler'],
        memory_size=props['cpu'],
        timeout=core.Duration.minutes(props['timeout']),
        role=execution_role,
        environment=environment
    )
    __add_layer(lambda_function, props, self, template)
    return lambda_function


def __add_layer(lambda_function, props, self, template):
    if "layer" in props:
        layer_name = template.substitute(name=props["layer"]["name"])
        layer = _lambda.LayerVersion(self,
                                     layer_name,
                                     layer_version_name=layer_name,
                                     compatible_runtimes=[_lambda.Runtime.PYTHON_3_8],
                                     code=_lambda.Code.from_asset(props["layer"]["code"]),
                                     description=props["layer"]["description"])
        lambda_function.add_layers(layer)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3.- Add a new method in API Gateway with all required information:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def addMethod(self, baseResource, props, responseModel, requestModel, requestValidator):
    baseResource.add_method(
        http_method=props.httpMethod,
        integration=self.lambdaIntegration,
        request_parameters=props.queryParameters,
        method_responses=[
            api_gateway.MethodResponse(status_code=str(props.responseStatusCode),
                                       response_models=responseModel)],
        request_validator=requestValidator,
        request_models=requestModel,
        operation_name=self.methodName)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All the code of the project is in the following github repository:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ysyzygy/aws-serverless-api-builder-cdk" rel="noopener noreferrer"&gt;https://github.com/ysyzygy/aws-serverless-api-builder-cdk&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The example in the AWS CDK documentation gives a basic idea of the methods you need, but the more complex the API Rest you are building, the more complex the code you need to automate it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cdk/api/v1/python/aws_cdk.aws_apigateway/README.html#aws-lambda-backed-apis" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/cdk/api/v1/python/aws_cdk.aws_apigateway/README.html#aws-lambda-backed-apis&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is all, as always feedback is welcome :)&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>iac</category>
      <category>python</category>
    </item>
  </channel>
</rss>
