<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Opemipo Disu</title>
    <description>The latest articles on DEV Community by Opemipo Disu (@coderoflagos).</description>
    <link>https://dev.to/coderoflagos</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F392943%2F708b2716-2bb6-45ee-9662-ddb0288e3079.JPG</url>
      <title>DEV Community: Opemipo Disu</title>
      <link>https://dev.to/coderoflagos</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/coderoflagos"/>
    <language>en</language>
    <item>
      <title>Are AI Observability Tools Actually Helping?</title>
      <dc:creator>Opemipo Disu</dc:creator>
      <pubDate>Mon, 30 Mar 2026 13:03:35 +0000</pubDate>
      <link>https://dev.to/coderoflagos/are-ai-observability-tools-actually-helping-3337</link>
      <guid>https://dev.to/coderoflagos/are-ai-observability-tools-actually-helping-3337</guid>
      <description>&lt;p&gt;Observability tools have been feeling very different lately.&lt;/p&gt;

&lt;p&gt;Almost every platform now claims to offer some “AI-powered” feature, such as anomaly detection, root cause analysis, automated insights, and even suggested fixes.&lt;/p&gt;

&lt;p&gt;But I’m not sure how much of it is actually useful in real-world workflows.&lt;/p&gt;

&lt;p&gt;From what I’ve seen, most teams still deal with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Too many alerts
&lt;/li&gt;
&lt;li&gt;Jumping between logs, metrics, and traces
&lt;/li&gt;
&lt;li&gt;Spending so much time figuring out root causes &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And even with AI features, a lot of tools still feel like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Here’s more data… just slightly reorganized”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At the same time, there &lt;em&gt;are&lt;/em&gt; some interesting improvements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;automatic correlation between signals
&lt;/li&gt;
&lt;li&gt;faster incident investigation
&lt;/li&gt;
&lt;li&gt;less manual digging in some cases
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So it’s not all hype, but it also doesn’t feel like a complete shift yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Curious about real usage
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Are you actually using AI features in your observability stack?
&lt;/li&gt;
&lt;li&gt;Has it reduced alert fatigue at all?
&lt;/li&gt;
&lt;li&gt;Or are you mostly ignoring those features?
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I recently looked into this while comparing a bunch of AI-powered observability tools and how they’re evolving.&lt;/p&gt;

&lt;p&gt;If anyone’s interested in the full breakdown, I put it here:&lt;br&gt;&lt;br&gt;
&lt;a href="https://metoro.io/blog/best-observability-tools-with-ai" rel="noopener noreferrer"&gt;https://metoro.io/blog/best-observability-tools-with-ai&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;Feels like we’re in that phase where:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The idea is solid, but the execution is still catching up.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It would be interesting to hear what others are seeing in production.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>devops</category>
      <category>sre</category>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>Opemipo Disu</dc:creator>
      <pubDate>Sat, 20 Dec 2025 13:45:53 +0000</pubDate>
      <link>https://dev.to/coderoflagos/-33bm</link>
      <guid>https://dev.to/coderoflagos/-33bm</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/coderoflagos" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F392943%2F708b2716-2bb6-45ee-9662-ddb0288e3079.JPG" alt="coderoflagos"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/coderoflagos/a-practical-guide-to-building-your-first-automation-workflow-4k7l" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;A Practical Guide to Building Your First Automation Workflow&lt;/h2&gt;
      &lt;h3&gt;Opemipo Disu ・ Dec 19&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#programming&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#beginners&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#tutorial&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>programming</category>
      <category>ai</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>A Practical Guide to Building Your First Automation Workflow</title>
      <dc:creator>Opemipo Disu</dc:creator>
      <pubDate>Fri, 19 Dec 2025 11:38:08 +0000</pubDate>
      <link>https://dev.to/coderoflagos/a-practical-guide-to-building-your-first-automation-workflow-4k7l</link>
      <guid>https://dev.to/coderoflagos/a-practical-guide-to-building-your-first-automation-workflow-4k7l</guid>
      <description>&lt;p&gt;Recently, I’ve been exploring the world of automation basically because I got tired of working with tools and handling tasks manually by writing scripts to make things happen. However, I wanted an easier approach; something visual and flexible, and my search led me to the world of automation workflows.&lt;/p&gt;

&lt;p&gt;Along the way, I found a life-changing tool for workflow automation, ByteChef, which let me integrate different tools and build workflows in a straightforward way. &lt;/p&gt;

&lt;h2&gt;
  
  
  What is ByteChef?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://bytechef.io" rel="noopener noreferrer"&gt;Bytechef&lt;/a&gt; is an open-source workflow automation and integration tool that lets you work with APIs and tools; it’s a tool that visually and seamlessly orchestrates connections with your favorite tools. With ByteChef, you can easily work with your best tools without necessarily writing a single line of code. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F44sn2dpvbrofyjm1vgr6.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F44sn2dpvbrofyjm1vgr6.gif" alt="Easy GIF"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, instead of manually working with data from different tools, or creating complex processes, ByteChef provides an interactive interface and robust capabilities to bring your automation ideas to life.&lt;/p&gt;

&lt;p&gt;ByteChef’s capabilities aren't limited to building workflow automations; it also enables you to work with the APIs of your choice. For example, you can embed integrations into your own application with ByteChef. In this blog post, however, we’ll focus on the workflow automation feature.&lt;/p&gt;

&lt;p&gt;ByteChef can be run in two ways: &lt;strong&gt;cloud&lt;/strong&gt; (we manage it for you) and &lt;strong&gt;self-hosted&lt;/strong&gt; (you run it on Docker). Cloud is easier to start with. If you want to self-host, check the &lt;a href="https://github.com/bytechefhq/bytechef" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt; for setup steps. For this guide, we'll use the cloud approach, but everything works the same on both.&lt;/p&gt;

&lt;p&gt;Before moving on, I recently published a YouTube video that walks through creating your first workflow in ByteChef. It covers what ByteChef is and how to work with it; you can see it as a video version of this blog post.&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/Bs_AnyxmcEk"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features of ByteChef
&lt;/h3&gt;

&lt;p&gt;Beyond workflow automation itself, ByteChef offers other great features that support it. In this section of the article, we’ll look at the features that make up workflow automation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Multi-Application Integration&lt;/strong&gt;: With ByteChef, you can work with over 200 tools including Gmail, Slack, Google Drive, and OpenAI. The beauty of this integration is that you're not locked into a single ecosystem. Whether you're using productivity tools like Google Workspace, communication platforms like Slack, or AI services like OpenAI, ByteChef connects them all seamlessly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Visual Workflow Builder&lt;/strong&gt;: ByteChef lets you design and configure workflows using an interactive interface without writing code. This doesn’t mean you aren’t allowed to write code; while using Bytechef, you don’t need to write complex code - in fact, writing code is optional when you’re using ByteChef. The main work while using ByteChef is just adding components, triggers, and configuring them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Flow Logic&lt;/strong&gt;: ByteChef enables you to create smart workflows with branching logic, conditions, and decision points. This is what transforms workflows from simple sequences into smart automation. You can create complex decision trees with multiple conditions; ByteChef offers several flow types:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Condition&lt;/strong&gt;: A very basic flow; it checks something (for example, whether an email is urgent) and goes left if true, right if false. Pretty simple.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Branch&lt;/strong&gt;: This flow is very similar to condition, but handles more than just true/false. You can have multiple different paths.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Each&lt;/strong&gt;: When you have a list of things, this flow runs the same steps for each item. For example, if you have 10 emails, it processes all 10 the same way.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Loop (In Progress)&lt;/strong&gt;: Repeats the same thing over and over until something changes. Useful if you need to keep trying until it works.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Map (In Progress)&lt;/strong&gt;: It takes a list and transforms every item in it. Changes the format or adds information to each item.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fork/Join&lt;/strong&gt;: It helps run multiple things at the same time instead of one after another. It also saves time by doing work in parallel.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Parallel&lt;/strong&gt;: This flow is quite similar to fork/join. It runs multiple tasks at once.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Error Handler&lt;/strong&gt;: If something breaks, this flow catches it and does something else instead of crashing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Wait for Approval&lt;/strong&gt;: Pauses and waits for a user’s approval before moving forward. Good for important decisions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Subflow (In Progress)&lt;/strong&gt;: This flow calls another workflow from within your workflow and helps organize complex workflows into smaller pieces.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For the conditions themselves, ByteChef uses &lt;a href="https://docs.spring.io/spring-framework/docs/3.0.x/reference/expressions.html" rel="noopener noreferrer"&gt;&lt;strong&gt;SpEL (Spring Expression Language)&lt;/strong&gt;&lt;/a&gt; to create and manage them. You can do simple things like checking whether one value equals another, or more complex things like math. Examples of condition expressions are &lt;code&gt;anthropic_1.urgent == true&lt;/code&gt; and &lt;code&gt;email.from == 'boss@company.com'&lt;/code&gt;.&lt;/p&gt;
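&lt;p&gt;As a rough sketch (the references here, such as &lt;code&gt;anthropic_1.score&lt;/code&gt;, are hypothetical and depend on the components in your own workflow), SpEL also lets you combine checks. Its textual operators &lt;code&gt;and&lt;/code&gt;, &lt;code&gt;or&lt;/code&gt;, and &lt;code&gt;gt&lt;/code&gt; behave like their symbolic counterparts:&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;trigger_1.subject.contains('invoice') and anthropic_1.urgent == true
anthropic_1.score gt 0.8 or email.from == 'boss@company.com'
&lt;/code&gt;&lt;/pre&gt;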

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Trigger-Based Automation&lt;/strong&gt;: This complements workflows by taking actions automatically based on certain events, for example new emails, form submissions, or scheduled times.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Transformation&lt;/strong&gt;: Process and transform data as it moves between applications using built-in tools.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Integration&lt;/strong&gt;: Leverage AI capabilities to make your workflows smarter.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Core Concepts Of ByteChef
&lt;/h2&gt;

&lt;p&gt;Before building your first workflow, you need to know the basic pieces that make up a workflow in ByteChef. Let’s have a look at them in this section of the blog post:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Triggers&lt;/strong&gt;: These are the events that start your workflow. Examples include a new email arriving, a form being submitted, or a scheduled time being reached. Triggers are the "when" of your automation; they initiate it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Actions&lt;/strong&gt;: Actions are the tasks each component performs, such as sending a message, creating a file, analyzing text with AI, or updating a record. The actions available depend on the component you’re working with, and some components even let you create custom actions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Components&lt;/strong&gt;: These are the building blocks, also referred to as nodes. Each component connects to an external app (like OpenAI, Google Drive, or even Spotify) and lets you use that app within a workflow. Components have two parts, &lt;strong&gt;triggers&lt;/strong&gt; and &lt;strong&gt;actions&lt;/strong&gt;; to work with a component, you connect to the tool and configure its properties.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Projects&lt;/strong&gt;: Projects are like folders that hold your workflows - it’s a place where you can keep all your workflows. A project can hold as many workflows as you want.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Workflows&lt;/strong&gt;: In ByteChef, workflows are the actual automation. A workflow is a series of triggers and steps that work together to automate something; it is the automation process itself.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Connections&lt;/strong&gt;: ByteChef uses connections to work with services. They’re authentication links to third-party applications, allowing ByteChef to interact with your accounts securely. You connect a service once, and ByteChef remembers it so your workflows can use it.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are the major concepts of ByteChef and all you need to know before building your first workflow. Now, let’s put what we know to work!&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up Your First Workflow: An Email and AI Triage Example
&lt;/h2&gt;

&lt;p&gt;Now, let’s build an actual workflow. In this guide, we will build an automated email system that checks incoming emails for urgency and routes them to different Slack channels, using a conditional flow and Anthropic's AI for smart decision making. This workflow showcases the key concepts you'll use in building both basic and complex automations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Create a New Project
&lt;/h3&gt;

&lt;p&gt;As mentioned earlier, a project in ByteChef is the folder where the workflow lives.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the dashboard, head over to the &lt;strong&gt;Projects&lt;/strong&gt; tab&lt;/li&gt;
&lt;li&gt;Click on the &lt;strong&gt;New Project&lt;/strong&gt; button at the top right corner.&lt;/li&gt;
&lt;li&gt;Fill in the necessary details, such as its name, category, description, and tags.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxur8gruhncyf0bewu1bx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxur8gruhncyf0bewu1bx.png" alt="Create project modal via ByteChef’s Dashboard."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create project modal via ByteChef’s Dashboard.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Create a New Workflow
&lt;/h3&gt;

&lt;p&gt;When you’re done creating your project, create a new workflow by clicking the &lt;strong&gt;+ Workflow&lt;/strong&gt; button in the project’s pane. A modal should pop up requesting the name and description of your workflow; the workflow will be saved in the project where it was created. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7o65vs1yf60643jfw7s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7o65vs1yf60643jfw7s.png" alt="Create new workflow modal via ByteChef’s Dashboard."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create new workflow modal via ByteChef’s Dashboard.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Set Up Your Trigger
&lt;/h3&gt;

&lt;p&gt;After creating the workflow, the first step is to configure its trigger. A configurable &lt;strong&gt;manual trigger&lt;/strong&gt; is the first thing you see in the workflow; we’ll replace it and configure our own.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click the replace icon when you hover on the trigger button&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Since we’re working with email, search for and select "Gmail" as the replacement&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the trigger type: "New Email"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Connect your Gmail account via OAuth.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure the trigger parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Format: Simple (for straightforward configuration)&lt;/li&gt;
&lt;li&gt;Topic Name: Based on what is configured in your Google Pub/Sub.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This setup means your workflow will be activated whenever a new email arrives.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Add a new component
&lt;/h3&gt;

&lt;p&gt;Now that emails are triggering your workflow, let's add the Anthropic component to determine if each email is urgent.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click the + icon button to add a new task&lt;/li&gt;
&lt;li&gt;Search for and select "&lt;strong&gt;Anthropic&lt;/strong&gt;" (the AI provider)&lt;/li&gt;
&lt;li&gt;Connect Anthropic using your API key&lt;/li&gt;
&lt;li&gt;Select "Ask" as the action type&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure the AI task in the properties pane:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Model: Choose any Anthropic models of your choice&lt;/li&gt;
&lt;li&gt;Format: Simple&lt;/li&gt;
&lt;li&gt;Response Format: Structured Data&lt;/li&gt;
&lt;li&gt;Max Tokens: The maximum number of tokens to generate in the chat completion.&lt;/li&gt;
&lt;li&gt;User Prompt: Create an instruction like:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;Analyze this email and determine if it is urgent with "This is urgent" or "This is not urgent"&lt;/span&gt;
&lt;span class="na"&gt;Email Subject&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${trigger_1.subject}&lt;/span&gt;
&lt;span class="na"&gt;Email From&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${trigger_1.from}&lt;/span&gt;
&lt;span class="na"&gt;Email Body&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${trigger_1.bodyPlain}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;${trigger_1}&lt;/code&gt; syntax references data from your Gmail trigger.&lt;/li&gt;
&lt;li&gt;In the Anthropic component, create a schema for the incoming conditional flow.&lt;/li&gt;
&lt;li&gt;Add a new data pill and select &lt;strong&gt;Boolean&lt;/strong&gt; as the pill type - give it a title.&lt;/li&gt;
&lt;/ul&gt;
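&lt;p&gt;Assuming you titled the boolean pill &lt;code&gt;isUrgent&lt;/code&gt; (the name is up to you; the exact shape depends on the schema you build), the structured data Claude returns would look something like this:&lt;/p&gt;

&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "isUrgent": true
}
&lt;/code&gt;&lt;/pre&gt;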

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dbhtcy3abcuwodz6vas.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dbhtcy3abcuwodz6vas.png" alt="Add new data pill"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This action sends each email to Claude for analysis, which will respond with a clear classification.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Add Conditional Flow
&lt;/h3&gt;

&lt;p&gt;With the urgency classified in the Anthropic component, now we need to route emails to different destinations based on the result.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click the add icon in the workflow&lt;/li&gt;
&lt;li&gt;Select "Condition" in the flows tab as the flow type&lt;/li&gt;
&lt;li&gt;Configure the condition:

&lt;ul&gt;
&lt;li&gt;Set "Raw Expression" to TRUE to enable custom logic&lt;/li&gt;
&lt;li&gt;Enter the expression &lt;code&gt;anthropic_1.isUrgent&lt;/code&gt;, the boolean we created in the schema builder&lt;/li&gt;
&lt;li&gt;This checks whether the AI classified the email as urgent.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This creates a branch point in your workflow. Depending on whether the condition is true or false, different actions will execute.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6: Define the True Branch (Urgent Emails)
&lt;/h3&gt;

&lt;p&gt;Here’s what to do for emails classified as urgent:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Under the "True" section, add a new component&lt;/li&gt;
&lt;li&gt;Search and select the "Slack" component&lt;/li&gt;
&lt;li&gt;Choose "Send Channel Message" for the action&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Give the component a name&lt;/li&gt;
&lt;li&gt;Channel: Select your "urgent-emails" channel (or create a channel where you want urgent emails to be delivered)&lt;/li&gt;
&lt;li&gt;Message: Create an informative message like:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;URGENT EMAIL&lt;/span&gt;

&lt;span class="nv"&gt;*From&lt;/span&gt;&lt;span class="s"&gt;:* ${trigger_1.from}&lt;/span&gt;
&lt;span class="nv"&gt;*Subject&lt;/span&gt;&lt;span class="s"&gt;:* ${trigger_1.subject}&lt;/span&gt;
&lt;span class="nv"&gt;*Classification&lt;/span&gt;&lt;span class="s"&gt;:*&lt;/span&gt; 
&lt;span class="nv"&gt;*Reason&lt;/span&gt;&lt;span class="s"&gt;:*&lt;/span&gt; 

&lt;span class="s"&gt;Requires immediate attention&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step 7: Define the False Branch (Non-Urgent Emails)
&lt;/h3&gt;

&lt;p&gt;For emails classified as not urgent:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the "Case False" section, add a new task&lt;/li&gt;
&lt;li&gt;Select "Slack" and "Send Channel Message"&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Give the component a name&lt;/li&gt;
&lt;li&gt;Channel: Select your "routine-emails" channel (or any other channel where you want unimportant messages to be delivered)&lt;/li&gt;
&lt;li&gt;Message: Create a message like:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;New Email Received&lt;/span&gt;

&lt;span class="nv"&gt;*From&lt;/span&gt;&lt;span class="s"&gt;:* ${trigger_1.from}&lt;/span&gt;
&lt;span class="nv"&gt;*Subject&lt;/span&gt;&lt;span class="s"&gt;:* ${trigger_1.subject}&lt;/span&gt;
&lt;span class="nv"&gt;*Classification&lt;/span&gt;&lt;span class="s"&gt;:*&lt;/span&gt; 
&lt;span class="nv"&gt;*Reason&lt;/span&gt;&lt;span class="s"&gt;:*&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, your workflow should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8gmc5z2n47o4hrfn6tt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8gmc5z2n47o4hrfn6tt.png" alt="Workflow result"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 8: Test Your Workflow
&lt;/h3&gt;

&lt;p&gt;Before activating your workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Review the entire flow to ensure it makes sense&lt;/li&gt;
&lt;li&gt;Click the "Test" button at the top-right corner of the workflow to test it with sample data&lt;/li&gt;
&lt;li&gt;Check that the flow follows the correct path&lt;/li&gt;
&lt;li&gt;Verify that messages are formatted correctly&lt;/li&gt;
&lt;li&gt;When testing, you should see something like this in the Slack channel where you want non-urgent messages to be delivered:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;New Email Received&lt;/span&gt;
&lt;span class="na"&gt;From&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample from&lt;/span&gt;
&lt;span class="na"&gt;Subject&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample subject&lt;/span&gt;
&lt;span class="na"&gt;Classification&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;Reason&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 9: Publish and Deploy
&lt;/h3&gt;

&lt;p&gt;Once you're satisfied with the workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click the &lt;strong&gt;Publish&lt;/strong&gt; button&lt;/li&gt;
&lt;li&gt;Navigate to the &lt;strong&gt;Deployments&lt;/strong&gt; tab in the dashboard&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;New Deployment&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Select the project you want to deploy, the version, and add other necessary credentials&lt;/li&gt;
&lt;li&gt;Ensure the connections are configured properly&lt;/li&gt;
&lt;li&gt;Enable the workflow&lt;/li&gt;
&lt;li&gt;Send a test urgent/non-urgent email to your configured email address&lt;/li&gt;
&lt;li&gt;Monitor the configured Slack channels to verify emails are being routed correctly&lt;/li&gt;
&lt;li&gt;Check the workflow logs to see execution history and identify any issues&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If a message is urgent, it’ll drop in the expected channel in this format:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdcht8t3rwot3kc3ibkw3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdcht8t3rwot3kc3ibkw3.png" alt="Result in Slack"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, to monitor the deployment, you can check your logs in the workflow executions section of the dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3utq54lw7qfcfzoh8koz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3utq54lw7qfcfzoh8koz.png" alt="ByteChef logs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that you have your first workflow running, there are ways to make the most of it. You can add error handling to catch failures, so if something breaks, your workflow knows what to do instead of just stopping. &lt;/p&gt;

&lt;p&gt;Beyond that, you can try creating a loop to process multiple emails at once if you want to batch operations, or even instruct your LLM to perform sentiment detection and content categorization to make your workflow even smarter.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;ByteChef enables you to automate routine tasks and create smart workflows that save time and reduce errors. Your first workflow is just the beginning. The best way to understand the platform is to play around with it; as you become more comfortable, you'll discover countless ways to streamline your work and connect your favourite applications.&lt;/p&gt;

&lt;p&gt;The path to successful automation is starting simple, testing thoroughly, and gradually building more complex workflows as your confidence and needs grow.&lt;/p&gt;

&lt;p&gt;Thanks for taking the time to learn about workflow automation with ByteChef in this blog post. Please do try &lt;a href="https://bytechef.io" rel="noopener noreferrer"&gt;ByteChef&lt;/a&gt; and &lt;a href="https://discord.com/invite/VKvNxHjpYx" rel="noopener noreferrer"&gt;join the community&lt;/a&gt;; we can’t wait to have you.&lt;/p&gt;

&lt;p&gt;Lastly, if you ever have any issues navigating ByteChef, refer to the &lt;a href="https://docs.bytechef.io" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpw40fxsns9gfgvpkpbmh.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpw40fxsns9gfgvpkpbmh.gif" alt="Thank you for readinng"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy automating! 🛠️&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Incident Event Pipelines for Real-Time Notifications with Windmill and Checkly</title>
      <dc:creator>Opemipo Disu</dc:creator>
      <pubDate>Fri, 17 Oct 2025 13:10:38 +0000</pubDate>
      <link>https://dev.to/coderoflagos/incident-event-pipelines-for-real-time-notifications-with-windmill-and-checkly-2b2h</link>
      <guid>https://dev.to/coderoflagos/incident-event-pipelines-for-real-time-notifications-with-windmill-and-checkly-2b2h</guid>
<description>&lt;p&gt;When applications and APIs experience downtime, customers are affected until the broken operation is fixed. Engineers do their part by resolving the issue, but a big question often comes up in engineering teams: how do you tell users exactly what is happening without spamming them or sending duplicate messages? That's where building a well-structured incident notification pipeline comes in.&lt;/p&gt;

&lt;p&gt;In this article, we'll explore how to build an event pipeline for real-time notifications using Windmill. Monitoring tools like Sentry, New Relic, and Grafana are great at generating raw incidents, but turning those raw incidents into clear, reliable alerts for users means you need to orchestrate them, clean them up, and make sure they always get delivered.&lt;/p&gt;

&lt;p&gt;We will show how to use Windmill to gather incident events from a monitoring tool, apply routing rules, and deliver notifications through various channels. By the end of this piece, you'll know how to orchestrate a production-ready pattern that ensures updates reach users without duplication or delay.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jbv817y01z2to4qd81r.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jbv817y01z2to4qd81r.gif" alt="This is not an alert from Windmill, please. 😂" width="600" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is not an alert from Windmill, please. 😂&lt;/p&gt;

&lt;h2&gt;
  
  
  System Architecture: From Incidents to Alerts
&lt;/h2&gt;

&lt;p&gt;Now that we’ve set the context and had a little laugh with that caption 😅, let’s get straight to the point and look at how this whole system fits together. The goal of this workflow is simple: when something breaks within an application, users should be able to get real-time alerts only once. To make this work, we will integrate a monitoring tool with Windmill for orchestration and notification delivery. Now, let's have a deeper look at the major components of this workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitoring Tool&lt;/strong&gt;: While there are many tools such as Grafana, Sentry, and Prometheus, we will be using &lt;a href="https://checklyhq.com/" rel="noopener noreferrer"&gt;Checkly&lt;/a&gt; to detect when something goes wrong within an application. These tools can fire a webhook or alert when things break.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Windmill&lt;/strong&gt;: Windmill sits in the middle of the workflow as the orchestrator. It ingests incident events from the monitoring tool (Checkly, in this context) and normalizes them into a consistent schema so every alert reaches the right destination. Windmill also decides where each type of alert goes (critical, warning, or minor), and enforces reliability and deduplication of alerts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Notification Channels&lt;/strong&gt;: Once the orchestration with Windmill is done, it can deliver notifications through various channels like Slack, email, SMS, or webhook endpoints to other services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Feedback Loop&lt;/strong&gt;: Some alerts might not appear at the expected destination the first time. Windmill allows you to implement retry flows to reattempt delivery or escalate the issue to another channel.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
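&lt;p&gt;The components above can be sketched as a simple pipeline. The Python below is a conceptual sketch of how normalization and routing fit together - the function names are illustrative, not Windmill APIs:&lt;/p&gt;

```python
# Conceptual sketch of the pipeline stages described above.
# Function names are illustrative, not Windmill APIs.

def normalize(payload: dict) -> dict:
    """Map the monitoring tool's raw payload to a consistent schema."""
    return {
        "service": payload.get("check_name", "Unknown Service"),
        "severity": "critical" if payload.get("status") == "failed" else "info",
        "message": payload.get("error", "Check completed"),
        "incident_id": payload.get("run_id"),
    }

def route(incident: dict) -> list:
    """Decide which channels an incident should go to based on severity."""
    if incident["severity"] == "critical":
        return ["sms", "slack"]
    if incident["severity"] == "warning":
        return ["slack", "email"]
    return ["log"]

def handle_incident(payload: dict) -> dict:
    """Normalize an incoming event, then pick its delivery channels."""
    incident = normalize(payload)
    return {"incident": incident, "channels": route(incident)}
```

&lt;p&gt;Each stage maps to one step in the Windmill flow we build in the rest of this article.&lt;/p&gt;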

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxnnzhc5xf671i8p161x6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxnnzhc5xf671i8p161x6.png" alt="Windmill flow" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Gathering Events From Checkly in Windmill
&lt;/h2&gt;

&lt;p&gt;The first step of this workflow is collecting incident events from the monitoring tool. Since we're using Checkly as our monitoring tool, we'll integrate Checkly with Windmill for orchestration and notification delivery whenever an error or downtime occurs within an application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Create a Webhook in Windmill
&lt;/h3&gt;

&lt;p&gt;Windmill lets you expose a flow as a webhook, which Checkly can call whenever something breaks. We'll configure the webhook trigger in Windmill first, since Checkly's webhook configuration will need the resulting endpoint.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a Windmill account and log into your Windmill workspace via &lt;a href="https://app.windmill.dev/" rel="noopener noreferrer"&gt;Windmill Cloud&lt;/a&gt; or the &lt;a href="https://www.windmill.dev/docs/advanced/self_host" rel="noopener noreferrer"&gt;self-hosted versions&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;In your Windmill dashboard, create a new &lt;strong&gt;Flow&lt;/strong&gt; from the Home tab.&lt;/li&gt;
&lt;li&gt;In the flow editor, give the flow a name and add a trigger.&lt;/li&gt;
&lt;li&gt;Select the &lt;strong&gt;Webhook Trigger&lt;/strong&gt; option.

&lt;ul&gt;
&lt;li&gt;This automatically generates an endpoint that accepts &lt;code&gt;POST&lt;/code&gt; requests.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Next, inspect the incoming data and add a step to log the payload (so you can see exactly what Checkly sends).&lt;/li&gt;
&lt;li&gt;Add a preprocessor module to handle incoming webhook events (click the "&lt;strong&gt;+&lt;/strong&gt;" button after the trigger node) - once this is done, select &lt;strong&gt;TypeScript&lt;/strong&gt; as the language.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faulo5wsuchnej1qbe3jv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faulo5wsuchnej1qbe3jv.png" alt="Preprocessor with TypeScript" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the preprocessor editor, you can access the webhook’s payload. Replace the template with the following code, which extracts the fields we need so you can confirm that the webhook and payload are working properly:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;preprocessor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;webhook&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Record&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="p"&gt;{};&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;check_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;check_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;region&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;run_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;run_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;timestamp&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;toISOString&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script above receives the JSON payload from Checkly and returns the fields we care about in a consistent shape. If you test the step, you should see something like this:&lt;br&gt;
&lt;/p&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2dg2ub3yo90chxtk1vth.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2dg2ub3yo90chxtk1vth.png" alt="Webhook's test step on Windmill" width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Copy your Windmill webhook endpoint: reopen the Trigger node and you’ll see a URL like this:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;https://app.windmill.dev/api/w/[workspace]/jobs/run_wait_result/[flow_path]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
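&lt;p&gt;Before wiring Checkly up, you can simulate its call yourself to confirm the trigger works end to end. This is a minimal sketch: the payload fields match what our preprocessor reads, while the webhook URL and token are placeholders for your own workspace (Windmill webhook endpoints expect an auth token):&lt;/p&gt;

```python
import json
import urllib.request
from datetime import datetime, timezone

def build_test_payload() -> dict:
    # Sample incident shaped like the fields our preprocessor reads.
    return {
        "check_name": "coderoflagos-check",
        "status": "failed",
        "error": "Timeout on /register route",
        "region": "eu-west-1",
        "run_id": "run_8472",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def send_test_event(webhook_url: str, token: str) -> None:
    # POST the sample payload to the Windmill webhook endpoint.
    # webhook_url and token are placeholders for your own workspace.
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(build_test_payload()).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
    urllib.request.urlopen(req)
```

&lt;p&gt;If the flow run shows up in Windmill with these fields, the trigger and preprocessor are wired correctly.&lt;/p&gt;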






&lt;h3&gt;
  
  
  Step 2: Configure Checkly to Send Incidents
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://app.checklyhq.com/signup" rel="noopener noreferrer"&gt;Create a Checkly account&lt;/a&gt; and log into &lt;a href="https://app.checklyhq.com/" rel="noopener noreferrer"&gt;Checkly’s dashboard&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Head over to the &lt;strong&gt;Alerts Channels&lt;/strong&gt; tab.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the &lt;strong&gt;Add more channels&lt;/strong&gt; button and select the &lt;strong&gt;Webhook&lt;/strong&gt; option.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Give the Webhook instance a name and paste the Windmill webhook URL you created earlier (be sure to use the POST method).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Save the webhook instance on Checkly. From then on, whenever one of your monitored services fails, Checkly will instantly send its JSON payload, with all its fields, to Windmill.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fea77t4q4n6eilxo4o5ni.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fea77t4q4n6eilxo4o5ni.png" alt="Checkly's Flow" width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Orchestrating Incidents in Windmill
&lt;/h2&gt;

&lt;p&gt;Raw incidents aren’t enough; they’re often noisy and inconsistent, which is why they need to be cleaned up and routed properly. In Windmill, workflows act as building blocks for routing alerts and giving them context. Once Checkly pushes an incident payload, Windmill takes over to make sense of it.&lt;/p&gt;

&lt;p&gt;A basic orchestration pattern includes the following components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Normalization&lt;/strong&gt;: Convert Checkly’s raw JSON payload into a consistent schema. Each monitoring tool has its own payload format, and normalization ensures every incident follows a predictable shape like this:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"service"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"coderoflagos-check"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"severity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"critical"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Timeout on /register route"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"region"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"eu-west-1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"incident_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"run_8472"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This makes it easier for downstream steps, like the Slack notification, to process alerts without extra parsing logic.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deduplication&lt;/strong&gt;: Incidents can be noisy. For example, if the same check fails 10 times within a minute, you don't want users to receive 10 alerts for it. In Windmill, you can store or reference previous incidents in a temporary key-value store or cache and check whether an identical incident already exists before sending another one.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Routing Rules&lt;/strong&gt;: Not all incidents are the same; they fall into different classes. You can build conditional logic in your Windmill flow to route alerts based on severity, for example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Critical&lt;/strong&gt;: Send immediately via SMS and urgent channels.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Warning&lt;/strong&gt;: Forward via email or Slack only.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Info:&lt;/strong&gt; Log silently.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach ensures developers and end-users get only the level of attention an event actually warrants.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;Retries and Recovery&lt;/strong&gt;: Delivery sometimes fails. For example, a notification service might return a network timeout. Windmill allows you to implement retry flows to reattempt delivery or escalate the issue to another channel.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;
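&lt;p&gt;Here’s a minimal sketch of the deduplication idea in Python. It uses an in-process dictionary with a time window as a stand-in for a real key-value store - in an actual flow you’d persist this state between runs:&lt;/p&gt;

```python
import time

# In-process stand-in for a persistent key-value store.
# In a real flow, persist this state between runs instead.
_seen: dict = {}
DEDUP_WINDOW_SECONDS = 300  # suppress repeats for 5 minutes

def should_notify(incident: dict, now: float = None) -> bool:
    """Return True only the first time an incident key is seen inside the window."""
    if now is None:
        now = time.time()
    # The same check failing with the same error counts as a duplicate.
    key = (incident.get("service"), incident.get("message"))
    last_sent = _seen.get(key)
    if last_sent is None or now - last_sent > DEDUP_WINDOW_SECONDS:
        _seen[key] = now
        return True
    return False
```

&lt;p&gt;The same key can also drive escalation: if a duplicate keeps arriving after the window expires, you might raise its severity instead of resending the same alert.&lt;/p&gt;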

&lt;h2&gt;
  
  
  Example normalization step in Windmill
&lt;/h2&gt;

&lt;p&gt;Once your flow receives a payload from Checkly, the next step is to normalize it into your schema and prepare it for notification delivery. Here's how to achieve this:&lt;/p&gt;

&lt;p&gt;After your preprocessor, add a new code step that handles normalization:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click the &lt;strong&gt;"+"&lt;/strong&gt; button after your preprocessor.&lt;/li&gt;
&lt;li&gt;Select the &lt;strong&gt;Code&lt;/strong&gt; option and choose &lt;strong&gt;Python&lt;/strong&gt; as the language.&lt;/li&gt;
&lt;li&gt;Give the step a unique name.&lt;/li&gt;
&lt;li&gt;Then add the following code to the step:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;flow_input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="dl"&gt;"""&lt;/span&gt;&lt;span class="s2"&gt;
    Normalize Checkly data into a consistent incident schema
    &lt;/span&gt;&lt;span class="dl"&gt;"""&lt;/span&gt;
    &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Normalize&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;incident&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;
    &lt;span class="nx"&gt;normalized&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;service&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;flow_input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;check_name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Unknown Service&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;severity&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;critical&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;flow_input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;status&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;failed&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;info&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;message&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;flow_input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;error&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Check completed&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;region&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;flow_input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;region&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;unknown&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;incident_id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;flow_input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;run_id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;timestamp&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;flow_input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;timestamp&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;source&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;checkly&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;f&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Normalized incident: {normalized}&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;normalized&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This step takes the preprocessed Checkly data and normalizes it into a consistent format that can be used by downstream notification steps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Delivering Notifications
&lt;/h2&gt;

&lt;p&gt;After normalization, you can add steps to deliver notifications through various channels. Here's how Windmill transforms your Checkly incidents into actionable notifications:&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Add Notification Delivery Steps
&lt;/h3&gt;

&lt;p&gt;You can add multiple notification steps based on your needs. Each step receives the normalized incident data and delivers it through a specific channel:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Slack Notification Step:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add another code step after your normalization step&lt;/li&gt;
&lt;li&gt;Choose Python as the language&lt;/li&gt;
&lt;li&gt;Add &lt;code&gt;slack_webhook_url&lt;/code&gt; as a parameter&lt;/li&gt;
&lt;li&gt;Here’s an example of what the code step should contain:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;requests&lt;/span&gt;

&lt;span class="nx"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;incident&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;slack_webhook_url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="dl"&gt;"""&lt;/span&gt;&lt;span class="s2"&gt;
    Send incident notification to Slack
    &lt;/span&gt;&lt;span class="dl"&gt;"""&lt;/span&gt;
    &lt;span class="nx"&gt;severity_emoji&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;🚨&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;incident&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;severity&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;critical&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;⚠️&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

    &lt;span class="nx"&gt;slack_message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;f&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;{severity_emoji} Incident Alert&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;blocks&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;section&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mrkdwn&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;f&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;*{incident['service']}* is experiencing issues&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;*Error:* {incident['message']}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;*Severity:* {incident['severity']}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;*Region:* {incident['region']}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;*Incident ID:* {incident['incident_id']}&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;slack_webhook_url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;slack_message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;status&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;success&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status_code&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;error&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;channel&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;slack&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;incident_id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;incident&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;incident_id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;message_sent&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;f&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Notified team about {incident['service']} incident&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach ensures that even non-critical incidents still reach your engineering team without flooding inboxes with repeated emails.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Windmill makes it easy to send alerts from Checkly to your team or product users. You can use it to receive alerts, clean the data, and forward notifications via Slack, email, or any other destination you prefer. Everything runs automatically, so your team can focus on fixing issues instead of managing notifications manually - though, as with any pipeline, there is still room to tune things so they work more efficiently.&lt;/p&gt;

&lt;p&gt;To get the best results, try using clear alert formats and test your flow often to make sure messages are sent correctly. You can also explore other Windmill features like custom logic and extra integrations to make your system stronger.&lt;/p&gt;

&lt;h3&gt;
  
  
  Before you go… 😂
&lt;/h3&gt;

&lt;p&gt;Thanks for reading! If this article helped you, check out &lt;a href="https://windmill.dev/docs" rel="noopener noreferrer"&gt;Windmill’s docs&lt;/a&gt; or join &lt;a href="https://discord.com/invite/V7PM2YHsPB" rel="noopener noreferrer"&gt;Windmill’s community&lt;/a&gt; to learn more. Sharing your feedback or building something cool with Windmill helps everyone in the developer community grow together.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>javascript</category>
      <category>beginners</category>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>Opemipo Disu</dc:creator>
      <pubDate>Mon, 13 Oct 2025 13:36:23 +0000</pubDate>
      <link>https://dev.to/coderoflagos/-2e3l</link>
      <guid>https://dev.to/coderoflagos/-2e3l</guid>
      <description>&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/coderoflagos/incident-event-pipelines-for-real-time-notifications-with-windmill-and-checkly-2290" class="crayons-story__hidden-navigation-link"&gt;Incident Event Pipelines for Real-Time Notifications with Windmill and Checkly&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;

          &lt;a href="/coderoflagos" class="crayons-avatar  crayons-avatar--l  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F392943%2F708b2716-2bb6-45ee-9662-ddb0288e3079.JPG" alt="coderoflagos profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/coderoflagos" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Opemipo Disu
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Opemipo Disu
                
              
              &lt;div id="story-author-preview-content-2915549" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/coderoflagos" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F392943%2F708b2716-2bb6-45ee-9662-ddb0288e3079.JPG" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Opemipo Disu&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

          &lt;/div&gt;
          &lt;a href="https://dev.to/coderoflagos/incident-event-pipelines-for-real-time-notifications-with-windmill-and-checkly-2290" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;Oct 13 '25&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/coderoflagos/incident-event-pipelines-for-real-time-notifications-with-windmill-and-checkly-2290" id="article-link-2915549"&gt;
          Incident Event Pipelines for Real-Time Notifications with Windmill and Checkly
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/webdev"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;webdev&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/programming"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;programming&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/ai"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;ai&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/tutorial"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;tutorial&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
          &lt;a href="https://dev.to/coderoflagos/incident-event-pipelines-for-real-time-notifications-with-windmill-and-checkly-2290" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left"&gt;
            &lt;div class="multiple_reactions_aggregate"&gt;
              &lt;span class="multiple_reactions_icons_container"&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/raised-hands-74b2099fd66a39f2d7eed9305ee0f4553df0eb7b4f11b01b6b1b499973048fe5.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/multi-unicorn-b44d6f8c23cdd00964192bedc38af3e82463978aa611b4365bd33a0f1f4f3e97.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/sparkle-heart-5f9bee3767e18deb1bb725290cb151c25234768a0e9a2bd39370c382d02920cf.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
              &lt;/span&gt;
              &lt;span class="aggregate_reactions_counter"&gt;8&lt;span class="hidden s:inline"&gt; reactions&lt;/span&gt;&lt;/span&gt;
            &lt;/div&gt;
          &lt;/a&gt;
            &lt;a href="https://dev.to/coderoflagos/incident-event-pipelines-for-real-time-notifications-with-windmill-and-checkly-2290#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              &lt;span class="hidden s:inline"&gt;Add Comment&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            7 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;




</description>
      <category>webdev</category>
      <category>programming</category>
      <category>ai</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Amazon Bedrock and Retrieval-Augmented Generation (RAG): Building Smarter AI Systems with Context-Aware Responses</title>
      <dc:creator>Opemipo Disu</dc:creator>
      <pubDate>Wed, 09 Apr 2025 12:25:38 +0000</pubDate>
      <link>https://dev.to/microtica/amazon-bedrock-and-retrieval-augmented-generation-rag-building-smarter-ai-systems-with-28lm</link>
      <guid>https://dev.to/microtica/amazon-bedrock-and-retrieval-augmented-generation-rag-building-smarter-ai-systems-with-28lm</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;In &lt;a href="https://dev.to/microtica/amazon-bedrock-a-practical-guide-for-developers-and-devops-engineers-kag"&gt;Part 1 of this series&lt;/a&gt;, we delved into Amazon Bedrock and how DevOps engineers and developers can build their first generative AI applications and deploy them with AWS Lambda. In that post, we also covered use cases that can be built with Amazon Bedrock, as well as best practices for working with it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now, we’ll take a step further by addressing an important challenge in AI: producing context-aware responses that are both relevant and high quality.&lt;/p&gt;

&lt;p&gt;In this tutorial, you will learn how Amazon Bedrock works with RAG, how a RAG pipeline is structured, and get a practical guide to integrating the two.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Retrieval Augmented Generation?
&lt;/h2&gt;

&lt;p&gt;Retrieval Augmented Generation (RAG) is a technique for improving the quality of LLM responses by adding relevant information from external sources. A traditional LLM can only answer from what it learned during training, whereas a RAG-enabled system grounds its answers in up-to-date, retrieved data. Standard models are static; RAG pipelines are dynamic - that’s the major difference.&lt;/p&gt;

&lt;h3&gt;
  
  
  RAG vs. Standard LLMs
&lt;/h3&gt;

&lt;p&gt;In this section of the tutorial, we will look at some of the differences in features between Standard LLMs and RAG-Enabled LLMs.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Feature&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Standard LLM&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;RAG-Enabled LLM&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Knowledge Source&lt;/td&gt;
&lt;td&gt;Static / Fine-tuned data&lt;/td&gt;
&lt;td&gt;Dynamic / Real-time data&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;False Information Risk&lt;/td&gt;
&lt;td&gt;High (inaccurate or outdated)&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Use of Private Data&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;td&gt;Large-scale&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cost Efficiency&lt;/td&gt;
&lt;td&gt;Moderate&lt;/td&gt;
&lt;td&gt;More efficient with caching&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;As the table above shows, traditional LLMs are static: their knowledge is fixed at training or fine-tuning time, so answers can go stale or be inaccurate. RAG-enabled LLMs, by contrast, work directly with data retrieved from external, real-time sources.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use-Cases of RAGs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Chatbots:&lt;/strong&gt; Probably the best-known use case for RAG. Retrieval gives chatbots more accurate, current information for better customer interactions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internal Knowledge Search:&lt;/strong&gt; RAG-enabled models let users access and use articles, guides, documentation, and other resources without manual searching or worrying whether the generated information is relevant.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Support Automation:&lt;/strong&gt; RAG can improve incident resolution by pulling from logs and past tickets, providing quicker, more accurate answers so issues are resolved faster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Financial Services:&lt;/strong&gt; RAG-enabled models can generate statistical reports in different formats based on real-time market data.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How Does Retrieval Augmented Generation Work?
&lt;/h2&gt;

&lt;p&gt;RAG improves AI responses by combining information retrieval with language models. When a user submits a query, the system searches an existing knowledge base - backed by a store such as S3, Elasticsearch, Pinecone, or OpenSearch - for relevant data. That data is appended to the user’s query and sent to the AI model, making the response more accurate.&lt;/p&gt;

&lt;p&gt;Here’s an architectural diagram of how RAG works:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ctgay2ciiju8sq09xw3.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ctgay2ciiju8sq09xw3.webp" alt="Workflow Diagram" width="800" height="519"&gt;&lt;/a&gt;&lt;/p&gt;
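&lt;p&gt;To make this retrieve-then-generate flow concrete, here is a minimal, self-contained Python sketch. The keyword-overlap retriever and the in-memory document list are illustrative stand-ins; a real pipeline would use a vector store such as OpenSearch or Pinecone and send the assembled prompt to a Bedrock model.&lt;/p&gt;

```python
# Minimal retrieve-then-generate sketch. The retriever is a toy
# keyword-overlap ranker; production systems use vector similarity search.

KNOWLEDGE_BASE = [
    "Bedrock Knowledge Bases store document embeddings in a vector store.",
    "S3 bucket versioning keeps every revision of an uploaded document.",
    "RAG augments the user prompt with retrieved context before generation.",
]

def retrieve(query, docs, top_k=1):
    # Rank documents by how many words they share with the query.
    q_words = set(query.lower().split())
    def score(doc):
        return len(q_words.intersection(doc.lower().split()))
    return sorted(docs, key=score, reverse=True)[:top_k]

def build_prompt(query, context_docs):
    # Retrieved context is prepended so the model answers from it,
    # not from its (possibly stale) training data.
    context = "\n".join(context_docs)
    return f"Use the following context to answer.\nContext:\n{context}\nQuestion: {query}"

question = "How does RAG augment the prompt?"
docs = retrieve(question, KNOWLEDGE_BASE)
prompt = build_prompt(question, docs)
```

&lt;p&gt;The key idea is the shape of the final prompt: retrieved context first, then the user’s question.&lt;/p&gt;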

&lt;h3&gt;
  
  
  Benefits of Using RAG-Enabled LLMs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Lower Risk of Inaccurate Information:&lt;/strong&gt; RAG-enabled LLMs ground their responses in reliable sources rather than making assumptions, unlike standard LLMs, which may fall back on training data that is outdated or wrong.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improved Accuracy:&lt;/strong&gt; Using them also ensures AI responses are more accurate and up-to-date by retrieving information from trusted external sources instead of relying on fine-tuned data. This reduces the risk of giving users incorrect information for responses, thereby improving the accuracy of generated content.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost Efficiency:&lt;/strong&gt; They also help save money and reduce API calls by using existing data sources instead of making unnecessary requests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security &amp;amp; Compliance:&lt;/strong&gt; Using RAG-Enabled LLMs even helps provide data privacy by retrieving information only from authorized and secure sources.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  AWS Bedrock + RAG: The Perfect Match
&lt;/h2&gt;

&lt;p&gt;AWS Bedrock provides a solid base for RAG-enabled apps by offering various foundation models with built-in tools for storing and retrieving custom data.&lt;/p&gt;

&lt;p&gt;Its serverless setup ensures that there is security and compliance while integrating with services like &lt;strong&gt;Amazon OpenSearch&lt;/strong&gt; for search-based retrieval, &lt;strong&gt;Amazon S3&lt;/strong&gt; for storage, &lt;strong&gt;AWS RDS&lt;/strong&gt; for relational databases, and &lt;strong&gt;Amazon Bedrock&lt;/strong&gt; Knowledge Bases for vector-based retrieval. This setup allows developers to create scalable AI applications easily.&lt;/p&gt;

&lt;p&gt;Before delving into implementing RAG with Amazon Bedrock, let’s have a look at Amazon Bedrock’s workflow:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fop6tf4bl9ls25v50amv4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fop6tf4bl9ls25v50amv4.png" alt="Image description" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Implement RAG with AWS Bedrock
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Store Your Custom Data
&lt;/h3&gt;

&lt;p&gt;Since we will be using Amazon S3 to store our data, follow these steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Head over to Amazon S3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbb2wx7w2o05dglj9kyw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbb2wx7w2o05dglj9kyw.png" alt="Image" width="800" height="555"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create an &lt;a href="https://us-east-1.console.aws.amazon.com/s3/get-started?region=us-east-1&amp;amp;bucketType=general" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon S3 Bucket&lt;/strong&gt;&lt;/a&gt; to store your data and make necessary configurations. Be sure to &lt;strong&gt;enable&lt;/strong&gt; Bucket Versioning.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcj44ouy3la97buwa52j8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcj44ouy3la97buwa52j8.png" alt="Image" width="800" height="555"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Upload your documents to the Amazon S3 bucket.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
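&lt;p&gt;If you prefer to script the upload instead of using the console, a minimal sketch with boto3 might look like this (the &lt;code&gt;rag-docs/&lt;/code&gt; prefix and the helper names are assumptions for illustration):&lt;/p&gt;

```python
# Hedged sketch: upload local documents to the S3 bucket that backs the
# knowledge base. Bucket name and key prefix are illustrative placeholders.
import os

def doc_key(path, prefix="rag-docs/"):
    # Derive a flat S3 object key from a local file path (assumption:
    # unique file names; a nested layout would need a different scheme).
    return prefix + os.path.basename(path)

def upload_documents(bucket, paths, prefix="rag-docs/"):
    # boto3 is imported lazily so the sketch reads without AWS set up.
    import boto3
    s3 = boto3.client("s3")
    for path in paths:
        s3.upload_file(path, bucket, doc_key(path, prefix))
```

&lt;p&gt;With bucket versioning enabled, re-uploading a document under the same key keeps the previous revision around.&lt;/p&gt;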

&lt;h3&gt;
  
  
  Step 2: Choose an Embedding Model
&lt;/h3&gt;

&lt;p&gt;Choose any model to work with; your decision on selecting a model should be based on whether it fits the project you're working on. &lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html" rel="noopener noreferrer"&gt;&lt;strong&gt;Refer to this guide&lt;/strong&gt;&lt;/a&gt; to find a list of models you can work with.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 3: Create a Knowledge Base&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to the Amazon Bedrock console.&lt;/li&gt;
&lt;li&gt;In the left-hand navigation, select "Knowledge bases".&lt;/li&gt;
&lt;li&gt;Click "Create knowledge base" and select the “Knowledge Base with Vector Store” option.&lt;/li&gt;
&lt;li&gt;Provide the knowledge base details.&lt;/li&gt;
&lt;li&gt;When providing the details, choose the S3 URI location as the data source.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fansi1b90ngv267qdt2fa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fansi1b90ngv267qdt2fa.png" alt="S3 URI" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Finally, select your embedding model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffke29lqjuxvpk6fhu5zk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffke29lqjuxvpk6fhu5zk.png" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure the vector store (Amazon OpenSearch Serverless will be used by default).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 4: Chunking (Data Preparation)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Head over to the &lt;strong&gt;Knowledge Base&lt;/strong&gt; in the &lt;strong&gt;Amazon Bedrock&lt;/strong&gt; console.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Select Your Knowledge Base&lt;/strong&gt; From the Knowledge Bases section, choose the knowledge base you want to work with.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add or Edit a Data Source&lt;/strong&gt; If you haven't already, add a new data source pointing to your S3 bucket. If you have an existing S3 data source, select it for editing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configure Chunking Strategy&lt;/strong&gt; In the data source settings, look for the "Chunking Configuration" section. Here, you'll find options to set your chunking strategy:

&lt;ul&gt;
&lt;li&gt;Fixed-size chunking.&lt;/li&gt;
&lt;li&gt;Hierarchical chunking.&lt;/li&gt;
&lt;li&gt;Semantic chunking.&lt;/li&gt;
&lt;li&gt;No chunking (treats each file as one chunk).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Here’s where to configure that:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq038ukicnp1blqmobncx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq038ukicnp1blqmobncx.png" alt="Chunk" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Choose and Configure Your Chunking Strategy&lt;/strong&gt; Select the strategy that fits your data and use case.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;For Fixed-size&lt;/strong&gt;: Specify the number of tokens per chunk and overlap percentage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For Hierarchical&lt;/strong&gt;: Also define parent and child chunk sizes and overlap.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For Semantic&lt;/strong&gt;: Set the maximum tokens, buffer size, and breakpoint percentile threshold.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Review and Save Changes&lt;/strong&gt; After configuring your chunking strategy, review your settings and save the changes to your data source.&lt;/li&gt;

&lt;/ul&gt;
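&lt;p&gt;If you later automate this step with the Bedrock data source API instead of the console, the fixed-size strategy maps to a configuration shaped roughly like the following (field names follow the API’s chunking configuration; the values are examples, not recommendations):&lt;/p&gt;

```python
# Example fixed-size chunking settings, mirroring what the console's
# "Chunking Configuration" section exposes.
chunking_config = {
    "chunkingStrategy": "FIXED_SIZE",
    "fixedSizeChunkingConfiguration": {
        "maxTokens": 300,         # tokens per chunk
        "overlapPercentage": 20,  # overlap between consecutive chunks
    },
}
```

&lt;p&gt;Overlap between chunks helps preserve context that would otherwise be cut at a chunk boundary.&lt;/p&gt;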

&lt;h3&gt;
  
  
  Step 5: Test and Query the Knowledge Base
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Before testing, ensure the data is synced; &lt;strong&gt;syncing&lt;/strong&gt; means fetching the data from S3, chunking it, embedding it, and storing it in the vector database.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the knowledge base console, hit the &lt;strong&gt;Test&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2e229phecywfq59ejnq2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2e229phecywfq59ejnq2.png" width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When the panel pops out, enter a question related to your uploaded documents.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Run&lt;/strong&gt; to get AI-generated responses.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
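&lt;p&gt;You can also run the same kind of query programmatically. Here is a hedged sketch using the &lt;code&gt;bedrock-agent-runtime&lt;/code&gt; client’s &lt;code&gt;retrieve_and_generate&lt;/code&gt; operation; the knowledge base ID and model ARN are placeholders you would fill in from your own setup:&lt;/p&gt;

```python
# Hedged sketch of querying the knowledge base from code instead of the
# console's Test panel. kb_id and model_arn are placeholders.

def build_rag_request(kb_id, model_arn, question):
    # Payload shape for bedrock-agent-runtime's retrieve_and_generate call.
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }

def query_knowledge_base(kb_id, model_arn, question):
    import boto3  # lazy import; running this requires AWS credentials
    client = boto3.client("bedrock-agent-runtime")
    response = client.retrieve_and_generate(**build_rag_request(kb_id, model_arn, question))
    return response["output"]["text"]
```

&lt;p&gt;Bedrock handles the retrieval, prompt augmentation, and generation in one call, so you get the grounded answer back directly.&lt;/p&gt;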

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 6: Verify the Retrieval Process&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;If Bedrock correctly retrieves relevant data, your RAG setup is working! 🎉&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Best Practices &amp;amp; Tips for RAG Pipelines on AWS Bedrock
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Optimize Context Window:&lt;/strong&gt; Keep the context clear and focused on what the prompt actually needs. Too much information can confuse the model, so provide only the necessary data to get accurate responses.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Balance Cost vs. Accuracy:&lt;/strong&gt; Retrieving more data can improve accuracy, but it also increases costs. Find a balance by fetching only the data you need, reducing costs while maintaining quality.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fine-Tune Retrieval Thresholds:&lt;/strong&gt; Set relevance limits for retrieval so you only get the most useful data. This prevents overwhelming the model with unnecessary information and keeps responses clear.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Caching:&lt;/strong&gt; Cache frequently used data to speed things up and avoid unnecessary API calls. This makes your pipeline more efficient and reduces costs, especially for common queries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secure Data Sources:&lt;/strong&gt; Protect your data with IAM policies and encryption for sensitive sources. This ensures only authorized users can access your data, keeping everything safe and compliant.&lt;/li&gt;
&lt;/ul&gt;
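&lt;p&gt;As a tiny illustration of the caching tip, Python’s &lt;code&gt;functools.lru_cache&lt;/code&gt; can memoize repeated retrieval calls in-process (a stand-in for a real cache layer such as Redis):&lt;/p&gt;

```python
from functools import lru_cache

@lru_cache(maxsize=256)
def cached_retrieve(query):
    # Stand-in for an expensive retrieval call; in practice this would hit
    # your vector store or the Bedrock runtime API.
    return f"results for: {query}"

cached_retrieve("top errors last hour")  # first call: computed
cached_retrieve("top errors last hour")  # second call: served from cache
hits = cached_retrieve.cache_info().hits
```

&lt;p&gt;For identical, frequent queries this avoids a round trip entirely, which is where most of the cost savings come from.&lt;/p&gt;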

&lt;h2&gt;
  
  
  Best Use Cases for RAG
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;DevOps &amp;amp; Observability:&lt;/strong&gt; Retrieve logs and metrics in real time for automated incident resolution.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Product Search &amp;amp; Recommendations:&lt;/strong&gt; Refine personalized product recommendations with retrieved user and catalog data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise Document QA:&lt;/strong&gt; Let employees query company documents through chatbots and AI agents.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;E-Commerce:&lt;/strong&gt; Fetch dynamic product descriptions and search results.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  RAG vs. Fine-Tuning Decision Framework
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Use Case&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Best Approach&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Static Knowledge&lt;/td&gt;
&lt;td&gt;Fine-Tuning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dynamic Data&lt;/td&gt;
&lt;td&gt;RAG&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cost-Sensitive&lt;/td&gt;
&lt;td&gt;RAG (less training required)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data Privacy&lt;/td&gt;
&lt;td&gt;RAG (retrieves private data)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  What’s Next: RAG-Enabled AI Agents in Production
&lt;/h2&gt;

&lt;p&gt;Now, the next step for you is to create AI agents that automate workflows using Amazon Bedrock and integrate with RAG. To deploy your application, you can use AWS services like ECS, Lambda, or other Bedrock-managed services.&lt;/p&gt;

&lt;p&gt;In Part 3 of this series, we’ll explore how to scale RAG architectures in production, improve performance, and ensure seamless integrations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;RAG with AWS Bedrock makes models smarter by adding real-world context to their responses. As a result, you get more accurate answers and lower costs, since you only pull in the data you actually need. Another plus is that AWS handles the backend, so you don’t have to stress about infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Before you go… 🥹
&lt;/h3&gt;

&lt;p&gt;Thank you for taking the time to learn about integrating RAGs with AWS Bedrock. If you found this article helpful, please consider supporting Microtica by creating an account and &lt;a href="https://discord.gg/N8WdXyXxZR" rel="noopener noreferrer"&gt;&lt;strong&gt;joining the community&lt;/strong&gt;&lt;/a&gt;. Your support helps us keep improving and offering valuable resources like this, for the developer community!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>beginners</category>
      <category>ai</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>10 Internal Developer Platforms to Improve Your Developer Workflow 🚀</title>
      <dc:creator>Opemipo Disu</dc:creator>
      <pubDate>Fri, 28 Mar 2025 12:34:24 +0000</pubDate>
      <link>https://dev.to/microtica/10-internal-developer-platforms-to-improve-your-developer-workflow-55ee</link>
      <guid>https://dev.to/microtica/10-internal-developer-platforms-to-improve-your-developer-workflow-55ee</guid>
      <description>&lt;p&gt;&lt;strong&gt;Internal Developer Platforms&lt;/strong&gt; (IDPs) are essential tools in the software development process because they deliver the software faster and more efficiently. This means that it boosts the whole process's productivity. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhtwr6vnyko7xkr0cdymw.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhtwr6vnyko7xkr0cdymw.gif" alt="let's go gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The need for these platforms keeps growing as companies look for solutions that speed up software delivery. As a business grows, the complexity of its development process grows too, and IDPs help solve the challenges that come with that growth.&lt;/p&gt;

&lt;p&gt;This ultimate guide will help you choose the best IDP by covering the features, benefits, and functionality of each platform. Understanding these platforms better can transform software development within a business.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is the Purpose of an Internal Developer Platform?
&lt;/h2&gt;

&lt;p&gt;Internal developer platforms (IDPs) are tools that help businesses streamline their development processes. Compared with the traditional approach, IDPs simplify workflows, automate repetitive work, and allow developers to concentrate on what they do best: writing code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpsp6s3io1vzuihph831p.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpsp6s3io1vzuihph831p.gif" alt="relived gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These platforms ease developers’ workload, letting them focus on their main job rather than managing IT infrastructure. Tasks such as configuration, provisioning, and deployment are all part of the self-service capabilities that IDPs offer. &lt;/p&gt;

&lt;p&gt;The main goal of an IDP is to increase developer productivity. With the right tooling in place, developers can organize their time and focus on producing creative solutions. &lt;/p&gt;

&lt;p&gt;Because platform engineering creates an environment that benefits both the business and its customers, developers can save time and effort while still delivering value to clients. Using the right set of tools improves both your development processes and their outcomes. &lt;/p&gt;

&lt;p&gt;Let’s take a closer look at the best platforms to use within your development workflow:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Microtica
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foe0vrr3qonwurzqnftm6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foe0vrr3qonwurzqnftm6.png" alt="Microtica image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://microtica.com/?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;Microtica&lt;/a&gt; is an AI-powered platform that offers better cloud-native app deployment and management. It automates lots of activities, such as &lt;a href="https://www.microtica.com/blog/optimize-your-ci-cd-pipeline-for-faster-deployments" rel="noopener noreferrer"&gt;deployment pipelines&lt;/a&gt;, monitoring, and cost optimization, while providing ready templates for rapid implementation.&lt;/p&gt;

&lt;p&gt;In order to improve efficiency, save &lt;a href="https://www.microtica.com/blog/7-challenges-with-aws-costs" rel="noopener noreferrer"&gt;cloud expenses&lt;/a&gt;, and increase monitoring visibility, the platform offers developers the tools they need to manage apps on their cloud accounts. Microtica is suitable for all-size businesses because it also provides insights for operational and cost-saving improvements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://microtica.com/?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;⭐️ Try Microtica For Free ⭐️&lt;/a&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Qovery
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdk2lxt9gglsxokg0oy9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdk2lxt9gglsxokg0oy9.png" alt="Qovery Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.qovery.com/" rel="noopener noreferrer"&gt;Qovery&lt;/a&gt; stands out as a powerful DevOps automation platform that aims to streamline the development process. It provides a comprehensive solution for provisioning, managing repetitive tasks, and maintaining a secure and compliant infrastructure to improve user experience and cost efficiency.&lt;/p&gt;

&lt;p&gt;Qovery streamlines deployment workflows while ensuring scalability, and compliance. It reduces the need for manual DevOps tasks by offering self-service tools that ensure the developers will manage and deploy cloud infrastructure efficiently. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.qovery.com/" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;⭐️ Try Qovery ⭐️&lt;/a&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  3. OpsLevel
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0i8igelkkm4zuvxfaib2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0i8igelkkm4zuvxfaib2.png" alt="OpsLevel Header"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.opslevel.com/" rel="noopener noreferrer"&gt;OpsLevel&lt;/a&gt;, developers can manage all of their tools, services, and systems from a single location using OpsLevel's standardized interface. As an internal developer portal, it offers automated services, helps to speed up the delivery of software quality, and enhances the visibility of the context. &lt;/p&gt;

&lt;p&gt;OpsLevel enables businesses to efficiently manage complex structures, while at the same time maintaining outstanding service with its user-friendly interface and powerful monitoring features. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.opslevel.com/" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;⭐️ Try OpsLevel ⭐️&lt;/a&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Coherence
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7dgm3sn3anmefxsq34j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7dgm3sn3anmefxsq34j.png" alt="Coherence header"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.withcoherence.com/" rel="noopener noreferrer"&gt;Coherence&lt;/a&gt; is a platform that helps companies build a strong environment by testing, developing, and deploying web apps, and managing the full SDLC. It enables users to choose the features of the dataset, ensuring accuracy. &lt;/p&gt;

&lt;p&gt;Coherence is a helpful IDP because it allows development tasks to be delivered faster and with greater accuracy than conventional techniques allow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.withcoherence.com/" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;⭐️ Try Coherence ⭐️&lt;/a&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Humanitec
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft9bvk58j8s46qznzlxqb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft9bvk58j8s46qznzlxqb.png" alt="Humanitec Homepage"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://humanitec.com/" rel="noopener noreferrer"&gt;Humanitec&lt;/a&gt; provides a platform that automates and standardizes infrastructure management for developers. It improves DevOps workflow by enhancing the collaboration between the operations and developers teams to achieve faster and better delivery. &lt;/p&gt;

&lt;p&gt;It also focuses on self-service tools that improve deployment automation, and environment management while cutting time and costs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://humanitec.com/" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;⭐️ Try Humanitec ⭐️&lt;/a&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Mia Platform
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3ytxlhhcg5r1hagujdm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3ytxlhhcg5r1hagujdm.png" alt="Mia Platform"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://mia-platform.eu/" rel="noopener noreferrer"&gt;Mia Platform&lt;/a&gt; provides a variety of products for building digital platforms. The platform is associated with several international technological standards and focuses mostly on encouraging the use of Cloud Native and Open Source applications. &lt;/p&gt;

&lt;p&gt;Among its services is the main product, the Mia-Platform Console, which is a platform that improves developer experience, accelerates the creation of microservices architectures, and streamlines development procedures. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://mia-platform.eu/" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;⭐️ Try Mia Platform ⭐️&lt;/a&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  7. Appvia
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fox4gp8zf1jz1v2y3pe88.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fox4gp8zf1jz1v2y3pe88.png" alt="Appvia homepage"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.appvia.io/" rel="noopener noreferrer"&gt;Appvia&lt;/a&gt; offers solutions that simplify and secure public cloud distribution. By offering solutions that are safe, affordable, and scalable, they enable businesses to proactively pursue cloud computing.&lt;/p&gt;

&lt;p&gt;Among its many features are infrastructure management, automated deployment, and integration with leading cloud providers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.appvia.io/" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;⭐️ Try Appvia ⭐️&lt;/a&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  8. Portainer
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4o3ablmvhj1lwrvwem1u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4o3ablmvhj1lwrvwem1u.png" alt="Portainerr image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As an open-source tool, &lt;a href="https://www.portainer.io/" rel="noopener noreferrer"&gt;Portainer&lt;/a&gt; makes it easier to deploy, monitor, and secure systems using Docker, Kubernetes, Swarm, and Podman. It serves everyone from small companies to big enterprises by offering a user-friendly interface, automation, and developer self-service. &lt;/p&gt;

&lt;p&gt;The main goal of Portainer is to streamline operations, enforce best practices, and accelerate container adoption.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.portainer.io/" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;⭐️ Try Portainer ⭐️&lt;/a&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  9. WarpBuild
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2prv37lg7gj81dyq2bt9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2prv37lg7gj81dyq2bt9.png" alt="Warpbuild homepage"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.warpbuild.com/" rel="noopener noreferrer"&gt;WarpBuild&lt;/a&gt; provides fast, cost-effective GitHub action runners that improve CI/CD performance. The main goal is to offer cloud-based deployment workflow automation and allow developers to manually handle activities like configuration and deployment. &lt;/p&gt;

&lt;p&gt;WarpBuild is designed for developers to accelerate deployments while reducing costs, with features such as automated testing and integration with any cloud provider.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.warpbuild.com/" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;⭐️ Try WarpBuild ⭐️&lt;/a&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  10. Nullstone
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w7jwclpnuhu311uwlnq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w7jwclpnuhu311uwlnq.png" alt="Nullstone Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.nullstone.io/" rel="noopener noreferrer"&gt;Nullstone&lt;/a&gt; helps developers faster deploy secure, full-stack applications on their own cloud infrastructure. It supports containers with self-service deployments, automated tools, and strong monitoring.&lt;/p&gt;

&lt;p&gt;Nullstone integrates with Terraform, Helm, and third-party services like Datadog and New Relic, providing flexibility and security while enabling faster software delivery, and better software development efficiency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.nullstone.io/" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;⭐️ Try Nullstone ⭐️&lt;/a&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Things to Consider While Choosing the Best IDP
&lt;/h2&gt;

&lt;p&gt;Choosing the right platform for your business can be challenging because of the many factors involved, such as the features, processes, and services each one offers. By comparing these characteristics, you can find the one that best fits your business needs. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgxrffa9lp9mp90vk999.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgxrffa9lp9mp90vk999.gif" alt="Multiple buttons gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In addition, you should consider several factors, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Ease of Use: An effective IDP should be easy to use and provide a smooth experience for developers. Find platforms that have a user-friendly interface, and self-service capabilities that will boost productivity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scalability: The IDP should support scalable infrastructure, especially if your team needs to deploy applications across multiple environments or clouds. A flexible platform allows teams to manage complex infrastructures that can easily scale and adapt.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security: Analyze the security characteristics of each platform. The platform should provide robust compliance features to help teams manage risks, follow regulations, and ensure that only authorized people have access to sensitive resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integration: Consider how well each platform integrates. A good platform integrates with tools, new technologies, and infrastructure. The integration speeds up the software delivery process and improves collaboration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cost: It’s essential to evaluate the general and additional costs such as support, training, and other requirements. Each platform offers different features that come with different costs. Choose the one that best fits your company’s budget. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Suitability: All platforms offer different features that may be better for certain applications and processes. Consider platforms that better respond to your requirements and solve your business needs. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Support: When choosing an IDP, a key consideration is the quality of customer support. The platform should have a responsive support team offering 24/7 support and comprehensive documentation. This ensures your team gets timely assistance when facing challenges, reducing downtime. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All the factors listed above are essential when choosing the best IDP for your business. By taking into account cost, ease of use, security, scalability, integration, suitability, and support, you can find the IDP that best fits your business requirements and brings greater productivity. &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Finding the best internal developer platform means weighing several factors. Each of these platforms provides different features and benefits that meet certain organizational requirements. By carefully evaluating each platform's advantages, disadvantages, and special features, your company can increase developer productivity, optimize processes, and maintain better security and compliance.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Amazon Bedrock: A Practical Guide for Developers and DevOps Engineers</title>
      <dc:creator>Opemipo Disu</dc:creator>
      <pubDate>Mon, 24 Mar 2025 13:05:12 +0000</pubDate>
      <link>https://dev.to/microtica/amazon-bedrock-a-practical-guide-for-developers-and-devops-engineers-kag</link>
      <guid>https://dev.to/microtica/amazon-bedrock-a-practical-guide-for-developers-and-devops-engineers-kag</guid>
<description>&lt;p&gt;Once upon a time, building AI applications required deep experience with the underlying technologies and some machine learning expertise. Developers had to configure models to their needs, provision GPUs, and manually optimize performance, all of which cost significant effort and money.&lt;/p&gt;

&lt;p&gt;Because that approach was so difficult, the AWS team built Amazon Bedrock, a tool that lets developers easily create AI applications through an API or the AWS Management Console using its built-in foundation models. Amazon Bedrock enables developers to build generative AI applications without the stress of directly managing the underlying stack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4cj7cwuebobrrjpfsmc4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4cj7cwuebobrrjpfsmc4.gif" alt="Relax GIF" width="498" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, you’ll learn what Amazon Bedrock is, the prerequisites for using it, how to get started, best practices for working with it, and its core concepts. You’ll also see code samples showing how to work with the Bedrock API. In short, this article is an &lt;strong&gt;A-Z guide&lt;/strong&gt; for anyone interested in using Bedrock to build generative AI applications.&lt;/p&gt;




&lt;h2&gt;
  
  
  Please, support Microtica 😅 🙏
&lt;/h2&gt;

&lt;p&gt;Before moving on, I’d love it if you could support our work at Microtica by joining our community! ⭐️&lt;/p&gt;

&lt;p&gt;&lt;a href="https://discord.gg/N8WdXyXxZR" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;⭐️ Join Microtica’s Discord Community ⭐️&lt;/a&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdpnhpxhp08zhpsuxqnzg.gif" alt="Thank you GIF" width="640" height="358"&gt;
&lt;/h2&gt;

&lt;h2&gt;
  
  
  What is Amazon Bedrock?
&lt;/h2&gt;

&lt;p&gt;Amazon Bedrock is a service that lets developers, DevOps engineers, and teams build generative AI applications. Instead of requiring you to build or fine-tune models manually, Amazon Bedrock exposes foundation models from leading AI providers through a ready-made API. This approach removes the complexity of building generative applications and managing the underlying stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of Amazon Bedrock
&lt;/h2&gt;

&lt;p&gt;Before getting hands-on with Amazon Bedrock, it’s worth looking at some of its benefits, with examples of how each one can positively impact your development workflow.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Quicker Development&lt;/strong&gt;: Instead of working with models directly and fine-tuning them yourself, AWS lets you work with a single API. There is far less to manage, which saves a lot of time and effort compared to handling models directly.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Example&lt;/strong&gt;: A colleague of mine built an AI assistant using Amazon Bedrock's text generation models without handling any ML models directly, which saved him time. He found this method quicker because he could add AI features in just a few days with an API call, instead of spending weeks or months.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/U1mT6VjArKs"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: Amazon Bedrock is built on AWS’s cloud infrastructure and uses models from various companies, including AI21 Labs, Anthropic, Cohere, DeepSeek, Luma, Meta, Mistral AI, and Stability AI. This lets teams scale their applications easily; even under heavy workloads, Bedrock maintains excellent application performance without requiring manual intervention.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Example&lt;/strong&gt;: An e-commerce service using Amazon Bedrock for product recommendations can easily scale resources during shopping seasons without compromising performance or experiencing downtime.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Integration with AWS Ecosystem&lt;/strong&gt;: As Amazon Bedrock is an AWS product, it seamlessly integrates with Amazon SageMaker, Lambda, and S3 for building, deploying, and managing applications.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Example&lt;/strong&gt;: A bank using Amazon Bedrock for fraud detection can create automated workflows. For instance, AWS Lambda can identify suspicious transactions, save the reports in S3, and use SageMaker to check patterns.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Cost-Effectiveness:&lt;/strong&gt; Amazon Bedrock has flexible pricing, so you only pay for what you use. Instead of spending a lot on expensive servers and models, you can use any of Bedrock’s models that you think will help save costs while still getting powerful AI features. You can take a look at this page for Amazon Bedrock's pricing models.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Example&lt;/strong&gt;: One of my colleagues automated blog posts with Amazon Bedrock and only paid for the API requests she used, saving money on monitoring and fine-tuning AI models.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting Started with Amazon Bedrock 🚀
&lt;/h2&gt;

&lt;p&gt;Now it’s time to get our hands dirty. In this section, we’ll look at the practical work and what you should have ready before getting started. Although Amazon Bedrock can be used for many tasks, this article focuses on building generative AI applications easily with the AWS Management Console.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites For Using Amazon Bedrock
&lt;/h3&gt;

&lt;p&gt;Before getting started with Amazon Bedrock, here are some things you need to have ready:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Basic Python Knowledge.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;An AWS Account&lt;/strong&gt;: This is the first requirement: to get started, you need to &lt;a href="https://aws.amazon.com/?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;create an AWS account&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Management Console&lt;/strong&gt;: You need the console to interact with models if you don’t want to write code. Alternatively, you can use:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS CLI&lt;/strong&gt;: You can use the CLI to set up AWS profiles and call the API directly. For instructions on how to use this option, refer to the documentation. This option requires some basic Python knowledge.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;IAM Permissions&lt;/strong&gt;: You also need to assign the IAM roles and permissions that Bedrock requires.&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6td27syir0iz5jjb5q3.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6td27syir0iz5jjb5q3.gif" alt="Lets go GIF" width="298" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, we will be using the AWS Management Console for our operations. If you want to go more hands-on, you can use the CLI option instead.&lt;/p&gt;

&lt;h3&gt;
  
  
  Basic AI/ML Concepts
&lt;/h3&gt;

&lt;p&gt;Even though Amazon Bedrock removes much of the complexity of building AI applications, a basic understanding of AI and ML is still helpful, because there are concepts you’ll encounter along the way, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Foundation Models (FMs)&lt;/strong&gt;: These are the pre-built AI models that Amazon Bedrock provides for generative applications. You might assume AWS owns all of these models, but many come from different companies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompt Engineering:&lt;/strong&gt; This is the process of crafting and refining input prompts to help AI models produce accurate, high-quality responses. Good prompt engineering improves the model's understanding and aligns its output with your intent.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model fine-tuning&lt;/strong&gt;: With Amazon Bedrock, you can fine-tune models to your needs and configure them to fit your use case; the approach is different from fine-tuning models manually.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Core Concepts of Amazon Bedrock
&lt;/h2&gt;

&lt;p&gt;Now, let’s take a look at some key concepts of Amazon Bedrock. A glance at them will give you a deeper understanding of how Amazon Bedrock works and how to get the best out of it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding Foundation Models
&lt;/h3&gt;

&lt;p&gt;Let’s take a look at some foundation models that Amazon Bedrock uses and things to consider before using them.&lt;/p&gt;

&lt;p&gt;To find the list of models you could work with, their capabilities, and their availability in your region, &lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;refer to this guide&lt;/a&gt;. In that guide, you’ll see everything about the models and the types of outputs they generate—for example, image, text, or code.&lt;/p&gt;

&lt;p&gt;Here are some things you should consider before using any of the models:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Use Case&lt;/strong&gt;: You should know if the model aligns with your application’s needs. For example, if you’re building a chatbot that generates text for responses, you need to work with one of the models that support text for their outputs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance vs. Cost&lt;/strong&gt;: You need to weigh two things: performance and cost. Some models respond quickly and produce strong results but are usually expensive. If you want a model that fits your budget, you may have to find a balance between how well it performs and how much it costs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customization&lt;/strong&gt;: Amazon Bedrock lets you adjust models for some uses. Depending on your needs, you might want a model that can be customized to fit your project.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Amazon Bedrock API
&lt;/h2&gt;

&lt;p&gt;Now, let's explore Amazon Bedrock's API and SDK and learn how to use them. First, we'll look at the API and how to work with the foundation models.&lt;/p&gt;

&lt;h3&gt;
  
  
  Overview
&lt;/h3&gt;

&lt;p&gt;Beyond basic interaction with the service, the Amazon Bedrock API allows you to do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Work with the foundation models to generate text, images, and code&lt;/li&gt;
&lt;li&gt;Adjust a model’s behaviour with configurable settings.&lt;/li&gt;
&lt;li&gt;Get information about the model, including its ARN and ID.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Amazon Bedrock APIs use AWS’s standard authentication and authorization mechanisms, which require IAM roles and permissions for security.&lt;/p&gt;

&lt;h3&gt;
  
  
  Authentication &amp;amp; Access Control
&lt;/h3&gt;

&lt;p&gt;To use the Bedrock API, you need to install the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;latest version of the AWS CLI&lt;/a&gt; and log in with AWS IAM credentials. Make sure your IAM user or role has the required permissions too:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"Sid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"MarketplaceBedrock"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"aws-marketplace:ViewSubscriptions"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"aws-marketplace:Unsubscribe"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"aws-marketplace:Subscribe"&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This policy grants the AWS Marketplace permissions used when subscribing to third-party models; invoking models also requires the relevant &lt;code&gt;bedrock&lt;/code&gt; actions, such as &lt;code&gt;bedrock:InvokeModel&lt;/code&gt;. For more on working with Amazon Bedrock’s APIs, refer to these guides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/bedrock/latest/APIReference/welcome.html?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon Bedrock API Reference&lt;/strong&gt;&lt;/a&gt;: In this documentation, you’ll find the service endpoints you’ll likely work with.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;&lt;strong&gt;Getting Started With Amazon Bedrock API&lt;/strong&gt;&lt;/a&gt;: This documentation will walk you through everything you need to know about Amazon Bedrock’s API—from its installation requirements to the How-tos; it’s a more detailed guide for setup.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  AWS SDK Integration
&lt;/h3&gt;

&lt;p&gt;AWS provides SDKs to integrate with your favourite programming languages, such as Python, Java, Go, JavaScript, Rust, etc. Now, let’s have a look at some examples of how they work with different languages.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Python (Boto3)&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;    &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;
    &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;
    &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;

    &lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;botocore.exceptions&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ClientError&lt;/span&gt;

    &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;basicConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getLogger&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;__name__&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;list_foundation_models&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bedrock_client&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;

        &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bedrock_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;list_foundation_models&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="n"&gt;models&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;modelSummaries&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
            &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Got %s foundation models.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;models&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;models&lt;/span&gt;

        &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="n"&gt;ClientError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Couldn&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;t list foundation models.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;raise&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;

        &lt;span class="n"&gt;bedrock_client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;service_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bedrock&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;fm_models&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;list_foundation_models&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bedrock_client&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;fm_models&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Model: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;modelName&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;indent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;---------------------------&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Done.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s a clear example of how to list the available Amazon Bedrock models with Python (Boto3). To learn more about how the Python SDK works, &lt;a href="https://docs.aws.amazon.com/code-library/latest/ug/python_3_bedrock_code_examples.html" rel="noopener noreferrer"&gt;read this guide&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;JavaScript Example (AWS SDK for JavaScript v3)&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;fileURLToPath&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;node:url&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;BedrockClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="nx"&gt;ListFoundationModelsCommand&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aws-sdk/client-bedrock&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;REGION&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;us-east-1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;BedrockClient&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;REGION&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

        &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;main&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ListFoundationModelsCommand&lt;/span&gt;&lt;span class="p"&gt;({});&lt;/span&gt;

          &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;command&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
          &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;models&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;modelSummaries&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

          &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Listing the available Bedrock foundation models:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

          &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;model&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;models&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;repeat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;` Model: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;modelId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;-&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;repeat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;` Name: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;modelName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;` Provider: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;providerName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;` Model ARN: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;modelArn&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;` Input modalities: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;inputModalities&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;` Output modalities: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;outputModalities&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;` Supported customizations: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;customizationsSupported&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;` Supported inference types: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;inferenceTypesSupported&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;` Lifecycle status: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;modelLifecycle&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;repeat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;\n`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
          &lt;span class="p"&gt;}&lt;/span&gt;

          &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;active&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;models&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;modelLifecycle&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ACTIVE&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
          &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;legacy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;models&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;modelLifecycle&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;LEGACY&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

          &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="s2"&gt;`There are &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;active&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; active and &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;legacy&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; legacy foundation models in &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;REGION&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="p"&gt;);&lt;/span&gt;

          &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;};&lt;/span&gt;

        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;argv&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="nf"&gt;fileURLToPath&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;meta&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The snippet above does the same in JavaScript, listing the available Bedrock foundation models. To learn more about how to use the JavaScript SDK, &lt;a href="https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/javascript_bedrock_code_examples.html?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;read this guide&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;There are code samples that showcase how to integrate Bedrock SDKs into your favorite programming languages. You can find them in &lt;a href="https://docs.aws.amazon.com/code-library/latest/ug/bedrock-runtime_code_examples.html?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;this guide&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding Amazon Bedrock API Responses
&lt;/h3&gt;

&lt;p&gt;Now, let’s have a look at the main API operations that Amazon Bedrock provides for model inference:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;InvokeModel&lt;/strong&gt; – Sends one prompt and gets a response.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Converse&lt;/strong&gt; – Allows ongoing conversations by including previous messages.&lt;/li&gt;
&lt;/ul&gt;
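As a hedged sketch of the Converse operation with Boto3 (the model ID and helper names here are illustrative — swap in any text model you have access to):

```python
def build_messages(prompt):
    # Converse takes a list of messages; each has a role and a list of content blocks.
    return [{"role": "user", "content": [{"text": prompt}]}]

def ask(prompt, model_id="amazon.titan-text-express-v1", region="us-east-1"):
    import boto3  # AWS SDK for Python; requires configured credentials
    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.converse(
        modelId=model_id,
        messages=build_messages(prompt),
        inferenceConfig={"maxTokens": 256, "temperature": 0.5},
    )
    # The reply text lives under output -> message -> content[0] -> text.
    return response["output"]["message"]["content"][0]["text"]
```

To keep a conversation going, append the assistant's reply and the next user turn to the same messages list before calling `converse` again.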

&lt;p&gt;Additionally, Amazon Bedrock supports streaming responses with &lt;code&gt;InvokeModelWithResponseStream&lt;/code&gt; and &lt;code&gt;ConverseStream&lt;/code&gt;.&lt;/p&gt;
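A minimal streaming sketch with `ConverseStream` (again assuming Boto3, configured credentials, and an illustrative Titan model ID) might look like this:

```python
def delta_text(event):
    # contentBlockDelta events carry incremental text; other events carry metadata.
    if "contentBlockDelta" in event:
        return event["contentBlockDelta"]["delta"].get("text", "")
    return ""

def stream_reply(prompt, model_id="amazon.titan-text-express-v1", region="us-east-1"):
    import boto3  # AWS SDK for Python; requires configured credentials
    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.converse_stream(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    # The response exposes an event stream that is consumed as chunks arrive.
    for event in response["stream"]:
        chunk = delta_text(event)
        if chunk:
            print(chunk, end="", flush=True)
```

Streaming lets a chat UI start rendering the reply as soon as the first chunk arrives instead of waiting for the full response.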

&lt;p&gt;To see the type of responses you’ll get when you submit a single prompt with &lt;code&gt;InvokeModel&lt;/code&gt; and &lt;code&gt;Converse&lt;/code&gt;, check the following guides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference-call.html?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;&lt;strong&gt;Converse API&lt;/strong&gt;&lt;/a&gt;: This guide showcases how to use Amazon Bedrock using the Converse API. It also includes how you can make a request with an &lt;a href="https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-rt" rel="noopener noreferrer"&gt;Amazon Bedrock runtime endpoint&lt;/a&gt; and examples of the response you’ll get with either &lt;code&gt;Converse&lt;/code&gt; or &lt;code&gt;ConverseStream&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html" rel="noopener noreferrer"&gt;&lt;strong&gt;InvokeModel&lt;/strong&gt;&lt;/a&gt;: This guide explains how to use the &lt;code&gt;InvokeModel&lt;/code&gt; operation in Amazon Bedrock. It also covers how to send requests to foundation models, set parameters for the best results, and manage responses.&lt;/li&gt;
&lt;/ul&gt;
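Unlike Converse, `InvokeModel` takes a model-specific JSON body. Here is a hedged sketch for an Amazon Titan text model (the field names follow Titan's request format; other providers use different schemas):

```python
import json

def build_titan_body(prompt, max_tokens=256, temperature=0.5):
    # Titan text models expect inputText plus a textGenerationConfig block.
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
        },
    })

def invoke_titan(prompt, model_id="amazon.titan-text-express-v1", region="us-east-1"):
    import boto3  # AWS SDK for Python; requires configured credentials
    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.invoke_model(
        modelId=model_id,
        body=build_titan_body(prompt),
        contentType="application/json",
        accept="application/json",
    )
    payload = json.loads(response["body"].read())
    # Titan returns the generated text under results[0].outputText.
    return payload["results"][0]["outputText"]
```

This is why many applications prefer Converse: it offers one uniform request and response shape across providers, while `InvokeModel` requires per-model body handling like the above.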

&lt;h2&gt;
  
  
  Building A Conversational AI Application With Amazon Bedrock
&lt;/h2&gt;

&lt;p&gt;Now, let's start building our first application in Amazon Bedrock using the AWS Management Console. For this first project, we'll create a conversational AI assistant that works only with text.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Get started with Amazon Bedrock in the AWS Management Console
&lt;/h3&gt;

&lt;p&gt;First, sign in to the AWS Management Console from the main AWS sign-in URL. Once you’re signed in, you’ll land on the dashboard. From there, select the &lt;strong&gt;Amazon Bedrock&lt;/strong&gt; service (search for "Bedrock" in the AWS search bar).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqzmug543vdcmic3m926.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqzmug543vdcmic3m926.png" alt="Searching AWS Bedrock from Console" width="800" height="607"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After selecting Amazon Bedrock, head over to the &lt;strong&gt;Model Access&lt;/strong&gt; tab and ensure you have access to any &lt;strong&gt;Amazon Titan&lt;/strong&gt; text generation models by requesting access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fethhnpez4mjz74kfppu4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fethhnpez4mjz74kfppu4.png" alt="Request Access Models" width="800" height="472"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After selecting the model you'd like to work with, hit the &lt;strong&gt;Next&lt;/strong&gt; button. You'll then be taken to a page where you can submit a request for access to the model.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Building the Chatbot Using Amazon Titan
&lt;/h3&gt;

&lt;p&gt;Head over to the &lt;strong&gt;Playgrounds&lt;/strong&gt; section in the side navigation and select the &lt;strong&gt;Chat / Text&lt;/strong&gt; section. Enter a prompt in the playground. Click the &lt;strong&gt;Run&lt;/strong&gt; button to generate a response from Titan’s text model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpn9wkt5wgg6pbpzjlins.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpn9wkt5wgg6pbpzjlins.png" alt="Single Prompt AWS" width="800" height="587"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Deploy the Chatbot with AWS Lambda
&lt;/h3&gt;

&lt;p&gt;Now, let’s deploy the chatbot with &lt;a href="https://aws.amazon.com/lambda/?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;AWS Lambda&lt;/a&gt; as a serverless application! First, we need to create an AWS Lambda Function. Here are some steps to follow to create an AWS Lambda Function:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to AWS Lambda and Create a Function.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbx4ojl0m9xfbq915r5ip.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbx4ojl0m9xfbq915r5ip.png" alt="AWS Lambda Homepage" width="800" height="472"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the &lt;strong&gt;Author from Scratch&lt;/strong&gt; tab and configure the deployment settings. Note that the runtime should be &lt;strong&gt;Python 3.10&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgnwrp2wa31d28iw9trc0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgnwrp2wa31d28iw9trc0.png" alt="Author from Scratch selection" width="800" height="558"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an execution role that grants the function &lt;strong&gt;Bedrock and CloudWatch permissions&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhc54don0leobdqjle5u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhc54don0leobdqjle5u.png" alt="Creating Role" width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create function! 🚀&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Add some code to the Lambda function and hit the &lt;strong&gt;Deploy&lt;/strong&gt; button.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;

&lt;span class="n"&gt;bedrock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;bedrock-runtime&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lambda_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;user_input&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;queryStringParameters&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bedrock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;prompt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;maxTokens&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;
        &lt;span class="p"&gt;}),&lt;/span&gt;
        &lt;span class="n"&gt;modelId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;amazon.titan-text-lite-v1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;model_output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loads&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;statusCode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;headers&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Content-Type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;application/json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;response&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;model_output&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;completions&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]})&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
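&lt;p&gt;Before wiring up API Gateway, you can sanity-check the handler in the Lambda console with a test event shaped like the one API Gateway sends. A minimal sketch (the message text is just a placeholder; this mirrors how the handler reads the query string, without calling Bedrock):&lt;/p&gt;

```python
# Hypothetical test event mimicking what API Gateway passes to the handler
test_event = {
    "queryStringParameters": {"message": "Hello, chatbot!"}
}

# The handler above pulls the user's message out of the query string like this
user_input = test_event["queryStringParameters"]["message"]
print(user_input)  # Hello, chatbot!
```

&lt;p&gt;If the event has no &lt;code&gt;queryStringParameters&lt;/code&gt; key, the handler raises a &lt;code&gt;KeyError&lt;/code&gt;, so make sure your test event includes it.&lt;/p&gt;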



&lt;h3&gt;
  
  
  Step 4: Deploy the API with API Gateway
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Under the &lt;strong&gt;Function Overview&lt;/strong&gt;, click the &lt;strong&gt;Add Trigger&lt;/strong&gt; button and select the &lt;strong&gt;API Gateway&lt;/strong&gt; option.&lt;/li&gt;
&lt;li&gt;Create an HTTP API and configure the security method for your API endpoint.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Deploy the API! 🤘&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjia8esazc8mtjy13dqfj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjia8esazc8mtjy13dqfj.png" alt="Deploying HTTP API" width="800" height="558"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Note down your &lt;strong&gt;Invoke URL&lt;/strong&gt; to interact with the chatbot! 🚀&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1sa9rcznwfc4yraoie4k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1sa9rcznwfc4yraoie4k.png" alt="Invoking URL" width="800" height="558"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Finally, you can interact with your API’s endpoint and build with it. 😎&lt;/p&gt;
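&lt;p&gt;One way to call the endpoint from Python is to append the message as a query parameter to the Invoke URL. A small sketch, assuming a hypothetical Invoke URL (swap in the one from your API Gateway console):&lt;/p&gt;

```python
from urllib.parse import urlencode

# Hypothetical Invoke URL -- replace with your own from the API Gateway console
INVOKE_URL = "https://abc123.execute-api.us-east-1.amazonaws.com/default/my-chatbot"

def build_chat_url(base_url, message):
    """URL-encode the message as the query parameter the Lambda handler reads."""
    return f"{base_url}?{urlencode({'message': message})}"

url = build_chat_url(INVOKE_URL, "What is Amazon Bedrock?")
# To actually call your deployed endpoint:
#   import json, urllib.request
#   body = json.load(urllib.request.urlopen(url))
#   print(body["response"])
```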

&lt;h2&gt;
  
  
  Building a Code Generation Tool Using Amazon Bedrock and Anthropic Claude
&lt;/h2&gt;

&lt;p&gt;Now, let’s build something more fun and technical. In this section, we will build a code generation tool using Amazon Bedrock and Anthropic’s Claude 2.0 model (a model that can respond with generated code). Don’t fret, we’ll still be working in the AWS Management Console, but basic Python knowledge is required for this use case.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Navigate to Bedrock and Select the Anthropic Claude 2.0 Model
&lt;/h3&gt;

&lt;p&gt;Just like we did in the chatbot use case, access Amazon Bedrock in the AWS Management Console. Go to the &lt;strong&gt;Chat/Text&lt;/strong&gt; section under the &lt;strong&gt;Playground&lt;/strong&gt; section.&lt;/p&gt;

&lt;p&gt;Under the &lt;strong&gt;Select Model&lt;/strong&gt; dropdown, select the Anthropic Claude 2.0 model. Once done, you can enter a code-related prompt in the chat.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskq8wrpq4vnkuwsdifrt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskq8wrpq4vnkuwsdifrt.png" alt="Selecting Model from Dropdown in Chat." width="800" height="669"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s a great model to work with: it doesn’t just generate code, it also explains what the code does and how it works, and it’s fast and effective.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Deploy the Code Generation Use Case with AWS Lambda
&lt;/h3&gt;

&lt;p&gt;Just as we did in the first use case, we will deploy the code generator using AWS Lambda.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a New Lambda Function&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Add some Python code to the Lambda function (runtime: &lt;strong&gt;Python 3.9&lt;/strong&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;

&lt;span class="n"&gt;bedrock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;bedrock-runtime&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lambda_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;

  &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;queryStringParameters&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;prompt&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bedrock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;prompt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;max_length&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;temperature&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;k&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;p&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.75&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;frequency_penalty&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;presence_penalty&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="n"&gt;modelId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;arn:aws:bedrock::account:model/claude-v2-20221215&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="n"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loads&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;statusCode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;headers&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Content-Type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;application/json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;code&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;generated_text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]})&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
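&lt;p&gt;One Claude-specific detail worth calling out: Bedrock’s text-completions API for Claude expects the prompt wrapped in a Human/Assistant template, and requests that skip it are typically rejected with a validation error. A small helper makes the format explicit (sketch only; the sample prompt is arbitrary):&lt;/p&gt;

```python
def to_claude_prompt(user_prompt):
    """Wrap raw text in the Human/Assistant template that Claude's
    text-completions API on Bedrock expects."""
    return f"\n\nHuman: {user_prompt}\n\nAssistant:"

print(to_claude_prompt("Write a Python function that reverses a string."))
```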



&lt;h3&gt;
  
  
  Step 3: Deploy an API for Code Generation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Go to &lt;strong&gt;API Gateway&lt;/strong&gt; and select the &lt;strong&gt;HTTP API&lt;/strong&gt; option.&lt;/li&gt;
&lt;li&gt;Integrate it with the code generator.&lt;/li&gt;
&lt;li&gt;Deploy the API and get the &lt;strong&gt;Invoke URL&lt;/strong&gt; for interactions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Best Practices For Working With Amazon Bedrock
&lt;/h2&gt;

&lt;p&gt;When working with &lt;strong&gt;Amazon Bedrock&lt;/strong&gt;, you need to pay attention to security, cost, and performance. By following these best practices, you can make sure your AI applications are secure, efficient, and cost-effective.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Data Security and Privacy
&lt;/h3&gt;

&lt;p&gt;AI models often handle sensitive user data, so keeping that data private is critical. Here are some practices to follow to protect data when using Amazon Bedrock:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Use IAM Roles and Policies:&lt;/strong&gt; Follow the &lt;a href="https://community.aws/content/2dsQs3aTnwV3LKeUDFkXNSndHjp/understanding-the-principle-of-least-privilege-in-aws?lang=en&amp;amp;utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;least privilege principle&lt;/a&gt; to limit access to Bedrock APIs and data storage. This means only giving people the permissions they need and nothing more.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Encrypt Data:&lt;/strong&gt; Use &lt;strong&gt;AWS Key Management Service (KMS)&lt;/strong&gt; to protect sensitive data both at rest and in transit.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitor and Audit Access:&lt;/strong&gt; Enable &lt;strong&gt;CloudWatch&lt;/strong&gt; and &lt;strong&gt;AWS Config&lt;/strong&gt; to keep track of who accesses AI models, data, and logs; and how they’re being accessed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mask Data:&lt;/strong&gt; Before sending data to Bedrock, remove any personally identifiable information to reduce the risk.&lt;/li&gt;
&lt;/ul&gt;
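&lt;p&gt;The last point, masking data, can be as simple as a regex pass before a prompt leaves your service. A minimal sketch (these illustrative patterns only catch email addresses and US-style phone numbers; production redaction should use a dedicated PII tool such as Amazon Comprehend’s PII detection):&lt;/p&gt;

```python
import re

# Illustrative patterns only -- real redaction needs a dedicated PII library
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def mask_pii(text):
    """Replace obvious emails and phone numbers before sending text to Bedrock."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(mask_pii("Contact jane@example.com or 555-123-4567."))
# Contact [EMAIL] or [PHONE].
```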

&lt;h3&gt;
  
  
  2. Cost Optimization (Managing Bedrock Usage and Expenses) 💸
&lt;/h3&gt;

&lt;p&gt;Amazon Bedrock uses a &lt;strong&gt;pay-per-use&lt;/strong&gt; pricing model: you get billed based on what you use, so it's important to manage costs well. Here's how you can optimize costs when using Bedrock:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Choose the Right Foundation Model:&lt;/strong&gt; Different models cost different amounts; select the one that best fits your needs and budget.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimize API Calls:&lt;/strong&gt; Cut down on unnecessary API requests by using caching and batching when you can.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitor Usage:&lt;/strong&gt; Use &lt;a href="https://aws.amazon.com/aws-cost-management/aws-cost-explorer/?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;AWS Cost Explorer&lt;/a&gt; and &lt;a href="https://aws.amazon.com/aws-cost-management/aws-budgets/?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;AWS Budgets&lt;/a&gt; to track your spending and set up alerts for any unexpected cost increases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Auto Scaling:&lt;/strong&gt; When using Bedrock with AWS Lambda, adjust the number of requests to reduce unnecessary API calls.&lt;/li&gt;
&lt;/ul&gt;
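&lt;p&gt;The caching point is easy to prototype in-process: if the same prompt arrives twice, return the stored answer instead of paying for a second model invocation. A sketch using &lt;code&gt;functools.lru_cache&lt;/code&gt; with a stand-in for the Bedrock call (a real deployment would use a shared cache such as ElastiCache, since Lambda containers are ephemeral):&lt;/p&gt;

```python
from functools import lru_cache

call_count = 0  # tracks how many times we'd actually hit (and pay for) the model

@lru_cache(maxsize=256)
def cached_invoke(prompt):
    """Stand-in for bedrock.invoke_model; the body only runs on cache misses."""
    global call_count
    call_count += 1
    return f"model answer for: {prompt}"

cached_invoke("What is Bedrock?")
cached_invoke("What is Bedrock?")  # identical prompt -> served from cache
print(call_count)  # 1
```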

&lt;h3&gt;
  
  
  3. Bias and Fairness
&lt;/h3&gt;

&lt;p&gt;AI models can pick up biases based on the data they are trained on, which can cause problems. To make sure things are fair:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Check Model Responses:&lt;/strong&gt; Regularly test the model's outputs with prompts to identify any biases or errors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Diverse Data for Fine-Tuning:&lt;/strong&gt; When adjusting models, make sure the data includes various groups and viewpoints.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. Performance Tuning
&lt;/h3&gt;

&lt;p&gt;To enhance response times and overall performance, follow these practices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tune API Parameters:&lt;/strong&gt; Adjust settings like &lt;code&gt;temperature&lt;/code&gt; and &lt;code&gt;maxTokens&lt;/code&gt; to get the best results.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Inference-Optimized Infrastructure:&lt;/strong&gt; If you are deploying custom models, use &lt;a href="https://aws.amazon.com/ai/machine-learning/inferentia/?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;AWS Inferentia&lt;/a&gt; chips to boost inference performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load Balance Requests:&lt;/strong&gt; If you have a lot of traffic, use &lt;a href="https://aws.amazon.com/elasticloadbalancing/application-load-balancer/?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;AWS Application Load Balancer&lt;/a&gt; to distribute requests more efficiently.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reduce Latency:&lt;/strong&gt; Place applications closer to users with &lt;a href="https://aws.amazon.com/global-accelerator/?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;AWS Global Accelerator&lt;/a&gt; or AWS edge services.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AWS Bedrock makes it easier to integrate AI by offering scalable foundation models from Amazon and other leading AI providers, without the stress of training models or managing infrastructure. To get the best results, developers should focus on security, cost-effectiveness, and performance tuning rather than manual work.&lt;/p&gt;

&lt;p&gt;To keep exploring AWS Bedrock, developers should try out different models, adjust outputs, and connect with other AWS services. Keeping up with Amazon Bedrock’s guides, blogs, and other resources will help you make the most of Bedrock and spark new ideas for AI-powered applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Before you go… 🥹
&lt;/h3&gt;

&lt;p&gt;Thank you for taking the time to learn about building AI applications with AWS Bedrock. If you found this article helpful, please consider supporting Microtica by creating an account and &lt;a href="https://discord.gg/N8WdXyXxZR" rel="noopener noreferrer"&gt;joining the community&lt;/a&gt;. Your support helps us keep improving and offering valuable resources for the developer community!&lt;/p&gt;

&lt;p&gt;&lt;a href="http://app.microtica.com/?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Join Microtica for free! 🚀&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5i3lvc9gaht4vyuljyme.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5i3lvc9gaht4vyuljyme.gif" alt="Thank You GIF Minions" width="498" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>tutorial</category>
      <category>beginners</category>
    </item>
    <item>
      <title>🚀 ICYMI: AI-powered DevOps is changing how teams deploy software, automate tasks, optimize performance, and scale effortlessly. Missed our article? Read it here 👇:</title>
      <dc:creator>Opemipo Disu</dc:creator>
      <pubDate>Mon, 03 Mar 2025 15:53:11 +0000</pubDate>
      <link>https://dev.to/coderoflagos/icymi-ai-powered-devops-is-changing-how-teams-deploy-software-automate-tasks-optimize-2jbl</link>
      <guid>https://dev.to/coderoflagos/icymi-ai-powered-devops-is-changing-how-teams-deploy-software-automate-tasks-optimize-2jbl</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/microtica" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__org__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F2332%2Fc35f5609-003b-46f7-850b-33e49873761f.png" alt="Microtica" width="746" height="677"&gt;
      &lt;div class="ltag__link__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F392943%2F708b2716-2bb6-45ee-9662-ddb0288e3079.JPG" alt="" width="800" height="1199"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/microtica/deploy-smarter-not-harder-the-ai-powered-devops-revolution-2b04" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Deploy Smarter, Not Harder – The AI-Powered DevOps Revolution ☁️&lt;/h2&gt;
      &lt;h3&gt;Opemipo Disu for Microtica ・ Feb 27&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#programming&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#tutorial&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#aws&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#webdev&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>programming</category>
      <category>tutorial</category>
      <category>aws</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Deploy Smarter, Not Harder – The AI-Powered DevOps Revolution ☁️</title>
      <dc:creator>Opemipo Disu</dc:creator>
      <pubDate>Thu, 27 Feb 2025 13:00:11 +0000</pubDate>
      <link>https://dev.to/microtica/deploy-smarter-not-harder-the-ai-powered-devops-revolution-2b04</link>
      <guid>https://dev.to/microtica/deploy-smarter-not-harder-the-ai-powered-devops-revolution-2b04</guid>
      <description>&lt;p&gt;Container deployment with AWS can be quite complex, requiring some advanced configuration and more hands-on management. AWS is undoubtedly a great tool for DevOps engineers, but developers constantly feel the need for it to streamline deployment and management processes.&lt;/p&gt;

&lt;p&gt;Over the years, developers' work with cloud infrastructures like AWS has significantly improved. This is because tools like Microtica streamline deployment processes and reduce management complexities. Microtica is one of the few tools reinventing the approach to working with the cloud, and it also helps engineers save time.&lt;/p&gt;

&lt;p&gt;In this article, you’ll learn how AI simplifies AWS Cloud integration and transforms container deployment. This tutorial will also walk you through the steps on how to deploy containers using AWS as the underlying cloud infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fug9zg1iiq284tf5ap5ot.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fug9zg1iiq284tf5ap5ot.gif" alt="are you ready gif" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Role of AI in Container Deployment&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Just as AI has a huge responsibility in modern cloud orchestration, it also plays a huge role in container deployment. In this section, we will look into the importance of AI in container deployment and how it helps reduce deployment stress.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Automating infrastructure provisioning&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Working with AWS containers requires you to manually set up clusters and make advanced network configurations. With AI-powered tools like Microtica, you do not need to worry about this: they provision infrastructure and automate routine tasks for you, reducing setup and management complexity.&lt;/p&gt;

&lt;p&gt;In short, AI-powered tools reduce the complexity of working with cloud infrastructure by automating provisioning and simplifying integration setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Intelligent resource allocation and scaling&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;AI-powered cloud delivery solutions automatically monitor the needs of your cloud infrastructure and adjust resources to eliminate common &lt;a href="https://www.microtica.com/blog/gen-ai-for-ci-cd?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;bottlenecks&lt;/a&gt; and slow performance. Ideally, you don't want your application to be laggy or slow—these issues are usually caused by insufficient storage and memory.&lt;/p&gt;

&lt;p&gt;Engineers could do this manually, but the approach is time-consuming and could even lead to more complexities for them and developers while trying to allocate resources and scale.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3. AI-Driven Cost Optimization&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Most AI-powered cloud delivery solutions help reduce costs by predicting what you'll most likely need based on historical analysis. They ensure that resources are allocated based on actual demand, which keeps teams from overspending on cloud resources. Refer to this guide to learn how Microtica &lt;a href="https://medium.com/microtica/maximizing-cloud-cost-optimization-with-ai-driven-solutions-f02ee3804e1d?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;optimizes costs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;That said, this approach prevents both under-provisioning and over-provisioning; you run with only what you need.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is Microtica?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;People often refer to Microtica as a platform that eases the stress of cloud delivery.&lt;/p&gt;

&lt;p&gt;Microtica goes beyond easing the stress of cloud delivery. It is a versatile cloud delivery platform that simplifies the way developers work with infrastructure and deploy applications in the cloud using just one UI. With Microtica, you don’t need to worry about writing scripts or manually managing the underlying infrastructure.&lt;/p&gt;

&lt;p&gt;Although you can still configure things yourself, Microtica provides prebuilt templates for many technologies—they serve as quickstarts for getting started with Microtica. This article is focused on deploying applications on top of AWS using Microtica.&lt;/p&gt;

&lt;p&gt;Apart from that, Microtica offers several other features that we will explore in the next section of this tutorial.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Benefits Of Using Microtica&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Before getting our hands dirty, I thought sharing some benefits of using Microtica in your development and deployment workflows would be great. In this section, you’ll learn about some of Microtica’s capabilities and why you should use Microtica for container deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Unified Platform&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In Microtica, you have everything you need in just one user interface without manually working with any tools. Imagine a world where you don’t need to worry about learning Kubernetes or how to use any cloud or containerization tools—it would be great, right? That’s exactly what Microtica provides!&lt;/p&gt;

&lt;p&gt;You don’t need to be a Kubernetes expert to work with it; Microtica drives the underlying infrastructure for you, so you get the same results without the manual overhead. Everything happens in the UI, and there’s no need to do anything locally.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Pre-built templates&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Microtica offers its users &lt;a href="https://www.microtica.com/templates?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;pre-built templates&lt;/a&gt; for deploying their applications in any container environment of their choice. There are templates for working with frameworks, libraries, and even cloud tools. The templates are mainly for getting your code into production quickly without making too many configurations.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Integrated Container Monitoring&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Microtica gives real-time updates on the container’s health, warnings, and errors. It’s like an observability tool embedded inside Microtica. It also provides updates on performance and resource usage. An added advantage is that this feature lets you track previous performance, health, and resource usage as well.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;4. Microtica makes developers more productive&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;As you do not have to worry about manual controls and advanced configurations, a lot of time is saved. This lets engineers get the best out of their work and makes them more productive. Microtica has proven that there is a lot you can do without focusing on configuration complexities.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Hands-on! Let’s see Microtica In Action 🎉&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;By now, you should have seen Microtica’s capabilities and how it can boost your DevOps team’s workflow by letting you focus on what matters. With Microtica’s unified platform, there is a lot you can do in just a few minutes.&lt;/p&gt;

&lt;p&gt;In the next section, we will dive into the main thing for the article—deploying a container on top of AWS with Microtica.&lt;/p&gt;

&lt;p&gt;Let’s get our hands dirty! 👨‍💻 🙌&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekpttlbmdldmf359qys8.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekpttlbmdldmf359qys8.gif" alt="i like to get my hands dirty gif" width="480" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 1: Creating a Microtica Account&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To get started, you need to create a Microtica account. This is the first step. You can sign up using your email, GitHub, or Google Auth. Once you do this, you'll be redirected to the unified platform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnwoq8mzurh8uok2z5bb4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnwoq8mzurh8uok2z5bb4.png" alt="sign up gif" width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 2: Connecting Your AWS Account&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;When you create an account on Microtica, you'll go through an onboarding process where you set up your own project. During this process, you’ll also add your &lt;strong&gt;AWS account&lt;/strong&gt;. If you need to manage cloud accounts later, you can do that from the &lt;strong&gt;Integrations&lt;/strong&gt; tab under the &lt;strong&gt;Cloud Accounts&lt;/strong&gt; section.&lt;/p&gt;

&lt;p&gt;From here, it’s &lt;strong&gt;Integrations &amp;gt; Cloud Accounts &amp;gt; Connect AWS Account &amp;gt; Connect&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95e63f9hxjzgndtn02qp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95e63f9hxjzgndtn02qp.png" alt="integrations image" width="800" height="531"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you click the &lt;strong&gt;Connect&lt;/strong&gt; button in the modal, you'll see a dialog that redirects you to your AWS account. Fill in the required credentials, tick the required capabilities checkbox, and click the &lt;strong&gt;Create stack&lt;/strong&gt; button. Once the CloudFormation stack is created, your AWS account will automatically show up in Microtica’s Console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1739981020394%2F0d5beaa6-5cd3-46a9-aceb-2b76b23026ac.webp%3Fauto%3Dcompress%2Cformat%26format%3Dwebp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1739981020394%2F0d5beaa6-5cd3-46a9-aceb-2b76b23026ac.webp%3Fauto%3Dcompress%2Cformat%26format%3Dwebp" alt="microtica's console image" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Yay, you now have your AWS Cloud account connected. 😃🎉&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 3: Choosing the Right Template&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Choose the template you want to work with. Microtica lets you explore any of the available technologies, either in the &lt;a href="https://www.microtica.com/templates?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;pre-built template directories&lt;/a&gt; or under the &lt;strong&gt;Templates&lt;/strong&gt; tab on the platform. In this article, we will be working with &lt;strong&gt;EKS&lt;/strong&gt;, so we will use EKS's pre-built template for containerization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ozraz6wm8puhjr8aeqb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ozraz6wm8puhjr8aeqb.png" alt="EKS Template image" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the &lt;strong&gt;Amazon EKS&lt;/strong&gt; starter pack template in the &lt;strong&gt;Templates&lt;/strong&gt; directory.&lt;/p&gt;

&lt;p&gt;You’ll have to configure the template to create a Kubernetes Cluster from here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7hsgnz4st8zmd1ixg34.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7hsgnz4st8zmd1ixg34.png" width="800" height="581"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You need to give the cluster a unique name and select the node instance and configurations you want the cluster to use.&lt;/p&gt;

&lt;p&gt;These default settings use an &lt;a href="https://aws.amazon.com/ec2/instance-types/t3/?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;&lt;strong&gt;EC2 t3.medium instance&lt;/strong&gt;&lt;/a&gt; with 1 node, which is a minimal configuration for trying out the template. For production workloads, you would need more compute power, such as &lt;code&gt;t3.large&lt;/code&gt;, &lt;code&gt;t3.xlarge&lt;/code&gt;, or &lt;code&gt;t3.2xlarge&lt;/code&gt;. If you are working on something smaller, you can stick with a &lt;code&gt;t3.small&lt;/code&gt; EC2 instance.&lt;/p&gt;
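&lt;p&gt;As a rough sizing aid, here is a small illustrative sketch that picks the smallest t3 instance satisfying a workload's vCPU and memory needs, using AWS's published t3 specs. Treat it as a starting point for experimentation, not a substitute for load testing.&lt;/p&gt;

```python
# Rough sizing helper using AWS's published t3 specs
# (vCPUs, memory in GiB), ordered smallest to largest.
T3_SPECS = [
    ("t3.small",   2, 2),
    ("t3.medium",  2, 4),
    ("t3.large",   2, 8),
    ("t3.xlarge",  4, 16),
    ("t3.2xlarge", 8, 32),
]

def smallest_fitting_t3(vcpus_needed, mem_gib_needed):
    """Return the smallest t3 instance that fits the workload, or None."""
    for name, vcpus, mem in T3_SPECS:
        if vcpus >= vcpus_needed and mem >= mem_gib_needed:
            return name
    return None  # workload exceeds the t3 family

print(smallest_fitting_t3(2, 3))   # t3.medium
print(smallest_fitting_t3(4, 10))  # t3.xlarge
```

&lt;p&gt;For the walkthrough below, the t3.medium default is plenty; you would only size up once real usage metrics justify it.&lt;/p&gt;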

&lt;p&gt;Click the &lt;strong&gt;Save&lt;/strong&gt; button to proceed to configure the environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiuaiqmhxssnhtfcsote3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiuaiqmhxssnhtfcsote3.png" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the image above, you now need to create an environment where you want to deploy your EKS Cluster so you can own your infrastructure and data.&lt;/p&gt;

&lt;p&gt;Give the environment a name and description. You also need to specify the cloud provider where you want to deploy the cluster; in this article, we will be using AWS.&lt;/p&gt;

&lt;p&gt;Once you’re done, click the &lt;strong&gt;Create&lt;/strong&gt; button and link your AWS account to the environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrnhw0p0xa85z6b4moxk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrnhw0p0xa85z6b4moxk.png" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose the AWS account and the region where your cluster will be deployed. Once done, click the &lt;strong&gt;Next&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;You should see this after clicking the button 👇:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F646pdfgk1wzl0ep7bhxp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F646pdfgk1wzl0ep7bhxp.png" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This shows the process it uses for the component deployment. It provides enough transparency that you can even inspect the &lt;a href="https://github.com/microtica/templates/tree/master/aws-eks?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;&lt;strong&gt;template’s source on GitHub&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once you see this, click the &lt;strong&gt;Deploy&lt;/strong&gt; button to deploy it!&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Deploying Your First Container&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When done, you’ll be redirected to the pipelines page, where you can see the deployed pipelines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fboqbyyafmmd52p2xzuai.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fboqbyyafmmd52p2xzuai.png" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After doing this, head to the &lt;strong&gt;Environments&lt;/strong&gt; tab and click &lt;strong&gt;Add Application&lt;/strong&gt; on the specific component you’re working with.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7wvy6v4vnyjr12mnjt29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7wvy6v4vnyjr12mnjt29.png" width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After clicking &lt;strong&gt;Add Application&lt;/strong&gt;, a modal should pop up with a list of templates Microtica provides. In this article, we will be working with the &lt;strong&gt;Next.js&lt;/strong&gt; template.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn7tswkekigcv9szwwzrr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn7tswkekigcv9szwwzrr.png" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After clicking the &lt;strong&gt;Deploy&lt;/strong&gt; button, you’ll be redirected to the next deployment steps, which involve creating a Git repository, configuring the template, choosing where to deploy, and finally deploying. 🚀&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0k3ufhp8sx4wnfnldxhm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0k3ufhp8sx4wnfnldxhm.png" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here are some things to do after clicking the &lt;strong&gt;Next&lt;/strong&gt; button:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Give the application a name in the &lt;strong&gt;AppName&lt;/strong&gt; input field.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;When selecting where to deploy, select the Cluster you’d love to work with. In this case, we will work with the one we created earlier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0a1xashlwxbfhq6lb347.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0a1xashlwxbfhq6lb347.png" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Next&lt;/strong&gt;, and &lt;strong&gt;Deploy&lt;/strong&gt; your application to the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Wait for the application to build before deploying it. To verify that it’s building, you can check the logs to see what’s happening.&lt;/p&gt;

&lt;p&gt;When it’s done building, you can head over to the &lt;strong&gt;Environments&lt;/strong&gt; tab to see what’s happening.&lt;/p&gt;

&lt;p&gt;Head over to the application in the cluster component and click the &lt;strong&gt;Assign domain&lt;/strong&gt; button to create a domain where your application will be deployed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jcgtmo7va40oqhxybiq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jcgtmo7va40oqhxybiq.png" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Microtica offers a free domain, which you can use if you want. Alternatively, you can add your own custom domain. Click the &lt;strong&gt;Next&lt;/strong&gt; button when you’re done with either option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F82cdoojztjp38m42li2f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F82cdoojztjp38m42li2f.png" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After clicking the button, a CNAME record for your domain will be created for you automatically if you’re working with the free domain.&lt;/p&gt;

&lt;p&gt;If you’re using a custom domain, you may need additional configuration to set the CNAME record yourself. In this guide, we’re working with a free domain, so the CNAME record is created automatically.&lt;/p&gt;

&lt;p&gt;Afterward, you'll need to restart your application for it to be deployed. Click the &lt;strong&gt;Restart&lt;/strong&gt; button to do this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4b4vo70gbb83s71t8if8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4b4vo70gbb83s71t8if8.png" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once restarted, head over to the &lt;strong&gt;Environments&lt;/strong&gt; tab and check the application to view the domain.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9g0phpgtlfacq3lb0wad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9g0phpgtlfacq3lb0wad.png" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once done, click the domain, and you should see Next.js’s default page.&lt;/p&gt;

&lt;p&gt;Now, you have your application deployed in the Cluster! ☁️ 🚀&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Managing and Scaling Deployed Applications&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In Microtica, you don’t need third-party tools to observe or monitor your application’s health, performance, memory, etc. Even so, it’s worth keeping track of your application and its resource usage, since catching issues early saves you from the stress and risk of degraded application performance.&lt;/p&gt;

&lt;p&gt;Also, Microtica's Cost Explorer feature helps you track and reduce spending on cloud infrastructure and deployment as you scale your applications.&lt;/p&gt;

&lt;p&gt;In this section of the article, we will explore how to manage and scale applications, and how to save on cloud costs.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Monitoring Application’s Performance&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Microtica has a monitoring tool integrated into the platform; we’ll be using it in this section of the article.&lt;/p&gt;

&lt;p&gt;To monitor your application, you need to enable monitoring for your Cluster. To get this done, head to the Cluster, and enable monitoring.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyy792dkzlndkpybbhveq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyy792dkzlndkpybbhveq.png" width="800" height="514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After enabling monitoring, you should see your metrics in the &lt;strong&gt;Monitoring&lt;/strong&gt; tab. The metrics include CPU usage, memory, cached items, errors, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fabeq2u2kgdd29v03d6oc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fabeq2u2kgdd29v03d6oc.png" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another way to monitor your application is to check its logs. To do this, go to the Application’s environment and click the Logs tab. You'll then see the current logs of your application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7du14l8z6hjffa17epi2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7du14l8z6hjffa17epi2.png" width="800" height="604"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One underrated feature of Microtica is that you can easily check previous logs for selected dates. To learn more about monitoring and alerting with Microtica, watch this video 👇:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/SQKdn2tiD8c"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Scaling Applications&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Scaling applications in Microtica is easy: you just need to make some configuration changes. You can scale your application either vertically or horizontally, all within the Microtica environment. In the application's settings, under &lt;strong&gt;Scaling&lt;/strong&gt;, you will find all the resource options you can adjust, such as CPU, memory, and instance replication.&lt;/p&gt;
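&lt;p&gt;Under the hood, these options correspond to standard Kubernetes settings. For reference, here is roughly what a Deployment's replica count and per-container resources (the same knobs Microtica exposes) look like in plain Kubernetes YAML; the names and values are illustrative:&lt;/p&gt;

```yaml
# Illustrative Kubernetes Deployment fragment: replicas for horizontal
# scaling, resource requests/limits for vertical scaling.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextjs-app          # hypothetical application name
spec:
  replicas: 3               # horizontal scaling: number of instances
  template:
    spec:
      containers:
        - name: nextjs-app
          resources:
            requests:       # vertical scaling: guaranteed resources
              cpu: "250m"
              memory: "256Mi"
            limits:         # hard caps per container
              cpu: "500m"
              memory: "512Mi"
```

&lt;p&gt;With Microtica, you adjust these values from the UI instead of editing manifests by hand.&lt;/p&gt;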

&lt;p&gt;To learn how to scale applications in Microtica, &lt;a href="https://docs.microtica.com/scaling?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;&lt;strong&gt;read this guide&lt;/strong&gt;&lt;/a&gt;. It will walk you through the easy steps on how to scale apps in Microtica.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Cost Optimization 💸&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Microtica offers a feature within the platform for managing and reducing your AWS cloud costs. It analyzes your spending on your cloud infrastructure (AWS) and acts as an advisor on where to cut back. It integrates seamlessly with your AWS account, requiring just a &lt;strong&gt;CloudFormation stack setup&lt;/strong&gt; that grants Microtica the necessary permissions.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Cost Explorer&lt;/strong&gt; feature in the platform is used for monitoring expenses, and this also helps with cost optimization. To see how Microtica optimizes cloud costs, &lt;a href="https://docs.microtica.com/cloud-cost-optimization?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;&lt;strong&gt;read this article&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Advanced Features of Microtica&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We’ve already looked at Microtica’s basic features, but there is a lot more you can do with the platform. Beyond removing the need for manual work, Microtica offers several additional capabilities.&lt;/p&gt;

&lt;p&gt;In this section, we will look into some other capabilities of Microtica and why you need them in your workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Custom Domain Setup&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Earlier, we explored how you can get a free domain while deploying your Next.js application—it was also mentioned that Microtica lets you configure your custom domain by integrating with your preferred DNS provider.&lt;/p&gt;

&lt;p&gt;Now, we’ll have a look at how to set up a custom domain in Microtica. 🚀&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Head to your Next.js application’s settings.&lt;/li&gt;
&lt;li&gt;Move to the &lt;strong&gt;Domain&lt;/strong&gt; tab and select &lt;strong&gt;Add your own custom domain&lt;/strong&gt;, then input your domain.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg06agcu18u0fq3fmdbge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg06agcu18u0fq3fmdbge.png" width="800" height="526"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update your &lt;strong&gt;DNS records&lt;/strong&gt; for the domain by adding the given CNAME records to your provider.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4qavmnf4erlzduwzo3jo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4qavmnf4erlzduwzo3jo.png" width="800" height="526"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click the &lt;strong&gt;Next&lt;/strong&gt; button, and allow some time for the DNS changes to propagate.&lt;/li&gt;
&lt;li&gt;Click the &lt;strong&gt;Restart&lt;/strong&gt; button, and your application is deployed to your custom domain! ☁️ 👨‍💻&lt;/li&gt;
&lt;/ul&gt;
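&lt;p&gt;For reference, a CNAME record at your DNS provider generally has this shape; the hostnames below are hypothetical placeholders, not real values from Microtica:&lt;/p&gt;

```
; BIND-style zone entry (all values are placeholders)
app.example.com.   300   IN   CNAME   assigned-target.example.net.
```

&lt;p&gt;Your provider's UI may label these fields Type, Name/Host, and Value/Target; the TTL (300 here) controls how quickly changes propagate.&lt;/p&gt;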

&lt;p&gt;That’s how easy it is to configure a custom domain with your Microtica application.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Continuous Integration/Continuous Deployment (CI/CD)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Instead of setting up CI/CD pipelines manually, Microtica uses its embedded Release Engineer feature to automate them. By using Microtica for CI/CD automation and optimization, you don’t have to worry about managing your pipelines either, as Microtica handles that for you. With the Release Engineer feature, Microtica also triggers deployments automatically on &lt;code&gt;git push&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If you want to learn how Microtica uses the Release Engineer for CI/CD automation and management, &lt;a href="https://www.microtica.com/blog/gen-ai-for-ci-cd?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;&lt;strong&gt;refer to this guide&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Infrastructure as Code (IaC)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Microtica allows teams to manage and provision infrastructure through Infrastructure as Code (IaC) instead of manual processes. With Microtica, you can define and version-control infrastructure configurations with &lt;strong&gt;CloudFormation&lt;/strong&gt; or &lt;strong&gt;Terraform&lt;/strong&gt; for consistency.&lt;/p&gt;

&lt;p&gt;Microtica works with AWS and GCP to let you manage infrastructure as code with ease. When using CloudFormation, you define templates in &lt;strong&gt;JSON&lt;/strong&gt; or &lt;strong&gt;YAML&lt;/strong&gt;. When working with Terraform, you're expected to have some familiarity with the &lt;strong&gt;HashiCorp Configuration Language&lt;/strong&gt; (HCL).&lt;/p&gt;
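&lt;p&gt;For a sense of what a CloudFormation template looks like, here is a minimal generic example in YAML. It declares a single S3 bucket and is not specific to Microtica; the logical ID and bucket name are placeholders:&lt;/p&gt;

```yaml
# Minimal CloudFormation template: declares one S3 bucket.
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal example template (illustrative only)
Resources:
  ExampleBucket:              # logical ID used within the template
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-example-bucket-name   # must be globally unique
```

&lt;p&gt;Because templates like this live in version control, every infrastructure change is reviewable and reproducible.&lt;/p&gt;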

&lt;p&gt;To learn more about the IaC feature in Microtica, &lt;a href="https://www.microtica.com/blog/building-custom-cloud-components?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;&lt;strong&gt;refer to this guide&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Summary&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://microtica.com/?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;&lt;strong&gt;Microtica&lt;/strong&gt;&lt;/a&gt; is one of the best DevOps tools that provides a seamless way to deploy clusters and applications to the cloud. Apart from these, it minimizes workload and enhances productivity for developers and teams.&lt;/p&gt;

&lt;p&gt;This article focused on how developers and teams can automatically deploy containers without any manual constraints. Additionally, we looked into how teams and engineers can scale their applications and monitor their logs, metrics, and costs, as well as the basic and advanced features of Microtica.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwp8h2wvdmf6uvacq1bv.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwp8h2wvdmf6uvacq1bv.gif" alt="flying in plane gif" width="500" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After exploring Microtica’s capabilities, I’m sure you’ll want to try it out; your future self will thank you. 😂&lt;/p&gt;

&lt;p&gt;&lt;a href="https://microtica.com/?utm_source=DEV&amp;amp;utm_medium=post&amp;amp;utm_campaign=devrel" rel="noopener noreferrer"&gt;&lt;strong&gt;Deploy with Microtica ☁️.&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks for taking the time to read this article. If you have any questions about Microtica and deploying containers with it, you can join our &lt;a href="https://discord.com/invite/ADaFvAsakW" rel="noopener noreferrer"&gt;&lt;strong&gt;Discord Community&lt;/strong&gt;&lt;/a&gt; or leave some comments below. I'm looking forward to hearing what you think about Microtica; see you in the cloud! 😛☁️&lt;/p&gt;

</description>
      <category>programming</category>
      <category>tutorial</category>
      <category>aws</category>
      <category>webdev</category>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>Opemipo Disu</dc:creator>
      <pubDate>Sun, 16 Feb 2025 09:20:31 +0000</pubDate>
      <link>https://dev.to/coderoflagos/-2fjn</link>
      <guid>https://dev.to/coderoflagos/-2fjn</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/microtica" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__org__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F2332%2Fc35f5609-003b-46f7-850b-33e49873761f.png" alt="Microtica" width="746" height="677"&gt;
      &lt;div class="ltag__link__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F392943%2F708b2716-2bb6-45ee-9662-ddb0288e3079.JPG" alt="" width="800" height="1199"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/microtica/why-cicd-is-a-bottleneck-and-how-ai-can-help-3pb4" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Why CI/CD is a Bottleneck and How AI Can Help ⚙️&lt;/h2&gt;
      &lt;h3&gt;Opemipo Disu for Microtica ・ Feb 14&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#programming&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#devops&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#tutorial&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>programming</category>
      <category>devops</category>
      <category>tutorial</category>
      <category>ai</category>
    </item>
    <item>
      <title>Why CI/CD is a Bottleneck and How AI Can Help ⚙️</title>
      <dc:creator>Opemipo Disu</dc:creator>
      <pubDate>Fri, 14 Feb 2025 14:41:05 +0000</pubDate>
      <link>https://dev.to/microtica/why-cicd-is-a-bottleneck-and-how-ai-can-help-3pb4</link>
      <guid>https://dev.to/microtica/why-cicd-is-a-bottleneck-and-how-ai-can-help-3pb4</guid>
      <description>&lt;p&gt;It can be hard to work with CI/CD pipelines even though they are meant to make development and deployment faster. However, they have become a major setback for developers and teams due to manual setup, long build times, and complex testing steps. Additionally, poor use of resources often leads to broken workflows.&lt;/p&gt;

&lt;p&gt;AI can make development and deployment workflows easier. It can improve pipelines, automate jobs, predict failures, and even manage pipelines by itself. Existing AI tools can help transform CI/CD from a pain point into a smooth process.&lt;/p&gt;

&lt;p&gt;In this article, you’ll learn how CI/CD can slow down developer workflows. You’ll also see how to fix this problem using an AI feature for automating tasks to make deployment easier and more reliable.&lt;/p&gt;

&lt;p&gt;Let’s dive in! 🏊‍♀️ &lt;/p&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ficw421bysa798kdsb1ep.gif" alt="Let's do this" width="480" height="400"&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What you’ll also learn…
&lt;/h2&gt;

&lt;p&gt;Here are some key takeaways from this article:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Why traditional CI/CD pipelines slow down progress.&lt;/li&gt;
&lt;li&gt;Cons of managing pipelines manually.&lt;/li&gt;
&lt;li&gt;How AI can improve and simplify CI/CD workflows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this guide, you will learn about &lt;strong&gt;Microtica’s Release Engineer feature&lt;/strong&gt; for CI/CD automation and how it makes deployments smoother. We will discuss how the Release Engineer works as an AI tool that enhances workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Microtica&lt;/strong&gt; is a cloud delivery platform that makes deployment and scaling faster for developers and enterprises. With Microtica, you do not have to worry much about managing your underlying infrastructure, as it helps make cloud operations much simpler.&lt;/p&gt;

&lt;p&gt;Here are the powerful features that Microtica offers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Release Engineer&lt;/strong&gt;: This will be our major focus in this article – Microtica’s smart feature for building, improving, and handling release management processes in cloud environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Observability&lt;/strong&gt;: Microtica lets you check logs and builds from any date and time. Apart from that, it gives alerts and errors to help find problems quickly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost Management&lt;/strong&gt;: Microtica helps you manage cloud resource costs by watching what you spend and cutting unnecessary bills, making it cheaper to ship.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Unified Platform&lt;/strong&gt;: Everything you need lives in a single platform that simplifies pipeline and infrastructure management while leaving you in full control. You can use Microtica’s ready-made templates for quickstarts or bring your own configurations; Microtica orchestrates delivery either way. There’s no need to juggle multiple tools, and you’re not locked into any specific setup.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scaling&lt;/strong&gt;: Microtica automatically adjusts your resource capacity up or down based on usage, helping you run smoothly without spending much on servers.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will focus on Microtica’s Release Engineer feature and have a look at how it can automate and simplify deployments.&lt;/p&gt;

&lt;p&gt;If you find Microtica cool, you can &lt;a href="https://app.microtica.com/" rel="noopener noreferrer"&gt;try it out for free&lt;/a&gt;. We can’t wait to have you use Microtica!&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Traditional CI/CD Pipelines Become Bottlenecks
&lt;/h2&gt;

&lt;p&gt;CI/CD pipelines are meant to make development and deployment easier, but they often become bottlenecks that can frustrate both individual developers and teams for several reasons. Here are some reasons why CI/CD pipelines can slow them down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Manual Setup&lt;/strong&gt;: Some CI/CD tools require you to set up pipelines manually. This takes a lot of effort and skill. It can also lead to delays and errors during setup.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependencies Management&lt;/strong&gt;: Tracking dependencies can be hard; if you update them manually across several environments, conflicts can arise. This slows down deployments because of package issues and version choices.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Difficult Testing Processes&lt;/strong&gt;: As applications grow, testing can become more complicated, resulting in longer execution times and delayed feedback loops. Manual testing adds more challenges, especially with new features and tests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Poor Resource Use&lt;/strong&gt;: As an application gains users, it needs extra resources to run well. If those resources are not managed properly, performance suffers, and traditional pipelines rarely anticipate the extra demand ahead of time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Misconfigurations&lt;/strong&gt;: As things are handled manually, human errors often lead to setup issues, errors in the system, security risks, and problems during deployment. These errors can cause unexpected downtime that could lead to development delays.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Poor Error Detection&lt;/strong&gt;: Finding errors can also be challenging: you first have to figure out which part of the pipeline is at fault. Traditional pipelines rarely pinpoint or fix errors on their own, which leads to failed builds and deployment delays.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;CI/CD has become a major challenge because developers have to set it up and manage it manually; working with pipelines should be simpler than it is.&lt;/p&gt;

&lt;p&gt;Developers want these problems gone. They look for anything that reduces manual toil and prevents security issues, because doing every task by hand adds stress and slows the work down.&lt;/p&gt;

&lt;h2&gt;
  
  
  How AI Can Streamline and Optimize CI/CD Workflows
&lt;/h2&gt;

&lt;p&gt;Traditional pipelines have many limits because of manual work and unexpected problems. AI-powered solutions can enhance automation and improve workflows. This helps make deployments quicker and more reliable. Here are some ways AI can change your CI/CD workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pipeline Optimization&lt;/strong&gt;: AI can help you understand your past build data and performance patterns. With this information, AI can change pipeline settings automatically. It finds problems, suggests fixes, and changes resource use quickly. This results in quicker build times and more reliable launches.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Improving Observability&lt;/strong&gt;: Some AI tools for CI/CD optimization give real-time insights, alerts, logs, and error detection. This helps developers find problems faster and respond without manual work. They can even look back at old logs and errors. Instead of searching through logs by hand, AI can quickly find the cause of pipeline issues.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resource Management&lt;/strong&gt;: AI keeps track of how resources are used. It automatically adjusts resources based on what is needed. This helps maintain great performance while cutting costs, so there’s no need for manual planning.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automating Tasks&lt;/strong&gt;: AI can take care of regular pipeline tasks like building, testing, and deploying code by itself. This reduces manual work and allows developers to focus on creating new features instead of maintaining infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Improving Security&lt;/strong&gt;: Machine learning tools check code for security issues in real time. They can detect risks quicker than human checks and can automatically block or highlight harmful code before it goes into production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Code Quality Checks&lt;/strong&gt;: AI-powered solutions for CI/CD examine code for bugs, style problems, and performance issues. They give quick feedback to developers, helping them fix mistakes early and keep the code clean and effective. Any issues found are listed in the logs for manual review.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
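&lt;p&gt;As a toy illustration of the “predict failures” idea (this is not any particular vendor’s implementation, and the field names are invented), a pipeline could score a pending change by the historical failure rate of the files it touches, then use that score to reorder or gate risky jobs:&lt;/p&gt;

```python
# Illustrative sketch: score a pending build from historical outcomes.
# Real AI-assisted pipelines use richer signals; this shows the shape.
from collections import defaultdict

def failure_rate_by_file(history):
    """history: list of {'files': [...], 'failed': bool} build records."""
    touched = defaultdict(int)
    failed = defaultdict(int)
    for build in history:
        for f in build["files"]:
            touched[f] += 1
            if build["failed"]:
                failed[f] += 1
    return {f: failed[f] / touched[f] for f in touched}

def risk_score(changed_files, rates):
    """Score a pending change by the worst historical rate it touches."""
    return max((rates.get(f, 0.0) for f in changed_files), default=0.0)

# Hypothetical build history for illustration only.
history = [
    {"files": ["db.py", "api.py"], "failed": True},
    {"files": ["db.py"], "failed": True},
    {"files": ["ui.py"], "failed": False},
    {"files": ["api.py"], "failed": False},
]
rates = failure_rate_by_file(history)
print(risk_score(["db.py", "ui.py"], rates))  # 1.0 -> run this job first, or gate it
```

&lt;p&gt;A change touching &lt;code&gt;db.py&lt;/code&gt; scores high because every past build that touched it failed, so the pipeline could surface that job early instead of letting developers discover the breakage at the end of a long run.&lt;/p&gt;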

&lt;p&gt;CI/CD helps developers and enterprises get software out faster, but it's still a pain. The current process is full of manual work that makes things complicated and slow. Developers want something simpler that doesn't require constant checking and fixing. In the next section of the article, you will be introduced to &lt;strong&gt;Microtica’s Release Engineer&lt;/strong&gt; feature that improves CI/CD workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Microtica’s Release Engineer
&lt;/h2&gt;

&lt;p&gt;Imagine a life where you do not have to worry about configuring deployment settings and manually checking every step. Wouldn’t that be cool? &lt;/p&gt;

&lt;p&gt;Microtica’s Release Engineer has a fix for that! 😎&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feofkyphlanfqqmkc8z6o.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feofkyphlanfqqmkc8z6o.gif" alt="Fix GIF" width="498" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What Microtica’s Release Engineer Does
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automate the Painful Pipeline Setup&lt;/strong&gt;: Setting up deployment pipelines takes a long time. Microtica’s built-in release engineer does it in a few minutes. It learns your system and prepares everything automatically.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-Time Monitoring&lt;/strong&gt;: With the Release Engineer, you won’t have to dig through log files anymore. You get clear alerts about what’s going on, and if something goes wrong, you’ll know right away in plain language. It spots risks before they grow and records them in the logs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Auto-Scaling:&lt;/strong&gt; When your traffic suddenly changes, up or down, the Release Engineer adjusts your setup automatically. No more manual updates or performance issues; the system works smoothly in the background, so you can forget about scaling.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Engineering Through Automation&lt;/strong&gt;: The Release Engineer changes how we manage deployments. Instead of spending hours on infrastructure work, developers can build better software while the system takes care of the hard parts, from setup to performance adjustments.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
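&lt;p&gt;For intuition on the auto-scaling point, here is a back-of-the-envelope sketch of the classic target-tracking rule (the same shape Kubernetes’ Horizontal Pod Autoscaler uses). It is an illustration of the general technique only, not Microtica’s actual implementation:&lt;/p&gt;

```python
# Generic target-tracking auto-scaling rule (illustration only):
# scale the replica count proportionally to observed vs. target utilization,
# clamped to a configured floor and ceiling.
import math

def desired_replicas(current, current_util, target_util=0.6, min_r=1, max_r=20):
    """Return the replica count that would bring utilization near target."""
    if current_util == 0:  # idle service (utilization assumed nonnegative)
        return min_r
    raw = current * (current_util / target_util)
    return max(min_r, min(max_r, math.ceil(raw)))

print(desired_replicas(4, 0.9))   # 6 -> traffic spike, scale out
print(desired_replicas(4, 0.15))  # 1 -> quiet period, scale in
```

&lt;p&gt;The point of automating this rule is that nobody has to watch a dashboard and resize servers by hand; the controller re-evaluates it continuously in the background.&lt;/p&gt;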

&lt;p&gt;As software development gets tougher, AI tools like Microtica’s Release Engineer have become important. They aren't just extra features now. They are becoming essential for developers who want to make their work easier and for teams that want to save time and be effective.&lt;/p&gt;

&lt;p&gt;By letting Microtica’s release engineer handle the heavy lifting of your CI/CD, developers and teams can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ship features faster&lt;/li&gt;
&lt;li&gt;Reduce deployment problems&lt;/li&gt;
&lt;li&gt;Maintain better security&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The future of CI/CD is not about working harder; it is about working smarter. Every developer wants to make their work easier and do less manual work. With the release engineer feature, developers can focus on making and improving applications. They do not need to waste time on managing pipelines and fixing problems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Once again, working with CI/CD pipelines manually can be very annoying. Setting things up slows progress, mistakes creep in when settings are copied and pasted, and money gets wasted on servers that are not needed. It gets worse when you try to scale as the application grows, and then, when it’s time to ship, things don’t work as expected.&lt;/p&gt;

&lt;p&gt;Developers should spend their time creating and releasing great features, not struggling with Jenkins or CircleCI all day. It’s simple: let the Release Engineer handle the routine tasks so your team can focus on its strengths. 😉&lt;/p&gt;

&lt;p&gt;If you’ve read this far, you’ve learned that CI/CD doesn’t have to be hard. Using the embedded AI Release Engineer to automate tasks is a smart choice: you’ll work faster, make fewer errors, and likely save some money. It’s all about working smarter, not harder, eh? 😂&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqul62byj1qqf6st1ai01.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqul62byj1qqf6st1ai01.gif" alt="if you know image" width="500" height="500"&gt;&lt;/a&gt;&lt;br&gt;
We’re launching the Release Engineer in March 🎉. Want early access? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://microtica.com/free-trial" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Join the beta! 🚀&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;Thank you for taking the time to read this article. I’m sure you now have enough reasons to use AI-powered solutions in your CI/CD workflows. If you have any questions, please refer to our &lt;a href="https://discord.com/invite/ADaFvAsakW" rel="noopener noreferrer"&gt;Discord community&lt;/a&gt; and share them with us. &lt;/p&gt;

&lt;p&gt;Can’t wait to have you there, and stay tuned for the next blog post! 👋&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources 🌱
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.microtica.com/" rel="noopener noreferrer"&gt;Microtica’s Website&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/@microtica3194" rel="noopener noreferrer"&gt;Microtica’s YouTube Channel&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microtica.com/" rel="noopener noreferrer"&gt;Microtica’s Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.microtica.com/features/pipeline-automation" rel="noopener noreferrer"&gt;Microtica’s Pipeline Automation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://discord.com/invite/ADaFvAsakW" rel="noopener noreferrer"&gt;Microtica’s Discord Community&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.microtica.com/how-it-works" rel="noopener noreferrer"&gt;How Microtica works&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>programming</category>
      <category>devops</category>
      <category>tutorial</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
