<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sabari Rohith</title>
    <description>The latest articles on DEV Community by Sabari Rohith (@sabarirohith).</description>
    <link>https://dev.to/sabarirohith</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1059318%2Fb9f205f8-d474-4acc-8aa6-639af4d74972.png</url>
      <title>DEV Community: Sabari Rohith</title>
      <link>https://dev.to/sabarirohith</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sabarirohith"/>
    <language>en</language>
    <item>
      <title>How to get Azure Data Factory Pipeline Failure Notification?</title>
      <dc:creator>Sabari Rohith</dc:creator>
      <pubDate>Fri, 23 Jun 2023 15:01:00 +0000</pubDate>
      <link>https://dev.to/sabarirohith/how-to-get-azure-data-factory-pipeline-failure-notification-1gkj</link>
      <guid>https://dev.to/sabarirohith/how-to-get-azure-data-factory-pipeline-failure-notification-1gkj</guid>
      <description>&lt;h2&gt;
  
  
  What is Azure Data Factory Pipeline?
&lt;/h2&gt;

&lt;p&gt;Azure Data Factory is a cloud-based data integration service focused on data extraction, transformation, and loading. A pipeline in Azure Data Factory is a logical grouping of activities that together perform a task, such as moving data into a shared repository like a data warehouse.&lt;/p&gt;

&lt;p&gt;Why is it important to monitor Azure Data Factory pipeline failures?&lt;/p&gt;

&lt;p&gt;An Azure Data Factory pipeline is an orchestration that automates numerous tasks related to data extraction, transformation, and loading; if any of its activities fail, the entire pipeline run fails.&lt;/p&gt;

&lt;p&gt;Organizations that depend on Data Factory pipelines will have a hassle-free experience if they keep track of failed pipeline runs in the Data Factory and rectify them.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to monitor failed pipeline runs in Data Factory?
&lt;/h2&gt;

&lt;p&gt;Azure Data Factory pipeline run history can be accessed in the Studio of the respective factory. The status, inputs, outputs, and failure details are available for each run.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UR-fi6hE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dfds68wxeflp46x3utv0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UR-fi6hE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dfds68wxeflp46x3utv0.png" alt="Monitor Failed pipeline runs" width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Failed runs in a Data Factory pipeline can be monitored by navigating to &lt;strong&gt;Monitor -&amp;gt; Alerts &amp;amp; metrics&lt;/strong&gt;. You will need to identify your monitoring criteria to define the alert logic and evaluation period.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gq-itlgK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kkm5efv1ex91fzybl317.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gq-itlgK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kkm5efv1ex91fzybl317.png" alt="Alerts and Metrics" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can set up notification channels to receive alerts and stay informed about violations. Azure Action Groups let you group several notification channels and send failure alerts to all of them at once.&lt;/p&gt;
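&lt;p&gt;An Action Group effectively fans a single alert out to every channel in the group. A minimal sketch of that fan-out (the channel names are hypothetical):&lt;/p&gt;

```python
def notify_action_group(alert, channels):
    """Fan one alert out to every notification channel in the group."""
    return [f"[{channel}] {alert}" for channel in channels]

# Hypothetical channels grouped into a single Action Group.
for message in notify_action_group("Pipeline 'CopySales' failed", ["email", "sms", "webhook"]):
    print(message)
```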

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The defined criteria will only apply to the pipelines in a single Data Factory.&lt;/p&gt;

&lt;p&gt;The image shown below displays a sample alert triggered using the alert rule configured in the Data Factory:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ERLq6Bjb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n4bjlalnw5fjx8vsgm7r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ERLq6Bjb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n4bjlalnw5fjx8vsgm7r.png" alt="Azure Monitor notification alert" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Visualization
&lt;/h2&gt;

&lt;p&gt;Data Factory supports a wide range of metrics for understanding performance, reliability, and availability. By default, the metrics are available at the Data Factory level; you can apply filters to visualize data from a specific pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GJnoPziW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dgtcd7zirk9n6ifqhfjx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GJnoPziW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dgtcd7zirk9n6ifqhfjx.png" alt="Azure data factory monitoring" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From these visualizations, you can configure new alert rules for monitoring at the pipeline level.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges of pipeline monitoring in Azure
&lt;/h2&gt;

&lt;p&gt;Monitoring Data Factory pipelines using Azure can present quite a few challenges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is difficult for someone without expertise in Data Factory to understand the relationship between the factory and pipelines and how to manage and monitor them&lt;/li&gt;
&lt;li&gt;Most support cases will escalate to your Data Factory expert, resulting in increased costs and a longer time to resolution&lt;/li&gt;
&lt;li&gt;It takes much effort to configure the monitoring&lt;/li&gt;
&lt;li&gt;Managing multiple pipelines from different data factories is complicated&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Azure Data Factory Pipeline monitoring in Serverless360
&lt;/h2&gt;

&lt;p&gt;Serverless360 is a robust pipeline monitoring tool that can be used by anyone, even those with limited product knowledge. In addition, Serverless360 has an effective support team that responds almost instantly to customer questions and actively tracks customer feedback.&lt;/p&gt;

&lt;p&gt;Serverless360 offers &lt;a href="https://www.serverless360.com/azure-data-factory-monitoring-troubleshooting"&gt;Azure Data Factory monitoring&lt;/a&gt;, allowing you to easily monitor multiple pipelines from various data factories at one location. This centralized monitoring allows improved visualization, easy access to data, and real-time insights. It simplifies troubleshooting issues, reducing the efforts spent monitoring multiple pipelines from different factories.&lt;/p&gt;

&lt;h2&gt;
  
  
  Business Application
&lt;/h2&gt;

&lt;p&gt;A Business Application is a logical container that groups the Azure services comprising a line-of-business solution, enabling centralized management and monitoring.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring Profiles
&lt;/h2&gt;

&lt;p&gt;A Monitoring Profile is a customizable collection of monitoring rules that can be used to set up monitoring for Azure services associated with a Business Application.&lt;/p&gt;
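&lt;p&gt;Conceptually, applying one Monitoring Profile to a Business Application attaches the same rule set to every resource in the container. The sketch below is illustrative Python, not the Serverless360 API; all names and thresholds are assumptions:&lt;/p&gt;

```python
class MonitoringProfile:
    """A reusable collection of monitoring rules."""
    def __init__(self, name, rules):
        self.name = name
        self.rules = rules  # e.g. {"FailedRuns": {"warning": 5, "error": 10}}

class BusinessApplication:
    """A logical container that groups related pipelines."""
    def __init__(self, name, pipelines):
        self.name = name
        self.pipelines = pipelines
        self.profile = None

    def apply_profile(self, profile):
        # One profile covers every pipeline in the container.
        self.profile = profile
        return [(pipeline, profile.name) for pipeline in self.pipelines]

recruitment = BusinessApplication("Recruitment", ["Pipeline-01", "Pipeline-02", "Pipeline-03"])
profile = MonitoringProfile("RecruitmentProfile", {"FailedRuns": {"warning": 5, "error": 10}})
for pair in recruitment.apply_profile(profile):
    print(pair)
```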

&lt;h2&gt;
  
  
  Problem statement
&lt;/h2&gt;

&lt;p&gt;A business orchestration that utilizes many pipelines can be challenging to map to the business function they are intended to perform. Business Applications make it possible to model a business environment and group pipelines together to make a logical container called a Business Application, which processes business data (e.g., data concerning a company’s engineering team). This allows you to manage the business function rather than the entire data factory.&lt;/p&gt;

&lt;p&gt;Imagine the following scenario: you are responsible for managing and monitoring more than 50 pipelines that feed data into the Recruitment process and more than 100 pipelines that feed the Onboarding process managed by the company’s HR team. It is essential that every pipeline operates properly, because a failure in any single activity fails the entire pipeline run.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;A combined operation of Business Applications and Monitoring Profiles in Serverless360 will help you implement what you need in this scenario.&lt;/p&gt;

&lt;p&gt;The first step is to create two Business Applications that represent the different processes of the HR department, as discussed above: Recruitment and Onboarding.&lt;/p&gt;

&lt;p&gt;Associate the necessary pipelines with the respective Business Applications. After creating a Business Application, you can use the Add option in the Resources section to associate pipelines.&lt;/p&gt;

&lt;p&gt;The next step is creating and applying Monitoring Profiles for the Business Applications representing the Recruitment and Onboarding process.&lt;/p&gt;

&lt;p&gt;The image below illustrates the profile configuration for the Recruitment process, where the Warning and Error thresholds for the Failed runs metric are set to higher values:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VrmKY4wC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7n43ww3ym3xgp8vyb97g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VrmKY4wC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7n43ww3ym3xgp8vyb97g.png" alt="Azure data factory pipeline monitoring" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image below illustrates the profile configuration for the Onboarding process. Here, the Warning and Error thresholds for the Failed runs metric are set to lower values:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pRMxdXuJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qs9qt4zhocyr32ymyiij.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pRMxdXuJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qs9qt4zhocyr32ymyiij.png" alt="monitoring profile for Azure data factory" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once these profiles are created, you must apply them to the corresponding Business Applications. A Monitoring Profile can be applied using the following path in a Business Application: Monitoring -&amp;gt; Profile settings -&amp;gt; Apply profile.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sJvm5O3---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jhowp5idpbqss90egyn6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sJvm5O3---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jhowp5idpbqss90egyn6.png" alt="Azure data factory pipeline monitoring" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Monitoring is initiated as soon as the profiles are applied, and you will be notified of failures instantly. The image below shows how the status of each Business Application is represented in Serverless360. You can navigate into a Business Application that has errors and act on it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KaSLrwED--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2m45k1asw3pqf480eoa1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KaSLrwED--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2m45k1asw3pqf480eoa1.png" alt="Azure data factory monitoring" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clicking on the health state next to a pipeline will reveal the monitoring details.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OIWRRY0u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/112s5znzsywooqyfz53a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OIWRRY0u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/112s5znzsywooqyfz53a.png" alt="Image description" width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Alert Escalation
&lt;/h2&gt;

&lt;p&gt;Escalation Policies can be set up at the Monitoring Profile level. Each escalation time frame can use different notification channels, such as Slack, ServiceNow, and PagerDuty, to transmit alerts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nRSeYukN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rx13b4fvs8394baba9sy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nRSeYukN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rx13b4fvs8394baba9sy.png" alt="Image description" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Manually resolve violations
&lt;/h2&gt;

&lt;p&gt;Each violation alert triggered from Serverless360 contains a link that navigates you to the violated resource in the respective Business Application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ilub2BHh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lzm9skgyy8t3uuwd2ra7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ilub2BHh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lzm9skgyy8t3uuwd2ra7.png" alt="Image description" width="623" height="664"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After navigating to the violated resource, the pipeline can be manually rerun to resolve the violation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tD3O0kcp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1drmpbudzzioiy4bpvz7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tD3O0kcp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1drmpbudzzioiy4bpvz7.png" alt="Image description" width="800" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Automatically resolve violations
&lt;/h2&gt;

&lt;p&gt;In addition to failure monitoring, you can specify an action to be taken when a rule is violated. Coupling an automated action with a rule violation saves the time spent on manual intervention and improves business continuity.&lt;/p&gt;

&lt;p&gt;For example, suppose failed activity runs accumulate over the past few hours and you want to rerun them once their count reaches a configured threshold. A task can be configured to rerun failed pipeline runs within a specific time frame and mapped to the rule violation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9iKPEl4j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fei4roxd7spfny1pfnzg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9iKPEl4j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fei4roxd7spfny1pfnzg.png" alt="Image description" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Let’s put it all together
&lt;/h2&gt;

&lt;p&gt;Microsoft Azure provides pipeline monitoring via Azure Monitor by configuring a set of rules and alert logic shared by all pipelines in an Azure Data Factory. The alert logic helps to detect data quality issues and performance bottlenecks. However, managing pipelines from multiple data factories requires repeating the alert configuration for each factory, making the alerts hard to track.&lt;/p&gt;

&lt;p&gt;By combining pipelines from multiple data factories into one logical container and using Monitoring Profiles to track failures, Serverless360 gains an edge over native Azure monitoring. This helps you identify issues quickly and take corrective action proactively, reducing downtime and increasing productivity. Features such as automated reruns on violations are very useful for keeping pipelines running.&lt;/p&gt;

&lt;p&gt;To sum up, Serverless360 is a mature product for monitoring and managing Data Factory pipelines. It lets you delegate support from Azure experts to IT operations, lowering support costs and freeing up resources to deliver new solutions.&lt;/p&gt;

&lt;p&gt;Try Serverless360 for free!&lt;/p&gt;

</description>
      <category>azure</category>
      <category>azurefunctions</category>
    </item>
    <item>
      <title>Configuring Azure Logic App Failure Alerts To Stay Ahead</title>
      <dc:creator>Sabari Rohith</dc:creator>
      <pubDate>Thu, 25 May 2023 09:58:03 +0000</pubDate>
      <link>https://dev.to/sabarirohith/configuring-azure-logic-app-failure-alerts-to-stay-ahead-4c8f</link>
      <guid>https://dev.to/sabarirohith/configuring-azure-logic-app-failure-alerts-to-stay-ahead-4c8f</guid>
      <description>&lt;p&gt;Azure Logic Apps is a cloud-based service provided by Microsoft Azure that allows users to create and run automated workflows. A trigger is the first step of a workflow that specifies the condition for running further steps in that workflow. Azure Logic Apps creates a workflow run each time the trigger fires successfully.&lt;/p&gt;

&lt;p&gt;The details of each run, including the status, inputs, and outputs of each step of the workflow instance, can be accessed in the run history section of the Logic App. Each run can either execute successfully or fail for various reasons.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RKRpHHeb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/631cbm7xk25wijr0vm20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RKRpHHeb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/631cbm7xk25wijr0vm20.png" alt="Azure Logic Apps portal overview" width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is it essential to track Logic App failures?
&lt;/h2&gt;

&lt;p&gt;Like any technology, Logic Apps can experience failures, which can have serious consequences such as data loss and disrupted business flows.&lt;/p&gt;

&lt;p&gt;Take a sample scenario where a Logic App orchestrates a credit card payment. An HTTP request triggers the Logic App with payment details from the user, such as the card number, expiration date, and billing address. It then uses a connector to authenticate the payment information. Once the information is authenticated, the Logic App processes the payment using a payment gateway. After the payment succeeds, the Logic App updates the database and sends a confirmation email or text message to the customer indicating that the payment has been processed.&lt;/p&gt;

&lt;p&gt;In such a scenario, if there are any errors or exceptions during the payment processing, it is crucial that the Logic App can track these failures and quickly alert the relevant teams to mitigate the issue.&lt;/p&gt;

&lt;p&gt;Azure Logic Apps provides capabilities to track and handle failures. Failure alerts can be delivered through various channels such as email, SMS, and other communication channels.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring alert rules in the Azure portal
&lt;/h3&gt;

&lt;p&gt;Alert rules can be created from the Monitoring section of a Logic App in the Azure portal. Azure provides an extensive list of metrics representing the critical aspects of the resources, and these metrics and their respective thresholds can be configured in the alert rules.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4AZZsaYw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2n7ilku4w33lbc2iz3wm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4AZZsaYw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2n7ilku4w33lbc2iz3wm.png" alt="Creating alert rule in Logic Apps" width="800" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges in monitoring Logic Apps
&lt;/h2&gt;

&lt;p&gt;Azure Monitor is handy for monitoring a resource as an individual entity. When you need to monitor the set of resources constituting a business flow as one application, it is hard to implement monitoring that helps a support user understand the role of each Logic App in the broader context. Alerts received through Azure Monitor are at the resource level, so when multiple Logic Apps are monitored, the alerts are difficult to track.&lt;/p&gt;

&lt;p&gt;In such cases, Azure Monitor may only meet some of your needs. Serverless360 provides features that will help you monitor and manage Logic Apps in the real world.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to monitor Logic Apps failures using Serverless360
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.serverless360.com/"&gt;Serverless360&lt;/a&gt; is a cloud-based platform designed to allow users to manage and monitor their applications running on the Microsoft Azure cloud platform. It offers much tooling that allows users to monitor, troubleshoot, and manage serverless applications.&lt;/p&gt;

&lt;p&gt;Setting up &lt;a href="https://www.serverless360.com/azure-logic-apps-monitoring-management"&gt;Azure Logic App monitoring&lt;/a&gt; using Serverless360 is straightforward and can be achieved with Business Applications, logical containers that group a particular application’s resources.&lt;/p&gt;

&lt;p&gt;A Business Application can be created by adding the required Logic Apps. In addition, various resources of different types constituting a business flow can be added.&lt;/p&gt;

&lt;p&gt;Monitoring profiles allow users to configure monitoring rules for multiple resources of the same type or different types. Instead of configuring the monitoring rules at each resource level, a monitoring profile to monitor the Logic App failures can be created and applied to the Business Application, which will monitor all the Logic Apps in it.&lt;/p&gt;

&lt;p&gt;When applying the monitoring profile to a Business Application, users can opt to automatically apply the profile to any other Logic App that will be added to the Business Application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DMqP076s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/koleyhyec17jqj9fn3xp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DMqP076s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/koleyhyec17jqj9fn3xp.png" alt="Creating monitoring profile in Serverless360" width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As soon as resources are added to a Business Application with an applied monitoring profile, they are automatically monitored, and their status is updated as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9eCySF-L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d1pot6v3c5wals6ohyey.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9eCySF-L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d1pot6v3c5wals6ohyey.png" alt="Azure Logic App failure alert" width="800" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In some cases, monitoring the Logic App’s metrics alone is not enough, as there can be a slight delay before metrics are emitted in Azure. Serverless360 can still detect failures in such cases by inspecting the actual Logic App runs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UROD1vN9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sbpsm3126nrprl9iirzs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UROD1vN9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sbpsm3126nrprl9iirzs.png" alt="Azure Logic Apps monitoring with Serverless360" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Resolving failures
&lt;/h3&gt;

&lt;p&gt;In addition to alerting on failures, Serverless360 can resolve them by resubmitting the failed runs. Although resubmitting a run is possible in the Azure portal, the challenge is identifying the runs that have already been resubmitted. Serverless360 overcomes this by adding a Resubmitted tag to resubmitted runs.&lt;/p&gt;
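&lt;p&gt;The idea behind the Resubmitted tag can be sketched in a few lines of Python (illustrative only, not how Serverless360 stores tags):&lt;/p&gt;

```python
def resubmit(run_id, tags):
    """Resubmit a failed run once, remembering it via a 'Resubmitted' tag."""
    if "Resubmitted" in tags.get(run_id, []):
        return False  # already handled earlier, skip it
    tags.setdefault(run_id, []).append("Resubmitted")
    return True       # stand-in for the actual resubmission call

tags = {}
print(resubmit("run-42", tags))  # True: first attempt goes through
print(resubmit("run-42", tags))  # False: the tag prevents a duplicate
```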

&lt;h3&gt;
  
  
  Manual resubmission
&lt;/h3&gt;

&lt;p&gt;Manual resubmission of runs is a straightforward operation: select the runs to be resubmitted in the run history of the respective Logic App and click the Resubmit runs option. Runs can also be resubmitted in bulk.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PN8egBVc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5umn89hd19dscsalfqhl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PN8egBVc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5umn89hd19dscsalfqhl.png" alt="Resubmitting failed runs through Serverless360" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Automated resubmission
&lt;/h3&gt;

&lt;p&gt;Manual resubmission is handy when only a few runs need to be resubmitted; automated resubmission is useful when there are many. Automated resubmission offers the following advanced features to enhance efficiency:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Option to include or exclude the already resubmitted runs.&lt;/li&gt;
&lt;li&gt;Resubmitting runs based on one or more error reasons.&lt;/li&gt;
&lt;li&gt;Resubmitting runs from a specific trigger.&lt;/li&gt;
&lt;li&gt;Resubmitting runs with the selected run actions based on their state.&lt;/li&gt;
&lt;/ul&gt;
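<p>Taken together, those options amount to filtering the run history before resubmission. The sketch below illustrates the selection logic; the field names and filter semantics are assumptions, not the Serverless360 configuration schema:</p>

```python
def select_runs(runs, error_reasons=None, trigger=None, exclude_resubmitted=True):
    """Pick the failed runs that match the configured filters."""
    selected = []
    for run in runs:
        if exclude_resubmitted and run["resubmitted"]:
            continue
        if error_reasons and run["error"] not in error_reasons:
            continue
        if trigger and run["trigger"] != trigger:
            continue
        selected.append(run["id"])
    return selected

runs = [
    {"id": "r1", "error": "Timeout", "trigger": "HTTP", "resubmitted": False},
    {"id": "r2", "error": "Timeout", "trigger": "HTTP", "resubmitted": True},
    {"id": "r3", "error": "AuthFailed", "trigger": "Recurrence", "resubmitted": False},
]
print(select_runs(runs, error_reasons={"Timeout"}, trigger="HTTP"))  # ['r1']
```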

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YfnPvlvT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4cog8hyecp1903mk2ii7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YfnPvlvT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4cog8hyecp1903mk2ii7.png" alt="Automated resubmission of failed Logic App runs" width="629" height="584"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Rather than manually running an automated task each time there is a violation, it is more convenient to configure the task as part of the monitoring rule so that it executes whenever the rule is violated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W4PXRC71--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jslfu6lrg3h7ly8v08z6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W4PXRC71--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jslfu6lrg3h7ly8v08z6.png" alt="Azure Logic App failure alert" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What if you need to do more
&lt;/h3&gt;

&lt;p&gt;Serverless360 Business Applications are aimed at providing the tools your Support Operator needs to perform daily operations for your integration solutions.&lt;/p&gt;

&lt;p&gt;You may want to allow less experienced support users or business super users to have visibility of your integration processes and to perform a level of self-service. In this case, Serverless360 provides a &lt;a href="https://www.serverless360.com/business-activity-monitoring"&gt;Business Activity Monitoring&lt;/a&gt; module that can be used alongside Business Applications, giving your users an even better experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Tracking Azure Logic Apps failures is essential to ensure the smooth functioning of the business. Azure provides Logic App monitoring via Azure Monitor by configuring alert rules, which help detect failures and performance bottlenecks. However, monitoring multiple Logic Apps requires repeating the alert configuration for each one, making the alerts hard to track.&lt;/p&gt;

&lt;p&gt;By combining Logic Apps into one logical container and using Monitoring Profiles to track failures, Serverless360 gains an edge over native Azure monitoring. This helps to identify and mitigate issues proactively, reducing downtime and increasing productivity. Features such as automated resubmission on violations are particularly useful.&lt;/p&gt;

&lt;p&gt;Azure offers fundamental monitoring features and works well with a few Logic Apps. Serverless360 is the go-to option for businesses managing multiple resources.&lt;/p&gt;

&lt;p&gt;Experience Business Application with a 15-day &lt;a href="https://www.serverless360.com/signup"&gt;free trial&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>azure</category>
      <category>logicapps</category>
      <category>azuremonitoring</category>
    </item>
    <item>
      <title>Monitoring Azure Integration Services with Proactive Strategies</title>
      <dc:creator>Sabari Rohith</dc:creator>
      <pubDate>Fri, 19 May 2023 11:53:14 +0000</pubDate>
      <link>https://dev.to/sabarirohith/monitoring-azure-integration-services-with-proactive-strategies-gmb</link>
      <guid>https://dev.to/sabarirohith/monitoring-azure-integration-services-with-proactive-strategies-gmb</guid>
      <description>&lt;p&gt;Enterprises are increasingly turning to cloud-based integration solutions to streamline their application development and management processes. Azure Integration Services is a cloud-based integration platform provided by Microsoft, designed to facilitate the integration of various enterprise applications and systems. It offers a range of tools and services that help to simplify and accelerate the development of enterprise applications, as well as improve their scalability, reliability, and security. The below diagram shows some of the common resources on Azure which are used in the Azure Integration Services offering.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nEfcX9tI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nwwi940g0meywcz8t9e5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nEfcX9tI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nwwi940g0meywcz8t9e5.png" alt="Image description" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Effective monitoring is a crucial aspect of any Azure Integration Services implementation. Without proactive monitoring, it can be difficult to identify and address issues before they become critical. But creating a fail-proof monitoring strategy for Azure Integration Services can be challenging. In this blog post, we’ll explore the key components of a proactive monitoring strategy for Azure Integration Services that will help you stay on top of potential issues and ensure smooth operations for your business-critical applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario: A booking management sequence
&lt;/h2&gt;

&lt;p&gt;To understand the power of the Azure Integration Services platform, let us consider a booking management system that may have a flow similar to the steps below:&lt;/p&gt;

&lt;p&gt;A customer initiates a booking request through a mobile app or website.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The request is sent to the business application, which processes it and validates the request based on various criteria such as availability, pricing, and other business rules.&lt;/li&gt;
&lt;li&gt;The business application interacts with the existing ERP (Enterprise Resource Planning) system to ensure that the requested product or service is available and that it can be fulfilled within the specified timeframe.&lt;/li&gt;
&lt;li&gt;The CRM (Customer Relationship Management) system is also consulted to ensure that the customer’s data and history are up-to-date and that any relevant preferences or special requests are considered.&lt;/li&gt;
&lt;li&gt;If the request is approved, the business application generates an order or booking confirmation and sends it to the customer via the mobile app or website.&lt;/li&gt;
&lt;li&gt;The ERP system updates its inventory and order management systems to reflect the new transaction.&lt;/li&gt;
&lt;li&gt;The CRM system updates its customer database with new orders or booking information, including any feedback or ratings provided by the customer.&lt;/li&gt;
&lt;li&gt;The business application generates any necessary notifications or alerts to internal stakeholders, such as fulfillment teams or support personnel, to ensure that the order or booking is processed and delivered as expected.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Logical Architecture
&lt;/h3&gt;

&lt;p&gt;To implement a solution that meets these business requirements, we could use the logical architecture shown in the diagram below. This gives us the capabilities needed for a lightweight and flexible solution that makes the most of modern cloud platforms.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qQaWVcHe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xrc4a97lk8pyhkjz0r11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qQaWVcHe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xrc4a97lk8pyhkjz0r11.png" alt="Image description" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each block identified within the diagram has a specific purpose:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Messaging Service&lt;/strong&gt; – The Customer facing website or mobile app needs to be decoupled from the backend Orchestrator to deal with any unpredictable spikes in traffic. This asynchronous messaging pattern also ensures durability to survive intermittent failures.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event Routing Service&lt;/strong&gt; – The goal is to eliminate polling and save the associated cost and latency. The orchestrator service should be informed of the availability of a new booking message in the messaging service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Orchestrator Service&lt;/strong&gt; – This is the heart of the business application. The entire workflow of processing the booking will be defined in this service. Integrating the existing Cloud or on-premises systems like ERP and CRM is necessary to complete the booking.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Management Service&lt;/strong&gt; – Connect on-premises systems like the ERP and CRM applications to the Orchestrator Cloud Service to safely integrate the Cloud and on-premises environments. Protect the APIs with keys, tokens, and IP filtering. As the orchestrator definition evolves, the existing client applications should continue functioning without modification.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compute Service&lt;/strong&gt; – The service should allow end-to-end development experience in a language the existing team has expertise in to solve complex orchestration requirements. The need is to build and deploy locally without additional setup, deploy and operate at scale in the Cloud, and integrate with other services using triggers and bindings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Transformation Service&lt;/strong&gt; – The data from the external ERP and CRM systems needs to be transformed and stored in the SQL database for the compute service to use.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Designing an application using Azure Integration Services
&lt;/h2&gt;

&lt;p&gt;With Azure, an Enterprise Integration application can be built by assembling Lego blocks. Choosing the appropriate Azure Integration Service is critical in building the Integration Application correctly.&lt;/p&gt;

&lt;p&gt;The above logical architecture could be implemented using the following Azure Integration Service components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Service Bus&lt;/li&gt;
&lt;li&gt;Event Grid&lt;/li&gt;
&lt;li&gt;Logic Apps&lt;/li&gt;
&lt;li&gt;API Management&lt;/li&gt;
&lt;li&gt;Azure Functions&lt;/li&gt;
&lt;li&gt;Azure Data Factory&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The implementation of the logical architecture would look something like the below diagram.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yzpL42mJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nf7g3pl4gjpgx48zg1qi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yzpL42mJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nf7g3pl4gjpgx48zg1qi.png" alt="Image description" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let us understand the capabilities of each Azure Integration Service in the diagram above.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Service Bus&lt;/strong&gt; – Azure Service Bus provides reliable cloud messaging between applications and services, even when they are offline. Ordered processing and duplicate detection can be achieved through simple configuration, saving considerable time and effort. In this scenario, since the requirement is 1-1 decoupling, a Service Bus queue is the best fit: enabling sessions ensures ordered processing, and duplicate detection can be turned on with a configurable detection time window.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event Grid&lt;/strong&gt; – Manages the routing of all events from any source to the destination. In this case, event-driven architecture is adopted to prevent the Orchestrator service from polling the Service Bus queue for any new message. The Event Grid will trigger an event to the downstream application on the arrival of the new message, hence being cost-effective.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logic App&lt;/strong&gt; – Seamlessly and securely connects to cloud-based and on-premises solutions, in this case the ERP and CRM systems, taking advantage of the hundreds of out-of-the-box connectors available, and implements complex workflows that orchestrate across your services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Management (APIM)&lt;/strong&gt; – Inbound policies in APIM help control how data and services are being exposed to participating applications by allowing the definition of authentication, authorization, and usage limits. We can meet the security and compliance requirements with unified management experience across all internal and external APIs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Azure Functions&lt;/strong&gt; – Functions extensions on Visual Studio and Visual Studio Code facilitate efficient development in the preferred language on a local machine, fully integrated with the Azure platform. Continuous Integration and Continuous Delivery (CI/CD) can be achieved using Azure pipeline definitions. With a programming model based on triggers and bindings, use triggers to define how functions should get invoked and use bindings to connect to other services declaratively.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Azure Data Factory&lt;/strong&gt; – Offers a single solution to ingest data from diverse sources, with more than 90 built-in connectors covering Big Data sources like Amazon Redshift, Google BigQuery, and HDFS; enterprise data warehouses like Oracle Exadata and Teradata; SaaS apps like Salesforce, Marketo, and ServiceNow; and all Azure data services.&lt;/li&gt;
&lt;/ol&gt;
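&lt;p&gt;As a minimal sketch of the messaging configuration above, the queue and the event subscription could be provisioned with the Azure CLI. The resource names are hypothetical, Service Bus integration with Event Grid requires a Premium-tier namespace, and the flags should be verified against your CLI version:&lt;/p&gt;

```shell
# Sketch only: resource names are hypothetical, and Service Bus integration
# with Event Grid requires a Premium-tier namespace.
RG=rg-booking
NS=sb-booking-premium

# 1. Queue with ordered processing (sessions) and duplicate detection enabled.
az servicebus queue create \
  --resource-group "$RG" \
  --namespace-name "$NS" \
  --name booking-requests \
  --enable-session true \
  --enable-duplicate-detection true \
  --duplicate-detection-history-time-window PT10M

# 2. Event Grid subscription that fires when messages are waiting with no
#    active listener, so the orchestrator never has to poll the queue.
NS_ID=$(az servicebus namespace show \
  --resource-group "$RG" --name "$NS" --query id --output tsv)

az eventgrid event-subscription create \
  --name booking-message-available \
  --source-resource-id "$NS_ID" \
  --included-event-types Microsoft.ServiceBus.ActiveMessagesAvailableWithNoListeners \
  --endpoint "$LOGIC_APP_TRIGGER_URL" \
  --endpoint-type webhook
```

&lt;p&gt;The ActiveMessagesAvailableWithNoListeners event fires only while no receiver is connected, which matches the push-based, polling-free pattern described above.&lt;/p&gt;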

&lt;p&gt;Azure Integration Services provides a microservices-friendly approach allowing you to build more scalable and stable event-driven applications!&lt;/p&gt;

&lt;h2&gt;
  
  
  The importance of monitoring Azure Integration Services
&lt;/h2&gt;

&lt;p&gt;Integration is the heart of modern business applications. In any typical enterprise, distributed subsystems are integrated to complete the business workflow. Failures in these integrations should be spotted and fixed before they impact the customer. Hence, it is critical to enable continuous monitoring of the underlying integration.&lt;/p&gt;

&lt;p&gt;Consider the business application mentioned above: what if the Logic App listening for incoming messages on the Service Bus queue goes down?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Incoming messages will not be processed; they may end up stuck on the queue, or processing errors will land them on the dead-letter queue.&lt;/li&gt;
&lt;li&gt;The Integration is broken!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What does it mean to the business?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Orders are blocked.&lt;/li&gt;
&lt;li&gt;Failure to serve the customer on time.&lt;/li&gt;
&lt;li&gt;Loss of business and goodwill.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;How to prevent this?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The entire integration should be continuously monitored for any failures.&lt;/li&gt;
&lt;li&gt;The team should have access to the right toolset to analyze the root cause and fix the failure before the business is impacted.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Available monitoring solutions
&lt;/h2&gt;

&lt;p&gt;The Azure portal offers native monitoring solutions to monitor Azure Integration Services based on their metrics. Choosing the right key performance indicators can help to keep an eye on any failure.&lt;/p&gt;

&lt;p&gt;Monitoring the ‘Runs Failed’ and ‘Triggers Failed’ metrics on a Logic App can help detect failed runs.&lt;/p&gt;

&lt;p&gt;Upon detecting a failure in the Logic App, one can investigate the Trigger and Run histories to gather more details on the failure.&lt;/p&gt;

&lt;p&gt;Enabling &lt;a href="https://www.serverless360.com/blog/azure-application-insights-vs-log-analytics-which-one-you-choose"&gt;Log Analytics and Application Insights&lt;/a&gt; can also help you to analyze the root cause of a failure.&lt;/p&gt;
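&lt;p&gt;As a sketch, such an alert rule can be created with the Azure CLI. The resource and action group names below are hypothetical, and the metric is exposed programmatically as RunsFailed; verify the flags against your CLI version:&lt;/p&gt;

```shell
# Sketch only: names are hypothetical.
LOGIC_APP_ID=$(az resource show \
  --resource-group rg-booking \
  --name la-booking-orchestrator \
  --resource-type Microsoft.Logic/workflows \
  --query id --output tsv)

# Fire whenever at least one run has failed in the last 5 minutes.
az monitor metrics alert create \
  --name la-booking-runs-failed \
  --resource-group rg-booking \
  --scopes "$LOGIC_APP_ID" \
  --condition "total RunsFailed > 0" \
  --window-size 5m \
  --evaluation-frequency 5m \
  --action ag-ops-oncall
```

&lt;p&gt;Note that the rule is scoped to a single Logic App, so each additional Logic App needs its own rule, which is exactly where per-resource alert configuration starts to become tedious.&lt;/p&gt;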

&lt;p&gt;However, we need to be aware of the challenges in operating the application in the real world:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Azure portal lacks one all-inclusive view of the entire Azure estate with the real-time status of the participating Azure Integration Services. In most cases, these services are spread across multiple Azure subscriptions, making the scenario even more complex. For efficient operations, it is necessary to spotlight the failure, which is challenging through the interface offered by the Azure portal.&lt;/li&gt;
&lt;li&gt;Alert storm – The scope of an Azure metric alert rule is restricted to a single resource; hence, to cover the entire integration, multiple alert rules must be created, each producing its own alerts. Note also that there is a cost associated with every alert configuration. After all that effort and cost, ending up with many discrete alert reports is of little value.&lt;/li&gt;
&lt;li&gt;Azure experts – Log Analytics and Application Insights are excellent tools for debugging or performing root cause analysis. However, they require Azure experts to be involved in support, which elevates the total cost of ownership, one of the biggest challenges enterprises are keen to solve. Shifting operations left is the desired solution, but the Azure portal is too complex for an operations team with less Azure expertise.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In our decade of experience working with Integration customers, we observed that 90% of the issues in production are functional or data errors. The challenge for most customers is that support of the applications can only be performed by developers and experts in the implementation of the solution. The support or operations team cannot easily be involved in the day-to-day operation of your solution because of the restrictive tooling and significant skill sets required.&lt;/p&gt;

&lt;p&gt;The Serverless360 value proposition is to lower those barriers to supporting your solution and to make it easier for your operations team to take charge of the day-to-day operations and only need your developers and experts by exception rather than by default. We see this as shifting support to the left in your organization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--reev0QlI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nopnfab3c3fkvdnglptt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--reev0QlI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nopnfab3c3fkvdnglptt.png" alt="Image description" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This Azure support strategy can save considerable time and effort for Azure Experts, allowing them to focus on innovation for business and ultimately reducing the Total Cost of Ownership.&lt;/p&gt;

&lt;h2&gt;
  
  
  How is Serverless360 different from any APM products?
&lt;/h2&gt;

&lt;p&gt;Numerous Application Performance Monitoring (APM) products are available for monitoring Azure Integration Services. An APM will only detect a problem in the Azure integration; what is more important is a deeply integrated toolset to fix the issue and restore the business. Serverless360 is crafted with capabilities to complement the Azure portal and aims at ‘Shift Left Support.’&lt;/p&gt;

&lt;h3&gt;
  
  
  Essential monitoring perspectives for Azure Integration Services
&lt;/h3&gt;

&lt;p&gt;An Azure integration should be monitored from 4 essential perspectives:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Static monitoring of the status of the participating resources, key performance indicators from the available metrics, and resource properties.&lt;/li&gt;
&lt;li&gt;Runtime monitoring: real-time, end-to-end tracking of the messages flowing through the Azure Integration Services.&lt;/li&gt;
&lt;li&gt;Keeping an eye on the evolution of the Azure subscription and documenting the platform for governance and audit purposes.&lt;/li&gt;
&lt;li&gt;Cost monitoring to stay aware of your efficiency and prevent unexpected costs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Serverless360 offers four core modules, each mapping to one of the perspectives mentioned above and each ideal for different stakeholders involved in cloud operations. Below is an overview of those modules.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9x9LQ4SX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hcwgtcr5bjzkm5vmam8o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9x9LQ4SX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hcwgtcr5bjzkm5vmam8o.png" alt="Image description" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Unified Observability – Business Applications
&lt;/h2&gt;

&lt;p&gt;An Azure subscription can grow much faster than we expect. Below is a view of resources in a subscription in the Azure portal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0OEb2fvM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vlccq79hbzrxz0cej1ah.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0OEb2fvM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vlccq79hbzrxz0cej1ah.png" alt="Image description" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The resources exist in silos in the Azure portal. Moreover, one cannot derive the critical information required to support an integration in production, like ‘Which integration does this resource belong to?’ or ‘What is its current status?’&lt;/p&gt;

&lt;p&gt;Serverless360 offers an operations-friendly interface that lets you visualize the siloed resources in an application context and spot failures in a service, provides the toolset to fix the identified issues, and fulfills the security requirement of granting only the necessary permissions to authorized team members.&lt;/p&gt;

&lt;p&gt;Below is a representation of the same set of resources as Business Applications. One can clearly see which application in the Azure estate needs immediate attention. The best part is that setting up Business Applications can be done in an hour and does not involve much effort.&lt;/p&gt;

&lt;p&gt;Business Applications in Serverless360 allow the definition of Auto Map rules. Configure a rule to map any resource with a specific tag or any resource created in a specific resource group to a Business Application. Monitoring profiles can be configured to monitor the key performance metrics on the Azure resource mapped to a Business Application. That’s it; Serverless360 will continuously monitor the resources as your Azure estate evolves. The Operations team can rely on the Serverless360 dashboard to detect issues in your entire Azure estate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vH7nubnU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wzqx8qy577lq2tlrc33f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vH7nubnU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wzqx8qy577lq2tlrc33f.png" alt="Image description" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A resource mapped to a Business Application can be extensively monitored on its status, properties, and metrics. The Service Map feature can help the team visualize the resources as an application and indicate the resource status. The deeply integrated tooling for failure management enables the support team to recover from failures by resubmitting dead-lettered messages in Service Bus or Event Grid and reprocessing failed executions in Logic Apps, Data Factory pipelines, and Azure Functions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cXzZ2kpu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/octvinqz6fozscivjsy6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cXzZ2kpu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/octvinqz6fozscivjsy6.png" alt="Image description" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  End-to-end tracking – Business activity monitoring
&lt;/h3&gt;

&lt;p&gt;Business operations often get stuck on the question, “Where is my message?” In a modern distributed application context, this is a much more complex question to address.&lt;/p&gt;

&lt;p&gt;Business users may not have a technical understanding of the application, but they will have a sound understanding of the functions and the data associated with the business.&lt;/p&gt;

&lt;p&gt;Introducing Serverless360 BAM to business users makes end-to-end tracking of messages flowing through the distributed application achievable.&lt;/p&gt;

&lt;p&gt;The best part here is that Serverless360 BAM is technology-agnostic and platform-independent. Enable your business users to locate a specific transaction, spot any exceptions or long-running unresponsive processes, or derive strategic business decisions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tKDakE0J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0z5oen4n7iooq8qqn0vq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tKDakE0J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0z5oen4n7iooq8qqn0vq.png" alt="Image description" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Azure Data to Insights – Azure Documenter
&lt;/h3&gt;

&lt;p&gt;Enterprise integrations in Azure evolve quickly. Before we realize it, the volume of Azure resources has grown massive, and it becomes hard to derive insights from their usage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.serverless360.com/azure-documenter"&gt;Azure Documenter&lt;/a&gt; in Serverless360 offers reports to understand the evolution of the subscription(s).&lt;/p&gt;

&lt;p&gt;Consider that every team is assigned a resource group to build their integration. The Resource Auditing report in Azure Documenter can compare the current month’s spending against the last month’s and provide an executive summary of the entire subscription.&lt;/p&gt;

&lt;p&gt;Reporting the security compliance of the application is also made simple.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ROnXMXwg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/77gcv22vk0wexfj6dsip.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ROnXMXwg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/77gcv22vk0wexfj6dsip.png" alt="Image description" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost control – Cost Analyzer
&lt;/h3&gt;

&lt;p&gt;One of the key reasons organizations migrate to the cloud is to cut the upfront cost of provisioning infrastructure. While taking advantage of the pay-as-you-go model, monitoring Azure spending is crucial.&lt;/p&gt;

&lt;p&gt;Cost Analyzer in Serverless360 allows cloud economists to create a single pane view to deliver the desired insights across subscriptions to drive actions.&lt;/p&gt;

&lt;p&gt;Budget alerts and optimization schedules that cut unnecessary costs are essential tools for cost control.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pfiSpduu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w9rlrtrm4nu7s696cxvu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pfiSpduu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w9rlrtrm4nu7s696cxvu.png" alt="Image description" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Business Value
&lt;/h3&gt;

&lt;p&gt;With Serverless360, enterprises across several business domains have devised a friction-free support strategy to shift the support tasks left in the support escalation path, allowing Azure experts to focus on business innovation. The result is a significant reduction in the Total Cost of Ownership and a considerable improvement in resolution time, ensuring customer delight through operational excellence.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;To sum up, Serverless360 is an advanced cloud management platform that offers comprehensive monitoring across four essential perspectives to address the requirements of the various stakeholders involved in Azure operations. Serverless360 stands out from the APM solutions available in the market: an APM can only help detect an issue, whereas Serverless360 lets you visualize, spot, and fix issues with the underlying integration.&lt;/p&gt;

&lt;p&gt;I strongly recommend checking out the &lt;a href="https://www.serverless360.com/resources/case-studies"&gt;Enterprise case studies&lt;/a&gt; on devising an Azure support strategy using Serverless360.&lt;/p&gt;

&lt;p&gt;If you want to discuss how Serverless360 can help address your challenges in administering your Azure integrations, why not contact us? We are always happy to discuss any challenges over a call. If there is a match, we can give you an &lt;a href="https://www.serverless360.com/request-demo/"&gt;obligation-free demo&lt;/a&gt; of the product, or you can take a 15-day &lt;a href="https://www.serverless360.com/signup"&gt;free trial&lt;/a&gt; to explore the product yourself.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>azurefunctions</category>
    </item>
    <item>
      <title>Azure Resource Monitoring: The key method for Holistic Monitoring</title>
      <dc:creator>Sabari Rohith</dc:creator>
      <pubDate>Mon, 01 May 2023 10:09:55 +0000</pubDate>
      <link>https://dev.to/sabarirohith/azure-resource-monitoring-the-key-method-for-holistic-monitoring-3npe</link>
      <guid>https://dev.to/sabarirohith/azure-resource-monitoring-the-key-method-for-holistic-monitoring-3npe</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Once your organization has started to adopt Azure, the continuously increasing number of Azure resources across all your Development, Test, and Production subscriptions makes it hard to keep on top of their health. So, it is important to have proactive Azure resource monitoring to know when something unexpected happens.&lt;/p&gt;

&lt;p&gt;This blog looks at the different types of monitoring, manual monitoring difficulties, and the difference between Application Performance Monitoring products and Cloud Management products.&lt;/p&gt;

&lt;h2&gt;
  
  
  Different types of monitoring
&lt;/h2&gt;

&lt;p&gt;Before looking into the other topics, let’s first discuss what kind of monitoring is relevant to be aware of the well-being of your Azure resources.&lt;/p&gt;

&lt;p&gt;The following types should be considered for monitoring your Azure resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Availability monitoring&lt;/li&gt;
&lt;li&gt;Health monitoring&lt;/li&gt;
&lt;li&gt;Performance monitoring&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s spend some time exploring what those Azure Monitoring types are about.&lt;/p&gt;

&lt;h2&gt;
  
  
  Availability monitoring
&lt;/h2&gt;

&lt;p&gt;All solution components must work as expected to meet the business requirements. KPIs are often used to monitor the overall availability of a solution. Such KPIs include overall availability, planned unavailability, and unplanned unavailability.&lt;/p&gt;

&lt;p&gt;The expectations around solution availability can be registered as Service Level Agreements (SLAs). In such contracts, you can formalize subjects like Solution availability (based on KPIs), Performance Metrics, Response times, Planned Maintenance, and Usage statistics.&lt;/p&gt;
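&lt;p&gt;To make the availability KPI concrete, the downtime an SLA percentage permits is simple to compute. For a 30-day month (43,200 minutes):&lt;/p&gt;

```shell
# Allowed downtime implied by an SLA percentage over a 30-day month
# (43,200 minutes total).
for sla in 99.0 99.9 99.99; do
  awk -v sla="$sla" 'BEGIN {
    printf "%.2f%% SLA allows %.1f minutes of downtime/month\n",
           sla, 43200 * (100 - sla) / 100
  }'
done
```

&lt;p&gt;For example, a 99.9% SLA allows 43.2 minutes of downtime per month, a useful figure when formalizing the planned and unplanned unavailability KPIs in an SLA.&lt;/p&gt;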

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RIx-R81J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lt0exzokjks5u2cjxsga.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RIx-R81J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lt0exzokjks5u2cjxsga.png" alt="Image description" width="602" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Health monitoring
&lt;/h2&gt;

&lt;p&gt;You also want to be aware of the health of the individual components of a solution, which concerns setting up the rules and conditions to determine the health of those components. Depending on the importance of the components, you can think of different notification policies.&lt;/p&gt;

&lt;p&gt;For example, you can think of immediate notifications in case of critical components, delayed notifications in case of components of lesser importance, or scheduled health reports. You can also be aware of anomalies via health dashboards, scheduled health reports, etc.&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance monitoring
&lt;/h2&gt;

&lt;p&gt;It is also essential to keep an eye on the solution’s performance. Functionally, it might work flawlessly, but if the solution is under stress because it is under-resourced, it cannot effectively be used. To understand the solution from a performance perspective, you could monitor, for example, the number of processed transactions within a specific timeframe or the number of requests per second. Besides technical performance, you could also monitor user satisfaction using an Apdex score.&lt;/p&gt;

&lt;h2&gt;
  
  
  Manual versus Automated monitoring
&lt;/h2&gt;

&lt;p&gt;Now that we understand the relevant types of monitoring, let’s also look briefly at how monitoring takes place: manually or by using automated tooling.&lt;/p&gt;

&lt;h2&gt;
  
  
  Manual monitoring
&lt;/h2&gt;

&lt;p&gt;When monitoring needs to be done manually, administrators or support engineers must frequently check the status of the components comprising a solution. Manual monitoring requires proper procedures for monitoring responsibilities, skilled and disciplined engineers who perform the monitoring tasks, and precise escalation levels. Consider setting up proper dashboards in the Azure portal to reduce the daily monitoring effort.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automated monitoring
&lt;/h2&gt;

&lt;p&gt;Monitoring can also be automated, using anything from a scheduled PowerShell script to Azure out-of-the-box features or a third-party solution. Each choice comes with its own set of pros and cons. Script-based monitoring is free but cumbersome and limited in capabilities. Third-party monitoring solutions can be compelling but, in most cases, require license fees and come with a learning curve.&lt;/p&gt;
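&lt;p&gt;A script-based approach can be as small as a scheduled job that runs health probes and collects alerts. A hedged sketch in Python; the resource names and probes here are hypothetical placeholders, not a real Azure API:&lt;/p&gt;

```python
def check_resources(checks):
    """Run each health probe and collect alerts for anything unhealthy.

    'checks' maps a resource name to a zero-argument probe returning
    True (healthy) or False (unhealthy); a raising probe is an alert too.
    """
    alerts = []
    for name, probe in checks.items():
        try:
            healthy = probe()
        except Exception as exc:
            alerts.append(f"{name}: probe failed ({exc})")
            continue
        if not healthy:
            alerts.append(f"{name}: unhealthy")
    return alerts

# Hypothetical probes standing in for real status queries
print(check_resources({
    "orders-servicebus": lambda: True,
    "billing-logicapp": lambda: False,
}))  # ['billing-logicapp: unhealthy']
```

In a real setup the probes would query resource status and the alert list would be pushed to a notification channel; both are exactly the parts that a monitoring product handles for you.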

&lt;h2&gt;
  
  
  Challenges with manual monitoring
&lt;/h2&gt;

&lt;p&gt;If your organization does not (yet) have a monitoring solution, monitoring has to be done manually. However, with that, several challenges arise. Such challenges include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Consistently performing the monitoring over time.&lt;/li&gt;
&lt;li&gt;Continuity of monitoring during any leave.&lt;/li&gt;
&lt;li&gt;The required skill level of monitoring engineers.&lt;/li&gt;
&lt;li&gt;Passive rather than proactive monitoring.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If not appropriately addressed, those challenges will put the well-being of your Azure solutions at stake, so it is worth exploring automated monitoring tooling.&lt;/p&gt;

&lt;h2&gt;
  
  
  Application Performance Monitoring solution or Cloud Management solution?
&lt;/h2&gt;

&lt;p&gt;When you have decided to investigate purchasing a monitoring solution, there is still a choice to be made. Will you go for an Application Performance Monitoring (APM) or Cloud Management solution? Let’s look at a couple of pros and cons of both categories.&lt;/p&gt;

&lt;h2&gt;
  
  
  Focus versus comprehensiveness
&lt;/h2&gt;

&lt;p&gt;APM tools are focused on monitoring and optimizing application performance. Cloud Management tools, on the other hand, have a broader scope; besides monitoring capabilities, such products can also provide operational capabilities. This enables administrators to manage all aspects of their Azure solutions in one place, which might be lacking with APM products.&lt;/p&gt;

&lt;h2&gt;
  
  
  Customizability versus generic
&lt;/h2&gt;

&lt;p&gt;By nature, APM products are focused on application performance. Because of that, such products provide a high degree of customizability regarding what can be monitored. With Cloud Management platforms, monitoring is more generic, and you may be unable to monitor the same metrics as with APM products.&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional costs versus cost-effectiveness
&lt;/h2&gt;

&lt;p&gt;Since APM products focus solely on monitoring application performance, additional tooling might be required to cover other aspects or to support administrative tasks. This adds costs in terms of licensing, etc. Cloud Management platforms might be a better choice because they provide both monitoring and administrative capabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud management with Serverless360
&lt;/h2&gt;

&lt;p&gt;Serverless360 is a cloud management solution that enables you to keep control of your cloud solutions. To support different requirements, the product has four main modules. Those modules are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Business Applications&lt;/strong&gt; – To support your Azure solutions from a technical perspective.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Business Activity Monitoring (BAM)&lt;/strong&gt; – To provide Business Users insight into their business transactions via portals and monitoring.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Azure Documenter&lt;/strong&gt; – Generate ad-hoc and scheduled reports about your Azure resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost Analyzer&lt;/strong&gt; – Get a grip on your Azure spending with cross-subscription overviews, budget monitoring, and resource optimization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most of those modules offer different kinds of monitoring, but in this article we mainly focus on Business Applications, so let’s take a closer look at that module in Serverless360.&lt;/p&gt;

&lt;p&gt;With Business Applications, you can bundle the Azure resources across multiple Azure subscriptions, Resource Groups, and Tags that belong to an Azure solution into Business Application containers. Access to those Azure solutions is provided at the level of those Business Applications. Within those Business Applications, you will find comprehensive monitoring and administrative features.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.serverless360.com/blog/monitoring-and-observability-in-azure-services"&gt;monitoring capabilities&lt;/a&gt; have rich means to send notifications of anomalies and contain automated discovery, thereby supporting the availability of your solution. Within Business Applications, you can also automate maintenance tasks like cleaning up dead-lettered messages, resubmitting Logic App runs, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Us3J5lbs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h4nm574bdu1mumxlc7ry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Us3J5lbs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h4nm574bdu1mumxlc7ry.png" alt="Image description" width="602" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Automated monitoring with Serverless360
&lt;/h2&gt;

&lt;p&gt;As mentioned earlier in this article, with most modules of Serverless360, you can perform automated monitoring. Those modules are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Business Applications&lt;/strong&gt; – Availability, Health, and Performance Monitoring.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Business Activity Monitoring&lt;/strong&gt; – Business Transaction Monitoring.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost Analyzer&lt;/strong&gt; – Actual/Amortized Costs Monitoring.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s now focus on several kinds of automated monitoring that can be done with Business Applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Availability Monitoring for Business Applications
&lt;/h2&gt;

&lt;p&gt;In the Business Applications module, you group components that belong to the same solution into logical containers, aka Business Applications. To understand the overall availability, automated monitoring can be done against the status of all the resources. When resources are not in their expected state, you will receive notifications. You can configure parameters like the monitoring frequency and when you want to receive the notifications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring an API Endpoint
&lt;/h2&gt;

&lt;p&gt;Consider an important API endpoint that you want to bring under monitoring. For example, you can monitor the endpoint against a specific HTTP return code, like &lt;strong&gt;HTTP equals 200&lt;/strong&gt;. If the API endpoint does not return an HTTP 200, you want a ticket to be created in your ServiceNow ticketing system, enabling the appropriate team to investigate the issue.&lt;/p&gt;

&lt;p&gt;Scenarios like this can easily be set up with Serverless360. In fact, the monitoring and notification capabilities go further than that; for API endpoints, you can monitor:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HTTP return codes&lt;/li&gt;
&lt;li&gt;API endpoint response times&lt;/li&gt;
&lt;li&gt;Text/XML/JSON response validation&lt;/li&gt;
&lt;/ul&gt;
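&lt;p&gt;Those three checks can be sketched in plain Python. The evaluation logic below is deliberately separated from the HTTP call so it can be tested in isolation; the expected status, latency budget, and required JSON key are illustrative assumptions, not Serverless360 defaults:&lt;/p&gt;

```python
import json
import time
import urllib.request

def evaluate_response(status, elapsed_s, body,
                      expected_status=200, max_seconds=2.0,
                      required_key=None):
    """Return a list of issues; an empty list means the endpoint is healthy."""
    issues = []
    if status != expected_status:
        issues.append(f"expected HTTP {expected_status}, got {status}")
    if elapsed_s > max_seconds:
        issues.append(f"slow response: {elapsed_s:.2f}s")
    if required_key is not None:
        try:
            if required_key not in json.loads(body):
                issues.append(f"missing key: {required_key}")
        except ValueError:
            issues.append("body is not valid JSON")
    return issues

def probe_endpoint(url, **kwargs):
    """Fetch the endpoint once and evaluate the response."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read()
        status = resp.status
    return evaluate_response(status, time.monotonic() - start, body, **kwargs)

# e.g. probe_endpoint("https://example.com/api/health", required_key="status")
```

A monitoring product wraps exactly this loop with scheduling, retry handling, and the notification-channel integrations described later in this article.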

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--i6JZj6_p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fvpbjhrv480gajcrhie8.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i6JZj6_p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fvpbjhrv480gajcrhie8.gif" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Health Monitoring for Business Applications
&lt;/h2&gt;

&lt;p&gt;For Health Monitoring purposes within Business Applications, you can schedule health reports that provide insights into the well-being of the resources within those Business Applications. You can configure how often you want to receive those reports and if you want to receive them only in case of anomalies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring failed Logic App runs
&lt;/h2&gt;

&lt;p&gt;Besides monitoring the state of resources, with Serverless360 you can also monitor the health of, for example, Logic App runs. This enables you to be aware of failed Logic App runs immediately after they happen and take adequate action. With the Automated Tasks feature in Serverless360, it is even possible to create a task that automatically resubmits failed runs. This not only improves the overall health of your solutions but also reduces your workload!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RVwG8Ilb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t7kz7lcrkqi7jtniddbu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RVwG8Ilb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t7kz7lcrkqi7jtniddbu.png" alt="Image description" width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Receive notifications where you need them most
&lt;/h2&gt;

&lt;p&gt;Serverless360 Business Applications are flexible in terms of notification channels. You can receive notifications via:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Collaboration platforms Microsoft Teams and Slack&lt;/li&gt;
&lt;li&gt;Ticketing systems ServiceNow and OpsGenie&lt;/li&gt;
&lt;li&gt;Azure DevOps, Azure OMS&lt;/li&gt;
&lt;li&gt;Email, PagerDuty, Twilio&lt;/li&gt;
&lt;li&gt;Webhook&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By choosing the right notification channel, the right people will be informed, and notifications won’t go unseen.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;When you are looking for a cost-effective, ever-evolving solution to manage your Azure resources, Serverless360 might be the right product for you. You can read more about the product on the Serverless360 website. There, we provide an &lt;a href="https://www.serverless360.com/signup"&gt;obligation-free trial&lt;/a&gt;, but you could also discuss your challenges regarding managing your Azure resources with our team over a &lt;a href="https://www.serverless360.com/request-demo/"&gt;demo&lt;/a&gt;. They are happy to go on a call and see if Serverless360 would fit your organization and show you the product in detail.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>10 Ways to Optimize your Azure cost</title>
      <dc:creator>Sabari Rohith</dc:creator>
      <pubDate>Mon, 24 Apr 2023 04:08:01 +0000</pubDate>
      <link>https://dev.to/sabarirohith/10-ways-to-optimize-your-azure-cost-33f9</link>
      <guid>https://dev.to/sabarirohith/10-ways-to-optimize-your-azure-cost-33f9</guid>
      <description>&lt;p&gt;``In modern times, building and publishing an application has become very easy with Cloud-based deployment. Users don’t need to worry about infrastructure-related challenges like availability, reliability, scalability, etc. The cloud providers are responsible for keeping the deployment flow simple and intact. &lt;/p&gt;

&lt;p&gt;While the cloud provides many advantages, the cost incurred for those benefits is the downside. If the resources created in the cloud are not adequately audited or tracked, spending can quickly double or triple the allocated budget. Some cases have even drained a year’s tech budget in a month.&lt;/p&gt;

&lt;p&gt;This article will help you understand some of the Azure Cost Optimization best practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Understanding the Resources by Tagging and Grouping
&lt;/h2&gt;

&lt;p&gt;Azure Resource Groups are containers that hold related resources for Azure solutions. Creating resources under proper, smaller Resource Groups helps users understand the purpose of the resources and also makes it easier to clean up the resources related to a solution.&lt;/p&gt;

&lt;p&gt;Many users neglect to create proper Resource Groups before adding resources. As grouping is a one-time process at creation, there is little you can do about existing resources that were not grouped when they were created. For such scenarios, Azure Tags are the right solution. Here is a guide from Microsoft for naming and tagging resources.&lt;/p&gt;

&lt;p&gt;Once the resources are correctly grouped, it is easy to understand how the cost is distributed across the solutions. Visualizing and getting insights based on filters like Resource Groups, Types, Regions, and Tags are available at each Subscription level in the Azure Portal. For a better experience at the scope of multiple Subscriptions, try out &lt;a href="https://www.serverless360.com/azure-cost-analysis"&gt;Serverless360 Cost Analyzer&lt;/a&gt;, which is available for a 15-day trial.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--URQCnurn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2khzo9hjj40gac1vsg58.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--URQCnurn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2khzo9hjj40gac1vsg58.png" alt="Image description" width="800" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Monitor the Cost and Act on Unintentional Spikes
&lt;/h2&gt;

&lt;p&gt;Once you understand how the resources are distributed across multiple solutions and the cost it incurs, the next step is understanding the daily spend rate of your Azure subscriptions.&lt;/p&gt;

&lt;p&gt;You can calculate the daily spend rate by taking the average day-wise cost over the last 7 days. Cost spikes due to yearly renewals, like App Service certificates, should be excluded so that you arrive at an accurate rate.&lt;/p&gt;
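&lt;p&gt;As a rough sketch of that calculation, the following excludes spike days before averaging. The cutoff of 3× the median is an arbitrary illustration, not a rule:&lt;/p&gt;

```python
def daily_spend_rate(day_costs, spike_factor=3.0):
    """Average daily cost, ignoring one-off spike days (cost above
    spike_factor times the median), e.g. a yearly certificate renewal."""
    median = sorted(day_costs)[len(day_costs) // 2]
    normal = [c for c in day_costs if spike_factor * median >= c]
    return sum(normal) / len(normal)

# Last 7 days, with one App Service certificate renewal on day 5
costs = [42.0, 40.5, 43.1, 41.8, 380.0, 42.6, 41.0]
print(round(daily_spend_rate(costs), 2))  # 41.83
```

Without the exclusion, the naive 7-day average of this sample is about $90/day, more than double the real run rate, which is exactly the kind of distortion a renewal spike causes.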

&lt;p&gt;You can create budgets in the Azure portal or other third-party tools at a Subscription level. You will, however, need tools that support monitoring the cost at &lt;a href="https://www.serverless360.com/blog/explore-azure-costs-for-multiple-subscriptions-with-cost-analysis"&gt;multiple Subscription levels&lt;/a&gt;. Serverless360 calculates the daily spend rate automatically while configuring cost monitoring.&lt;/p&gt;

&lt;p&gt;Once the monitor is set up, you will be alerted whenever something goes off the chart. You can immediately check which resource contributed to the rise and act on it. If the rise is valid and the resources were created or scaled intentionally, you must recalculate the daily spend rate and monitor against it. There are also tools available that automatically alert you when the cost trend changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iypuR_dz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9ydmuw730ubqzgd11eys.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iypuR_dz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9ydmuw730ubqzgd11eys.png" alt="Image description" width="800" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Smart Start/Stop or Deallocate Nonproduction Resources
&lt;/h2&gt;

&lt;p&gt;For close to 70% of a month, the resources in nonproduction environments run for no reason. Most resources can be stopped, scaled down, or deallocated during off-business hours and weekends, which can be achieved using 3rd-party tools like Serverless360.&lt;/p&gt;
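&lt;p&gt;The arithmetic behind that 70% figure: a 5-day, 10-hour business week covers 50 of the 168 hours in a week, so nonproduction resources sit unused roughly 70% of the time. A toy sketch of the start/stop decision (the 8:00–18:00 weekday window is an assumed schedule):&lt;/p&gt;

```python
from datetime import datetime

def should_run(now, business_hours=range(8, 18)):
    """A nonproduction resource only needs to run on weekdays during
    business hours; outside that window it can be stopped or deallocated."""
    is_weekday = now.weekday() in range(5)   # Mon=0 .. Fri=4
    return is_weekday and now.hour in business_hours

print(should_run(datetime(2023, 6, 23, 14, 0)))  # Friday 14:00 -> True
print(should_run(datetime(2023, 6, 24, 14, 0)))  # Saturday -> False
print(should_run(datetime(2023, 6, 23, 22, 0)))  # Friday 22:00 -> False
```

In practice a scheduler evaluates a rule like this and issues the actual stop/deallocate calls; deallocating (rather than merely stopping) a VM is what releases its compute billing.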

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oPVvJiaa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/29k2hircv251qvkikz04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oPVvJiaa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/29k2hircv251qvkikz04.png" alt="Image description" width="800" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Identify and Remove the Idle Resources
&lt;/h2&gt;

&lt;p&gt;While grouping the resources using Resource Groups and Tags, you may already have identified resources that serve no purpose and can be deleted immediately. For example, disks associated with VMs are often left behind when the VMs are deleted; you can delete those disks.&lt;/p&gt;

&lt;p&gt;You can find the remaining idle resources by analyzing consumption metrics like CPU and memory percentage, and remove the resources with zero consumption.&lt;/p&gt;
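&lt;p&gt;Conceptually, that filter is a one-liner once you have the metrics in hand. In this sketch the resource names and utilization numbers are made up; in practice the averages would come from Azure Monitor:&lt;/p&gt;

```python
def find_idle(avg_utilization, cpu_pct_max=1.0, mem_pct_max=1.0):
    """Flag resources whose average CPU and memory percentages over the
    observation window are both at or near zero."""
    return [name for name, (cpu, mem) in avg_utilization.items()
            if cpu_pct_max >= cpu and mem_pct_max >= mem]

metrics = {
    "vm-build-agent": (35.2, 61.0),
    "vm-orphaned-01": (0.0, 0.0),   # candidate for removal
    "vm-legacy-app": (0.3, 0.8),    # near zero: verify, then remove
}
print(find_idle(metrics))  # ['vm-orphaned-01', 'vm-legacy-app']
```

Treat the output as a review list rather than a deletion list: a resource with near-zero averages may still serve a rare but important workload.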

&lt;h2&gt;
  
  
  5. Reservations
&lt;/h2&gt;

&lt;p&gt;Reservations are one big area where you can save up to 80% of the cost spent on resources. Microsoft Azure discounts resources based on a usage commitment declared for 1 to 3 years. Reservations are not just about discounts; you can get a high-performing machine at a very low price point. You can find the details about reservations here.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Savings Plan
&lt;/h2&gt;

&lt;p&gt;Savings plans are a more recent addition that works much like reservations. You commit to a fixed hourly spend for 1 to 3 years and get up to a 65% discount.&lt;/p&gt;

&lt;p&gt;For reservations, you need to select the VM sizes upfront for 3 years, which might be challenging for growing businesses, as changing the size of the VMs in a reservation comes with a hard limit of only up to $50K per year. Savings plans instead expect an hourly cost commitment, and you can create VMs of any size within the committed limit.&lt;/p&gt;

&lt;p&gt;Like reservations, savings plans are supported for compute resources. You can find the details about savings plans here.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Spot Instances
&lt;/h2&gt;

&lt;p&gt;Spot instances are another offering from Microsoft Azure, where you can purchase unused VM capacity at a significant discount. These VMs are suggested for interruptible workloads, with discounts of up to 90%.&lt;/p&gt;

&lt;p&gt;You can use Spot instances for Azure DevOps pipeline build agents, interruptible batch jobs, etc. You can find the details about the spot instances here.&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Right-sizing
&lt;/h2&gt;

&lt;p&gt;Provisioning servers in Azure is effortless. Within a few clicks, you can create a machine costing thousands of dollars, which tempts engineers to provision more than the required capacity to be on the safe side.&lt;/p&gt;

&lt;p&gt;Though environments hosted on an oversized machine will be stable and reliable, the cost is the downside; a few such servers can easily consume a significant chunk of the allocated budget. It is always recommended to continuously evaluate resource utilization and right-size. Right-sizing comes as a feature in many 3rd-party tools.&lt;/p&gt;
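&lt;p&gt;A back-of-the-envelope version of that evaluation: take the observed peak CPU utilization, add headroom, and round up to the next common VM size. The 30% headroom and power-of-two size steps are illustrative assumptions, not an Azure rule:&lt;/p&gt;

```python
def suggest_vcpus(current_vcpus, peak_cpu_pct, headroom=1.3):
    """Suggest a vCPU count sized to the observed peak plus ~30% headroom."""
    needed = current_vcpus * (peak_cpu_pct / 100.0) * headroom
    size = 1
    while needed > size:             # round up to the next power of two
        size *= 2
    return min(size, current_vcpus)  # this sketch never suggests scaling up

# A 16-vCPU VM peaking at 22% CPU fits comfortably on 8 vCPUs
print(suggest_vcpus(16, 22))  # 8
```

In that example, halving the vCPU count roughly halves the compute bill while still leaving the suggested machine under 60% CPU at the observed peak.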

&lt;h2&gt;
  
  
  9. Choosing the Right Services
&lt;/h2&gt;

&lt;p&gt;There are many Serverless resources available in Azure. Instead of paying for the total capacity of a server, you pay just for the capacity you use. Most of the time, it is better to go with Serverless resources, as they automatically save cost during the system’s idle time.&lt;/p&gt;

&lt;p&gt;Azure Functions running in a consumption plan are a suitable replacement for many background tasks running in resources like VMs and Cloud Service Web/Worker roles.&lt;/p&gt;

&lt;p&gt;Similarly, many other services come in a Serverless plan. Consider using Serverless resources in the possible areas to save cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  10. Setup Policies and Follow Best Practices
&lt;/h2&gt;

&lt;p&gt;It is common to have technical brainstorming sessions, handover meetings, and retrospective meetings for engineering-related activities. Plan the same kind of meeting regularly to discuss cloud cost optimization.&lt;/p&gt;

&lt;p&gt;Define access policies to restrict resource creation and modification, and delegate them to team owners. Only some team members should be allowed to deploy resources, as deployment comes with a cost.&lt;/p&gt;

&lt;p&gt;To obtain the best overview, subscribe to a 3rd-party tool for managing Azure cost. It is as essential as an APM tool is for an application, and it helps ensure that all the Azure cost optimization best practices are followed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways from Azure Cost Optimization Best Practices
&lt;/h2&gt;

&lt;p&gt;Following the above 10 best practices immensely helps to optimize Azure cost and keep it under full control. In practice, however, there is a challenge: the person who wants to understand the costs might not have the knowledge to break the cost data down into something meaningful, while the application teams do not have the access or tools to analyze their Azure spend.&lt;/p&gt;

&lt;p&gt;This is where the Serverless360 Cost Analyzer enables organizations to tackle the problem from both sides. Application teams can perform cost analysis, budgeting, and optimization focused on running their resources efficiently. They can then work with the cost owner to build views that demonstrate that efficiency.&lt;/p&gt;

&lt;p&gt;This creates an all-around better experience where the application teams can be responsible for their costs, and the cost owner can focus on governance across teams. Cost Analyzer helps break up the silos that lead to zero transparency, inefficiency, and out-of-control costs.&lt;/p&gt;

&lt;p&gt;Optimize &lt;a href="https://www.serverless360.com/signup"&gt;Azure cost&lt;/a&gt; now for free!&lt;/p&gt;

</description>
      <category>azure</category>
      <category>azurefunctions</category>
      <category>cosmosdb</category>
    </item>
  </channel>
</rss>
