<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Madhavan kovai</title>
    <description>The latest articles on DEV Community by Madhavan kovai (@madhavankovai_31).</description>
    <link>https://dev.to/madhavankovai_31</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F549195%2Fa9055430-ff47-41e9-8884-7f41b5daabd0.png</url>
      <title>DEV Community: Madhavan kovai</title>
      <link>https://dev.to/madhavankovai_31</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/madhavankovai_31"/>
    <language>en</language>
    <item>
      <title>How to avoid spiraling up Azure Subscription costs?</title>
      <dc:creator>Madhavan kovai</dc:creator>
      <pubDate>Wed, 27 Jul 2022 09:30:00 +0000</pubDate>
      <link>https://dev.to/madhavankovai_31/how-to-avoid-spiraling-up-azure-subscription-costs-2e6b</link>
      <guid>https://dev.to/madhavankovai_31/how-to-avoid-spiraling-up-azure-subscription-costs-2e6b</guid>
      <description>&lt;p&gt;Cloud adoption has got a tremendous uptrend as never before. The World pandemic has made startups, and enterprises move their legacy systems into Cloud to cut huge infrastructure maintenance costs. According to Gartner, cloud spending is forecasted to increase 23.1% at $332.3 billion in 2021 from $270 billion in 2020. We can even see enterprises using BizTalk servers moving to the Azure Cloud for better maintenance and features that confirm the above forecast.&lt;/p&gt;

&lt;p&gt;As cloud spending increases, it becomes essential to analyze spending on resources and optimize it. Because most cloud service providers (CSPs) offer a pay-as-you-go model, spending can be low at the start; as resources grow, a lack of the tools and skills required for cost management and analysis can lead to massive expenditure. This article discusses how to avoid spiraling Azure Subscription costs, with a real-world scenario and the tools available.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-world scenario
&lt;/h2&gt;

&lt;p&gt;In this article, let us take a cab booking application scenario, where Flywheel Cabs is the largest cab service provider in the UK. Flywheel Cabs recently migrated from their high-performance BizTalk Server to Azure Integration services like Azure Logic Apps, Service Bus, APIM, etc. Over time, they expanded their footprint to App Services such as Web Apps, Function Apps, and more. Due to the increased footprint, they need to visualize spending across different Azure Subscriptions, regions &amp;amp; resource groups, and get recommendations to optimize the cost. Along with cost management, they also need shareable reports and collaboration capabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Native solutions
&lt;/h2&gt;

&lt;p&gt;In the above scenario, a few native solutions can be used to manage spending on Azure Subscriptions, the primary one being Azure Cost Management.&lt;/p&gt;

&lt;p&gt;Azure Cost Management shows organizational cost and usage patterns with advanced analytics. Reports in Cost Management show the usage-based charges consumed by Azure services and third-party Marketplace offerings. Even though it offers advanced analytical features, it lacks documentation and cost comparison capabilities for understanding the monthly cost consumption index. Here comes the Azure Documenter in Serverless360.&lt;/p&gt;

&lt;h2&gt;
  
  
  Azure Documenter
&lt;/h2&gt;

&lt;p&gt;Azure Documenter is an automated document generation tool for your Azure Subscriptions. It gives the executive summary, cost, and compliance reports for the Azure Subscriptions. Below are the different types of documents that can be generated using Azure Documenter:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Executive Summary&lt;/strong&gt; – Provides a summary of Resource Groups, Resource types, the Locations across which resources are distributed, and a Billing Summary over a documentation period.&lt;br&gt;
&lt;strong&gt;Billing &amp;amp; Metrics&lt;/strong&gt; – Provides a graphical representation of cost incurred Resource-wise, Resource type-wise, location-wise, and Resource group-wise. It also provides a split-up of cost consumed at the individual resource level.&lt;br&gt;
&lt;strong&gt;Cost vs Consumption&lt;/strong&gt; – Provides resource-wise details on total cost incurred versus how much was consumed, colour-coded green, yellow, and red.&lt;br&gt;
&lt;strong&gt;Compliance &amp;amp; Evaluation&lt;/strong&gt; – Evaluates a subscription or resources and provides a detailed report on security issues or misconfigurations based on a set of compliance rules.&lt;br&gt;
&lt;strong&gt;Details on Resources&lt;/strong&gt; – Provides in-depth information about each of the resources, grouped by their Resource Types.&lt;/p&gt;

&lt;p&gt;Now let us see how our &lt;a href="https://www.serverless360.com/blog/microsoft-azure-documentation-tool"&gt;Azure Documentation tool&lt;/a&gt; satisfies the requirements of Flywheel cabs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Azure Billing and Metrics – better visualization
&lt;/h2&gt;

&lt;p&gt;The first requirement for Flywheel Cabs is better cost visualization reports. To satisfy this requirement, Azure Documenter offers the Billing &amp;amp; Metrics document. It provides a graphical representation of cost spent Resource-wise, Resource Type-wise, location-wise, Resource group-wise, etc. It helps Flywheel Cabs generate reports to understand where they spend most in the Azure Subscription.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost Comparer
&lt;/h2&gt;

&lt;p&gt;The most critical requirement for Flywheel Cabs is to understand the cost consumption index, to know where they are spending most and to optimize the cost. Here comes the Cost Comparer in Azure Documenter. It provides the price vs consumption index for the Azure Subscription, helping Flywheel Cabs understand whether they are spending too much based on the tier chosen for each resource. A low consumption index means they are paying for a high tier but using little of it, indicating they should either increase usage or downgrade to a pricing plan just sufficient for current usage.&lt;/p&gt;
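
&lt;p&gt;As a rough illustration of the idea (not Serverless360’s actual algorithm), a consumption index can be computed as spend relative to the capacity paid for and then bucketed into the green/yellow/red bands described above; the thresholds and resource names below are assumptions:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical sketch of a cost-vs-consumption index (illustrative thresholds).
def consumption_index(consumed_cost: float, tier_cost: float) -&gt; float:
    """Fraction of the paid-for tier that was actually consumed."""
    return consumed_cost / tier_cost if tier_cost else 0.0

def band(index: float) -&gt; str:
    if index &lt; 0.3:
        return "red"     # under-used high tier: consider downgrading the plan
    if index &lt; 0.7:
        return "yellow"  # review usage
    return "green"       # tier matches consumption

for name, consumed, tier in [("logic-app-prod", 42.0, 150.0), ("apim-core", 130.0, 150.0)]:
    idx = consumption_index(consumed, tier)
    print(f"{name}: index={idx:.2f} -&gt; {band(idx)}")
&lt;/code&gt;&lt;/pre&gt;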

&lt;h2&gt;
  
  
  How can Azure Documenter support decision-making?
&lt;/h2&gt;

&lt;p&gt;We can see that Azure Documenter is a clear winner in providing better cost visualization and optimization. This blog would not be complete without exploring the business value additions it offers: scheduled reports, notification alerts, snapshots, and templates. Let us briefly discuss them:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scheduled reports&lt;/strong&gt; – It allows users to schedule the document generation. It provides options to configure the recurrence on a monthly/weekly basis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Notification alerts&lt;/strong&gt; – It is also possible to notify different stakeholders of document generation as per the configured schedule. Users can choose the appropriate notification channel from the list: Slack, Microsoft Teams, SMTP, PagerDuty, ServiceNow, OMS, Webhook, and third-party email notification channels.&lt;br&gt;
&lt;strong&gt;Snapshots&lt;/strong&gt; – Each document generation produces a snapshot. Snapshots capture the state of all available resources in the user’s Azure Subscription at a specific point in time. Like the Cost Comparer, it is possible to compare two snapshots to understand what changed. This helps the support team understand which resources have been added or removed and which resource characteristics have changed; snapshot comparison is one of the features found most useful in production.&lt;br&gt;
&lt;strong&gt;Templates&lt;/strong&gt; – Templates allow users to control the document structure and filter the data included during the Azure document generation process. By default, three templates are available: Essential, Full, and Executive. It is also possible to create your own template based on your requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Azure Documenter offers a seamless documentation experience that no other tool on the market provides. We have seen how Azure Documenter satisfies all the Azure Cost Management requirements along with operational capabilities. I hope this article helped you understand how to avoid spiraling Azure Subscription costs. Stay tuned for more articles.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>All about monitoring your Azure Functions</title>
      <dc:creator>Madhavan kovai</dc:creator>
      <pubDate>Mon, 18 Jul 2022 07:26:09 +0000</pubDate>
      <link>https://dev.to/madhavankovai_31/all-about-monitoring-your-azure-functions-pnn</link>
      <guid>https://dev.to/madhavankovai_31/all-about-monitoring-your-azure-functions-pnn</guid>
      <description>&lt;p&gt;This blog centers on core concepts of Azure Function and how it tends to be better monitored using Serverless360. To comprehend Azure Function Apps more readily, let us think about a real-world business scenario of Order Processing to understand the pain points that we have. Before getting into the Business scenario, let’s do a quick look at Azure Functions and their attributes.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Azure Function?
&lt;/h2&gt;

&lt;p&gt;Azure Functions is a serverless compute service that enables users to run event-triggered code without provisioning or managing infrastructure. A trigger-based service runs a script or code in response to various events.&lt;/p&gt;

&lt;p&gt;Azure Functions can be used to achieve decoupling, high throughput, and reusability, and they are suitable for production environments.&lt;/p&gt;

&lt;p&gt;Azure Functions execute in response to configured triggers like an HTTP Trigger, Timer Trigger, Queue Trigger, and more. Workflows across Functions can be defined using Azure Durable Functions: an Orchestrator Function defines the workflow over several Activity Functions.&lt;/p&gt;
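
&lt;p&gt;As a minimal sketch (assuming the Python Durable Functions programming model is set up in the Function App), an Orchestrator Function calling hypothetical Activity Functions might look like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    # Each yield suspends the orchestration until the activity completes.
    order = context.get_input()
    validated = yield context.call_activity("ValidateOrder", order)    # hypothetical activity
    receipt = yield context.call_activity("ProcessPayment", validated) # hypothetical activity
    return receipt

main = df.Orchestrator.create(orchestrator_function)
&lt;/code&gt;&lt;/pre&gt;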

&lt;h2&gt;
  
  
  Where Are Azure Functions Used?
&lt;/h2&gt;

&lt;p&gt;Azure Functions are most appropriate for smaller workloads driven by events that can run independently of other components. Typical Azure Functions send emails, start backups, process orders, schedule tasks such as database clean-up, send notifications and messages, and process IoT data.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to use Azure functions
&lt;/h2&gt;

&lt;p&gt;Consider Functions for tasks like image or order processing, file maintenance, or any task you want to run on a schedule. Functions provide templates to get you started with key scenarios.&lt;/p&gt;

&lt;p&gt;Azure Functions support triggers, which start executing your code, and bindings, which simplify coding for input and output data. There are other integration and automation services in Azure, and they can all solve integration problems and automate business processes. They can all define inputs, actions, conditions, and outputs.&lt;/p&gt;

&lt;p&gt;By default (on the Consumption plan), a single Function execution is limited to 5 minutes. If a Function runs longer than the configured maximum timeout, the Azure Functions runtime can end the process at any point after the maximum timeout has been reached.&lt;/p&gt;
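
&lt;p&gt;For reference, this timeout is controlled by the functionTimeout setting in the Function App’s host.json; a sketch raising it to 10 minutes (the Consumption-plan maximum) looks like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}
&lt;/code&gt;&lt;/pre&gt;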

&lt;h2&gt;
  
  
  Azure Functions Advantages
&lt;/h2&gt;

&lt;p&gt;Being a cloud service, Azure Functions have a lot of advantages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pay-as-you-go model&lt;/strong&gt; – Azure Functions follow a pay-as-you-go model: users pay only for what they use. For Azure Functions on the Consumption plan, cost is based on the number of executions and resource consumption per month.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supports a variety of languages&lt;/strong&gt; – Azure Functions support major languages like Java, C#, F#, Python, and more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Easy Integration with Other Azure services&lt;/strong&gt; – Azure Functions can be easily integrated with the other Azure Services like Azure Service Bus, Event Hubs, Event Grids, Notification Hubs, and more without any hassle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trigger-based executions&lt;/strong&gt; – Azure Functions execute based on configured triggers. Various triggers are supported, such as the HTTP Trigger, Queue Trigger, Event Hub Trigger, and more. Being a trigger-based service, Functions run on demand.&lt;/p&gt;
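
&lt;p&gt;As a minimal sketch of a trigger-based execution (Python v1 programming model, with the HTTP trigger binding declared in the function’s function.json), the code runs only when the trigger fires:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import azure.functions as func

def main(req: func.HttpRequest) -&gt; func.HttpResponse:
    # Executed on demand, each time the configured HTTP trigger fires.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
&lt;/code&gt;&lt;/pre&gt;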

&lt;h2&gt;
  
  
  Azure Function App
&lt;/h2&gt;

&lt;p&gt;A Function App hosts individual Azure Functions, each of which can be invoked using its configured trigger. The Azure portal provides the capabilities to develop, manage, monitor, and integrate the inputs &amp;amp; outputs of Azure Functions.&lt;/p&gt;

&lt;p&gt;An Azure Function can also be tested by providing raw inputs. When it comes to monitoring Functions, the portal offers solutions like Application Insights for live status monitoring through the invocation logs. Functions in a Function App can be monitored based on the app metrics.&lt;/p&gt;

&lt;h2&gt;
  
  
  Azure Monitor
&lt;/h2&gt;

&lt;p&gt;Azure Monitor can be used to monitor Azure services from various perspectives. It helps maximize the availability and performance of an application or service.&lt;/p&gt;

&lt;p&gt;Azure Monitor offers several services, such as Application Insights, Log Analytics, Alerts, and Dashboards. It also integrates with Power BI, Event Hubs, Logic Apps, and APIs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Metrics Explorer:
&lt;/h3&gt;

&lt;p&gt;With Metrics Explorer, users can chart the performance and latency of an application or service. Metrics Explorer shows analytics results based on the filters configured over an extensive set of metrics.&lt;/p&gt;

&lt;h3&gt;
  
  
  Log Analytics:
&lt;/h3&gt;

&lt;p&gt;Log Analytics is the tool used to write and run Azure Monitor log queries. With Log Analytics, users can perform monitoring and diagnostics logging for Azure Logic Apps, and they can query the logs for efficient debugging.&lt;/p&gt;
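
&lt;p&gt;A minimal sketch of querying such logs programmatically, assuming the azure-monitor-query and azure-identity packages, a hypothetical workspace ID, and that the relevant diagnostics are already flowing into the workspace:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Hypothetical workspace ID and KQL query; the table depends on your diagnostic settings.
response = client.query_workspace(
    workspace_id="&lt;your-workspace-id&gt;",
    query="AppTraces | where SeverityLevel &gt;= 3 | take 20",
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
&lt;/code&gt;&lt;/pre&gt;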

&lt;h3&gt;
  
  
  Alerts:
&lt;/h3&gt;

&lt;p&gt;With Alerts in Azure Monitor, users receive an alert report when there is a violation. These alerts are based on the metrics configured when creating them, and a single alert can only monitor a single entity against its configured metrics.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless360 for Azure Function App
&lt;/h2&gt;

&lt;p&gt;Azure Function Apps can solve colossal business challenges, but managing and monitoring them in the Azure portal is quite challenging. Here comes Serverless360, which manages and monitors Azure resources in an application context. With Serverless360, Azure Functions can be managed, monitored, and analyzed from various perspectives. Let’s glance at the key features of Serverless360 below.&lt;/p&gt;

&lt;h2&gt;
  
  
  Manage Azure Functions in Serverless360
&lt;/h2&gt;

&lt;p&gt;An Azure Function is code triggered by an event, whereas an Azure Logic App is a workflow triggered by an event. Azure Functions can be monitored using Application Insights and Azure Monitor. Though Azure provides such monitoring solutions, users cannot monitor multiple entities against various metrics in one place.&lt;/p&gt;

&lt;p&gt;With Serverless360 monitoring, by contrast, it is possible to monitor multiple entities based on metrics at the application level.&lt;/p&gt;

&lt;h2&gt;
  
  
  Azure Function Overview
&lt;/h2&gt;

&lt;p&gt;In Serverless360, users can associate and manage Azure Functions from different Function Apps, and can enable or disable an Azure Function directly from Serverless360.&lt;/p&gt;

&lt;p&gt;Users can keep track of their Azure Functions by using the disabled status.&lt;/p&gt;

&lt;h2&gt;
  
  
  Test Function
&lt;/h2&gt;

&lt;p&gt;Azure Functions can be tested from Serverless360 instead of navigating to the Azure portal. Testing is supported for any trigger in a Function, and by using the Test Function within Serverless360, context switching can be avoided.&lt;/p&gt;

&lt;p&gt;For example, the user can select the request method, say GET or POST, and add headers, query parameters, and a message body. After providing the details, a test run can be made, and the response and its status code will be displayed.&lt;/p&gt;
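
&lt;p&gt;Outside any tool, the same kind of test run can be reproduced with plain HTTP; a sketch against a hypothetical HTTP-triggered Function URL and key, using the requests package:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import requests

# Hypothetical Function endpoint and function key.
url = "https://myapp.azurewebsites.net/api/orders"
resp = requests.post(
    url,
    params={"code": "&lt;function-key&gt;", "region": "uk"},  # query parameters
    headers={"Content-Type": "application/json"},       # headers
    json={"orderId": 1001},                             # message body
    timeout=30,
)
print(resp.status_code, resp.text)  # response and its status code
&lt;/code&gt;&lt;/pre&gt;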

&lt;h2&gt;
  
  
  Invoke Triggers
&lt;/h2&gt;

&lt;p&gt;Serverless360 provides the capability to invoke a Function by sending a message to its Trigger. Users can achieve this by providing the Trigger details in the Invoke Function screen and sending the message. Azure Functions support different triggers like HTTP, Service Bus Queue, Timer, etc.&lt;/p&gt;

&lt;p&gt;There may be scenarios where an invocation of the Function fails and the user needs to reinvoke it. In the Azure portal, the only way to do this is to post the failed invocation’s message back to the Function’s Trigger (which is not feasible for a Timer-triggered Function). Serverless360, however, provides this invoke capability directly.&lt;/p&gt;

&lt;p&gt;For example, if the Function is a Service Bus Queue triggered one, the user can provide the Queue connection details and post the message. The Triggers supported in Serverless360 are Service Bus Queue, Service Bus Topic, Event Grid, and HTTP.&lt;/p&gt;
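
&lt;p&gt;For a Service Bus Queue-triggered Function, reinvocation amounts to posting the failed message back to the queue; a sketch using the azure-servicebus package with hypothetical connection details:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "&lt;service-bus-connection-string&gt;"  # hypothetical

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_sender(queue_name="orders") as sender:
        # Posting the message fires the Function's queue trigger again.
        sender.send_messages(ServiceBusMessage('{"orderId": 1001}'))
&lt;/code&gt;&lt;/pre&gt;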

&lt;h2&gt;
  
  
  Invocation Logs
&lt;/h2&gt;

&lt;p&gt;As mentioned before, content and context switching is always a hectic job. Avoiding that switching while still accessing the invocation logs of Azure Functions is a significant capability Serverless360 provides for Azure Function App management. Users can filter the invocation logs by selecting one of the following states:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Succeeded&lt;/li&gt;
&lt;li&gt;Failed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The invocation detail contains the following information: Invocation date, message, and Log level.&lt;/p&gt;

&lt;h2&gt;
  
  
  Function App Monitoring in Serverless360
&lt;/h2&gt;

&lt;p&gt;Functions and Function Apps have been split into two resource types for easier monitoring.&lt;/p&gt;

&lt;p&gt;To monitor a Function App in the Azure portal, you have to go to each individual resource and create alerts with the respective metrics. You also can’t view the alerts you have created in a consolidated view. This is workable when you have limited resources, but what if you have 100+ Functions?&lt;/p&gt;

&lt;p&gt;To overcome this case, Serverless360 has a consolidated alert report generated and sent to the configured email addresses and notification channels for the configured rules of all monitored resources in a Business Application. The condition, warning, and error thresholds of all the query rules can be edited in bulk using the Save button in the Queries tab. Query rules can be updated and deleted using the options next to each saved query rule.&lt;/p&gt;

&lt;p&gt;Also, in Serverless360, you can have a holistic view of your complete resource set. It gives you the overall picture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Auto-correct
&lt;/h3&gt;

&lt;p&gt;Serverless360 allows users to automatically correct resource status without navigating to the Azure portal. This feature relieves customers of the need to check the status of resources and manually modify them regularly.&lt;/p&gt;

&lt;p&gt;Users can set the autocorrect status of compatible resources by configuring monitoring rules when creating or editing a Business Application, or by going to the monitoring section of the resources.&lt;/p&gt;

&lt;h3&gt;
  
  
  Monitor User Activities
&lt;/h3&gt;

&lt;p&gt;Serverless360 can also track the user activities performed within it. Through the Audit feature, users can see which user has made what changes in Serverless360.&lt;/p&gt;

&lt;h3&gt;
  
  
  Violation report and Status report
&lt;/h3&gt;

&lt;p&gt;There are two reports available in Serverless360 to deliver alerts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Violation report&lt;/li&gt;
&lt;li&gt;Status report&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Violation reports can be set to send alerts based on configured duration when a threshold condition is violated in the environment.&lt;/p&gt;

&lt;p&gt;A status report in a Business Application can be configured to receive alerts at a specific schedule (e.g., every one hour).&lt;/p&gt;

&lt;h2&gt;
  
  
  Azure Function App Dashboard
&lt;/h2&gt;

&lt;p&gt;Users now have access to a default Azure Function App Dashboard, which allows them to stay up to date with real-time data through enhanced data visualization. A resource dashboard is a visual view pre-built for each resource type, enabling users to track each resource separately using the dashboard’s features.&lt;/p&gt;

&lt;p&gt;It is often preferable to provide an individual dashboard for each resource that allows the user to track the resource separately rather than having an overall resource management dashboard.&lt;/p&gt;

&lt;p&gt;Serverless360 facilitates better &lt;a href="https://www.serverless360.com/azure-functions-monitoring-management"&gt;Azure Functions monitoring&lt;/a&gt; and management, giving immediate feedback and the ability to analyse invocation logs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap Up
&lt;/h2&gt;

&lt;p&gt;I hope this blog helps you understand the basic concepts of Azure Function and how it can be better monitored using &lt;a href="https://www.serverless360.com/"&gt;Serverless360&lt;/a&gt;. You can solve enormous business challenges using Azure Function Apps, but managing and monitoring them in Azure Portal is quite challenging. With Serverless360, Azure Functions can be managed, monitored, and analyzed from various perspectives.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Enhanced monitoring for your Azure Logic App</title>
      <dc:creator>Madhavan kovai</dc:creator>
      <pubDate>Thu, 02 Jun 2022 07:04:24 +0000</pubDate>
      <link>https://dev.to/madhavankovai_31/enhanced-monitoring-for-your-azure-logic-app-j2e</link>
      <guid>https://dev.to/madhavankovai_31/enhanced-monitoring-for-your-azure-logic-app-j2e</guid>
      <description>&lt;p&gt;Implementing a business process can be challenging because you typically need to make various services work together. Think about everything your company uses to store and process data. How do you integrate all these products? Azure Logic Apps gives you pre-built components to connect to hundreds of services. You use a graphical design tool to put the pieces together in any combination you need, and Logic Apps will run your process automatically in the cloud. Building a Logic App flow in Azure is simple, but enhanced monitoring for such a powerful resource is mandatory but lacks in Azure. In this blog, let us discuss the need to monitor the Logic App and how Serverless360 provides it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Need for Logic App Monitoring
&lt;/h2&gt;

&lt;p&gt;Logic Apps are a gift for integrating business scenarios, as they are pretty easy to understand and integrate with other systems. As they play an integral part, monitoring Logic Apps is vital. Here are a few reasons why organizations should monitor Logic Apps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Lack of &lt;a href="https://www.serverless360.com/azure-logic-apps-monitoring-management"&gt;Azure Logic Apps monitoring&lt;/a&gt; might affect the complete flow of the architecture, resulting in business risk.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Monitoring plays a vital role to avoid the storm of failed Logic App runs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Monitoring is an essential parameter to check while implementing any Azure resources.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Challenges in Azure
&lt;/h2&gt;

&lt;p&gt;Though Azure has metric-based monitoring, it is difficult for an Operations/Support/Business user to understand an application’s performance without a holistic view. To understand performance at the application level, we must move between various subscriptions and drill down into resource groups to find each resource’s status manually. This consumes a lot of time and creates overhead.&lt;/p&gt;

&lt;p&gt;The Azure portal is powerful for building enterprise-grade solutions but complex for management and monitoring. The challenges in Azure are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;To identify the resubmitted runs.&lt;/li&gt;
&lt;li&gt;To focus only on the failed Logic App runs.&lt;/li&gt;
&lt;li&gt;To schedule the automation of only the required failed Logic App runs.&lt;/li&gt;
&lt;li&gt;To modify the input message that triggered the Logic App run.&lt;/li&gt;
&lt;li&gt;To monitor and identify the issues arising in Logic Apps.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It isn’t easy to perform the above tasks, and hence users might find it challenging to work with Logic Apps in the Azure portal. Serverless360 is a unified solution that resolves all these problems under one roof.&lt;/p&gt;

&lt;h2&gt;
  
  
  Manage and Monitor Azure Logic Apps with Serverless360
&lt;/h2&gt;

&lt;p&gt;The native Azure Monitor helps its users be reactive, but any real-world business requires a proactive approach. To illustrate, consider a scenario where a queue acts as a messaging bridge and a Function App validates the messages received from the queue. Since we are retrieving the messages in peek-lock mode, when validation fails, messages stagnate in the same queue. Beyond the TTL (Time To Live), a message gets pushed to the DLQ (dead-letter queue). In our scenario, the TTL is configured as 1 minute.&lt;/p&gt;

&lt;p&gt;Now consider that orders fail to pass validation and dead-lettered messages pile up in the queue. With Azure Monitor, we get an alert only when the maximum threshold value is breached, and by the time we react to the alert, many more dead-lettered messages will have piled up. It would be helpful to have a proactive alert before the threshold value is reached.&lt;/p&gt;
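
&lt;p&gt;A proactive check of this kind can be sketched with the azure-servicebus management client: poll the dead-letter count and warn before the hard threshold is breached (the queue name and thresholds here are assumptions):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from azure.servicebus.management import ServiceBusAdministrationClient

CONN_STR = "&lt;service-bus-connection-string&gt;"  # hypothetical
WARN_AT, MAX_AT = 50, 100                     # assumed warning / hard thresholds

client = ServiceBusAdministrationClient.from_connection_string(CONN_STR)
props = client.get_queue_runtime_properties("orders")
dlq = props.dead_letter_message_count
if dlq &gt;= MAX_AT:
    print(f"ALERT: {dlq} dead-lettered messages (threshold breached)")
elif dlq &gt;= WARN_AT:
    print(f"WARNING: {dlq} dead-lettered messages (approaching threshold)")
&lt;/code&gt;&lt;/pre&gt;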

&lt;p&gt;&lt;a href="https://www.serverless360.com/azure-logic-apps"&gt;Azure Logic Apps&lt;/a&gt; solve enormous business challenges with easy workflow design and automation but managing and monitoring the Azure Logic Apps in an application context is impossible with Azure Portal. To solve this challenge, the user must go ahead with Serverless360. Serverless360 is the solution for managing and monitoring Azure Serverless Applications. With the help of Serverless360, users can manage and monitor the Azure Logic Apps efficiently, resolving the challenges mentioned above. Let’s see how!&lt;/p&gt;

&lt;h2&gt;
  
  
  To identify the resubmitted runs.
&lt;/h2&gt;

&lt;p&gt;Resubmission of Logic App runs is possible in the Azure portal, but identifying which runs were resubmitted is unclear.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.serverless360.com/"&gt;Serverless360&lt;/a&gt; overcomes this challenge by adding a ‘Resubmission of’ tag to those runs resubmitted either from Serverless360 or from Azure Portal. Clicking it will fetch the details of the actual parent-run through which correlation between parent-run and child-run can be achieved.&lt;/p&gt;

&lt;h2&gt;
  
  
  To focus only on the failed Logic App runs.
&lt;/h2&gt;

&lt;p&gt;In the Azure portal, users might find it challenging to understand which Logic App runs need action. It is also tough to tell for which runs an issue has already been resolved.&lt;/p&gt;

&lt;p&gt;This challenge can be easily solved by using the Action Required feature in Serverless360, which can group the runs that require user attention in the ‘Action Required’ tab. Operations like Resubmit can be performed on the runs available in this section.&lt;/p&gt;

&lt;h2&gt;
  
  
  To schedule the custom automation of failed Logic App runs.
&lt;/h2&gt;

&lt;p&gt;Another big challenge the user faces is to automate the resubmission of only the required failed Logic App runs.&lt;/p&gt;

&lt;p&gt;It can be solved using Serverless360, where resubmission can be configured, based on filters, to include only the required failed Logic App runs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deeper Insights
&lt;/h2&gt;

&lt;p&gt;Service Map in Serverless360 serves as a physical representation of the architecture, and the user can derive relationships between the entities that constitute the Business application. It provides a clean dashboard with a complete application view and displays the state of each entity based on its monitoring configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enhanced Monitoring for Logic Apps
&lt;/h2&gt;

&lt;p&gt;Serverless360’s unique monitoring capabilities keep the end-user informed about the performance and status of their business application. It provides metrics monitoring with threshold values to benefit end-users. With the help of this monitoring, you can maintain a proactive business environment.&lt;/p&gt;

&lt;p&gt;It is always necessary to stay up to date on the status of resources. However, it is even more beneficial for any business to use the auto-correct option available in Serverless360. A rules evaluation frequency can also be configured, and all resources associated with the selected Business Application will be monitored at that frequency.&lt;/p&gt;

&lt;p&gt;The Aggregation period in Serverless360 allows the user to configure a warning threshold value for the metrics of the Resources, which will send us an alert if the configured threshold value is exceeded. Based on the Aggregation Period specified, all metrics associated with the selected Business application will be aggregated using the metric’s Primary Aggregation Type.&lt;/p&gt;

&lt;p&gt;Knowing the status of your Azure resources is critical for ensuring the smooth flow of business hosted on a cloud platform. A status report of the monitored resources associated with the chosen Business Application will be sent during the specified hours. The report generated by Serverless360 is a consolidated monitoring report that achieves application-level visibility.&lt;/p&gt;

&lt;p&gt;Recipient email configuration: Alerts are generally sent to mentioned Email addresses. Users can configure more than one email address and choose if the alerts can be sent to all the email addresses in a single go or as separate emails.&lt;/p&gt;

&lt;p&gt;Notification channels: Besides Email alerts, third-party Notification Channels can also be configured to receive alerts from Serverless360. Notification Channels already configured in the Settings section can be chosen in the list shown.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Every business needs enhanced monitoring to avoid downtime or failure. Especially for any business running in the cloud, there is an indispensable need for a tool like Serverless360. Utilizing it, monitoring Business Applications in the Azure cloud space can be more effective. Serverless360 does not stop with monitoring; it also provides better management capabilities for operations and support people to enhance their day-to-day tasks.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Strengthen your &lt;a href="https://www.serverless360.com/azure-logic-apps-monitoring-management"&gt;Azure Logic Apps monitoring&lt;/a&gt; and get powerful toolsets, actionable insights to troubleshoot issues with the help of Serverless360.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
    </item>
    <item>
      <title>In-Depth Look Into Azure API App And Logic App Connector</title>
      <dc:creator>Madhavan kovai</dc:creator>
      <pubDate>Mon, 30 May 2022 11:50:42 +0000</pubDate>
      <link>https://dev.to/madhavankovai_31/in-depth-look-into-azure-api-app-and-logic-app-connector-4n6i</link>
      <guid>https://dev.to/madhavankovai_31/in-depth-look-into-azure-api-app-and-logic-app-connector-4n6i</guid>
      <description>&lt;p&gt;The recently announced Azure App Service brings in capabilities from Azure Websites, Azure Mobile services and Azure BizTalk services together into an integrated platform. Some of the existing offering like Azure Websites is renamed to Web apps and Mobile Services renamed to Mobile Apps. The announcement is definitely not just some restructuring/renaming gimmick, Microsoft brought whole lot of new capabilities into Azure as part of Azure App Services, two of the notable ones are the new “Azure API apps” and “Azure Logic apps”.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Are Azure API Apps?
&lt;/h2&gt;

&lt;p&gt;At a very high level, API Apps are nothing but REST-based (web) services with additional metadata that clearly explains the various operations present. Microsoft adopted Swagger 2.0 as the default metadata definition language for standard ASP.NET Web API projects, along with Swashbuckle (an open-source project that provides dynamic Swagger generation for Web API controllers). Customers can build API Apps in any technology (Node.js, Java, .NET, etc.); as long as the metadata is clearly defined according to Swagger 2.0 definitions, it becomes an Azure API App. Out of the box, Microsoft shipped a lot of Azure API Apps in the Azure Marketplace at launch (most of them developed by the BizTalk team) that help connect to the numerous SaaS applications out there, like Office 365, Salesforce, Yammer, Twilio, etc. You can find them by clicking on “Market Place” within the Azure portal.&lt;/p&gt;

&lt;p&gt;Azure API Apps are sometimes also referred to as “Connectors”, since they act like connection endpoints to external systems. Some of the connectors are capable of “Hybrid Connections”, meaning they can run in the customer’s on-premises environment and expose on-premises systems seamlessly as Azure API Apps. This is one of the exciting features, and one of the key features that differentiates Azure’s offering from similar offerings like IFTTT, Zapier, etc.&lt;/p&gt;

&lt;p&gt;Let’s take a deep look into “&lt;a href="https://www.biztalk360.com/blog/biztalk-hybrid-connections-an-overview-of-new-cloud-adapters/"&gt;BizTalk Hybrid Connections&lt;/a&gt;”, with some example, how to setup, how to make it work, how to trouble shoot/diagnose connectivity issues etc.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hello World Example Using File Connector
&lt;/h2&gt;

&lt;p&gt;It’s a common tradition for people coming from the BizTalk world to demonstrate the power of BizTalk by moving a file from folder A to folder B using the FILE adapter and slowly expanding on it. Let’s follow the same tradition here, but with a slight twist: Contoso Limited places all their invoices in a local file system on one of their application servers, and these need to be transmitted to Kovai Limited periodically.&lt;/p&gt;

&lt;p&gt;In the past, building a system like this would have required a lot of infrastructure work: SFTP, managed file transfer, opening up ports/firewalls at both locations, etc. But now, with the help of an Azure Logic App and the out-of-the-box Azure API App File Connector, you should be able to build this system within hours or days. Now let’s dive into implementing this solution.&lt;/p&gt;

&lt;p&gt;You need someone to receive the file from the source location and someone to transmit the file to the destination location. That someone, in our case, is the out-of-the-box “File Connector” in Azure. File Connectors are nothing but Azure API Apps shipped by Microsoft. The File Connector is special: it is a pure hybrid connector and, out of the box, has the ability to talk to on-premises folder locations after some basic configuration. I’m assuming you are aware of basic concepts like working with Azure Logic Apps and how to use connectors; if this is your first time, take a look at Create a new logic app before continuing. It’s useful to understand concepts like Service Plans, Resource Groups, Gateways, etc., but it’s not required at this stage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure File Connector
&lt;/h2&gt;

&lt;p&gt;The very first thing we need to do is create two instances of the File Connector within our resource group. Before we configure the File Connectors, we need to create two Azure Service Bus namespaces (one for each File Connector location), since they are required for configuring the File Connector. Under the hood, the File Connector uses Service Bus Relay technology to connect to the on-premises environment. It works in a similar way to Azure Hybrid Connections, abstracting all the connection complexities behind an installable MSI file. NOTE: one important thing to note here is that your Azure Service Bus subscription must be of the “Standard” tier, since Service Bus Relay is only available as part of the Standard tier. Also, at this stage you can create/manage Service Bus only in the old portal.&lt;/p&gt;

&lt;p&gt;Once you have the Service Bus connection string, configuring the File Connector is fairly straightforward; it only takes two parameters.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to the Azure portal (the new one) and click Browse &amp;gt; Market Place.&lt;/li&gt;
&lt;li&gt;Start typing “File Con..” in the search box, and you’ll see an icon with the name “File Connector”.&lt;/li&gt;
&lt;li&gt;Click File Connector, and then the “Create” button in the next summary blade.&lt;/li&gt;
&lt;li&gt;On the File Connector blade, click on the “Package Setting” link.&lt;/li&gt;
&lt;li&gt;On the final screen, specify the two important properties required for the File Connector:&lt;/li&gt;
&lt;li&gt;Root Folder: the normal folder location (on-premises on the server), for example “C:\Temp\Azure.LogicApp\Pickup”.&lt;/li&gt;
&lt;li&gt;Service Bus Connection String: specify the connection string of your Service Bus namespace (pick it up from the old Azure portal).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once you click the create button, it will take about a minute to provision the File Connector (API App) before you can use it in your solution (Azure Logic App). You need to repeat the above steps to configure the second File Connector for the destination location, but I’m going to shorten this example by using the same File Connector pointing to different folders on the same server. Once the File Connector is created, the blade will open automatically; if not, you can open it by navigating to Browse &amp;gt; API Apps &amp;gt; Your File Connector.&lt;/p&gt;

&lt;p&gt;As you may notice, there is a warning sign in the blade showing “Hybrid Connection — On Premise Setup Incomplete”; just click on that box, which will open the “Hybrid Connection” blade. You will see a link labelled &lt;strong&gt;ON-PREMISES HYBRID CONNECTION MANAGER&lt;/strong&gt; “Download and Configure”; clicking it will download the on-premises agent file. Copy that file to the server hosting your folder location and install it (an internet connection is required, of course).&lt;/p&gt;

&lt;p&gt;Once you have provided that information and completed the installation procedure, wait a few seconds and check back on the File Connector blade.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the Azure Logic App
&lt;/h2&gt;

&lt;p&gt;At this stage we have done the groundwork of creating all the building blocks; now let’s stitch them all together with the Azure Logic App. Click the + link (at the bottom left corner) in the portal.&lt;/p&gt;

&lt;p&gt;Let’s name the Logic App “contoso-to-kovai-invoice-submit” and click on the “Triggers and Actions” link, which will open the design canvas. On the right-hand side you’ll see the File Connector API App we created earlier; just click on it. You’ll see a trigger called “TriggerOnFileAvailable”. Select the frequency and interval (you need to be on the Standard plan to choose low values like minutes), and specify the Folder Path. This is the tricky bit and took a while to figure out: you just need to specify the folder name under the root folder you configured while creating the File Connector (API App). So in my case the pickup folder is “C:\Temp\Azure.LogicApp\Pickup\Files” (i.e. root folder + Files). Leave the rest of the configuration at the defaults.&lt;/p&gt;

&lt;p&gt;Now that we have configured the source location, the next step is to configure the destination. For this, click on the “File Connector” again; this time you’ll see a few more options (basically it’s showing the actions, instead of the triggers). Pick “Upload File” and configure it.&lt;/p&gt;

&lt;p&gt;For the content, just select the body content from the previous connector: when you click on the “…” button next to the Content text box it will show the available options, and you select “TriggerOnFileAvailable Content” from the drop-down. Save the Logic App (by pressing the Save button) and close the designer. The important (tricky) thing here is configuring the File Path: it must be configured like this, &lt;strong&gt;@concat( ‘/Drops/’, triggers().outputs.body.FileName)&lt;/strong&gt;, meaning you are concatenating the output folder “Drops” with the source file name. Once it’s all set up, you can just go and drop files in the pickup folder and let it run; you’ll notice the files being picked up and appearing in the destination folder. The file will automatically get deleted from the source folder once it’s successfully picked up.&lt;/p&gt;

&lt;p&gt;Do Not Use “Run Now” For Trigger-Based Logic Apps&lt;br&gt;
This one got me stuck for a while: do not use the “Run Now” functionality (available in the Logic App blade), because the trigger won’t get fired and you’ll end up with empty data, resulting in errors like this: &lt;em&gt;{“code”:”InvalidTemplate”,”message”:”Unable to process template language expressions for action ‘fileconnector0’ at line ‘1’ and column ‘11’: ‘Template language expression can not be evaluated: the property ‘outputs’ can not be selected.’.”}&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Large Message Handling
&lt;/h2&gt;

&lt;p&gt;One of the challenges we can spot in this new architecture is that it leans on web technologies like HTTP and REST, which are stateless and not good at end-to-end streaming. We can clearly see the issue in our example: when I tried to upload a 5 MB file, it threw the exception &lt;em&gt;Exception thrown: ExceptionType=System.Web.HttpException, Message=Maximum request length exceeded&lt;/em&gt;. BizTalk would have handled it without a glitch thanks to its advanced end-to-end streaming support.&lt;/p&gt;

</description>
      <category>biztalk</category>
    </item>
    <item>
      <title>Introduction to File Locations Monitoring</title>
      <dc:creator>Madhavan kovai</dc:creator>
      <pubDate>Mon, 30 May 2022 06:09:15 +0000</pubDate>
      <link>https://dev.to/madhavankovai_31/introduction-to-file-locations-monitoring-10p8</link>
      <guid>https://dev.to/madhavankovai_31/introduction-to-file-locations-monitoring-10p8</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;File Locations Monitoring is one of the new monitoring capabilities introduced in BizTalk360 v8.4. We received many requests from our customers to introduce folder-level monitoring into BizTalk360. We are bringing a feature to monitor the file count for file locations (File, FTP, and SFTP) configured in BizTalk receive locations and send ports.&lt;/p&gt;

&lt;h2&gt;
  
  
  Need for File Locations Monitoring
&lt;/h2&gt;

&lt;p&gt;Messaging or data exchange between businesses can be done in various ways; frequently, data communication is done through, for example, File and FTP with XML or EDI as the message format. From BizTalk 2013 onwards, an SFTP adapter is also included in BizTalk Server out of the box, which was not available in prior versions. Even though BizTalk works seamlessly with file adapters, there are known issues that occur due to incorrect configurations. The BizTalk file location adapters (File, FTP, SFTP) fail to perform their operations in the following scenarios:&lt;/p&gt;

&lt;p&gt;The File receive adapter cannot access the receive location on the file system or network share because the specified path does not exist. For a network share, the File receive adapter disables the receive location after all retry attempts have been exhausted.&lt;/p&gt;

&lt;p&gt;The File receive adapter cannot access the receive location on the file system or network share because the account used by the associated host instance does not have read-write permission for that location. For a network share, the File receive adapter disables the receive location after all retry attempts have been exhausted.&lt;/p&gt;

&lt;p&gt;Files with names longer than 256 characters are encountered in the receive location.&lt;/p&gt;

&lt;p&gt;To resolve the above issues, we need to ensure that the specified path or share exists and that the logon account has read-write access. In addition, if you configure a schedule/service window for your receive locations, messages will be accepted only during that time window; at all other times BizTalk won’t pick up messages. Any violation of this schedule also needs to be monitored. We often see organizations facing these kinds of challenges build custom solutions for this kind of monitoring. To overcome this, BizTalk360 added File Locations Monitoring capabilities out of the box.&lt;/p&gt;
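
&lt;p&gt;The checks described above boil down to simple probes; a sketch of the kind of custom check organizations used to script (the path and threshold are assumptions), which BizTalk360 now performs out of the box:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import os

def check_location(path: str, max_files: int) -&gt; list:
    """Return the problems a file-location monitor would flag."""
    if not os.path.isdir(path):
        return [f"{path}: specified path does not exist"]
    issues = []
    if not os.access(path, os.R_OK | os.W_OK):
        issues.append(f"{path}: no read-write permission")
    count = len(os.listdir(path))
    if count &gt; max_files:
        issues.append(f"{path}: file count {count} exceeds threshold {max_files}")
    return issues

print(check_location(r"\\fileserver\invoices\pickup", max_files=100))  # hypothetical share
&lt;/code&gt;&lt;/pre&gt;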

&lt;h2&gt;
  
  
  File Locations (File, FTP, SFTP)
&lt;/h2&gt;

&lt;p&gt;In BizTalk360 v8.4, we are introducing support to monitor the File, FTP and SFTP servers under File Location Monitoring Section. File Location Monitoring will list all the locations configured in the BizTalk Artifacts (Send Ports and Receive Locations) for the Transport Types (File, FTP, SFTP) respectively, which helps users to monitor all the File Locations mapped with Receive Locations/Send Ports.&lt;/p&gt;

&lt;h2&gt;
  
  
  File Monitoring
&lt;/h2&gt;

&lt;p&gt;In BizTalk360, the File Monitoring configuration contains three sections: Basic Information, Authentication, and File Monitoring Configurations.&lt;/p&gt;

&lt;p&gt;The Basic Information Section contains Folder Location and File Mask configured in BizTalk.&lt;/p&gt;

&lt;p&gt;The Authentication section is optional. By default, when credentials are not given, authentication is processed with the BizTalk360 Monitoring Service account.&lt;/p&gt;

&lt;p&gt;In the File Configurations section, we can configure threshold conditions for the File Count metric.&lt;/p&gt;

&lt;p&gt;When a file location is in the Orphaned state, BizTalk360 lets users know the cause of the failure when hovering over the warning icon.&lt;/p&gt;


&lt;h2&gt;
  
  
  SQL Server availability monitoring
&lt;/h2&gt;

&lt;p&gt;Server availability monitoring provides the ability to monitor failover SQL clusters and standalone SQL Servers using the Ping or Telnet protocols. This feature answers the question “Are the BizTalk/SQL Servers up and running?” BizTalk administrators can choose when to receive the alert: either when one of the servers has gone down or only when all the servers are down.&lt;/p&gt;
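
&lt;p&gt;A Telnet-style availability probe is essentially a TCP connect; a sketch (the hostnames are assumptions, 1433 being SQL Server’s default port) that also mirrors the “one server down vs. all servers down” alert choice:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import socket

def is_up(host: str, port: int, timeout: float = 3.0) -&gt; bool:
    """Telnet-style check: can a TCP connection be opened?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

servers = ["sql-node-1", "sql-node-2"]  # hypothetical cluster nodes
states = {s: is_up(s, 1433) for s in servers}
print("any down:", not all(states.values()), "| all down:", not any(states.values()))
&lt;/code&gt;&lt;/pre&gt;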

&lt;h2&gt;
  
  
  Azure Service Bus Topics Monitoring
&lt;/h2&gt;

&lt;p&gt;Users can effectively monitor Service Bus Topics and their subscriptions by simply configuring the namespace connection string and topic name. Once the topic is configured, BizTalk360 will detect and list the topic’s details, such as its current state, message size, and subscription details. With this, users can monitor the state of the topic and, by configuring threshold rules, other topic and subscription metrics.&lt;/p&gt;

&lt;p&gt;State-based monitoring – The state can be monitored by configuring the expected state as Active/Disabled/Send Disabled. BizTalk360 triggers an alert if the current state of the topic/subscription does not match the configured expected state. Users can also set up autocorrect, where BizTalk360 will automatically heal (change the state) if the current state is not equal to the expected state.&lt;/p&gt;

&lt;h2&gt;
  
  
  BHM Profile Management
&lt;/h2&gt;

&lt;p&gt;BizTalk360 already has an integration with BHM (BizTalk Health Monitor). This integration enables the BizTalk360 user to schedule BHM and view the output of the tool’s different runs directly from within BizTalk360. You can also manually run BHM, schedule BHM runs, run BHM profiles, and view and monitor the BHM runs.&lt;/p&gt;

&lt;p&gt;In previous versions, you could manage and monitor only the default profile. From this version, we have extended the scope to support multiple profiles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Manage BHM Profiles&lt;/strong&gt; – The detailed reports of multiple BHM profiles can be viewed in the BizTalk360 operations module, and profiles can be analysed and run by clicking the Run BHM option, which runs all the profiles and generates the reports.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitoring multiple BHM profiles&lt;/strong&gt; – A schedule can be configured to run the selected profile at a specific time. More than one profile can be monitored by configuring threshold rules and mapping them to an alarm. BizTalk360 checks for any threshold violation based on the configured rules and the latest generated report, and if there is a violation, it will be notified through an alert.&lt;/p&gt;

&lt;h2&gt;
  
  
  FTP Monitoring
&lt;/h2&gt;

&lt;p&gt;The FTP configuration UI is categorized into three sections: FTP Details, Firewall Details, and FTP Monitoring Configurations. Monitoring FTP/SFTP/FTPS sites is easier with an &lt;a href="https://www.biztalk360.com/blog/ftp-sftp-monitoring-tool-for-biztalk-server/"&gt;FTP Monitoring tool&lt;/a&gt; like BizTalk360. It offers monitoring of receive locations and send ports.&lt;/p&gt;

&lt;p&gt;The FTP Details section contains the details about the FTP location, authentication, and SSL.&lt;/p&gt;

&lt;p&gt;The Firewall Details section contains the configuration to connect to the FTP server through a firewall.&lt;/p&gt;

&lt;p&gt;In the FTP Monitoring Config section, we can configure the monitor with threshold conditions for the File Count metric.&lt;/p&gt;

&lt;h2&gt;
  
  
  SFTP Monitoring
&lt;/h2&gt;

&lt;p&gt;The SFTP Monitor tab in BizTalk360 lists the SFTP locations that are configured in BizTalk. It contains four sections, including:&lt;/p&gt;

&lt;p&gt;The SSH Server section, which has the details about the SFTP location.&lt;/p&gt;

&lt;p&gt;The Proxy Details section, which is optional, to connect to an SFTP server behind a firewall.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;BizTalk360 brings File Locations Monitoring with the ability to monitor the file count. In the future, we will add support for monitoring folder size and access permissions.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>BizTalk 2020 and Beyond</title>
      <dc:creator>Madhavan kovai</dc:creator>
      <pubDate>Tue, 24 May 2022 05:41:19 +0000</pubDate>
      <link>https://dev.to/madhavankovai_31/biztalk-2020-and-beyond-538d</link>
      <guid>https://dev.to/madhavankovai_31/biztalk-2020-and-beyond-538d</guid>
      <description>&lt;p&gt;This blog gives a detailed overview of BizTalk Server 2020 and the future of BizTalk On-Premise solutions. This session also has an update on &lt;a href="https://www.biztalk360.com/blog/biztalk-server-2020-migration-path/"&gt;BizTalk Migration&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  BizTalk Server 2020
&lt;/h2&gt;

&lt;p&gt;Valerie started the session with information about the new release of BizTalk Server 2020 on 15th January this year. She gave a brief introduction to the new features and how the features of &lt;a href="https://www.biztalk360.com/blog/biztalk-server-2020/"&gt;BizTalk Server 2020&lt;/a&gt; were developed year on year (2017, 2018 &amp;amp; 2019) after the BizTalk Server 2016 release.&lt;/p&gt;

&lt;h2&gt;
  
  
  BizTalk Beyond 2020 Version
&lt;/h2&gt;

&lt;p&gt;This was the most anticipated announcement from the Microsoft product team about the future of BizTalk Server. The BizTalk product team has a plan for a vNext version with cloud-native and hybrid solutions. However, the timeline for the next version is not yet determined; it could take the form of incremental updates (service packs), as with BizTalk Server 2016. It is too early to predict the next version of BizTalk Server, as it depends on the platforms it runs on and which versions of Visual Studio/SQL Server it supports. In the screenshots below you can see a clear roadmap of BizTalk Server from its first version.&lt;/p&gt;

&lt;h2&gt;
  
  
  BizTalk to Azure Integration – Migration Tool
&lt;/h2&gt;

&lt;p&gt;The announcement of the BizTalk to Azure Integration tool is the next major update from the product team. Jon introduced the BizTalk Migrator tool during the keynote of Integrate 2020 Remote, but this session was dedicated entirely to the tool.&lt;/p&gt;

&lt;p&gt;The following are the highlights of the BizTalk Migration Tool:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is a command-line tool to assist with migration. The tool is under development, and fall availability is planned.&lt;/li&gt;
&lt;li&gt;It runs against BizTalk MSI files to gather information about the BizTalk solution.&lt;/li&gt;
&lt;li&gt;If you want to migrate your BizTalk solutions, their MSIs need to be exported and run through this command-line tool in the new environment.&lt;/li&gt;
&lt;li&gt;It will be open source, designed to be extensible, and written in C#. Almost 80% of scenarios will be covered, based on the major business scenarios. If any other scenario needs to be covered, community users/end users are free to make the changes.&lt;/li&gt;
&lt;li&gt;The Migration Tool is divided into six stages, each with its own interfaces for extensibility.&lt;/li&gt;
&lt;li&gt;Discover, Parse &amp;amp; Analyze&lt;/li&gt;
&lt;li&gt;Discover is the very first stage; it looks at all the MSIs and discovers all the artifacts.&lt;/li&gt;
&lt;li&gt;After that, through parsing, the tool generates an XML file.&lt;/li&gt;
&lt;li&gt;The generated file helps produce the HTML report of existing BizTalk applications for analysis.&lt;/li&gt;
&lt;li&gt;Report, Convert&lt;/li&gt;
&lt;li&gt;Based on the report, everything (BizTalk solutions &amp;amp; artifacts) is converted to Azure components (Logic Apps, API Management, etc.). Microsoft will provide the templates for conversion, or users can extend the code to add additional templates based on their business scenario (here extensibility comes into the picture).&lt;/li&gt;
&lt;li&gt;Verify&lt;/li&gt;
&lt;li&gt;This component helps test that the exported BizTalk solutions work fine after the migration.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  A Detailed look at AIM
&lt;/h2&gt;

&lt;p&gt;It is a command-line tool with an executable named “AIM” (Azure Integration Migration tool). During the initial launch of the tool, the assessment runs with the “aim” command, as shown in the screenshot.&lt;/p&gt;

&lt;p&gt;In order to run the tool, the “Chocolatey” package manager is used. As explained in the previous section, the tool goes through the first of the six stages, “Discover”, and then the subsequent stages to do the migration.&lt;/p&gt;

&lt;p&gt;Basically, the user can define the run mode of the tool. In the normal mode, the tool will run and do the migration and complete without any log. In the verbose mode, users can run the tool with extensive logging which will help them to identify the errors.&lt;/p&gt;

&lt;p&gt;Migration Tool has other command-line options, but the important information is, you can run the tool only to Assess or either other stages individually (Migrate, Convert, Verify).&lt;/p&gt;
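
&lt;p&gt;As a rough illustration, the snippet below sketches how the stages might be invoked individually. This is hypothetical syntax based on the session notes above; the actual verbs and switches of the shipped tool may differ.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Illustrative only: stage names come from the session, switch names are assumed.
aim assess            # run the assessment stage on its own
aim assess --verbose  # the same, with extensive logging
aim migrate           # run the remaining stages individually
aim convert
aim verify
&lt;/code&gt;&lt;/pre&gt;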

&lt;h2&gt;
  
  
  Migration Report
&lt;/h2&gt;

&lt;p&gt;The Azure Integration Migration Tool generates an HTML report with detailed information, as shown in the screenshot.&lt;/p&gt;

&lt;p&gt;The report covers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Discovered Resources&lt;/li&gt;
&lt;li&gt;Each application artifact’s migration – for example, the FTP adapter migrates to the FTP connector in Azure, but some manual intervention is needed to authenticate against the FTP server.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Migration Path with an example
&lt;/h2&gt;

&lt;p&gt;In this example, we can see how content-based routing works, the capabilities it has, and how it uses context properties with Service Bus Topics and the APIM Routing Manager.&lt;/p&gt;

&lt;p&gt;Let’s assume a message comes in via FTP:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The FTP connector in the Logic App picks up the message.&lt;/li&gt;
&lt;li&gt;The message constructor attaches an envelope or routing slip and saves it in Azure App Configuration.&lt;/li&gt;
&lt;li&gt;The message is then sent to the intermediary.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The routing slip router receives the message. The Azure Management API is then used to resolve the routing address, so the message can be sent to the correct address (the subscribers).&lt;br&gt;
&lt;strong&gt;Note:&lt;/strong&gt; The Logic Apps are intended to send messages in the same way BizTalk does.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Next, the Logic Apps use the Integration Account to convert the Flat File schema into an XML schema. In this example, an XML validator is used.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once validation is complete, the message passes through the next stages in order: RoutingSlipRouter, XML Transform, and again RoutingSlipRouter.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, the message is transmitted to the ContentBasedRouter and routed to a Service Bus Topic to share it with the respective subscribers.&lt;br&gt;
&lt;strong&gt;Note:&lt;/strong&gt; Users can keep all the business processes in a single Logic App or split the flows across multiple Logic Apps; this is purely based on the business use case.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Microsoft encourages on-premises BizTalk customers to adopt the latest Azure technologies. AIM is in its initial phase and aims to migrate BizTalk solutions from on-premises to the cloud. The future of integration will focus on Cloud-Native + Hybrid Integration.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Application Insights Logging to Monitor Business Application</title>
      <dc:creator>Madhavan kovai</dc:creator>
      <pubDate>Wed, 11 May 2022 11:41:45 +0000</pubDate>
      <link>https://dev.to/madhavankovai_31/application-insights-logging-to-monitor-business-application-3l2k</link>
      <guid>https://dev.to/madhavankovai_31/application-insights-logging-to-monitor-business-application-3l2k</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;This article will focus on using Azure Application Insights to monitor real-world business apps. Azure Application Insights is an Application Performance Management (APM) service for developers and DevOps professionals. It is used to monitor live applications in real time.&lt;/p&gt;

&lt;p&gt;It supports a variety of platforms including .NET, Node.js, Java, and Python, hosted on-premises, hybrid, or in any public cloud. Application Insights can be used in two ways to monitor your application. They are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Codeless Monitoring&lt;/li&gt;
&lt;li&gt;Code-based Monitoring&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What Can Be Monitored?
&lt;/h2&gt;

&lt;p&gt;Azure Application Insights is ideal for developers to monitor how an application behaves in real time, from performance to error logs. As an APM tool, it can monitor:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Request rates, response times, and failure rates&lt;/li&gt;
&lt;li&gt;Exceptions&lt;/li&gt;
&lt;li&gt;Pageviews and load performance&lt;/li&gt;
&lt;li&gt;Diagnostic trace logs&lt;/li&gt;
&lt;li&gt;Custom events and metrics and many more&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Users can achieve all the above monitoring goals with the help of tools in Application Insights like Application Map, Live Metrics, Dashboards, etc.&lt;/p&gt;

&lt;h2&gt;
  
  
  Codeless Monitoring
&lt;/h2&gt;

&lt;p&gt;In codeless monitoring, users can monitor the application without any modification to the code, provided the application is hosted in an Azure service like Azure Web App or Azure Virtual Machine. These resources are natively integrated with Application Insights, which can be enabled from the Azure portal itself. Now let us see how to enable Azure Application Insights for an Azure Web App.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Do You Implement Application Insights in Web API?
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Go to your Azure Web App -&amp;gt; Application Insights&lt;/li&gt;
&lt;li&gt;Click Turn on Application Insights.&lt;/li&gt;
&lt;li&gt;Either choose an existing Application Insights instance or create a new one by providing the name and location, then click Apply to turn on Application Insights for this Web App.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The Web App will now be enabled with Application Insights, and the user can achieve monitoring goals like live monitoring and failure detection.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code-based Monitoring
&lt;/h2&gt;

&lt;p&gt;Codeless application monitoring is very useful if the application is hosted in Azure resources like VMs and App Services. Whereas, if the application is hosted with any other cloud service provider or hosting service, then code-based monitoring is the go-to option. With this monitoring solution, it is possible to monitor applications or APIs built using platforms such as .NET, Node.js, Java, and Python.&lt;/p&gt;

&lt;p&gt;With Application Insights Logging, it is also possible to monitor background services and client-side JavaScript. Now let us take a .NET Core application and see how to integrate Application Insights.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Do I Enable Application Insights in Azure?
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Create an Azure Application Insights instance in the Azure portal. To create it, click on the Create a Resource menu in the left navigation and type Application Insights in the search box.&lt;/li&gt;
&lt;li&gt;Now give the name and other details like the Resource Group name, and click Create.&lt;/li&gt;
&lt;li&gt;Once the Application Insights instance is created, go to Properties in the left navigation bar to get the Instrumentation Key, which our application will need to establish the connection.&lt;/li&gt;
&lt;li&gt;Open your .NET Core application and install the “Microsoft.ApplicationInsights.AspNetCore” NuGet package.&lt;/li&gt;
&lt;li&gt;Add services.AddApplicationInsightsTelemetry(); to the ConfigureServices() method in your Startup class (see the sketch after this list).&lt;/li&gt;
&lt;li&gt;To establish the connection between the application and Azure Application Insights, provide the Instrumentation Key in JSON configuration or as an environment variable, as shown below.&lt;/li&gt;
&lt;li&gt;Now run the application. Go to the Application Insights instance that was created in the Azure portal and open Live Metrics in the left navigation bar.&lt;/li&gt;
&lt;li&gt;Live monitoring is on, and you can infer the application performance from the widgets on key performance indices.&lt;/li&gt;
&lt;li&gt;With the help of live monitoring, it is possible to watch a variety of metrics like request rate, request failure rate, outgoing requests, failed requests, and many more.&lt;/li&gt;
&lt;/ol&gt;
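
&lt;p&gt;A minimal sketch of steps 4–6 for an ASP.NET Core application is shown below. The configuration key is the standard one read by the SDK, and the instrumentation key value is a placeholder:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Startup.cs -- registers Application Insights telemetry collection.
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Picks up the instrumentation key from configuration
        // (ApplicationInsights:InstrumentationKey) or from the
        // APPINSIGHTS_INSTRUMENTATIONKEY environment variable.
        services.AddApplicationInsightsTelemetry();
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And the matching appsettings.json entry:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "ApplicationInsights": {
    "InstrumentationKey": "your-instrumentation-key-here"
  }
}
&lt;/code&gt;&lt;/pre&gt;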

&lt;h2&gt;
  
  
  How Do I Log Exceptions in Application Insights?
&lt;/h2&gt;

&lt;p&gt;By configuring the application as mentioned above, the user can monitor all the requests and logs sent to Azure Application Insights. Sometimes, though, the need is to monitor only the logs that are in the Warning or Error state.&lt;/p&gt;

&lt;p&gt;To filter the logging data, the user must configure the builder in the CreateWebHostBuilder() method and add a filter, as in the sketch below.&lt;/p&gt;
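
&lt;p&gt;A minimal sketch of such a filter in Program.cs, assuming the Microsoft.Extensions.Logging.ApplicationInsights provider; the empty category prefix and the Warning level are illustrative choices:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Program.cs -- forward only Warning (and above) log entries
// to Application Insights.
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.ApplicationInsights;

public class Program
{
    public static void Main(string[] args) =&amp;gt;
        CreateWebHostBuilder(args).Build().Run();

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =&amp;gt;
        WebHost.CreateDefaultBuilder(args)
            .ConfigureLogging(logging =&amp;gt;
                // An empty category prefix applies the filter to all categories.
                logging.AddFilter&amp;lt;ApplicationInsightsLoggerProvider&amp;gt;(
                    "", LogLevel.Warning))
            .UseStartup&amp;lt;Startup&amp;gt;();
}
&lt;/code&gt;&lt;/pre&gt;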

&lt;p&gt;It is also possible to log the data from our logging services by providing the necessary information. This Azure Application Insights Logging can be useful when the need is to monitor the performance of an Application.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Do I See Exceptions in Application Insights?
&lt;/h2&gt;

&lt;p&gt;Open the Application Insights Search window in Visual Studio, and set it to display events from your app. While you’re debugging, you can do this just by clicking the Application Insights button. Notice that you can filter the report to show just exceptions.&lt;/p&gt;

&lt;p&gt;Users can also use the Application Insights instance created in the Azure portal to view all the errors happening in their system.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Do I View App Insight Logs?
&lt;/h2&gt;

&lt;p&gt;In the Azure portal, browse to the required resource group and select the required resource, like a Function App or Web App.&lt;/p&gt;

&lt;p&gt;In the Monitoring section, select the Logs option, and all the captured logs will be displayed.&lt;/p&gt;

&lt;p&gt;But when the application comprises a variety of Azure serverless resources and the need is to monitor the whole application along with the Azure resources, Application Insights Logging is not the right solution for this scenario.&lt;/p&gt;

&lt;h2&gt;
  
  
  Achieve Distributed Tracing with Business Activity Monitoring
&lt;/h2&gt;

&lt;p&gt;Business Activity Monitoring in Serverless360 is a &lt;a href="https://www.serverless360.com/blog/distributed-tracing-tools"&gt;distributed tracing tool&lt;/a&gt; to perform end-to-end tracking of Azure serverless applications. Serverless360 BAM can be instrumented in your business application using the exposed APIs, which are also available as a .NET SDK and a Logic App connector. Implementing Business Activity Monitoring provides visibility of the messages flowing through the components of the business application with &lt;a href="https://www.serverless360.com/distributed-tracing"&gt;distributed tracing&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Business Transactions
&lt;/h2&gt;

&lt;p&gt;Every Transaction of an Application will be tracked in a separate section where the status of the transactions can be viewed along with the status of each stage in a transaction.&lt;/p&gt;

&lt;p&gt;It is also possible to view the data flowing through every stage along with the tracking of key business properties.&lt;/p&gt;

&lt;h2&gt;
  
  
  Analytics &amp;amp; Monitoring
&lt;/h2&gt;

&lt;p&gt;Business Activity Monitoring in Serverless360 also offers the capability to monitor transactions for failures or exceptions. Users are notified through the configured notification channels whenever there is an exception in any of the business transactions. The calendar view in the business process monitor helps to understand the transaction monitoring history.&lt;/p&gt;

&lt;p&gt;Consider a case where there is a need to analyse how the application is performing based on a tracked custom property. The Serverless360 dashboard helps the user create widgets and analyze business transactions in a graphical view. The user can give a BAM query as input and get the data as advanced graphical widgets, as below.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Application Insights along with Logging is the best tool for live monitoring of applications, much needed by developers, whereas it is too complex for an operations or support user. Their need would be a tool like &lt;a href="https://www.serverless360.com/"&gt;Serverless360&lt;/a&gt; to monitor and track the whole serverless application with better monitoring and management capabilities.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Azure Service Bus Throttling Conditions to be Considered in Messaging Platform</title>
      <dc:creator>Madhavan kovai</dc:creator>
      <pubDate>Tue, 12 Apr 2022 07:59:34 +0000</pubDate>
      <link>https://dev.to/madhavankovai_31/azure-service-bus-throttling-conditions-to-be-considered-in-messaging-platform-4kj2</link>
      <guid>https://dev.to/madhavankovai_31/azure-service-bus-throttling-conditions-to-be-considered-in-messaging-platform-4kj2</guid>
      <description>&lt;p&gt;When architecting a solution in Azure, it is always important to keep in mind any limitations which might apply. These limitations can come not only from tier choice but also from technical restrictions. Here, we will have a look at the Service Bus throttling conditions, and how to handle them. When you are at the documentation page, it is clear there are several thresholds which will affect the maximum throughput achieved before running into throttling conditions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Queue/topic size&lt;/li&gt;
&lt;li&gt;Number of concurrent connections on a namespace&lt;/li&gt;
&lt;li&gt;Number of concurrent receive requests on a queue/topic/subscription entity&lt;/li&gt;
&lt;li&gt;Message size for a queue/topic/subscription entity&lt;/li&gt;
&lt;li&gt;Number of messages per transaction&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Azure Service Bus Throttling conditions
&lt;/h2&gt;

&lt;p&gt;Each of these conditions has its own characteristics and ways to handle it when it occurs. It is important to understand each of them, as this allows us to decide on the next steps and to set up a resilient architecture that minimizes risk. Let us have a look at each condition and at the options to mitigate it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Queue/topic size&lt;/strong&gt;&lt;br&gt;
This threshold stands for the maximum size of a Service Bus entity and is defined when creating the queue or topic. When messages are not retrieved from the entity, or are retrieved more slowly than they are sent in, the entity fills up until it reaches this size. Once the entity hits this limit, it rejects new incoming messages and throws a QuotaExceededException back to the sender. The maximum entity size can be 1, 2, 3, 4 or 5 GB for the basic or standard tier without partitioning, and 80 GB for the standard tier with partitioning enabled, as well as for the premium tier.&lt;/p&gt;

&lt;p&gt;When this occurs, one option is to add more message receivers, to ensure our entity can keep up with the ingested messages. If the entity is not under our control, another option would be to catch the exception, and use an exponential backoff retry mechanism. By implementing an exponential backoff, receivers get a chance to catch up with processing the messages in the queue. Another option is to have the receivers use prefetching, which allows higher throughput, clearing the messages in our entity at a faster rate.&lt;/p&gt;
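
&lt;p&gt;A minimal sketch of such an exponential backoff around a send, using the Microsoft.Azure.ServiceBus SDK; the attempt count and base delay are placeholders:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

public static class BackoffSender
{
    // Sends a message, backing off exponentially when the entity
    // rejects it because its size quota is exceeded.
    public static async Task SendWithBackoffAsync(
        IQueueClient client, Message message, int maxAttempts = 5)
    {
        for (var attempt = 1; attempt &amp;lt;= maxAttempts; attempt++)
        {
            try
            {
                await client.SendAsync(message);
                return;
            }
            catch (QuotaExceededException) when (attempt &amp;lt; maxAttempts)
            {
                // 10s, 20s, 40s, ... gives the receivers a chance
                // to catch up and drain the entity.
                var delay = TimeSpan.FromSeconds(10 * Math.Pow(2, attempt - 1));
                await Task.Delay(delay);
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;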

&lt;p&gt;&lt;strong&gt;Number of concurrent connections on a namespace&lt;/strong&gt;&lt;br&gt;
The second threshold discussed in this post is about the number of connections allowed to be open simultaneously to a Service Bus namespace. Once all of these are in use, our entity will reject subsequent connection requests, throwing a QuotaExceededException exception. To mitigate this condition it is essential to know that queues share their connections between senders and receivers. Topics, on the other hand, have a separate pool of connections for the senders and receivers. The protocol used for communication is also essential, as NetMessaging allows for 1000 connections, while AMQP gives us 5000 connections.&lt;/p&gt;

&lt;p&gt;This means that as the owner of the entities, there is the possibility to switch from queues to topics, effectively doubling the number of connections. Beware though: this only increases the total number of allowed connections, and if there is already a large number of senders or receivers, each side still only gets the maximum number of connections the chosen protocol allows. If the sender or receiver client is under our control, there is also the option to switch protocols, which could provide five times the number of connections when switching from NetMessaging to AMQP.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Number of concurrent receive requests on a queue/topic/subscription entity&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This threshold applies to the number of receive operations invoked on a Service Bus entity. Each entity can handle a maximum of 5000 concurrent receive requests. In the case of topic subscriptions, all subscriptions of the topic share these receive operations. Once the entity reaches this limit, it rejects any following receive requests, throwing a ServerBusyException back to the receiver, until the number of requests drops again. To handle this limitation, once again the option is there to implement an exponential backoff retry strategy while receiving messages from the Service Bus entity. An alternative would be to lower the total number of receivers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Message size for a queue/topic/subscription entity&lt;/strong&gt;&lt;br&gt;
Service Bus entities only allow a limited size for their incoming brokered messages. When trying to send in larger messages, the entity rejects them and throws a MessageSizeExceededException back to the sender. For the basic and standard tiers, the maximum message size is 256 KB, while for the premium tier it is 1 MB. When working with large messages, it is possible to split them and send the chunks over the line, re-assembling them on the receiver side. Another option is to implement a claim check pattern, in which case the large payload is stored at an alternative location and only a reference to it is sent in the brokered message.&lt;/p&gt;
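
&lt;p&gt;A minimal sketch of the claim check pattern, assuming Azure Blob Storage as the external payload store; the container and the property name are placeholders:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Microsoft.Azure.ServiceBus;

public static class ClaimCheck
{
    // Stores the large payload in Blob Storage and returns a small
    // reference message that stays well under the size limit.
    public static async Task&amp;lt;Message&amp;gt; StashPayloadAsync(
        BlobContainerClient container, byte[] payload)
    {
        var blobName = Guid.NewGuid().ToString();
        await container.UploadBlobAsync(blobName, new MemoryStream(payload));

        // The receiver uses this property to fetch the real payload.
        var reference = new Message(Encoding.UTF8.GetBytes(blobName));
        reference.UserProperties["payload-blob"] = blobName;
        return reference;
    }
}
&lt;/code&gt;&lt;/pre&gt;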

&lt;p&gt;&lt;strong&gt;Number of messages per transaction&lt;/strong&gt;&lt;br&gt;
When sending messages in transactions, the limit is 100 messages per transaction, for both synchronous and asynchronous calls. When trying to post more messages inside a single transaction, the entity throws a TransactionSizeExceededException back to the sender and rejects the complete transaction. The answer to this restriction is to make sure the calling code never exceeds 100 messages per transaction, as in the sketch below.&lt;/p&gt;

&lt;h2&gt;
  
  
  Retries
&lt;/h2&gt;

&lt;p&gt;For several of the throttling conditions, retrying is a plausible solution to ensure our client delivers its messages in the end. This is the case in any situation where time can help resolve the problem, for example through the retrieval of messages, the closing of connections, or a decreasing number of clients. However, it is important to note that retries will not help for all of these throttling conditions; when a message is too large, for example, retrying it will never result in success. Therefore, it is important to check the actual exception you receive when catching these. Depending on the type of exception, you can decide on the next steps.&lt;/p&gt;

&lt;p&gt;Furthermore, by default retries occur every 10 seconds. While this is acceptable on many occasions, it is often better to implement an exponential backoff retry mechanism instead. This mechanism retries with an increasing interval, for example first after 10 seconds, then 30 seconds, then 1 minute, and so on. It allows intermittent issues to resolve quickly, while the increasing interval between retries also helps when exceptions last longer.&lt;/p&gt;
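
&lt;p&gt;The Microsoft.Azure.ServiceBus SDK ships such a policy out of the box; a minimal sketch, where the intervals and the retry count are placeholders:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;using System;
using Microsoft.Azure.ServiceBus;

public static class ClientFactory
{
    // Creates a queue client whose operations are retried with an
    // exponentially growing delay between attempts.
    public static IQueueClient Create(string connectionString, string queueName)
    {
        var retryPolicy = new RetryExponential(
            minimumBackoff: TimeSpan.FromSeconds(10),
            maximumBackoff: TimeSpan.FromSeconds(60),
            maximumRetryCount: 5);

        return new QueueClient(
            connectionString, queueName, ReceiveMode.PeekLock, retryPolicy);
    }
}
&lt;/code&gt;&lt;/pre&gt;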

&lt;h2&gt;
  
  
  Monitoring
&lt;/h2&gt;

&lt;p&gt;When working with Service Bus, it is crucial to implement a suitable monitoring strategy. There are several options to do this, ranging from the built-in tooling in Azure to using a third-party product like &lt;a href="https://www.serverless360.com/"&gt;Serverless360&lt;/a&gt;. Each of these solutions has its strengths and weaknesses. When it comes to watching the Service Bus throttling state, &lt;a href="https://www.serverless360.com/microsoft-azure-monitoring"&gt;Azure Monitor&lt;/a&gt; has recently added new metrics which allow us to do just that. These capabilities are currently in preview and give several metrics to keep an eye on Service Bus namespaces and entities. One of these metrics is Throttled Requests, giving us insight into the number of requests throttled.&lt;/p&gt;

&lt;p&gt;Subsequently, it is even possible to set up alerts on top of these metrics, which you can accomplish through Azure Monitor by adding an alert rule for this scenario. These rules define when to trigger alerts and which actions to take.&lt;/p&gt;

&lt;p&gt;These actions range from sending out an email or SMS all the way to calling webhooks or invoking Logic Apps. The latter options give us the possibility to start custom workflows, notify specific teams, create a ticket, and more. For this, specify an action group with one or more actions in the alert rule. It is even possible to create multiple action groups for different alert types; for example, you can send high-level alerts to the operations team and service-specific alerts to the owners of that service within the organization.&lt;/p&gt;

&lt;p&gt;Serverless360 provides easy configuration and notification options for &lt;a href="https://www.serverless360.com/azure-service-bus-monitoring-management"&gt;Azure Service Bus monitoring&lt;/a&gt; and raises alerts for Service Bus throttling conditions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;When setting up an architecture with Azure services, it is always important to keep an eye on their capabilities and limits. In this case, we investigated the Service Bus throttling conditions. Often, mitigation is done by adjusting some of the properties of our clients or by implementing a retry strategy. Additionally, to keep clear insight into our environment, a monitoring strategy needs to be implemented for our situation, where alerts are triggered in case any of these throttling conditions occur.&lt;/p&gt;

</description>
      <category>azure</category>
    </item>
    <item>
      <title>What’s New in BizTalk Server 2020!</title>
      <dc:creator>Madhavan kovai</dc:creator>
      <pubDate>Tue, 22 Feb 2022 08:23:22 +0000</pubDate>
      <link>https://dev.to/madhavankovai_31/whats-new-in-biztalk-server-2020-1fnf</link>
      <guid>https://dev.to/madhavankovai_31/whats-new-in-biztalk-server-2020-1fnf</guid>
      <description>&lt;p&gt;At our annual event Integrate, Microsoft announced last year that BizTalk Server 2020 should have released in the first quarter of 2019. Only two weeks in the new year, Microsoft has released BizTalk Server 2020 (v3.13.717.0)! In this blog, we want to update you on what’s new in this version of the product.&lt;/p&gt;

&lt;h2&gt;
  
  
  No Changes in Available Editions
&lt;/h2&gt;

&lt;p&gt;Similar to earlier versions of the product, BizTalk Server 2020 comes in 4 flavors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Microsoft BizTalk Server 2020 Developer&lt;/li&gt;
&lt;li&gt;Microsoft BizTalk Server 2020 Branch&lt;/li&gt;
&lt;li&gt;Microsoft BizTalk Server 2020 Standard&lt;/li&gt;
&lt;li&gt;Microsoft BizTalk Server 2020 Enterprise&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  New Features in BizTalk Server 2020
&lt;/h2&gt;

&lt;p&gt;During Integrate 2019, Paul Larsen, the Program Manager responsible for BizTalk Server, already highlighted which new features would be coming in BizTalk Server 2020. You can read a recap of Paul’s session here.&lt;/p&gt;

&lt;p&gt;As you can understand from Paul’s session, we were already expecting platform alignment. More importantly, BizTalk Server 2020 contains all features from BizTalk Server 2016, including the 3 Feature Packs, which have been released.&lt;/p&gt;

&lt;h3&gt;
  
  
  Audit Log
&lt;/h3&gt;

&lt;p&gt;Until now, no auditing was available for operations performed against the BizTalk environment. Luckily, from BizTalk Server 2020 onwards, the Administration console provides auditing of operations. To use this feature, you will have to turn it on in the Group Settings screen.&lt;/p&gt;

&lt;p&gt;The auditing data is stored in the Management database, and you can access it via the Operational Services. At the moment, the supported operations are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating, updating and deleting ports&lt;/li&gt;
&lt;li&gt;Suspending, resuming and terminating service instances&lt;/li&gt;
&lt;li&gt;Adding, updating and removing BizTalk applications&lt;/li&gt;
&lt;li&gt;Importing binding files&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Although this is a good start, multiple important operations are still missing; think of starting/stopping of ports, orchestrations, and host instances. We hope that Microsoft adds such operations in the future.&lt;/p&gt;

&lt;h3&gt;
  
  
  Support of .NET Framework v4.7
&lt;/h3&gt;

&lt;p&gt;Earlier, Microsoft announced that .NET Framework 4.8 would also be supported.&lt;/p&gt;

&lt;h3&gt;
  
  
  New Read-Only Operator role
&lt;/h3&gt;

&lt;p&gt;The BizTalk Operator role has changed. Where before it was possible to perform actions like stopping/starting ports and changing the port configuration, this role is now completely read-only. This can certainly be useful for facilitating DevOps scenarios.&lt;/p&gt;

&lt;p&gt;When accessing the BizTalk Server 2020 Admin console as a BizTalk Operator, all the operations are still visible. However, when an operator tries, for example, to stop a port, an error message shows up and the operation is blocked. Unfortunately, the error message does not always make clear that the operation is blocked due to insufficient permissions.&lt;/p&gt;

&lt;h3&gt;
  
  
  BAM Portal is Deprecated
&lt;/h3&gt;

&lt;p&gt;Microsoft has decided to deprecate the BAM portal, probably because of the new capabilities to push data to Azure. In case you are using BAM and you are considering upgrading to BizTalk Server 2020, you can still install and configure the BAM portal from the BizTalk installer and configuration wizard; you won’t be left in the dark.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deprecated and Removed Adapters
&lt;/h3&gt;

&lt;p&gt;There are multiple updates in this area as well. For example, Microsoft has deprecated the POP3 and SMTP adapters in favor of the Office 365 adapters, which appeared in the BizTalk Server 2016 Feature Packs. Next, the old &lt;a href="https://www.biztalk360.com/blog/biztalk-server-2020-migration-how-to-deal-with-deprecated-or-removed-adapters/"&gt;BizTalk SQL adapter&lt;/a&gt; has been removed and replaced by the WCF-SQL adapter, and the JDE OneWorld and WCF-NetTcpRelay adapters have both been deprecated.&lt;/p&gt;

&lt;h3&gt;
  
  
  BizTalk360 Support of BizTalk Server 2020
&lt;/h3&gt;

&lt;p&gt;The BizTalk360 product team has eagerly followed the developments around BizTalk Server 2020, and we are glad that BizTalk Server 2020 has been released. For BizTalk360, it is evident that the product must support all recent versions of BizTalk Server.&lt;/p&gt;

&lt;p&gt;That’s why we made the latest released versions of BizTalk360 and Atomic Scope compatible with BizTalk Server 2020!&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;BizTalk Server 2020 has been released. This is the 11th version of the product since its inception in 2000. Although we have already seen most of the features of this release (in the BizTalk Server 2016 Feature Packs), it is still a useful release. Especially if you are still on a version older than BizTalk Server 2016, it will certainly be worth upgrading or migrating to this release.&lt;/p&gt;

</description>
      <category>biztalk</category>
    </item>
    <item>
      <title>View the History of Jobs with SQL Jobs History Feature</title>
      <dc:creator>Madhavan kovai</dc:creator>
      <pubDate>Mon, 21 Feb 2022 08:07:53 +0000</pubDate>
      <link>https://dev.to/madhavankovai_31/view-the-history-of-jobs-with-sql-jobs-history-feature-30e</link>
      <guid>https://dev.to/madhavankovai_31/view-the-history-of-jobs-with-sql-jobs-history-feature-30e</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Administrators schedule a lot of SQL jobs on a day-to-day basis and must constantly monitor their success and failure statuses. In BizTalk360, we have already provided the capability for users to perform operations and monitoring on SQL servers. Now there is a new feature called SQL Jobs History to view the history of the jobs that have been scheduled on the server agent, and this article holds more details about the new feature.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why SQL Jobs History in BizTalk360
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;SQL Jobs History gives a clear overall picture of the status of the scheduled jobs.&lt;/li&gt;
&lt;li&gt;BizTalk360 becomes a one-stop solution for administrators to handle operations and monitoring concerning SQL server agents.&lt;/li&gt;
&lt;li&gt;It helps the user by providing a view of the job history and reduces the navigation between different applications.&lt;/li&gt;
&lt;li&gt;The created jobs are displayed in BizTalk360, and the job history can be viewed by clicking the eye icon, both in SQL Server Jobs and in SQL Jobs History.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Jobs in SSMS (SQL Server Management Studio)
&lt;/h2&gt;

&lt;p&gt;Jobs that are created in SQL Server Management Studio, and their history, will be displayed in BizTalk360. The below section highlights how to create jobs in SSMS (SQL Server Management Studio).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Right-click the SQL Server Agent, select New, and create a job.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add the job name, category, and description, and click the OK button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To monitor the jobs, we must first start the SQL Server Agent.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Renaming jobs can be done in SSMS (SQL Server Management Studio).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We can also view the execution history of a job by right-clicking it and selecting View History from the context menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Green indicates the job executed successfully, and red indicates the job failed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key components of SQL Server Agent
&lt;/h2&gt;

&lt;p&gt;The SQL Server Agent service consists of the following components, which together define a task to be performed and describe the success or failure of that task.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;General&lt;/strong&gt; – defines the name of the job and its owner on the SQL server; we can also categorize the job and give it a description.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Steps&lt;/strong&gt; – a job can contain one or multiple steps. Each step executes a specific set of instructions and contains its own task; the next step is executed based on the status of the previous step.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Schedules&lt;/strong&gt; – jobs can be scheduled hourly, daily, weekly, or monthly, and the schedule type can also be set.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Alerts&lt;/strong&gt; – alerts are triggered based on the job’s execution.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notifications&lt;/strong&gt; – the user can set up email notifications to get updates about the result of the job execution. A notification is raised when a job fails so that appropriate action can be taken.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The SQL Jobs History tab in BizTalk360 helps the user view the history of jobs on a single page, rather than going to SSMS (SQL Server Management Studio) for the same details. The tab holds the recent job runs, listed according to the selected max records. SQL Jobs History can be accessed in BizTalk360 via Operations -&amp;gt; Manage Infrastructure -&amp;gt; SQL Server Instances -&amp;gt; SQL Jobs History. The detailed history can be viewed by clicking the eye icon in each row.&lt;/p&gt;
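
&lt;p&gt;Under the hood, this history lives in the msdb system tables maintained by SQL Server Agent. The sketch below shows the standard lookup; BizTalk360’s actual implementation is not shown here, and the connection string is a placeholder:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;using System;
using Microsoft.Data.SqlClient;

// Reads recent job run history from msdb
// (run_status: 1 = succeeded, 0 = failed).
const string query = @"
    SELECT TOP (50) j.name, h.step_id, h.step_name,
           h.run_status, h.run_date, h.run_time, h.message
    FROM msdb.dbo.sysjobhistory AS h
    JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
    ORDER BY h.instance_id DESC;";

using var connection = new SqlConnection("your-connection-string-here");
connection.Open();
using var command = new SqlCommand(query, connection);
using var reader = command.ExecuteReader();
while (reader.Read())
{
    var jobName = reader["name"];
    var stepId = reader["step_id"];
    var status = reader["run_status"];
    Console.WriteLine($"{jobName} step {stepId}: status {status}");
}
&lt;/code&gt;&lt;/pre&gt;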

&lt;h2&gt;
  
  
  SQL Jobs History
&lt;/h2&gt;

&lt;p&gt;In the previous version of BizTalk360, only the jobs were displayed under SQL Server Jobs. If a user needed to view the history of a job, they had to navigate to SSMS (SQL Server Management Studio) to view it. To overcome this, we have implemented SQL Jobs History, which makes it easier for the user to audit the full history of the jobs instead of viewing it in SSMS (SQL Server Management Studio).&lt;/p&gt;

&lt;h2&gt;
  
  
  SQL Jobs History details blade
&lt;/h2&gt;

&lt;p&gt;The point of showing the general details is that the user can view the primary details of the job at a glance. BizTalk360 provides a user-friendly structure compared to the Log File Viewer in SSMS (SQL Server Management Studio).&lt;/p&gt;

&lt;p&gt;General tab:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Job Name&lt;/li&gt;
&lt;li&gt;Owner Name&lt;/li&gt;
&lt;li&gt;Run Status&lt;/li&gt;
&lt;li&gt;Category&lt;/li&gt;
&lt;li&gt;Created Date Time&lt;/li&gt;
&lt;li&gt;Modified Date Time&lt;/li&gt;
&lt;li&gt;Last Modified Date Time&lt;/li&gt;
&lt;li&gt;Description&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the Steps tab, only the step execution history is displayed, and all the step details can be viewed in a single tab.&lt;/p&gt;

&lt;p&gt;Step tab:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Step Id&lt;/li&gt;
&lt;li&gt;Step Name&lt;/li&gt;
&lt;li&gt;Run Status&lt;/li&gt;
&lt;li&gt;Last Executed Date Time&lt;/li&gt;
&lt;li&gt;Description&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All the available jobs are listed under SQL Server Jobs. For better usability, a grid crawler has been implemented: after hitting the eye icon, we can move through the details of each job by clicking the Next and Previous arrows, instead of opening each job’s history separately. Jobs with Succeeded, Failed, Cancelled, and Unknown statuses are displayed.&lt;/p&gt;

&lt;h2&gt;
  
  
  SQL Server Jobs enhancements
&lt;/h2&gt;

&lt;p&gt;Parent and child job details can be viewed in the SQL Jobs History blade by expanding the parent job.&lt;/p&gt;

&lt;p&gt;The SQL Jobs History blade holds the steps of the job that ran, for example step 1, step 2, and so on.&lt;/p&gt;

&lt;p&gt;Succeeded jobs are shown in green and failed jobs are shown in red in the SQL Jobs History blade.&lt;/p&gt;

&lt;p&gt;On clicking the eye icon in the child grid, we can view the below step details in the SQL Jobs History Details blade:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Step Id&lt;/li&gt;
&lt;li&gt;Step Name&lt;/li&gt;
&lt;li&gt;Run Status&lt;/li&gt;
&lt;li&gt;Last Executed Date Time&lt;/li&gt;
&lt;li&gt;Description&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;SQL Jobs History comes with different filters to narrow down to specific data. Many jobs run at the same time, so if the user needs to audit a particular job, it is hard to search for and time-consuming. To handle such challenges, we can use the Filter By option in SQL Jobs History.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In earlier versions, there was no option to view the history of jobs, which made it difficult for users looking for history details, as they had to go to SSMS (SQL Server Management Studio) every time to view them. By adding the SQL Jobs History feature, the robustness of BizTalk360 has increased, and it helps the user with all the jobs-related data in a single place. We believe this new feature will add value to your day-to-day tasks by reducing redundant effort for those who need to view job history. We have a free trial for you, so give it a try!&lt;/p&gt;

</description>
      <category>biztalk</category>
    </item>
    <item>
      <title>Why did we build FTP/SFTP Monitoring for BizTalk Server?</title>
      <dc:creator>Madhavan kovai</dc:creator>
      <pubDate>Wed, 09 Feb 2022 08:41:23 +0000</pubDate>
      <link>https://dev.to/madhavankovai_31/why-did-we-build-ftpsftp-monitoring-for-biztalk-server-2kbk</link>
      <guid>https://dev.to/madhavankovai_31/why-did-we-build-ftpsftp-monitoring-for-biztalk-server-2kbk</guid>
      <description>&lt;h2&gt;
  
  
  Why do we need this feature?
&lt;/h2&gt;

&lt;p&gt;In the day-to-day activities of a BizTalk administrator, you might come across integrations where FTP sites are used for receiving and transmitting messages. FTP sites are often used for cross-platform integrations. For example, when you have an SAP system on Unix that has to be integrated, via BizTalk Server, with other systems, you might use FTP for receiving and transmitting messages.&lt;/p&gt;

&lt;p&gt;SFTP &amp;amp; FTPS are just the secured version of FTP with advanced transport encryption mechanisms, so your end-to-end data transmission is secure and safe.&lt;/p&gt;

&lt;p&gt;To keep the business process going, it can be of vital importance that the FTP/SFTP sites are online and the messages are being picked up. So, when a BizTalk administrator needs to be constantly aware of whether the FTP/SFTP sites are online and working properly, the administrator needs to monitor the sites and the activities which take place on these sites.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the current challenges?
&lt;/h2&gt;

&lt;p&gt;BizTalk Server offers no monitoring capabilities, neither for Receive Locations/Send Ports nor for endpoints like FTP, SFTP and FTPS sites. So, using just the out-of-the-box features of BizTalk Server, a BizTalk administrator will have to manually check whether the FTP sites are online and whether all (appropriate) files are being picked up for further processing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Manual monitoring
&lt;/h3&gt;

&lt;p&gt;This kind of manual monitoring can be quite cumbersome and time-consuming. The administrator will probably need multiple pieces of software to perform these tasks: think of, for example, the BizTalk Administration console to check whether the Receive Locations/Send Ports are up, and some FTP client to check whether files are being picked up.&lt;/p&gt;

&lt;p&gt;It is obvious that this is not a very efficient scenario, which could easily be automated by setting up monitoring.&lt;/p&gt;

&lt;h3&gt;
  
  
  Maintaining scripts for monitoring FTP sites
&lt;/h3&gt;

&lt;p&gt;To reduce their workload, we see that BizTalk administrators create their own scripts to monitor FTP sites and all kinds of other resources. Although such scripts can certainly be of help, we still think they do not fully solve the problem.&lt;/p&gt;

&lt;p&gt;For example, often these kinds of scripts need maintenance when FTP sites need to be added, changed, or deleted from monitoring. This kind of task can be easily forgotten.&lt;/p&gt;

&lt;p&gt;Also from a knowledge transfer perspective, it’s easy to forget to update new colleagues about the existence of this kind of script, as they will probably be installed on some (monitoring) server.&lt;/p&gt;

&lt;p&gt;Another challenge with solving this kind of problem with scripts is that not every administrator is capable of writing such scripts, which makes knowledge transfer even harder.&lt;/p&gt;

&lt;p&gt;To keep the overview, we think that it is easier to use software, like BizTalk360, to have everything in one easily accessible place, with good visibility of all the features/capabilities, fine-grained security/auditing, and without the need to maintain custom scripts, etc…&lt;/p&gt;

&lt;h2&gt;
  
  
  How does BizTalk360 solve this problem?
&lt;/h2&gt;

&lt;p&gt;With &lt;a href="https://www.biztalk360.com/"&gt;BizTalk360&lt;/a&gt;, we make monitoring of FTP/SFTP/FTPS sites a lot easier. The product has offered monitoring of Receive Locations and Send Ports for a very long time, and for some time now BizTalk360 has also offered monitoring of the physical FTP/SFTP/FTPS endpoints.&lt;/p&gt;

&lt;p&gt;We wanted to make setting up this kind of endpoint monitoring as seamless as possible and therefore we simply show all the ports in the current BizTalk group which make use of the FTP/SFTP/FTPS adapter.&lt;/p&gt;

&lt;p&gt;In BizTalk360, you can find FTP monitoring under Monitoring =&amp;gt; Manage Mapping =&amp;gt; File Locations (File, FTP, SFTP).&lt;/p&gt;

&lt;p&gt;Next, you can set up monitoring rules based on File Count and Directory Size and have BizTalk360 send Warning or Error notifications through the notification channels which are configured on the associated alarm.&lt;/p&gt;

&lt;p&gt;Of course, besides the greater-than-or-equals operator, also other common operators are available.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As a final point, we see from time to time that administration teams maintain an administrator’s handbook, which contains all the tasks a (BizTalk) administrator should take care of. We think that by using software like BizTalk360, we can reduce the number of pages in such handbooks, as the kinds of scripts we mentioned no longer have to be described there.&lt;/p&gt;

&lt;p&gt;That description could be replaced by, for example, a general guideline on how FTP sites should be monitored and which monitoring rules to set up in BizTalk360.&lt;/p&gt;

&lt;p&gt;As a result, we hope to make the day-to-day life of BizTalk administrators who need to monitor the well-being of FTP sites a little bit easier, so the team can focus on the more exciting parts of the job instead of constantly having to update their handbooks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.biztalk360.com/"&gt;BizTalk360&lt;/a&gt; also offers an advanced monitoring capability called “Data Monitoring” which allows monitoring the traffic/volume of messages going through the ports for a given period, Eg: expected 50 PO orders from our partner via FTP/SFTP.&lt;/p&gt;

</description>
      <category>biztalk</category>
      <category>biztalkserver</category>
    </item>
    <item>
      <title>Insurance Claim Process Managed and Monitored with Serverless360</title>
      <dc:creator>Madhavan kovai</dc:creator>
      <pubDate>Thu, 21 Oct 2021 11:18:20 +0000</pubDate>
      <link>https://dev.to/madhavankovai_31/insurance-claim-process-managed-and-monitored-with-serverless360-1e48</link>
      <guid>https://dev.to/madhavankovai_31/insurance-claim-process-managed-and-monitored-with-serverless360-1e48</guid>
      <description>&lt;p&gt;In recent times cloud computing has played a significant role in various domains. In this blog, we will look at how Serverless360 helps these domains fulfill their business needs. We will explore a global insurance provider’s business need with regional offices in several territories and partners in many countries who need to manage policies and contracts and submit claims from different countries to the customer to reduce the processing overhead and maximize automation opportunities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Business Solution
&lt;/h2&gt;

&lt;p&gt;The business problem for this use case was to consolidate and centralize the processing of insurance claims into a new solution. The need was to provide a centralized source of truth for claims and centralized partner management, followed by processing, and then to offload each claim to be handled locally by the regional office under additional local market rules.&lt;/p&gt;

&lt;p&gt;The solution would provide a single point of integration for partners who would work in multiple territories and centralized management of claims at the first point of contact.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical Solution
&lt;/h2&gt;

&lt;p&gt;The technical solution for this project was to use Microsoft Dynamics and Power Platform eco-system as the heart of the solution for data management with integration from partners via multiple channels such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Azure API Management for API&lt;/li&gt;
&lt;li&gt;Microsoft Power Apps for manual entry by staff for paper claims&lt;/li&gt;
&lt;li&gt;Microsoft Azure App Service for Partner Self Service Portal&lt;/li&gt;
&lt;li&gt;Microsoft Logic Apps for EDI and cloud Integration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The platform implementing this solution looked like the following diagram.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6z5ZtPf9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/txkxvb67d2yujdtfyqot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6z5ZtPf9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/txkxvb67d2yujdtfyqot.png" alt="Image description" width="480" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While Dynamics and Power Platform form the core of the solution in terms of the system of record and engagement, Microsoft Azure provided the platform for the custom components needed for the main integrated parts of the system.&lt;/p&gt;

&lt;p&gt;The key to success for this solution was the integration story that made it easy for partners to integrate with the platform.  The partners we would need to integrate with would have different technology capabilities ranging from small partners with minimal technology capability to big multi-national partners with advanced technology platforms.&lt;/p&gt;

&lt;p&gt;For those small low-tech partners, we provided options for the administrators to upload invoices via a Self-Service Portal as PDF or manual entry or to send paper invoices that our internal teams could process.&lt;/p&gt;

&lt;p&gt;We offered a range of options for other partners, such as integration via EDI style interfaces where messages or batches could be submitted, which were then processed via Logic Apps into the CRM system. Later, response batches would be returned.&lt;/p&gt;

&lt;p&gt;Some partners had API-based systems. We offered an API that allowed those partners to submit data via an API that could leverage Azure API Management, Azure Functions and Service Bus to process submissions and integrate them into the CRM system.&lt;/p&gt;

&lt;p&gt;With an extensive partner network and a wide range of technical capabilities, as a service provider, the ability to offer several different ways to integrate with the platform makes it accessible. It opens the opportunity to do business with all of the partners in the network.  One of the aims would be to work with and encourage partners to move from the low-tech options to more automated approaches where possible, resulting in a more seamless experience and lower operating costs and quicker processing times.&lt;/p&gt;

&lt;p&gt;In the solution, whichever channel the data comes in through, claims are processed by an engine developed with Azure Functions. This setup allowed more advanced and complex rules to be created than could be implemented with Dynamics core functionality. It also allowed us to optimize and control the processing as needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Management &amp;amp; Monitoring Challenge
&lt;/h2&gt;

&lt;p&gt;The implemented solution needed to address complex requirements, and Microsoft Azure and Microsoft Power Platforms provided an excellent technology platform to deliver the solutions.&lt;/p&gt;

&lt;p&gt;With over 300 Azure resources making up the overall solution, the challenge is how to lower the support burden, so that the support team can manage the solution without everyone needing to be an Azure expert, and to make the solution manage itself where possible.&lt;/p&gt;

&lt;p&gt;The management and monitoring challenge comes from multiple perspectives. There are challenges like the management, monitoring and tracking of individual transactions being processed: where are they, when did they get processed, and are there errors?&lt;/p&gt;

&lt;p&gt;There are also questions like “is the system healthy?” and “have we had any downtime?”, and those technical and service-level questions need answering too.&lt;/p&gt;

&lt;p&gt;The solution chose to include Serverless360 as a monitoring and operations portal so that we can allow safe and secure access to non-Azure experts, let level 1 and level 2 support operators perform the tasks needed to maintain the system, and allow business users who fall under the super users to manage business transactions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless360 Business Activity Monitoring
&lt;/h2&gt;

&lt;p&gt;Serverless360 Business Activity Monitoring (BAM) is a crucial part of the solution. We implement BAM in the claims processing area, where we receive thousands of claims per day and have many key background processes acting on the data received from partners.&lt;/p&gt;

&lt;p&gt;Serverless360 BAM can accept telemetry about business milestones from the technologies within the solution by performing &lt;a href="https://www.serverless360.com/distributed-tracing-with-serverless360-bam"&gt;distributed tracing&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We created BAM processes to collect and return claims from partners and implemented BAM business processes covering background processes like the claims processing engine.&lt;/p&gt;

&lt;p&gt;The below diagram shows how you could use BAM to help operate one of the integration processes from the claims platform, where data is integrated into other systems within the business.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f-Zh7HUX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w6g7452vj87zr899epjf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f-Zh7HUX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w6g7452vj87zr899epjf.png" alt="Image description" width="602" height="619"&gt;&lt;/a&gt;&lt;br&gt;
With the claim processing engine, using BAM provides a business-friendly console that allows business users to track the status of claims that have been received and see where they are within the claims processing cycle. The business users can now self-serve on the integration solution to check claims status and deal with business process problems without escalating support tickets for many common data and validation issues.&lt;/p&gt;

&lt;p&gt;With the effective use of BAM, the number of support tickets to the IT help desk is a lot lower, and business users can identify and handle issues experienced by partners more quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless360 Business Applications
&lt;/h2&gt;

&lt;p&gt;One of the critical challenges for a distributed solution is understanding how the distributed components work together to solve a business problem. With Serverless360 Business Applications, we can group those Azure resources which work together to solve a business problem. We grouped the specific Azure Functions, Service Bus Queues that make up our claim processing engine and the API’s and Logic Apps, which provide the different partner integration solutions.&lt;/p&gt;

&lt;p&gt;Serverless360’s Business Applications concept allows us to teach the support users how to manage each business solution and gives the support user least-privilege access to do the support actions we want them to do, without needing extensive experience of the entire Azure portal and without the risk that the support user accidentally does the wrong thing.&lt;/p&gt;

&lt;p&gt;We can allow the support user to act on most day-to-day support-related scenarios with a basic understanding of Azure and focus on supporting our solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Proactive Monitoring
&lt;/h2&gt;

&lt;p&gt;Serverless360 Business Applications also allows us to create monitoring for the resources which make up the business solution.&lt;/p&gt;

&lt;p&gt;Serverless360 indicates when there might be a problem that affects the operation of the business processes.&lt;/p&gt;

&lt;p&gt;We can use features like Service Map to zone in on the area where the problem may be to get to a resolution quicker if there is an issue.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;The net result of the &lt;a href="https://www.serverless360.com/"&gt;Serverless360&lt;/a&gt; application is that we have lowered the cost of support for our solution and provided our users with a better support experience, by reducing the time to resolution for support cases and significantly reducing the number of tickets that need escalation to the development team. The self-service aspects provided by Serverless360 BAM also inspire confidence in our solution for the business, which feels engaged and has visibility of the health and workings of the system.&lt;/p&gt;

</description>
      <category>azure</category>
    </item>
  </channel>
</rss>
