<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Gregory Savvidis</title>
    <description>The latest articles on DEV Community by Gregory Savvidis (@gregory_sav).</description>
    <link>https://dev.to/gregory_sav</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3125817%2F521a25df-a09f-4f42-a37d-b229ac8c0905.jpeg</url>
      <title>DEV Community: Gregory Savvidis</title>
      <link>https://dev.to/gregory_sav</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gregory_sav"/>
    <language>en</language>
    <item>
      <title>Hands-on Monitoring and Alerting guide for Azure resources</title>
      <dc:creator>Gregory Savvidis</dc:creator>
      <pubDate>Wed, 18 Jun 2025 09:04:25 +0000</pubDate>
      <link>https://dev.to/agileactors/hands-on-monitoring-and-alerting-guide-for-azure-resources-3a12</link>
      <guid>https://dev.to/agileactors/hands-on-monitoring-and-alerting-guide-for-azure-resources-3a12</guid>
      <description>&lt;p&gt;When talking about software quality and detecting flaws early, what immediately comes to mind is writing tests and enforcing them as soon as possible in the CI/CD process. Overall, quality is about ensuring reliability throughout the entire implemented solution. This can be tightly coupled with monitoring resources, tracking performance and setting up early alerting mechanisms. By proactively detecting issues like high CPU usage, memory leaks, or slow response times, teams can prevent failures before they impact users.&lt;/p&gt;

&lt;p&gt;In this article we are going to focus on aspects of quality that do not necessarily require writing and executing tests, but instead utilize the metrics and logs provided directly by the Azure Portal and visualize them in an Azure Workbook, an interactive and customizable data visualization tool within the Azure Portal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting the scene
&lt;/h2&gt;

&lt;p&gt;Imagine you're part of a DevOps team responsible for maintaining an application hosted on Azure. Before going to production, you would like to be able to detect slowdowns and occasional service disruptions early. Without a clear picture of the system's health and performance, it's difficult to pinpoint the cause and respond quickly. This lack of visibility and proactive alerting leads to longer downtime and frustrated customers. To address this, we need a robust monitoring and alerting strategy using Azure's built-in tools - starting with identifying where the problem lies, setting up monitoring for relevant metrics and building alerting rules that help us react before users are affected.&lt;/p&gt;

&lt;p&gt;Let's say we're responsible for maintaining an Orders API, which handles incoming HTTP requests from a web frontend app to process customer orders. It's hosted on Azure App Service and backed by an Azure SQL Database, with Application Insights and/or a Log Analytics workspace enabled. Recently, support tickets have reported that requests to the &lt;code&gt;/submit-order&lt;/code&gt; endpoint occasionally take too long or fail, especially during high traffic periods.&lt;/p&gt;

&lt;p&gt;To diagnose and resolve this, we want to answer the following questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is the API experiencing high response times or failures?&lt;/li&gt;
&lt;li&gt;What's causing the slowdown - CPU/memory pressure, database latency, or something else?&lt;/li&gt;
&lt;li&gt;Would it be useful to set up alerts notifying us as soon as performance degrades?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Our approach will follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Monitor metrics&lt;/strong&gt; to understand the API's real-time performance (e.g., response time, request count, error rate)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enable Diagnostic Logs&lt;/strong&gt; to capture deeper insights into failures and long-term trends using Log Analytics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use KQL Queries&lt;/strong&gt; to investigate patterns and detect anomalies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create a Workbook&lt;/strong&gt; to visualize the data in a centralized, interactive dashboard&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Define Alerts&lt;/strong&gt; with thresholds that will notify us when performance degrades or errors spike.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This structured approach ensures we're not just reacting to problems, but actively detecting and preventing them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitor metrics
&lt;/h2&gt;

&lt;p&gt;To begin troubleshooting the performance issues on the &lt;code&gt;/submit-order&lt;/code&gt; endpoint, we start by examining the available metrics provided by the Azure App Service that hosts our Orders API. These metrics give us a snapshot of how the application is performing in real time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Navigate to Metrics in Azure Portal
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Go to the Azure Portal&lt;/li&gt;
&lt;li&gt;In the search bar, type and select your &lt;strong&gt;App Service&lt;/strong&gt; (e.g., orders-api-prod)&lt;/li&gt;
&lt;li&gt;In the left-hand menu under &lt;strong&gt;Monitoring&lt;/strong&gt;, click &lt;strong&gt;Metrics&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpid6zy0lyyrydy9wypt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpid6zy0lyyrydy9wypt.png" alt="Image description" width="261" height="617"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After clicking on Metrics, we can choose the metric we want to monitor and see a graphical representation of it. For example, we can select Response time from the dropdown and get the following graph:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fax9nkt5nmv3ybkvcf0d2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fax9nkt5nmv3ybkvcf0d2.png" alt="Image description" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Other metrics can be utilized to address user complaints and align with our system architecture. For example, we can choose from the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Server response time&lt;/strong&gt; - Tells us how long it takes to respond to HTTP requests&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Requests&lt;/strong&gt; - Shows the number of incoming requests. Spikes here may correlate with performance issues&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HTTP 5xx errors&lt;/strong&gt; - Indicates server-side errors, which can be tied to crashes or overload&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CPU Percentage&lt;/strong&gt; - Helps determine if the instance is under CPU pressure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory Working Set&lt;/strong&gt; - Tracks memory usage over time&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Monitor logs
&lt;/h2&gt;

&lt;p&gt;While metrics give us a real-time snapshot of the Orders API's performance, Application Insights and/or Log Analytics workspace logs provide a deeper and more granular view of what's actually happening inside the application. Logs can help answer questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Which specific requests are failing and why?&lt;/li&gt;
&lt;li&gt;Are there specific error messages or exceptions being thrown?&lt;/li&gt;
&lt;li&gt;How is the backend database responding?&lt;/li&gt;
&lt;li&gt;What patterns can we identify over time?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Access and Explore Logs
&lt;/h2&gt;

&lt;p&gt;Once logging is enabled and data starts flowing into your workspace:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to your &lt;strong&gt;Log Analytics Workspace&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click on &lt;strong&gt;Logs&lt;/strong&gt; under &lt;strong&gt;Monitoring&lt;/strong&gt; (similar to the Metrics image shown earlier)&lt;/li&gt;
&lt;li&gt;In the query editor, you'll see several &lt;strong&gt;predefined tables&lt;/strong&gt; such as:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;AppRequests&lt;/code&gt; – HTTP request data (e.g., method, URL, duration)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AppExceptions&lt;/code&gt; – Exceptions thrown by your app&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AppTraces&lt;/code&gt; – Custom traces or log messages from your code&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AppDependencies&lt;/code&gt; – External calls, e.g., to databases or APIs&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the query editor we use &lt;strong&gt;&lt;a href="https://learn.microsoft.com/en-us/kusto/query/?view=microsoft-fabric" rel="noopener noreferrer"&gt;Kusto Query Language (KQL)&lt;/a&gt;&lt;/strong&gt;, a read-only query language optimized for fast and efficient data exploration, enabling users to filter, aggregate and visualize large datasets easily.&lt;/p&gt;

&lt;p&gt;Here are a few useful KQL queries to start exploring what's happening behind the scenes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Slow Requests to &lt;code&gt;/submit-order&lt;/code&gt;:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fah3crfnocyqrz4w6jy6h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fah3crfnocyqrz4w6jy6h.png" alt="Image description" width="800" height="207"&gt;&lt;/a&gt;&lt;/p&gt;
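
&lt;p&gt;A query along these lines can surface slow requests in text form (a sketch: the table and column names follow the workspace-based Application Insights schema, and the 1-second threshold is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AppRequests
| where Url endswith "/submit-order"
| where DurationMs &amp;gt; 1000
| project TimeGenerated, Name, Url, DurationMs, ResultCode
| order by DurationMs desc
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;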

&lt;ul&gt;
&lt;li&gt;Count of Failed Requests:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcr9c6jx7nri9aro2ipll.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcr9c6jx7nri9aro2ipll.png" alt="Image description" width="600" height="136"&gt;&lt;/a&gt;&lt;/p&gt;
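
&lt;p&gt;For reference, a KQL query along these lines counts failed requests over time (a sketch: the table and column names follow the workspace-based Application Insights schema, and the one-hour bin size is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AppRequests
| where Success == false
| summarize FailedCount = count() by bin(TimeGenerated, 1h)
| order by TimeGenerated asc
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;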

&lt;ul&gt;
&lt;li&gt;Top Exception Messages:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferf89amsuw3p7idjf4fa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferf89amsuw3p7idjf4fa.png" alt="Image description" width="594" height="128"&gt;&lt;/a&gt;&lt;/p&gt;
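
&lt;p&gt;A sketch for surfacing the most frequent exceptions (assuming the workspace-based &lt;code&gt;AppExceptions&lt;/code&gt; schema, where &lt;code&gt;ProblemId&lt;/code&gt; identifies each exception type):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AppExceptions
| summarize ExceptionCount = count() by ProblemId
| top 5 by ExceptionCount
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;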

&lt;h2&gt;
  
  
  Configure Diagnostic Settings
&lt;/h2&gt;

&lt;p&gt;If the AppExceptions table, or any other necessary table, is not available, we can enable Diagnostic settings to send these logs to a specific &lt;strong&gt;Log Analytics Workspace&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To start capturing logs, we need to ensure our App Service is sending data to a &lt;strong&gt;Log Analytics Workspace&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to your &lt;strong&gt;Orders API App Service&lt;/strong&gt; in the Azure Portal&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Monitoring&lt;/strong&gt;, click &lt;strong&gt;Diagnostic settings&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Add diagnostic setting&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Give your setting a name and check:

&lt;ul&gt;
&lt;li&gt;Application Logging&lt;/li&gt;
&lt;li&gt;Request Logs&lt;/li&gt;
&lt;li&gt;Failed request tracing&lt;/li&gt;
&lt;li&gt;AppServiceHTTPLogs&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Send to Log Analytics Workspace&lt;/strong&gt; and choose an existing workspace or create a new one&lt;/li&gt;
&lt;li&gt;Click Save&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note: Logs can differ depending on the resource type. For App Services, HTTP logs and application logs are particularly useful.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once the Diagnostic settings are in place, the steps are identical to the previous case, where we run KQL queries against the Log Analytics workspace.&lt;/p&gt;
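
&lt;p&gt;Once data is flowing, a query along these lines can highlight server-side errors per endpoint (a sketch: the column names come from the &lt;code&gt;AppServiceHTTPLogs&lt;/code&gt; table, and the one-hour bin size is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AppServiceHTTPLogs
| where ScStatus &amp;gt;= 500
| summarize ErrorCount = count() by CsUriStem, bin(TimeGenerated, 1h)
| order by TimeGenerated asc
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;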

&lt;h2&gt;
  
  
  Workbooks
&lt;/h2&gt;

&lt;p&gt;Understanding metrics, logs, and queries is the first step in enabling Azure resource monitoring. Once this foundation is established, we can analyze individual resources by visiting them and monitoring their behavior. However, for a more comprehensive and centralized approach, it is essential to consolidate metrics and logs in a single, structured view.&lt;/p&gt;

&lt;p&gt;One of the visualization tools provided by the Azure Portal is Azure Workbooks. This feature allows users to analyze and visualize data from various Azure resources, logs, and metrics within a single, interactive interface.&lt;/p&gt;

&lt;p&gt;Creating an Azure Workbook is a straightforward process. Simply type &lt;em&gt;Azure Workbooks&lt;/em&gt; in the Azure Portal search bar, select the service, and click on the &lt;em&gt;Create&lt;/em&gt; button. From this point, users can choose to create either an empty Workbook or select from preconfigured templates that cater to common monitoring scenarios.&lt;/p&gt;

&lt;p&gt;Regardless of the option chosen, users can click on &lt;em&gt;Edit&lt;/em&gt; to customize the Workbook according to their requirements. In edit mode, clicking on the &lt;em&gt;Add&lt;/em&gt; button allows the inclusion of various visualization components.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpd7924z6bnl2g4stqlp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpd7924z6bnl2g4stqlp.png" alt="Image description" width="134" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As seen in the image above, we can utilize multiple options to make our Workbook meet our needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Text&lt;/strong&gt; - Add markdown or HTML-based text to provide descriptions, explanations, or headers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Query&lt;/strong&gt; - Run Kusto Query Language (KQL) queries to fetch data from Log Analytics, Azure Resource Graph, or Application Insights&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parameters&lt;/strong&gt; - Define dropdowns, text inputs, or checkboxes to make Workbooks dynamic and interactive&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Links &amp;amp; Tabs&lt;/strong&gt; - Add navigation links or tabs to switch between different sections of a Workbook&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metrics&lt;/strong&gt; - Fetch real-time Azure Metrics (e.g., CPU usage, memory utilization) and display them visually&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Group&lt;/strong&gt; - Organize content logically, making the Workbook easier to read&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can choose &lt;strong&gt;Metrics&lt;/strong&gt; where the predefined metrics (per resource) are available to be displayed or &lt;strong&gt;Query&lt;/strong&gt; where the same KQL query from before can be applied.&lt;/p&gt;
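
&lt;p&gt;As a sketch, a Query step could plot the API's average response time as a time series, reusing the workspace-based &lt;code&gt;AppRequests&lt;/code&gt; table (the 15-minute bin size is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AppRequests
| summarize AvgDurationMs = avg(DurationMs) by bin(TimeGenerated, 15m)
| render timechart
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;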

&lt;p&gt;Once the data is loaded we can choose the preferred visualization option:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Charts (area, bar, line, pie, scatter, time)&lt;/li&gt;
&lt;li&gt;Grids&lt;/li&gt;
&lt;li&gt;Tiles&lt;/li&gt;
&lt;li&gt;Stats&lt;/li&gt;
&lt;li&gt;Graphs&lt;/li&gt;
&lt;li&gt;Maps&lt;/li&gt;
&lt;li&gt;Text visualization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Creating custom Workbooks provides a graphical visualization of the resources for both technical and non-technical people.&lt;/p&gt;

&lt;h2&gt;
  
  
  Alerting
&lt;/h2&gt;

&lt;p&gt;Creating Alert rules is a very easy process, as we can simply reuse the same metrics and/or queries that we used in our Azure Workbook. Following these steps will allow us to set up an alert:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click &lt;em&gt;Create Alert rule&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Under &lt;em&gt;Scope&lt;/em&gt;, select the Azure resource you want to monitor&lt;/li&gt;
&lt;li&gt;Under &lt;em&gt;Condition&lt;/em&gt;, define the metric or query condition that should trigger the alert&lt;/li&gt;
&lt;li&gt;Under &lt;em&gt;Actions&lt;/em&gt;, select or create an Action Group to define who gets notified&lt;/li&gt;
&lt;li&gt;Provide a name and severity level for the alert rule&lt;/li&gt;
&lt;li&gt;Click &lt;em&gt;Create&lt;/em&gt; to finalize the alert rule&lt;/li&gt;
&lt;/ul&gt;
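
&lt;p&gt;For a log-based alert, the Condition can be backed by a query such as the following sketch, which counts recent server errors; the alert threshold and evaluation frequency are then configured on the rule itself (the 5-minute window is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AppRequests
| where TimeGenerated &amp;gt; ago(5m)
| where toint(ResultCode) &amp;gt;= 500
| summarize ErrorCount = count()
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;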

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5wltyvmozmk4zqqugq8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5wltyvmozmk4zqqugq8.png" alt="Image description" width="760" height="279"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, effective Monitoring and Alerting in Azure is essential for maintaining visibility, performance, and security across cloud resources. Azure Workbooks provide a centralized and interactive way to visualize metrics and logs, enabling teams to analyze data efficiently. Meanwhile, Azure Alerts ensure proactive monitoring by automatically notifying the right people and triggering automated actions when predefined conditions are met. By leveraging Action groups, organizations can streamline alert management and ensure timely responses to potential issues.&lt;/p&gt;

&lt;p&gt;Combining these tools allows for a comprehensive monitoring strategy, where teams can track, analyze, and respond to system behavior in real time. With proper Workbook customization, Alert rule configuration, and Action group management, businesses can optimize performance, reduce downtime, and enhance overall cloud reliability.&lt;/p&gt;

&lt;p&gt;In case you are looking for a dynamic and knowledge-sharing workplace that respects and encourages your personal growth as part of its own development, we invite you to explore our current &lt;a href="https://apply.workable.com/agileactors/" rel="noopener noreferrer"&gt;job opportunities&lt;/a&gt; and be part of Agile Actors.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Schema validation testing with Prism</title>
      <dc:creator>Gregory Savvidis</dc:creator>
      <pubDate>Mon, 26 May 2025 10:52:15 +0000</pubDate>
      <link>https://dev.to/agileactors/schema-validation-testing-with-prism-5e73</link>
      <guid>https://dev.to/agileactors/schema-validation-testing-with-prism-5e73</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8sv04io7f4tn5a7j5g2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8sv04io7f4tn5a7j5g2.png" width="700" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article we will discuss schema validation testing using Prism, how it differs from contract testing, and provide an introduction on how we can run it locally.&lt;/p&gt;

&lt;h2&gt;
  
  
  Current state and issues
&lt;/h2&gt;

&lt;p&gt;Our team is part of an organisation whose aim is to create a cloud-native SaaS bank operating system that will empower banks to evolve to a new era by leaving behind the (old) monolithic approach.&lt;/p&gt;

&lt;p&gt;As part of the development lifecycle, when we release our code or product, we need to be as accurate and confident as possible. A common undesired situation is receiving customer feedback late in that lifecycle, which causes a lot of costly back and forth.&lt;/p&gt;

&lt;p&gt;The preferred way would be to have a frequent check that will most probably catch errors more easily and earlier, before they reach the end user. This is where &lt;a href="https://stoplight.io/open-source/prism" rel="noopener noreferrer"&gt;Prism&lt;/a&gt; comes into play.&lt;/p&gt;

&lt;p&gt;In our case, we had an e2e suite that was running user journeys against microservices. That is hardly a unique situation, but due to the architectural structure the microservices were being hit either directly or via proxies (&lt;a href="https://cloud.google.com/apigee" rel="noopener noreferrer"&gt;Apigee&lt;/a&gt; or &lt;a href="https://www.getambassador.io/" rel="noopener noreferrer"&gt;Ambassador&lt;/a&gt;), so we needed a way to validate the requests and the responses.&lt;/p&gt;

&lt;p&gt;One feasible way to set up such a structure and actually validate the requests and the responses would be &lt;a href="https://docs.pact.io/" rel="noopener noreferrer"&gt;PACT&lt;/a&gt; contract testing. This approach provides the confidence we seek, but its disadvantage is the amount of time and effort that needs to be spent by both the Consumer and the Provider teams. Moreover, even with PACT tests in place, there are cases where we need to be as autonomous as possible and do a validation within the team’s scope. That is exactly the gap Prism fills.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is Prism?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Prism is an open-source HTTP Mock &amp;amp; Proxy Server that acts as a transformation and validation layer between the actual and the expected result. It can be used as a contract testing tool as well as a logging one. Prism works with OpenAPI v2 and OpenAPI v3 documents, generating request and response objects based on the provided YAML or JSON files.&lt;/p&gt;

&lt;p&gt;So, in our case, during the execution, we had it validating the called endpoint’s request and response bodies against those generated objects.&lt;/p&gt;

&lt;p&gt;If any of the examined objects does not match the expected ones, Prism provides information regarding the error that can easily be used to generate a report.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famcdmzqkk70xq0jt0wzm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famcdmzqkk70xq0jt0wzm.png" width="700" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Schema testing vs Contract testing
&lt;/h2&gt;

&lt;p&gt;Schema testing uses a generalised notation that defines the structure that a request and response are supposed to have at a given point in execution time. It validates that a system is compatible with a given schema.&lt;/p&gt;

&lt;p&gt;Contract testing, on the other hand, defines how two systems are able to communicate by agreeing on what should be sent between them and providing concrete examples (contracts) to test the agreed behaviour.&lt;/p&gt;

&lt;p&gt;The difference between them is that contract testing goes one step further than just defining a schema: it requires both parties to come to a consensus on the allowed set of interactions, allowing evolution over time.&lt;/p&gt;

&lt;p&gt;In simple words, contract testing is more concrete as it defines strictly how the two systems are supposed to communicate, while schema testing is more abstract as there is a general validation on the expected structure of the request and response payloads.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to use Prism
&lt;/h2&gt;

&lt;p&gt;Our &lt;a href="https://dev.to/agileactors/designing-an-e2e-testing-framework-for-a-cloud-native-banking-platform-towards-continuous-delivery-96c467343361"&gt;E2E suite&lt;/a&gt; is developed using Spring Boot, so we decided to use Spring’s &lt;a href="https://www.thymeleaf.org/" rel="noopener noreferrer"&gt;Thymeleaf&lt;/a&gt; library to construct and customise the reports and then inform the related microservice teams. Using Thymeleaf we created a boilerplate prism-report.html file that was updated on demand when errors occurred during the execution.&lt;/p&gt;

&lt;p&gt;As already stated, Prism can help check for discrepancies between an API implementation and the OpenAPI document that describes it. To start off, we need to install Prism either locally&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm install @stoplight/prism-cli&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;or via a Docker image:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker pull stoplight/prism&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The generic run command is:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npx prism proxy &amp;lt;OPEN_API_SPEC_YAML_FILE&amp;gt; &amp;lt;UPSTREAM_URL&amp;gt; --port &amp;lt;PROXY_PORT&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;where,&lt;/p&gt;

&lt;p&gt;a) &lt;em&gt;OPEN_API_SPEC_YAML_FILE&lt;/em&gt; is the YAML file that OpenAPI uses to generate the DTOs&lt;br&gt;&lt;br&gt;
b) &lt;em&gt;UPSTREAM_URL&lt;/em&gt; is the host/proxy that the request goes through&lt;br&gt;&lt;br&gt;
c) &lt;em&gt;PROXY_PORT&lt;/em&gt; is the port assigned for requests (we can provide any port we want)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcojarezdbzgpqbn76cv1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcojarezdbzgpqbn76cv1.png" width="700" height="145"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  How we decided to use it
&lt;/h2&gt;

&lt;p&gt;Finally, in order to make the whole process as autonomous as possible, we created a Jenkinsfile that runs daily, performs the aforementioned validation and then posts the results on a specific Slack channel.&lt;/p&gt;

&lt;p&gt;As shown below there is a stage in our Jenkinsfile that runs a bash script to kick off the validation. The whole suite is Dockerised so we have the ability to provide the service(s) as environment variable(s). We can define, with &lt;em&gt;PRISM_PROXY_CHOICE&lt;/em&gt;, if we want to run a specific service or all of them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stage("Run Prism Schema validation") {  
       steps {  
         catchError(buildResult: 'SUCCESS', stageResult: 'UNSTABLE') {  
          script {  
            sh '''  
             if [ "${PRISM_PROXY_CHOICE}" != "All" ] ; then  
              export PRISM_PROXY_SERVICES=${PRISM_PROXY_CHOICE};  
             fi  

            ./ci/execution/dockerRunExecutor.sh  
            '''  
          }  
        }  
      }  
     }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In our bash script we retrieve the OpenAPI specs that Prism will run against. In the code block below, you will observe that we have assigned a specific port to each Apigee/Ambassador service, so the validations are isolated and the produced results are categorised per service. For example, if a request is made to any endpoint of &lt;em&gt;firstgateway&lt;/em&gt;, it will automatically be redirected to localhost:5000, where the Prism Server will perform the schema validation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OPENAPI_SPECS_PATH=./src/main/resources/openapi-specs  

GATEWAYS_PORT_MAPPING=("firstgateway:5000"  
  "secondgateway:5001"  
  "thirdgateway:5002"  
  "fourthgateway:5003"  
  "fifthgateway:5004"  
  "sixthgateway:5005"  
  "seventhgateway:5006"  
  "eighthgateway:5007"  
  "ninethgateway:5008")  

if [ -z "${PRISM_PROXY_SERVICES}" ] || [ "${PRISM_PROXY_SERVICES}" = "All" ] ; then  
  #If prism prismProxyService is empty then start proxy service for all available gateways  
  echo "Starting prism proxy for all gateways"  
  for gatewaymap in "${GATEWAYS_PORT_MAPPING[@]}"; do  
    GATEWAY=${gatewaymap%%:*}  
    PRISM_PORT=${gatewaymap#*:}  
    prism proxy -h 0.0.0.0 ${OPENAPI_SPECS_PATH}/${GATEWAY}.yaml ${PRISM_UPSTREAM_URL} -p ${PRISM_PORT} &amp;amp;  
  done  
else  
  for prismProxyService in ${PRISM_PROXY_SERVICES//,/ }; do  
    for gatewaymap in "${GATEWAYS_PORT_MAPPING[@]}"; do  
      GATEWAY=${gatewaymap%%:*}  
      PRISM_PORT=${gatewaymap#*:}  

      if [ "${prismProxyService}" = "${GATEWAY}" ]; then  
        echo "Starting Prism Proxy Server for ${GATEWAY} on port ${PRISM_PORT} listening to ${PRISM_UPSTREAM_URL}"  
        prism proxy -h 0.0.0.0 ${OPENAPI_SPECS_PATH}/${GATEWAY}.yaml ${PRISM_UPSTREAM_URL} -p ${PRISM_PORT} &amp;amp;  
      fi  
    done  
  done  
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;During execution, we have created a store where we keep information regarding each request (service name, path, method, response). Prism adds an extra &lt;em&gt;sl-violations&lt;/em&gt; header, which we use after each test to update a &lt;em&gt;PrismStore&lt;/em&gt; that saves the above information plus some extra ones like severity, error location and response code. That store will be used with Cucumber’s &lt;em&gt;AfterAll&lt;/em&gt; annotation to complement our custom Thymeleaf report file (as shown in the last image).&lt;/p&gt;

&lt;p&gt;After the execution stage is completed, as part of the clean-up stage, we generate the report and call the &lt;em&gt;sendMessage()&lt;/em&gt; method to send a Slack notification, ensuring that all related teams are informed soon enough to proceed with any needed updates before the broken functionality is deployed to any client environments.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;post {  
      always {  
        script {  

          sh '''  
            ./ci/utils/cleanUp.sh  
          '''  

            publishHTML(target: [  
                allowMissing: false,  
                alwaysLinkToLastBuild: false,  
                keepAll: true,  
                reportDir: 'target/prismproxy/',  
                reportFiles: 'prism_report.html',  
                reportName: 'PrismProxyReport',  
                reportTitles: 'Prism Test Report'])  

            sendMessage()  
          }  
       }  
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbh3a3iv82dvo02t8b9wt.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbh3a3iv82dvo02t8b9wt.jpeg" width="640" height="217"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;To sum up, in this article we took a first look at Prism. We explained what Prism is and what it can bring to the table, and covered the key differences between Prism as a schema validation tool and Pact as a contract testing tool. Last but not least, we explained how we decided to use Prism as part of our E2E suite and generate a report that provides handy results to any team during the development lifecycle.&lt;/p&gt;

&lt;p&gt;In case you are looking for a dynamic and knowledge-sharing workplace that respects and encourages your personal growth as part of its own development, we invite you to explore our current &lt;a href="https://apply.workable.com/agileactors/" rel="noopener noreferrer"&gt;&lt;em&gt;job opportunities&lt;/em&gt;&lt;/a&gt; and be part of Agile Actors!&lt;/p&gt;

</description>
      <category>schema</category>
      <category>pact</category>
      <category>prism</category>
      <category>programming</category>
    </item>
    <item>
      <title>Branching and merging strategies</title>
      <dc:creator>Gregory Savvidis</dc:creator>
      <pubDate>Mon, 26 May 2025 09:00:11 +0000</pubDate>
      <link>https://dev.to/agileactors/branching-and-merging-strategies-21hj</link>
      <guid>https://dev.to/agileactors/branching-and-merging-strategies-21hj</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6fbze59ugw3sni4wm4q.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6fbze59ugw3sni4wm4q.jpeg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article we will present the most popular branching strategies and their differences, as well as set the scene for teams that want to start using a comprehensive branch naming convention.&lt;/p&gt;

&lt;h2&gt;
  
  
  Branching strategy
&lt;/h2&gt;

&lt;p&gt;Branches are primarily a means for teams to develop features in a separate workspace; they are usually merged back into the main branch once the new feature is implemented. A branching strategy, therefore, is the approach a software development team adopts for writing, merging and deploying code with a version control system.&lt;/p&gt;

&lt;p&gt;Adhering to a branching strategy helps developers work together without stepping on each other’s toes. In other words, it enables teams to work in parallel, achieving faster releases and fewer conflicts through a clear process for making changes to source control.&lt;/p&gt;

&lt;h2&gt;
  
  
  Branching Strategies
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;GitFlow strategy&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;GitFlow is a branching strategy that uses feature branches and multiple primary branches. It is characterized by long-lived branches and large commits as part of its merging strategy.&lt;/p&gt;

&lt;p&gt;There are five types of branches: main (previously called master), develop, feature, release and hotfix.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;main&lt;/em&gt;&lt;/strong&gt;: production-ready code; all branches are merged into main after being developed and tested&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;develop&lt;/em&gt;&lt;/strong&gt;: pre-production code; new features that are being tested&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;feature&lt;/em&gt;&lt;/strong&gt;: adds new features to the code; cut from develop and merged back once completed and reviewed&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;release&lt;/em&gt;&lt;/strong&gt;: prepares a release with finishing touches and minor bug fixes, separately from main/develop&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;hotfix&lt;/em&gt;&lt;/strong&gt;: quickly addresses issues in the main branch; merged into both main and develop&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall it is well-defined and straightforward, but it hides complexity when merging code from development branches into the main branch. The two-step merging (into develop and then into main) may slow down the development process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6qwcffyua6l1djau167.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6qwcffyua6l1djau167.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(source: &lt;a href="https://skynix.co/resources/navigating-git-branching-strategies-a-comprehensive-comparison" rel="noopener noreferrer"&gt;https://skynix.co/resources/navigating-git-branching-strategies-a-comprehensive-comparison&lt;/a&gt;)&lt;/p&gt;
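&lt;p&gt;The flow above can be sketched with plain git commands. The demo below is illustrative only: it runs in a throwaway repository, and the branch names and version tag are our own examples (many teams use the git-flow helper CLI instead):&lt;/p&gt;

```shell
# Illustrative GitFlow walkthrough in a throwaway repository.
# All branch names and the version tag are examples, not a standard.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
git checkout -qb main
echo "v0" > app.txt
git add app.txt
git commit -qm "Initial commit"

git checkout -qb develop                 # long-lived integration branch

git checkout -qb feature/new-login       # feature branch cut from develop
echo "login" >> app.txt
git commit -aqm "Add login"
git checkout -q develop
git merge -q --no-ff -m "Merge feature/new-login" feature/new-login

git checkout -qb release/1.0.0 develop   # release branch for finishing touches
git checkout -q main
git merge -q --no-ff -m "Release 1.0.0" release/1.0.0
git tag -a v1.0.0 -m "Release 1.0.0"

git checkout -q develop
git merge -q v1.0.0                      # keep develop in sync with the release
```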

&lt;p&gt;2. &lt;strong&gt;GitHub Flow strategy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GitHub Flow is a lightweight and straightforward branching strategy designed for continuous delivery and deployment. It focuses on simplicity and works best for projects with a single production version. The main branch contains production-ready code (code that is deployable at all times), while feature branches containing new features or bug fixes are merged into main to introduce new work (and may exist for several days or even weeks).&lt;/p&gt;

&lt;p&gt;The main features here are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;main branch&lt;/em&gt;&lt;/strong&gt;: contains production-ready code&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;feature branches&lt;/em&gt;&lt;/strong&gt;: created from main branch to introduce new functionality&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;frequent commits and pushes&lt;/em&gt;&lt;/strong&gt;: commit and push changes regularly to ensure progress is saved and visible&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;pull requests&lt;/em&gt;&lt;/strong&gt;: create a pull request to merge the new code from the feature branch into the main branch&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;deployment&lt;/em&gt;&lt;/strong&gt;: once the new code is merged, the main branch is deployed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall it works well when there is a single production version, but it does not support a multi-version codebase, and if feature branches are not kept short-lived, the main branch can end up many commits ahead of them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzshd4yjvt4c2jje6j0nw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzshd4yjvt4c2jje6j0nw.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(source: &lt;a href="https://skynix.co/resources/navigating-git-branching-strategies-a-comprehensive-comparison" rel="noopener noreferrer"&gt;https://skynix.co/resources/navigating-git-branching-strategies-a-comprehensive-comparison&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;3. &lt;strong&gt;GitLab Flow strategy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GitLab Flow is a simpler variation of the GitFlow branching strategy that combines feature-driven development with issue tracking. The main branch is the central branch where all development work converges, while optional pre-production and production branches enable thorough testing and stable deployments.&lt;/p&gt;

&lt;p&gt;The main features here are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;main branch&lt;/em&gt;&lt;/strong&gt;: primary branch where all features and fixes are merged&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;feature branches&lt;/em&gt;&lt;/strong&gt;: created from main branch to introduce new functionality&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;pre-production branches (optional)&lt;/em&gt;&lt;/strong&gt;: branches like test and staging for additional testing and validation before merging to main&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;production branch&lt;/em&gt;&lt;/strong&gt;: a stable branch used to deploy the main branch when ready for production&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;version branch&lt;/em&gt;&lt;/strong&gt;: create branches like v1 or v2 to maintain and update different versions independently&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7zp5o2sm1on4690kt1c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7zp5o2sm1on4690kt1c.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(source: &lt;a href="https://www.abtasty.com/blog/git-branching-strategies" rel="noopener noreferrer"&gt;https://www.abtasty.com/blog/git-branching-strategies&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;4. &lt;strong&gt;Trunk-Based strategy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Trunk-Based Development is a streamlined and fast-paced workflow where all developers work closely on a single branch, typically the main branch, called the trunk. Changes are integrated into main frequently, promoting continuous integration and rapid deployment. Feature branches are short-lived and deleted immediately after merging.&lt;/p&gt;

&lt;p&gt;The main features here are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;main branch&lt;/em&gt;&lt;/strong&gt;: single source of truth, containing production-ready code at all times&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;feature branches&lt;/em&gt;&lt;/strong&gt;: temporary branches created for new features or fixes&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;pull requests&lt;/em&gt;&lt;/strong&gt;: create a pull request to merge the new code from the feature branch into the main branch&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;deployment&lt;/em&gt;&lt;/strong&gt;: once the new code is merged, the main branch is deployed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufaa3qty646i1ekf3ekf.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufaa3qty646i1ekf3ekf.jpeg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(source: &lt;a href="https://www.abtasty.com/blog/git-branching-strategies" rel="noopener noreferrer"&gt;https://www.abtasty.com/blog/git-branching-strategies&lt;/a&gt;)&lt;/p&gt;
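&lt;p&gt;The trunk-based flow can likewise be sketched with plain git commands. The demo below runs in a throwaway repository and walks one short-lived branch through the cycle (all names are illustrative):&lt;/p&gt;

```shell
# Illustrative trunk-based cycle: a short-lived branch with one small change,
# merged back into the trunk and deleted immediately. Names are examples.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
git checkout -qb main
echo "base" > app.txt
git add app.txt
git commit -qm "Initial commit"

git checkout -qb feature/123-small-fix   # short-lived branch off the trunk
echo "fix" >> app.txt
git commit -aqm "Small fix"

git checkout -q main
git merge -q --ff-only feature/123-small-fix
git branch -qd feature/123-small-fix     # delete right after merging
```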


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h1&gt;
  
  
  Branch naming strategy
&lt;/h1&gt;

&lt;p&gt;Branches should follow a consistent, best-practice naming pattern to facilitate collaboration within the team. The following pattern should be used in Azure DevOps, as it organizes branch names into separate directories under the Branches section:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&amp;lt;change_type&amp;gt;/&amp;lt;User Story&amp;gt;-&amp;lt;Description&amp;gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;the prefix (change type) must be related to the content of the branch

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;feature&lt;/em&gt;:&lt;/strong&gt; introducing new feature functionality&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;bug&lt;/em&gt;:&lt;/strong&gt; addressing/resolving reported bugs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;chores&lt;/em&gt;:&lt;/strong&gt; maintenance tasks that are not features or bugs (e.g. introduce comments or documentation)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;release&lt;/em&gt;:&lt;/strong&gt; release preparation branch (if we decide to have one)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt; the work item (User Story) must be the one from the Azure Board&lt;/li&gt;
&lt;li&gt; the description must be informative enough to help team members quickly understand the purpose of the branch at a glance.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Prefer lowercase letters and hyphens for better readability.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Example: feature/123456-add-new-functionality&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In addition, there are tools that can help set up and enforce branch naming standards. Some of the most popular tools are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;Husky&lt;/em&gt;&lt;/strong&gt;: Adds Git hooks to enforce naming rules at pre-commit or pre-push stages, ensuring consistency before pushing changes.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;Commitizen&lt;/em&gt;&lt;/strong&gt;: Focuses on commit message conventions but can complement branch naming standards to keep everything consistent.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;Git Hooks&lt;/em&gt;&lt;/strong&gt;: Native hooks that can be customised to enforce branch naming patterns during commits, merges, or pushes.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;SonarQube&lt;/em&gt;&lt;/strong&gt;: Primarily a code quality tool, but can integrate with Git to enforce branch naming conventions as part of quality gates.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;GitFlow&lt;/em&gt;&lt;/strong&gt;: A branching model that can be enforced with tools in Git platforms like GitHub or GitLab, encouraging branch naming conventions such as &lt;code&gt;feature/&lt;/code&gt; or &lt;code&gt;hotfix/&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;GitHub Branch Protection Rules&lt;/em&gt;&lt;/strong&gt;: Indirectly enforces branch naming conventions by restricting pushes to specific branches and requiring pull request reviews.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;Semantic Release&lt;/em&gt;&lt;/strong&gt;: Automates versioning and change log generation, often used alongside structured branch naming (e.g., &lt;code&gt;feature/&lt;/code&gt;, &lt;code&gt;bugfix/&lt;/code&gt;, &lt;code&gt;release/&lt;/code&gt;) to manage releases automatically.&lt;/li&gt;
&lt;/ul&gt;
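&lt;p&gt;As a small illustration of the Git Hooks option, the core check of a pre-push hook enforcing the naming pattern above could look like the sketch below. The allowed prefixes and the regular expression are our own assumptions, not an official standard:&lt;/p&gt;

```shell
#!/bin/sh
# Sketch of the core check for a pre-push hook (.git/hooks/pre-push).
# The prefix list and regex are assumptions based on the convention above.

valid_branch_name() {
  echo "$1" | grep -Eq '^(feature|bug|chores|release)/[0-9]+-[a-z0-9-]+$'
}

# Inside a real hook the branch would come from:
#   branch=$(git rev-parse --abbrev-ref HEAD)
# and the push would be rejected (exit 1) when the check fails.

valid_branch_name "feature/123456-add-new-functionality"
echo "first check: $?"                   # prints: first check: 0
valid_branch_name "my-random-branch"
echo "second check: $?"                  # prints: second check: 1
```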

&lt;h1&gt;
  
  
  Code review process
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;raise a PR once the new code is completed and pushed to a remote branch. On each PR:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;provide a meaningful title (as it will be visible in the history of the main branch)&lt;/li&gt;
&lt;li&gt;provide a description that quickly explains the purpose of the PR (elaborating on its title)&lt;/li&gt;
&lt;li&gt;link the User Story (US) related to this work&lt;/li&gt;
&lt;li&gt;add the mandatory tech-related reviewers&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;assign a team member (preferably one with domain knowledge) to review&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;it is then the reviewers’ responsibility to review the code and provide feedback&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Reviewers’ responsibilities:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;verify language/code related patterns and security risks
&lt;/li&gt;
&lt;li&gt;verify common patterns are followed between old and new code
&lt;/li&gt;
&lt;li&gt;create clear comments describing the area that needs to be altered
&lt;/li&gt;
&lt;li&gt;once the comments are resolved, each reviewer needs to state whether their comments have been addressed&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Keep in mind that any new code that is about to be merged, needs to have the respective Unit Tests in place (meaning that the Unit Tests will already be part of the code that is about to be reviewed/merged).&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Merge policy
&lt;/h1&gt;

&lt;p&gt;A merge policy defines how changes from different branches are integrated into the main codebase.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Types of merging&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;direct merge&lt;/em&gt;&lt;/strong&gt;: merge the feature branch directly into main, keeping all commits from the feature branch on main&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;em&gt;squash merge&lt;/em&gt;&lt;/strong&gt;: combine all commits into one before merging, keeping the main branch history clean&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;rebase&lt;/strong&gt;: replay the changes from the feature branch on top of the main branch, creating a linear history&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Preferably we combine squash and rebase: all commits of the feature branch are squashed into a single commit, the feature branch is then rebased onto the main branch, and any conflicts are resolved to keep the commit history clean. After resolving the conflicts, a PR can be raised as described above.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
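&lt;p&gt;The squash-and-rebase policy described above can be sketched with plain git commands. The demo below runs in a throwaway repository so it can be executed as-is (branch and commit names are illustrative):&lt;/p&gt;

```shell
# Illustrative squash-and-rebase flow in a throwaway repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
git checkout -qb main
echo "v1" > app.txt
git add app.txt
git commit -qm "Initial commit"

# Feature branch with several small work-in-progress commits
git checkout -qb feature/123456-add-new-functionality
echo "step 1" >> app.txt
git commit -aqm "wip: step 1"
echo "step 2" >> app.txt
git commit -aqm "wip: step 2"

# Rebase the feature branch onto main (picks up any new commits on main)
git rebase -q main

# Squash-merge into main: the feature lands as one clean commit
git checkout -q main
git merge -q --squash feature/123456-add-new-functionality
git commit -qm "Add new functionality (squashed)"
git log --oneline                        # two commits on a clean, linear main
```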

&lt;h1&gt;
  
  
  How the Trunk-Based strategy and the Merge policy can be effective on important aspects
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Discover flaws early&lt;/strong&gt;: By integrating small changes/features into the main branch (trunk) and running specific tasks as part of the CI (as described above), we minimize the risk of introducing large, hard-to-find bugs, since we ensure that every small piece of merged code does not cause any issues or failures.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Enable a fast and smooth CI/CD process&lt;/strong&gt;: With the above mechanism in place, we can merge code quickly and safely (as already mentioned), which gives us the necessary confidence to deploy the new features to stages beyond DEV and TEST.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Give some control over what is released when&lt;/strong&gt;: By combining short-lived branches (and potentially a release branch) with a clean commit history, we have the flexibility to control what is deployed to each stage. A release branch should be the option when an ABN/PROD deployment is about to take place.&lt;/li&gt;
&lt;li&gt; Note: any changes or bug fixes implemented on a release branch SHOULD also be made available on the main branch, so a PR from the release branch to main needs to be raised&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Easy Rollback&lt;/strong&gt;: By ensuring that our main branch (trunk) is continuously stable, reverting to a previous working version is straightforward.&lt;/li&gt;
&lt;/ol&gt;
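&lt;p&gt;To illustrate the easy-rollback point, the demo below reverts a faulty merge on the trunk with a single command, again in a throwaway repository (all names are illustrative):&lt;/p&gt;

```shell
# Illustrative rollback of a faulty merge commit on a stable trunk.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
git checkout -qb main
echo "stable" > app.txt
git add app.txt
git commit -qm "Stable release"

git checkout -qb feature/999-risky-change
echo "risky" >> app.txt
git commit -aqm "Risky change"

git checkout -q main
git merge -q --no-ff -m "Merge risky change" feature/999-risky-change

# The merge turns out to be faulty: revert it with one command.
# -m 1 keeps the first (main) parent as the baseline.
git revert --no-edit -m 1 HEAD
```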

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;In conclusion, effective branching and merging strategies are essential for streamlined collaboration and fewer conflicts in the software development lifecycle. By choosing the right strategy, teams can align their workflows with project goals, team size, and release cadence. Clear guidelines for the development process reduce errors and promote a cohesive codebase. Ultimately, the success of any branching and merging approach lies in consistent communication and adaptability to evolving project needs.&lt;/p&gt;

&lt;p&gt;In case you are looking for a dynamic and knowledge-sharing workplace that respects and encourages your personal growth as part of its own development, we invite you to explore our current &lt;a href="https://apply.workable.com/agileactors/" rel="noopener noreferrer"&gt;job opportunities&lt;/a&gt; and be part of Agile Actors.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>testing</category>
      <category>devdiscuss</category>
    </item>
  </channel>
</rss>
