<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Florian Lenz</title>
    <description>The latest articles on DEV Community by Florian Lenz (@florianlenz).</description>
    <link>https://dev.to/florianlenz</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1505671%2F002078cb-57fb-4119-8360-c0638e1118ec.jpeg</url>
      <title>DEV Community: Florian Lenz</title>
      <link>https://dev.to/florianlenz</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/florianlenz"/>
    <language>en</language>
    <item>
      <title>Azure AI Search: The Developer's Secret Weapon Most Teams Ignore</title>
      <dc:creator>Florian Lenz</dc:creator>
      <pubDate>Thu, 05 Feb 2026 21:27:23 +0000</pubDate>
      <link>https://dev.to/florianlenz/azure-ai-search-the-developers-secret-weapon-most-teams-ignore-3kn0</link>
      <guid>https://dev.to/florianlenz/azure-ai-search-the-developers-secret-weapon-most-teams-ignore-3kn0</guid>
      <description>&lt;p&gt;Most developers treat search as an afterthought.&lt;/p&gt;

&lt;p&gt;You build the core features. You nail the UI. You optimize performance. And then, almost as a checkbox item, you add a search bar that… barely works.&lt;/p&gt;

&lt;p&gt;Users type queries. They get irrelevant results. They rephrase. They give up. They leave.&lt;/p&gt;

&lt;p&gt;But here’s what surprised me: &lt;strong&gt;the problem isn’t that search is hard to build. It’s that most teams are solving the wrong problem entirely.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;They think search is about finding keywords.&lt;br&gt;&lt;br&gt;
Modern users expect search to understand &lt;strong&gt;intent&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In this post, I’ll show you how &lt;strong&gt;Azure AI Search&lt;/strong&gt; bridges that gap—and why it’s one of the most underutilized tools in the Azure ecosystem. Based on projects I’ve implemented and edge cases I’ve hit, you’ll learn:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Why full-text search is table stakes (and what actually differentiates great search)&lt;/li&gt;
&lt;li&gt;The one feature that makes Azure AI Search feel like magic to users&lt;/li&gt;
&lt;li&gt;How to avoid the #1 mistake teams make when implementing search services&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s start with what most developers get wrong.&lt;/p&gt;


&lt;h2&gt;
  
  
  The Keyword Trap: Why Basic Search Fails Users
&lt;/h2&gt;

&lt;p&gt;Most search implementations work like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;User types &lt;strong&gt;“affordable running shoes”&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
→ System searches for exact matches&lt;br&gt;&lt;br&gt;
→ Returns 47 results&lt;br&gt;&lt;br&gt;
→ User scrolls, scrolls, doesn’t find what they want&lt;br&gt;&lt;br&gt;
→ Leaves&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here’s the conflict:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Users don’t think in keywords. They think in problems and intent.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When someone searches &lt;em&gt;“affordable running shoes,”&lt;/em&gt; they might actually mean:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“Running shoes under $100”&lt;/li&gt;
&lt;li&gt;“Budget-friendly marathon training shoes”&lt;/li&gt;
&lt;li&gt;“Cheap sneakers for jogging beginners”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A keyword-based search engine treats these as completely different queries. The result? Frustration before users ever see your best content.&lt;/p&gt;

&lt;p&gt;Most teams stop here and accept “good enough” search.&lt;/p&gt;

&lt;p&gt;That’s a mistake.&lt;/p&gt;
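&lt;p&gt;The failure mode above is easy to reproduce. Here is a minimal sketch, using hypothetical product listings and plain token matching, of how a literal keyword engine comes up empty on intent-phrased queries:&lt;/p&gt;

```python
# Naive keyword engine: a document matches only if it contains every
# query token verbatim. The product listings are hypothetical.

def keyword_search(query, documents):
    tokens = query.lower().split()
    return [doc for doc in documents if all(t in doc.lower() for t in tokens)]

products = [
    "Budget-friendly marathon training shoes",
    "Cheap sneakers for jogging beginners",
    "Running shoes under $100",
]

# No listing contains the literal word "affordable", so the engine
# returns nothing for an intent the catalog clearly serves.
print(keyword_search("affordable running shoes", products))  # -> []
print(keyword_search("running shoes", products))             # -> ['Running shoes under $100']
```

&lt;p&gt;A semantic layer closes exactly this gap: it maps “affordable” onto the budget-priced listings without anyone maintaining a giant keyword list by hand.&lt;/p&gt;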


&lt;h2&gt;
  
  
  The Intelligence Layer: Natural Language Processing in Action
&lt;/h2&gt;

&lt;p&gt;This is where Azure AI Search separates itself from basic full-text engines.&lt;/p&gt;

&lt;p&gt;It includes &lt;strong&gt;built-in Natural Language Processing&lt;/strong&gt; that understands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Semantic meaning&lt;/strong&gt;
&lt;em&gt;“affordable” = “budget-friendly” = “cheap”&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context&lt;/strong&gt;
&lt;em&gt;“running shoes” vs “running a business”&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Synonyms&lt;/strong&gt;
Configurable synonym maps for domain language&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User intent&lt;/strong&gt;
What users actually want—not just what they typed&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Real Example: E-Commerce Search
&lt;/h3&gt;

&lt;p&gt;I worked on an e-commerce platform where users frequently searched for:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“laptop for students”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;With keyword search:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Returned any product with “student” in the description
&lt;/li&gt;
&lt;li&gt;Missed laptops perfect for students but marketed differently
&lt;/li&gt;
&lt;li&gt;Surfaced “student discount available” accessories that weren’t laptops&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;With Azure AI Search + NLP:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understood intent: affordable, portable, long battery life
&lt;/li&gt;
&lt;li&gt;Ranked results based on student needs
&lt;/li&gt;
&lt;li&gt;Surfaced relevant products even when descriptions used different terminology
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Result:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
📈 &lt;strong&gt;34% increase in search-to-purchase conversion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But NLP alone isn’t the real secret.&lt;/p&gt;

&lt;p&gt;That comes next.&lt;/p&gt;


&lt;h2&gt;
  
  
  Customizable Relevance Ranking: The Feature That Changes Everything
&lt;/h2&gt;

&lt;p&gt;Here’s the part most documentation glosses over:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Not all search results are created equal.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Even with perfect intent detection, users still need the &lt;em&gt;right result at the top.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  The Problem with Default Ranking
&lt;/h3&gt;

&lt;p&gt;Default ranking usually relies on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keyword frequency
&lt;/li&gt;
&lt;li&gt;Document recency
&lt;/li&gt;
&lt;li&gt;Basic TF-IDF scoring
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That breaks down quickly in real systems.&lt;/p&gt;
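&lt;p&gt;A toy sketch shows why. With ranking driven by raw query-term counts (hypothetical documents; real TF-IDF adds normalization but shares the same bias), a term-stuffed page beats a concise, useful one:&lt;/p&gt;

```python
# Rank documents purely by how often the query terms occur in them.
# The documents are hypothetical; the bias is the point.

def keyword_density_rank(query, docs):
    terms = query.lower().split()
    scores = {name: sum(text.lower().split().count(t) for t in terms)
              for name, text in docs.items()}
    return sorted(scores, key=scores.get, reverse=True)

docs = {
    "changelog": "authentication fix authentication patch authentication "
                 "update authentication note authentication",
    "auth guide": "how to set up api authentication for your service",
}

ranking = keyword_density_rank("api authentication", docs)
print(ranking)  # -> ['changelog', 'auth guide']
```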
&lt;h3&gt;
  
  
  Real Edge Case: Documentation Search
&lt;/h3&gt;

&lt;p&gt;I implemented Azure AI Search for a SaaS documentation site.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Most searched term: &lt;strong&gt;“API authentication”&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Top result: a 3-year-old changelog mentioning auth&lt;/li&gt;
&lt;li&gt;The actual authentication guide? Buried on page 2&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The default ranking favored keyword density and recency—&lt;em&gt;not usefulness.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  The Fix: Custom Scoring Profiles
&lt;/h3&gt;

&lt;p&gt;Azure AI Search lets you define what &lt;strong&gt;relevance actually means&lt;/strong&gt; for &lt;em&gt;your&lt;/em&gt; product.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"docRelevance"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"text"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"weights"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;3.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;2.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"content"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;1.0&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"functions"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"freshness"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"fieldName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"lastModified"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"boost"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;2.0&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"magnitude"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"fieldName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"pageViews"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"boost"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;1.5&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
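&lt;p&gt;To make the profile concrete, here is a hypothetical offline approximation of that weighting logic. The field weights and boost values mirror the JSON above, but the formula itself is illustrative; it is not Azure’s actual scoring function.&lt;/p&gt;

```python
from datetime import datetime, timezone

# Rough analogue of the "docRelevance" profile: weighted field matches,
# a freshness boost that decays over a year, and a page-view boost.
# NOT Azure's real scoring math; documents and numbers are hypothetical.

FIELD_WEIGHTS = {"title": 3.0, "description": 2.0, "content": 1.0}

def profile_score(doc, query, now, max_views):
    terms = query.lower().split()
    base = sum(
        w * sum(doc[f].lower().split().count(t) for t in terms)
        for f, w in FIELD_WEIGHTS.items()
    )
    age_days = (now - doc["lastModified"]).days
    freshness = 2.0 * max(0.0, 1 - age_days / 365)   # boost 2.0, linear decay
    popularity = 1.5 * doc["pageViews"] / max_views  # boost 1.5
    return base * (1 + freshness + popularity)

now = datetime(2026, 1, 1, tzinfo=timezone.utc)
guide = {"title": "API Authentication Guide",
         "description": "Set up API authentication",
         "content": "walkthrough with token examples",
         "lastModified": datetime(2025, 11, 1, tzinfo=timezone.utc),
         "pageViews": 900}
changelog = {"title": "Changelog 2023",
             "description": "Release notes",
             "content": "authentication authentication authentication authentication",
             "lastModified": datetime(2023, 1, 1, tzinfo=timezone.utc),
             "pageViews": 100}

guide_score = profile_score(guide, "api authentication", now, 900)
changelog_score = profile_score(changelog, "api authentication", now, 900)
print(guide_score > changelog_score)  # the guide now outranks the changelog
```

&lt;p&gt;At query time you would reference the profile by name (for example via the scoringProfile query parameter, or the SDK equivalent) rather than computing anything yourself.&lt;/p&gt;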



&lt;h2&gt;
  
  
  Why Most Search Features Fail (And How Azure AI Search Fixes It)
&lt;/h2&gt;

&lt;p&gt;Most teams treat search like infrastructure.&lt;/p&gt;

&lt;p&gt;Set it up once. Ship it. Forget it.&lt;/p&gt;

&lt;p&gt;That’s a mistake — and it’s why users abandon searches, open support tickets, and quietly lose trust in your product.&lt;/p&gt;

&lt;p&gt;Here’s what &lt;em&gt;actually&lt;/em&gt; works.&lt;/p&gt;




&lt;h2&gt;
  
  
  What These Changes Do
&lt;/h2&gt;

&lt;p&gt;We made three targeted ranking adjustments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Boosted title matches 3×&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Users searching &lt;em&gt;“API authentication”&lt;/em&gt; now see documents with that phrase in the title first.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Considered freshness — without letting it dominate&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
New content matters, but not at the expense of relevance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Factored in page views&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Popular documents are usually popular for a reason.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Result After Tuning
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;📈 Authentication guide moved to &lt;strong&gt;position #1&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;🔻 Search abandonment dropped &lt;strong&gt;41%&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;🎫 Support tickets about &lt;em&gt;“can’t find docs”&lt;/em&gt; decreased significantly&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The #1 Mistake: Treating Search as “Set and Forget”
&lt;/h2&gt;

&lt;p&gt;Here’s the hard truth most teams learn too late:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Search quality degrades over time.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You launch with perfect indexing and well-tuned ranking. Everything works beautifully.&lt;/p&gt;

&lt;p&gt;Then reality sets in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User behavior shifts&lt;/li&gt;
&lt;li&gt;Content grows and changes&lt;/li&gt;
&lt;li&gt;New synonyms emerge&lt;/li&gt;
&lt;li&gt;Query patterns evolve&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What worked on day 1 quietly fails by day 180.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Solution: Built-In Analytics + Continuous Optimization
&lt;/h2&gt;

&lt;p&gt;This is where &lt;strong&gt;Azure AI Search&lt;/strong&gt; quietly shines.&lt;/p&gt;

&lt;p&gt;It includes analytics that surface:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Top queries with no results&lt;/strong&gt; → content gaps&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High-abandon searches&lt;/strong&gt; → relevance problems&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Click-through rate by position&lt;/strong&gt; → ranking effectiveness&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Query refinements&lt;/strong&gt; → UX issues&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This feedback loop is what separates &lt;em&gt;good&lt;/em&gt; search from &lt;em&gt;great&lt;/em&gt; search.&lt;/p&gt;
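&lt;p&gt;None of these signals require heavy tooling to reason about. A sketch of how each is derived from a hypothetical search log:&lt;/p&gt;

```python
from collections import Counter

# Hypothetical log rows: (query, result_count, clicked_position or None).
log = [
    ("api authentication", 12, 1),
    ("webhook retries", 0, None),
    ("api authentication", 12, 2),
    ("webhook retries", 0, None),
    ("rate limits", 7, None),   # results shown, nothing clicked: abandoned
]

# Top queries with no results -> content gaps
zero_result = Counter(q for q, n, _ in log if n == 0)
# Searches with results but no click -> relevance problems
abandoned = Counter(q for q, n, pos in log if n > 0 and pos is None)
# Clicks by result position -> ranking effectiveness
clicks_by_position = Counter(pos for _, _, pos in log if pos is not None)

print(zero_result.most_common(1))   # -> [('webhook retries', 2)]
print(abandoned.most_common(1))     # -> [('rate limits', 1)]
print(dict(clicks_by_position))     # -> {1: 1, 2: 1}
```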




&lt;h2&gt;
  
  
  The Implementation Checklist You Actually Need
&lt;/h2&gt;

&lt;p&gt;Most guides stop at:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;create service → define index → ingest data → query&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That’s not production-ready search.&lt;/p&gt;

&lt;h3&gt;
  
  
  Before You Start
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Map &lt;strong&gt;user intent patterns&lt;/strong&gt; (not just keywords)&lt;/li&gt;
&lt;li&gt;[ ] Identify natural &lt;strong&gt;content categories&lt;/strong&gt; (for facets)&lt;/li&gt;
&lt;li&gt;[ ] Define what &lt;strong&gt;“relevance”&lt;/strong&gt; means for &lt;em&gt;your&lt;/em&gt; product&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  During Setup
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Create &lt;strong&gt;custom scoring profiles&lt;/strong&gt; (don’t rely on defaults)&lt;/li&gt;
&lt;li&gt;[ ] Configure &lt;strong&gt;synonym maps&lt;/strong&gt; for your domain language&lt;/li&gt;
&lt;li&gt;[ ] Set up &lt;strong&gt;faceted navigation&lt;/strong&gt; for top 3–5 attributes&lt;/li&gt;
&lt;li&gt;[ ] Enable &lt;strong&gt;analytics from day one&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  After Launch
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Review &lt;strong&gt;zero-result queries&lt;/strong&gt; weekly&lt;/li&gt;
&lt;li&gt;[ ] Audit top 20 queries monthly&lt;/li&gt;
&lt;li&gt;[ ] A/B test ranking changes&lt;/li&gt;
&lt;li&gt;[ ] Update synonym maps based on real user language&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Real Power of Azure AI Search
&lt;/h2&gt;

&lt;p&gt;Here’s what most documentation won’t tell you:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Azure AI Search isn’t just search. It’s an intelligence layer.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It changes how users discover information — from frustrating keyword hunting to intuitive, context-aware exploration.&lt;/p&gt;

&lt;p&gt;The difference looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;❌ Users abandoning your app&lt;br&gt;&lt;br&gt;
✅ Users engaging with it&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;❌ Support drowning in &lt;em&gt;“I can’t find X”&lt;/em&gt;&lt;br&gt;&lt;br&gt;
✅ Users self-serving successfully&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;❌ Maintaining custom search code&lt;br&gt;&lt;br&gt;
✅ Configuring a managed service&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Myth That Holds Teams Back
&lt;/h2&gt;

&lt;p&gt;Most teams assume great search requires:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ML expertise
&lt;/li&gt;
&lt;li&gt;Months of tuning
&lt;/li&gt;
&lt;li&gt;Custom infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The reality:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Azure AI Search delivers production-grade, AI-powered search — &lt;strong&gt;if you configure it correctly&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Final Thought
&lt;/h2&gt;

&lt;p&gt;Search isn’t just a feature.&lt;/p&gt;

&lt;p&gt;It’s often the difference between users finding value — or giving up and leaving.&lt;/p&gt;

&lt;p&gt;The real question isn’t whether you should invest in better search.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It’s whether you can afford not to.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>ai</category>
    </item>
    <item>
      <title>Turn Any REST API into an MCP Server with Azure API Management</title>
      <dc:creator>Florian Lenz</dc:creator>
      <pubDate>Tue, 30 Dec 2025 16:05:03 +0000</pubDate>
      <link>https://dev.to/florianlenz/turn-any-rest-api-into-an-mcp-server-with-azure-api-management-1in5</link>
      <guid>https://dev.to/florianlenz/turn-any-rest-api-into-an-mcp-server-with-azure-api-management-1in5</guid>
      <description>&lt;p&gt;&lt;em&gt;Unlock the power of your existing APIs — make them AI-agent friendly, discoverable, and usable by intelligent applications with just a few clicks.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Modern applications aren’t just about serving data to front-end clients or backend services anymore. With the rise of AI agents and powerful LLM-driven tools, the expectations for how systems should expose and consume APIs are shifting fast. &lt;strong&gt;APIs that once served developers now need to serve AI agents. And that’s where the Model Context Protocol (MCP)&lt;/strong&gt; comes in.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this post, you’ll learn why MCP matters, how Azure API Management (APIM) can act as an AI-Gateway, and how you can turn any REST API into an MCP server in minutes, without writing any backend logic.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h5&gt;
  
  
  &lt;em&gt;If you found this useful, I share deeper dives and additional articles on &lt;a href="https://techworldofflorian.substack.com/" rel="noopener noreferrer"&gt;Substack&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/techworldofflorian/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;. Feel free to follow me there for more content.&lt;/em&gt;
&lt;/h5&gt;

&lt;h2&gt;
  
  
  What Is Azure API Management?
&lt;/h2&gt;

&lt;p&gt;Azure API Management (APIM) is a fully managed service from Microsoft that sits in front of your APIs and acts as a gateway between clients and backend services.&lt;/p&gt;

&lt;p&gt;At its core, APIM lets you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Publish existing APIs without changing backend code&lt;/li&gt;
&lt;li&gt;Secure APIs with authentication, authorization, and rate limits&lt;/li&gt;
&lt;li&gt;Transform requests and responses (headers, paths, payloads)&lt;/li&gt;
&lt;li&gt;Monitor usage, performance, and failures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Traditionally, APIM is used to expose APIs to developers in a controlled, scalable way. But the same gateway capabilities also make it ideal for AI-driven use cases. With APIM, you can shape how APIs are discovered, described, and invoked, without touching the underlying service.&lt;/p&gt;

&lt;p&gt;That’s exactly why APIM works so well as an AI Gateway: it already understands APIs at the contract level and can enforce policies consistently at the edge.&lt;/p&gt;
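&lt;p&gt;To make “enforce policies at the edge” tangible, here is a minimal inbound policy sketch. The rate-limit numbers and the header name are illustrative, not recommendations:&lt;/p&gt;

```xml
<policies>
  <inbound>
    <base />
    <!-- Throttle each subscription: 100 calls per 60 seconds (illustrative) -->
    <rate-limit calls="100" renewal-period="60" />
    <!-- Strip a hypothetical internal header before the backend sees it -->
    <set-header name="X-Internal-Trace" exists-action="delete" />
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
  </outbound>
  <on-error>
    <base />
  </on-error>
</policies>
```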

&lt;h2&gt;
  
  
  What Is an MCP Server?
&lt;/h2&gt;

&lt;p&gt;MCP (Model Context Protocol) is a protocol designed to make tools and APIs &lt;strong&gt;natively usable by AI agents&lt;/strong&gt;. Instead of treating APIs as opaque HTTP endpoints, MCP defines a structured way to expose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What capabilities are available&lt;/li&gt;
&lt;li&gt;What inputs each operation expects&lt;/li&gt;
&lt;li&gt;What outputs it returns&lt;/li&gt;
&lt;li&gt;How an AI agent should call it safely and correctly&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An MCP server is simply a service that exposes these capabilities in an MCP-compatible way. You can think of it like &lt;strong&gt;USB for AI tools&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsunpyewlnri5f4sukfq2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsunpyewlnri5f4sukfq2.jpg" alt="Architecture Overview of MCP Server" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Just as USB provides a standard interface that lets any compatible device plug into any computer without custom drivers, MCP provides a standard way for AI agents to discover, understand, and use tools, databases, APIs, etc. Once an API is available via MCP, AI agents can discover it, reason about it, and invoke it as a tool, without custom glue code or hard-coded prompts.&lt;/p&gt;
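&lt;p&gt;Concretely, that discovery happens over JSON-RPC 2.0: the host sends a tools/list request and gets back tool descriptions with input schemas. The sketch below simulates the exchange in Python; the response shape and tool names are hypothetical, not captured from a real server.&lt;/p&gt;

```python
import json

# Simulated MCP discovery round-trip (JSON-RPC 2.0).
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Hypothetical response for the SWAPI tools exposed earlier.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {"name": "get_people", "description": "List Star Wars characters",
             "inputSchema": {"type": "object", "properties": {}}},
            {"name": "get_planets", "description": "List Star Wars planets",
             "inputSchema": {"type": "object", "properties": {}}},
        ]
    },
}

# An agent reads tool names and schemas instead of hard-coding endpoints.
names = [t["name"] for t in response["result"]["tools"]]
print(json.dumps(request))
print(names)  # -> ['get_people', 'get_planets']
```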

&lt;p&gt;The key idea is this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;MCP turns APIs into first-class tools for AI agents.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;By combining MCP with Azure API Management, you can wrap existing REST APIs and instantly make them AI-ready—without rewriting services, adding new backends, or maintaining custom adapters.&lt;/p&gt;

&lt;h2&gt;
  
  
  From Zero to MCP in Azure API Management
&lt;/h2&gt;

&lt;p&gt;Now here’s the practical part: how I turned a normal REST API (the Star Wars API) into an MCP server using Azure API Management (APIM), then connected it to ChatGPT and verified the calls end-to-end.&lt;/p&gt;

&lt;p&gt;If you want to follow along, you only need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An Azure subscription&lt;/li&gt;
&lt;li&gt;Permission to create an API Management instance&lt;/li&gt;
&lt;li&gt;A public REST API (I used SWAPI because it’s free: &lt;a href="https://swapi.info/" rel="noopener noreferrer"&gt;https://swapi.info/&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1) Create a new Azure API Management instance
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to the Azure Portal → Create a resource → search for API Management.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Fill in the basics&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Resource group: create new (e.g., &lt;code&gt;rg-mcp-demo&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Region: pick whatever’s closest&lt;/li&gt;
&lt;li&gt;Name: must be globally unique (this becomes part of your gateway hostname)&lt;/li&gt;
&lt;li&gt;Organization name / admin email: required&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Choose a pricing tier that fits your demo:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For experiments, pick a developer-friendly option (but not Consumption).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create and wait until the APIM instance is provisioned.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: MCP server export is currently not available when using the Consumption pricing tier in Azure API Management.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2xwg6ax67uch6wkqd5l6.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2xwg6ax67uch6wkqd5l6.webp" alt="Creation of Azure API Management" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2) Create a new HTTP API in APIM (using SWAPI)
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In your APIM instance, go to APIs → + Add API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose HTTP.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Display name: SWAPI&lt;/li&gt;
&lt;li&gt;Web service URL: &lt;a href="https://swapi.info/api/" rel="noopener noreferrer"&gt;https://swapi.info/api/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;API URL suffix (optional): something like swapi (this becomes /swapi/... on your gateway)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy48kzbol9mg9rymtxbjh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy48kzbol9mg9rymtxbjh.png" alt="Create API Management HTTP API" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3) Add the GET operations you want to expose
&lt;/h2&gt;

&lt;p&gt;Inside your SWAPI API in APIM:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click + Add operation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create one operation per resource (GET):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GET /people&lt;/li&gt;
&lt;li&gt;GET /planets&lt;/li&gt;
&lt;li&gt;GET /species&lt;/li&gt;
&lt;li&gt;GET /vehicles&lt;/li&gt;
&lt;li&gt;GET /starships&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Save&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqm2rrjazsoj59jc10r2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqm2rrjazsoj59jc10r2.png" alt="Create API Management Operations" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4) Test the API from APIM
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;In the operation (example: &lt;code&gt;GET /people&lt;/code&gt;), open the Test tab.&lt;/li&gt;
&lt;li&gt;Click Send.&lt;/li&gt;
&lt;li&gt;Confirm you get a valid JSON response.&lt;/li&gt;
&lt;/ol&gt;
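&lt;p&gt;Outside the portal, a client calls the same operation through the gateway hostname, passing its key in the Ocp-Apim-Subscription-Key header. A sketch with placeholder hostname and key (the request is built but deliberately not sent):&lt;/p&gt;

```python
import urllib.request

# Placeholders: substitute your APIM gateway name and a valid
# subscription key. APIM reads the key from this header by default.
gateway_url = "https://YOUR-APIM-NAME.azure-api.net/swapi/people"
req = urllib.request.Request(
    gateway_url,
    headers={"Ocp-Apim-Subscription-Key": "YOUR-KEY"},
)

print(req.full_url)
# Sending would be: urllib.request.urlopen(req)  (omitted here: no network)
```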

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fow8id0twrt5nadzbibul.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fow8id0twrt5nadzbibul.png" alt="Testing of API Management API" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5) Create an MCP server from the existing API (APIM → MCP)
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In your APIM instance menu, go to APIs → MCP servers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click + Create MCP server.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the option to Expose an API as an MCP server.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The API you created (&lt;code&gt;SWAPI&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;The operations you want to expose as tools (select all the GET operations you added)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;APIM will generate an MCP endpoint that describes your tools and supports agent-style invocation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3cu8cndihvk4r527a28r.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3cu8cndihvk4r527a28r.jpg" alt="Create MCP Server via Azure API Management" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6) Use the MCP server in an AI host
&lt;/h2&gt;

&lt;p&gt;That’s it! Your API is now MCP-enabled.&lt;/p&gt;

&lt;p&gt;You can now use the MCP server in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ChatGPT&lt;/li&gt;
&lt;li&gt;Visual Studio Code&lt;/li&gt;
&lt;li&gt;and any MCP-compatible host or agent&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The host connects to the MCP server URL, discovers the available tools, and can immediately start calling your API, with no custom prompts, glue code, or backend changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsthnqxaqlf2lyfiksr1o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsthnqxaqlf2lyfiksr1o.png" alt="Use MCP Server" width="800" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Where This Pattern Really Shines
&lt;/h2&gt;

&lt;p&gt;At a glance, turning a REST API into an MCP server might look like a convenience feature. In practice, it’s a shift in how APIs participate in modern systems.&lt;/p&gt;

&lt;p&gt;By putting Azure API Management in front of your services and exporting them as MCP servers, you’re creating a &lt;strong&gt;stable, contract-driven interface not just for developers, but for AI agents&lt;/strong&gt;. That matters because agents don’t behave like traditional clients:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They discover tools dynamically&lt;/li&gt;
&lt;li&gt;They reason about capabilities instead of endpoints&lt;/li&gt;
&lt;li&gt;They chain calls together without hard-coded flows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;APIM already solves the hardest parts of this problem: versioning, security, throttling, and observability. MCP simply gives those capabilities a language AI agents understand.&lt;/p&gt;

&lt;p&gt;This pattern is especially powerful in a few scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise APIs&lt;/strong&gt; that can’t be easily modified but need to be AI-accessible&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Legacy systems&lt;/strong&gt; where adding agent logic directly would be risky or slow&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Platform teams&lt;/strong&gt; that want a single, governed way to expose tools to AI across the organization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of every team building custom “AI adapters,” you centralize the responsibility at the gateway, where it belongs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Important Things to Keep in Mind
&lt;/h2&gt;

&lt;p&gt;Before you go all-in, a few practical considerations are worth calling out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pricing tier matters&lt;/strong&gt;: MCP export is not supported on the Consumption tier. Plan accordingly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool design still matters&lt;/strong&gt;: MCP doesn’t fix poorly designed APIs. Clear operation names, sensible inputs, and predictable outputs make a huge difference for agents.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security is amplified&lt;/strong&gt;: AI agents can call tools more frequently and creatively than humans. Rate limits, authentication, and scopes aren’t optional; they’re essential.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Observability becomes critical&lt;/strong&gt;: MCP makes it easier to call APIs; APIM makes it easier to see who called what and why. Use that data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most importantly, treat MCP exposure as a product decision, not just a technical switch. You’re defining how intelligent systems interact with your business logic.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>ai</category>
      <category>mcp</category>
      <category>api</category>
    </item>
    <item>
      <title>Terraform testing with Open Policy Agent and Conftest: Secure infrastructure through Terraform testing</title>
      <dc:creator>Florian Lenz</dc:creator>
      <pubDate>Tue, 02 Dec 2025 09:38:37 +0000</pubDate>
      <link>https://dev.to/florianlenz/terraform-testing-with-open-policy-agent-and-conftest-secure-infrastructure-through-terraform-3fk4</link>
      <guid>https://dev.to/florianlenz/terraform-testing-with-open-policy-agent-and-conftest-secure-infrastructure-through-terraform-3fk4</guid>
      <description>&lt;p&gt;Terraform has established itself as the leading tool for infrastructure as code: modules describe resources, plans show the planned changes, and an apply implements the configuration in the cloud. In practice, however, Terraform configurations are often only checked briefly, perhaps using terraform validate, and the rest of the code relies on peer reviews and good intentions. This is not enough, especially in regulated industries or security-critical projects. Errors such as incorrectly set defaults, publicly accessible resources, or missing encryption can lead to data breaches and high costs. In this article, you will learn about an approach that allows you to consistently test Terraform configurations—without creating real resources. The basis is the JSON output of terraform plan, which is checked against defined rules using &lt;a href="https://www.openpolicyagent.org/" rel="noopener noreferrer"&gt;Open Policy Agent (OPA)&lt;/a&gt; and &lt;a href="https://www.conftest.dev/" rel="noopener noreferrer"&gt;Conftest&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is testing Terraform configurations important?
&lt;/h2&gt;

&lt;p&gt;Terraform defines productive infrastructure. A single configuration error can make a database accessible worldwide, disable encryption, or sabotage cost control due to missing tags. According to recent cloud security studies, 80% of companies were affected by cloud security incidents last year, and Gartner predicts that by 2025, 99% of cloud security failures will be the customer's fault, with misconfigurations a leading cause: more than 32% of reported incidents stem directly from them. These figures illustrate that infrastructure code must be subjected to the same rigorous testing as application code.&lt;/p&gt;

&lt;p&gt;Latent misconfigurations that lie dormant in existing resources are particularly dangerous. For example, an Azure Storage account is publicly accessible by default: the public_network_access_enabled attribute defaults to true. If this property is not explicitly set to false, the storage account is openly reachable. The same applies to AWS S3 buckets, security groups with open ports, or missing encryption. In large teams with many modules and different environments (Dev, QA, Prod), manually checking plans quickly becomes confusing. That's why we need an automated, repeatable testing approach that takes effect before execution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations of classic terraform test approaches
&lt;/h2&gt;

&lt;p&gt;Since version 1.6, Terraform has included its own testing mechanism (terraform test). On paper, this sounds appealing, but in practice, weaknesses quickly become apparent. Testing with terraform test is an additional step in the workflow: in addition to plan and apply, another command must be integrated into the CI/CD pipeline, maintained, and understood. Many tests also require resources to be actually created in a test environment. This means additional cloud costs, authorization effort, and potentially long runtimes. In strictly regulated areas, this is often hard to get approved.&lt;br&gt;
Although efforts are being made to run tests based on the plan, these functions are still in their infancy and offer only limited support for complex compliance rules. The added value compared to a direct evaluation of the JSON plan remains low.&lt;/p&gt;

&lt;p&gt;Another disadvantage is that terraform test is closely tied to the Terraform world. The rules are difficult to transfer to other systems, resulting in an isolated solution. For organizations that use Kubernetes manifests, Helm charts, or other IaC formats in addition to Terraform, this leads to fragmented governance approaches. There is no uniform policy level. This is where OPA and Conftest come in.&lt;/p&gt;
&lt;h2&gt;
  
  
  Policy-as-Code and Open Policy Agent
&lt;/h2&gt;

&lt;p&gt;Policy-as-Code (PaC) is the principle of defining policies as versionable code and enforcing them automatically. Instead of relying on manual checks or written guidelines, rules are formulated in a declarative language. The Open Policy Agent (OPA) is a universal policy engine that evaluates these rules in the Rego language. OPA is used for Kubernetes admission controllers, API authorization, and cloud governance, among other things. Crucially, OPA is data-driven: it accepts any JSON and returns a decision.&lt;/p&gt;

&lt;p&gt;The core of the approach is simple: instead of just checking the planned changes, we validate the complete future state of the infrastructure before resources are created. Terraform provides everything we need for this. With terraform plan -out=tfplan, we generate a binary plan and then convert it into a machine-readable JSON file with terraform show -json. This JSON contains the current state, the planned changes, and the resulting end state, including modules, defaults, and dependencies.&lt;/p&gt;

&lt;p&gt;Conftest is a CLI wrapper around OPA that is specifically designed for checking structured files such as JSON, YAML, or HCL. A policy in Rego format is stored in a policy/ folder. When running conftest test against the JSON plan file, Conftest evaluates this policy and reports violations. This allows policies for Terraform, Kubernetes, and other formats to be bundled into a single tool.&lt;/p&gt;
&lt;h3&gt;
  
  
  Pipeline integration
&lt;/h3&gt;

&lt;p&gt;In a typical CI/CD pipeline, the process looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create plan: terraform plan -out=tfplan.binary generates a binary plan file.&lt;/li&gt;
&lt;li&gt;Convert plan: terraform show -json tfplan.binary &amp;gt; tfplan.json converts the plan to JSON.&lt;/li&gt;
&lt;li&gt;Evaluate policies: conftest test tfplan.json executes the Rego rules against the JSON. Each deny rule results in an error in the pipeline job.&lt;/li&gt;
&lt;/ol&gt;
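
&lt;p&gt;Put together, the three steps above can be sketched as a minimal pipeline script (file names are illustrative; the script assumes terraform and conftest are installed and the working directory contains the configuration and a policy/ folder):&lt;/p&gt;

```shell
#!/usr/bin/env sh
# Sketch of the plan -> JSON -> policy-check flow for a CI job
set -e

terraform init -input=false
terraform plan -out=tfplan.binary -input=false

# Convert the binary plan into machine-readable JSON
terraform show -json tfplan.binary > tfplan.json

# Evaluate the Rego policies in ./policy against the plan;
# a non-zero exit code fails the pipeline job
conftest test tfplan.json
```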

&lt;p&gt;This approach has several advantages. First, there are no cloud costs because no resources are built. Second, the test can be run early in the pipeline—even before merge requests are accepted. Third, the same policies can also be used for other technologies.&lt;/p&gt;
&lt;h2&gt;
  
  
  Example: Azure Storage Account without public access
&lt;/h2&gt;

&lt;p&gt;A concrete example illustrates the method. Suppose that every Azure Storage account must disable public network access. In the Terraform module, we see a resource of type azurerm_storage_account:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_storage_account" "this" {
  name                     = var.storage_account_name
  resource_group_name      = var.resource_group_name
  location                 = var.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  public_network_access_enabled = false
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following Rego code defines a policy that ensures that public_network_access_enabled is set to false for every planned azurerm_storage_account resource change:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

deny[msg] {
  some rc in input.planned_values
  rc.type == "azurerm_storage_account"
  rc.change.actions[_] == "create"
  val := object.get(rc.change.after, "public_network_access_enabled", null)
  val != false
  msg := sprintf("%s: public_network_access_enabled muss false sein, gefunden %v", [rc.address, val])
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you run conftest test tfplan.json, the plan is checked against this rule. If public_network_access_enabled is not defined or is true, the test fails and outputs a clear error message. This prevents an insecure storage account from being accepted in the plan at all.&lt;/p&gt;

&lt;h2&gt;
  
  
  Extension to the entire future state
&lt;/h2&gt;

&lt;p&gt;The above example only checks for resource changes. However, in many projects, it is important to validate the entire future state. Existing resources that remain unchanged in the current plan may still be non-compliant. The JSON object planned_values describes the final state after apply. A policy that recursively runs through all modules and checks every azurerm_storage_account in the future state looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package terraform.azure.storage

# All storage accounts in planned state
storage_accounts contains sa if {
  # walk runs recursively through the entire object
  some path, node
  walk(input.planned_values.root_module, [path, node])

  # We are only interested in resource objects of type Storage Account
  node.type == “azurerm_storage_account”

  sa := node
}

# Violation if public_network_access_enabled is not exactly false
deny contains msg if {
  some i
  sa := storage_accounts[i]

  # Read value (null if not set)
  val := object.get(sa.values, “public_network_access_enabled”, null)

  # Anything other than false is prohibited (true or null / not set)
  val != false

  msg := sprintf(
    “Azure Storage Account ‘%s’ has invalid public_network_access_enabled value: %v”,
    [sa.name, val],
  )
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This rule runs through all modules, collects every storage resource, and checks the properties in the future state. This allows latent misconfigurations to be detected, even if the resource in question is not changed in the current plan.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integration into the CI/CD pipeline
&lt;/h2&gt;

&lt;p&gt;A solid pipeline should include the following steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code analysis &amp;amp; formatting: Use tools such as terraform fmt and tflint.&lt;/li&gt;
&lt;li&gt;Security scanners: Tools such as tfsec, Checkov, or Regula can perform static analyses based on known best practices.&lt;/li&gt;
&lt;li&gt;Policy checks with OPA/Conftest: Run the plan through OPA/Conftest before applying and block merge requests in case of violations. Do not commit the JSON plan file to the repository; generate it temporarily in the pipeline job.&lt;/li&gt;
&lt;li&gt;Drift detection: Use terraform plan regularly, even in production environments, to detect deviations between the code and the actual state. A comparison with OPA can indicate whether resources have been changed retrospectively.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this type of multi-stage pipeline design, you can combine code quality, security scanning, and policy enforcement. When choosing tools, compatibility and maintainability should be taken into account. OPA and Conftest can be integrated into GitHub Actions, GitLab CI, Jenkins, Azure DevOps, or Terraform Enterprise and generate machine-readable reports.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Infrastructure as code offers many advantages, but also carries risks. Traditional testing in Terraform is either too closely tied to the ecosystem or requires the creation of real resources. By combining terraform plan, Open Policy Agent, and Conftest, infrastructure plans can be reviewed early on, cost-neutrally, and comprehensively. This approach validates not only changes, but also the entire future state – a decisive advantage for audit and compliance requirements. At the same time, misconfigurations continue to be cited as one of the main causes of security incidents. That's why testing Terraform with OPA should be integrated into every modern DevOps pipeline. This turns “infrastructure as code” into “infrastructure with guarantees” – secure, compliant, and traceable.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What is the difference between terraform validate and OPA/Conftest tests?&lt;/strong&gt;&lt;br&gt;
terraform validate only checks the syntax and basic structure of the code. OPA/Conftest, on the other hand, enable semantic checks: they detect whether certain attributes are set correctly, tags are present, or security requirements are met. This allows compliance rules to be enforced automatically before resources are created.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do I have to apply the plan for OPA tests?&lt;/strong&gt;&lt;br&gt;
No. The big advantage of OPA/Conftest is that you can export the Terraform plan as JSON and check it without cloud access. No resources are created, so there are no costs. The plan represents the future state against which the policies are evaluated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I integrate OPA into my pipeline?&lt;/strong&gt;&lt;br&gt;
You can integrate OPA/Conftest into any CI/CD tools. In GitHub Actions, all you need is an additional job that runs terraform plan, terraform show -json, and conftest test. GitLab CI and Azure DevOps have similar mechanisms. It is important that the job aborts the build if conftest reports a violation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can the same policies also be used for Kubernetes or other tools?&lt;/strong&gt;&lt;br&gt;
Yes. OPA is not limited to Terraform. The policies check any JSON or YAML structures, such as Helm charts, Kubernetes manifests, or CloudFormation templates. This allows you to create a uniform governance layer across different platforms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What happens if my Terraform modules change?&lt;/strong&gt;&lt;br&gt;
Since the test is based on the JSON plan, it doesn't matter how your modules are structured internally. As long as the resources appear in the plan, they will be checked. For new modules, you may need to define additional rules to cover new resource types.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I deal with complex rules?&lt;/strong&gt;&lt;br&gt;
Rego is a powerful language, but it also allows for complexity. For complex rules, it is recommended to build policies in a modular way, use helper functions, and write your own tests for the policies. The OPA Playground and community rules (e.g., for Kubernetes) offer helpful examples.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>azure</category>
      <category>devops</category>
    </item>
    <item>
      <title>Azure Landing Zone Deployment: Private VNETs, Self-Hosted Agents &amp; Service Connections</title>
      <dc:creator>Florian Lenz</dc:creator>
      <pubDate>Sun, 13 Jul 2025 19:35:48 +0000</pubDate>
      <link>https://dev.to/florianlenz/azure-landing-zone-deployment-private-vnets-self-hosted-agents-service-connections-3ld5</link>
      <guid>https://dev.to/florianlenz/azure-landing-zone-deployment-private-vnets-self-hosted-agents-service-connections-3ld5</guid>
      <description>&lt;p&gt;Security plays an important role in most cloud projects. A central component of this is the so-called landing zone, which serves as the basis for all other workloads. An essential security aspect in such setups is to only grant public access in exceptional cases. But this is precisely where a challenge often arises: How can I continue to carry out deployments with Azure DevOps from a private network?&lt;/p&gt;

&lt;p&gt;Without public access, external tools such as Azure DevOps have no access to the resources in the private network.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ghtmo360eqkj2n1eryb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ghtmo360eqkj2n1eryb.png" alt="Example architecture of a private virtual network with an Azure DevOps connection" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, I will show you step by step how this problem can be solved. After reading it, you should be able to implement a similar architecture yourself.&lt;/p&gt;

&lt;h2&gt;
  
  
  Initial situation: No public access, no deployment?
&lt;/h2&gt;

&lt;p&gt;If we set up a private virtual network (VNET) in Azure and block public access, this initially means that no connections from the Internet to the resources are permitted.&lt;/p&gt;

&lt;p&gt;Publicly accessible endpoints always pose a security risk: They can potentially be reached by attackers from the internet, making attacks such as port scanning, brute force attacks or exploits against vulnerabilities possible. For business-critical applications or sensitive data in particular, it is therefore common practice to reduce the attack surface as much as possible by avoiding public access.&lt;/p&gt;

&lt;p&gt;A virtual network (VNET) in Azure is a logical isolation within the Azure cloud, comparable to a separate network segment in a traditional data center. You can place resources such as virtual machines (VMs), databases, app services or other services in a VNET and allow them to communicate with each other via private IP addresses.&lt;/p&gt;

&lt;p&gt;Network security groups (NSGs), private endpoints and your own routing rules can be used to precisely control which data traffic is permitted and which is not. The VNET acts like a security fence around all the resources it contains.&lt;/p&gt;

&lt;p&gt;If we consistently do without public access, services such as Azure DevOps initially face an insurmountable hurdle: by default, pipelines use Microsoft-hosted build agents that run outside our Azure environment. These agents cannot simply “reach into” our private VNET, as they have no access to the private IP addresses and no route into the internal network.&lt;/p&gt;

&lt;p&gt;As a result, deployments to private subnets, internal databases or protected API endpoints would fail - even though these resources are actually our target.&lt;/p&gt;

&lt;p&gt;We therefore need a way to carry out deployments without compromising the security approach. The solution: We bring the agent into the private VNET, i.e. to where our resources are located. In this way, we can carry out deployments internally and at the same time take full advantage of the benefits of an isolated network.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Set up self-hosted agent in the private VNET
&lt;/h2&gt;

&lt;p&gt;Even if the agent is located in the private network, it must be able to connect to Azure DevOps in order to execute builds and download artifacts. To do this, specific exceptions must be allowed in the firewall.&lt;/p&gt;

&lt;p&gt;Before you install the agent yourself, your VNET must be prepared accordingly. To do this, you should create a dedicated subnet in which the VM for the agent will later run. This subnet should be configured with appropriate security policies and cleanly separated from the rest of the network. A Network Security Group (NSG) regulates incoming and outgoing traffic here. Outgoing traffic is primarily relevant for the self-hosted agent, as it has to communicate with Azure DevOps. Incoming traffic usually only requires management access (e.g. via SSH or Azure Bastion) so that you can reach and maintain the VM in the event of an error.&lt;/p&gt;

&lt;p&gt;When creating the VM itself, it is important to choose a suitable size and image. In many cases, a medium-sized standard VM, such as the Standard_D2s_v3 type, is sufficient. You can choose between Windows Server or Linux (e.g. Ubuntu) as the operating system, depending on which tools and build environments you require. It is crucial that you mount the VM directly in the prepared subnet and do not assign a public IP address. This ensures that the VM is only accessible internally and does not offer any unnecessary attack surfaces to the outside world.&lt;/p&gt;

&lt;p&gt;In order for the agent to be able to communicate with Azure DevOps, it must be allowed to reach certain destinations on the internet despite the private network. This requires specific exceptions in the firewall or in the NSG. The VM requires access to at least:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.azure.com" rel="noopener noreferrer"&gt;https://dev.azure.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;https://*.dev.azure.com&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aex.dev.azure.com" rel="noopener noreferrer"&gt;https://aex.dev.azure.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/devops/organizations/security/allow-list-ip-url?view=azure-devops&amp;amp;tabs=IP-V4#allowed-domain-urls" rel="noopener noreferrer"&gt;weitere Domains&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Only with these exceptions in place can the agent accept builds, download artifacts, and send status messages back to Azure DevOps.&lt;/p&gt;
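
&lt;p&gt;As a rough sketch with the Azure CLI, such an outbound rule could look like this. The resource names are placeholders, and the AzureDevOps service tag is an assumption here; depending on your setup, an explicit URL/IP allow-list from the linked documentation may be the better fit:&lt;/p&gt;

```shell
# Sketch: allow outbound HTTPS from the agent subnet to Azure DevOps.
# rg-agents and nsg-agent-subnet are hypothetical names; verify that the
# AzureDevOps service tag is available in your environment before relying on it.
az network nsg rule create \
  --resource-group rg-agents \
  --nsg-name nsg-agent-subnet \
  --name Allow-AzureDevOps-Out \
  --priority 100 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443 \
  --destination-address-prefixes AzureDevOps
```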

&lt;h2&gt;
  
  
  Step 2: Install agent on the VM
&lt;/h2&gt;

&lt;p&gt;As soon as the VM and the firewall rules are in place, you can install the agent on the VM. To do this, download the agent package from Azure DevOps and register the agent via a personal access token (PAT) or via OAuth.&lt;/p&gt;

&lt;p&gt;Before you start the actual installation, you should make sure that the VM has a working connection to the internet and that the URLs mentioned in step 1 are accessible. In addition, the operating system must be up-to-date and PowerShell should be installed on Windows systems and an up-to-date shell environment on Linux systems. In many cases, the .NET Core Runtime is also required, especially for newer agent versions.&lt;/p&gt;

&lt;p&gt;To authenticate the agent to Azure DevOps, you first create a personal access token. This token is required later so that the agent can connect to your Azure DevOps organization. You create it in the Azure DevOps portal under your user account (top right), under Personal access tokens. When creating the token, assign a name, select the desired validity period and specify the required authorizations, in this case mainly Agent Pools (read &amp;amp; manage). After creating the token, copy it and keep it safe, as it is only displayed once.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwwqekdmqsypvuyimryg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwwqekdmqsypvuyimryg.png" alt="Personal Access Token with Agent Pools authorizations" width="800" height="1079"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to download the appropriate agent package. To do this, go to the project settings in Azure DevOps, open the Agent Pools section and select the pool in which the new agent should appear. If no pool exists yet, you can also create a new one here. After clicking on New agent, select your operating system and download the ZIP archive provided. Unzip this package into a local directory on the VM, for example C:\azagent on Windows or /opt/azagent on Linux.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkejeywdruh93kyaawvxp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkejeywdruh93kyaawvxp.png" alt="Agent Download" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then open a console with administrative rights on the VM, change to the unpacked directory and start the configuration script. Under Windows, this script is called config.cmd, under Linux or macOS config.sh. When you start it, you will be guided through a series of questions: First, enter the URL of your Azure DevOps server (e.g. &lt;a href="https://dev.azure.com/mein-unternehmen" rel="noopener noreferrer"&gt;https://dev.azure.com/mein-unternehmen&lt;/a&gt;). Then select the PAT authentication method and insert the previously created token. In the next step, select the agent pool in which the agent is to be registered and assign a name for the agent itself, such as private-vnet-agent-01. Optionally, you can define tags with which you can address the agent later in pipelines.&lt;/p&gt;

&lt;p&gt;After the wizard has run, you will be asked whether the agent should be set up as a service. You should definitely confirm this option, as the agent will be started automatically each time the VM is restarted and will remain permanently available. After completing the configuration, you can activate the service on Windows with .\svc install and .\svc start, on Linux accordingly with sudo ./svc.sh install and sudo ./svc.sh start.&lt;/p&gt;
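
&lt;p&gt;On a Linux VM, the download, configuration, and service setup described above boil down to a few commands. This is a sketch: the install directory is an example, and the exact download URL must be copied from the New agent dialog in Azure DevOps:&lt;/p&gt;

```shell
# Sketch of a Linux agent installation
mkdir -p /opt/azagent && cd /opt/azagent

# Download the agent package; copy the exact URL from the New agent dialog
curl -fsSL -o agent.tar.gz "<agent download URL from the New agent dialog>"
tar zxf agent.tar.gz

# Interactive configuration: server URL, PAT, agent pool, agent name
./config.sh

# Register and start the agent as a service so it survives VM restarts
sudo ./svc.sh install
sudo ./svc.sh start
```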

&lt;p&gt;Once all steps have been successfully completed, you can check in the Azure DevOps Portal whether the agent is displayed as online in the selected pool. From this moment on, the agent is ready for use, can accept build and release jobs and has access to internal resources via the private VNET without these having to be publicly accessible.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypxu9xx00gijpcis0vox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypxu9xx00gijpcis0vox.png" alt="Check agent status in Azure DevOps" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Create app registration and service connection
&lt;/h2&gt;

&lt;p&gt;What we are still missing is authentication against Azure. So far, we have only created the agent that runs in the private network and executes our pipeline steps locally. However, in order for this agent to be able to create, change or delete resources in Azure - for example, deploy an app service instance or configure a database - it needs authorizations. This is exactly where the service connection comes into play.&lt;/p&gt;

&lt;p&gt;At this point, it is important to clearly understand the difference between agent and service connection: The agent is the “worker” that does the actual work, i.e. performs builds, tests and deployments. The service connection, on the other hand, is the “key” that gives this worker access to Azure resources. Without this key, the agent could theoretically issue commands, but would not have any rights to change anything in Azure.&lt;/p&gt;

&lt;p&gt;The service connection is based on an app registration in Microsoft Entra ID (formerly Azure AD). This app registration provides a technical identity that is secured by a secret. It enables Azure DevOps to log in securely to your Azure tenant without having to use a personal user or insecure access data.&lt;/p&gt;

&lt;p&gt;To create the app registration, first navigate to Microsoft Entra ID in the Azure portal and select the App registrations section. There you create a new app registration, assign a descriptive name, for example devops-service-connection, and register the application. In most cases, you can leave the options for supported account types at “This organization directory only”; a redirect URI is not required for this use case.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ygw4ze36j6e6yekp2x3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ygw4ze36j6e6yekp2x3.png" alt="Creating an AppRegistration in Azure EntraID" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After registering, you must create a secret that serves as the password for this technical identity. Under the menu item Certificates &amp;amp; secrets, you create a new secret, enter a description and select a suitable validity period. After creating it, you will receive a value, which you must copy immediately and save securely, as it will not be displayed again later.&lt;/p&gt;

&lt;p&gt;In addition to this secret, you will need two other IDs from the app registration: the application ID (client ID) and the directory ID (tenant ID). Both can be found directly in the app registration overview. Together with the secret, they form the access data that is later stored in Azure DevOps.&lt;/p&gt;

&lt;p&gt;In order for the app registration to actually be allowed to manage resources in Azure, it must be assigned the appropriate authorizations. To do this, you can either go to the entire subscription in the Azure portal or, if you want more granular control, to the respective resource group. Under Access Control (IAM), assign the Contributor role to the app registration. This role allows it to create, change and delete resources. In some cases, it may make sense to choose a more restrictive role or even define a custom role, depending on how exactly you want to restrict the authorizations.&lt;/p&gt;
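
&lt;p&gt;The app registration, secret, and role assignment can also be scripted with the Azure CLI. This is a sketch: the display names are the examples from this article, and the subscription and resource group IDs are placeholders:&lt;/p&gt;

```shell
# Sketch: create the technical identity and grant it Contributor on one
# resource group (assumes an authenticated Azure CLI session)
APP_ID=$(az ad app create --display-name devops-service-connection \
  --query appId -o tsv)
az ad sp create --id "$APP_ID"

# Create a secret for the service connection; copy the output immediately,
# it is only shown once
az ad app credential reset --id "$APP_ID" --display-name devops-secret

# Scope the Contributor role to a single resource group instead of the
# whole subscription for more granular control
az role assignment create \
  --assignee "$APP_ID" \
  --role Contributor \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```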

&lt;p&gt;Once the app registration and authorizations have been prepared, the next step is to set up the service connection in Azure DevOps. To do this, open your project in Azure DevOps, go to the project settings, then to the Service connections section and create a new connection of the type Azure Resource Manager. Select the Service principal (manual) option here.&lt;/p&gt;

&lt;p&gt;In the next step, enter all previously saved information: the subscription ID, the name of the subscription, the application ID (client ID), the secret and the directory ID (tenant ID). After saving, Azure DevOps automatically checks the connection. If everything is configured correctly, you can then name the service connection, for example Azure-Prod-Connection, and save it.&lt;/p&gt;

&lt;p&gt;From this point on, your pipeline has an authorized connection to Azure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55bazowizrluryqxo7jy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55bazowizrluryqxo7jy.png" alt="Successfully executed job of a self-hosted Azure DevOps agent" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom line
&lt;/h2&gt;

&lt;p&gt;By running a self-hosted agent in a private VNET, you can set up a secure, controlled deployment architecture that requires no public access. Combined with a cleanly configured service connection via an app registration, this provides the necessary flexibility for builds and deployments.&lt;/p&gt;

&lt;p&gt;Even though the initial outlay is somewhat higher, you gain considerably in security and stability over the long term - a decisive advantage for companies with strict security or compliance requirements.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
      <category>cloud</category>
      <category>landingzone</category>
    </item>
    <item>
      <title>Serverless for greenfield projects: How data-driven architectures are revolutionizing your software development</title>
      <dc:creator>Florian Lenz</dc:creator>
      <pubDate>Thu, 02 Jan 2025 19:26:35 +0000</pubDate>
      <link>https://dev.to/florianlenz/serverless-for-greenfield-projects-how-data-driven-architectures-are-revolutionizing-your-software-fm4</link>
      <guid>https://dev.to/florianlenz/serverless-for-greenfield-projects-how-data-driven-architectures-are-revolutionizing-your-software-fm4</guid>
      <description>&lt;h3&gt;
  
  
  tl;dr
&lt;/h3&gt;

&lt;p&gt;Serverless architectures are the ideal basis for greenfield projects. They allow a quick start, minimize costs thanks to the pay-per-use model and adapt dynamically to changing requirements. No user data yet? No problem: serverless helps you to gather valuable insights and optimize your infrastructure in a data-driven way. Instead of starting with an oversized architecture, you can rely on the KISS principle (“Keep it Simple, Stupid”) and save time and resources. Should your architecture need to change, the majority of your code remains reusable. Serverless offers flexibility, efficiency and a clear future perspective.&lt;/p&gt;

&lt;h2&gt;
  
  
  The growing demands on developers
&lt;/h2&gt;

&lt;p&gt;Software development teams face an almost paradoxical challenge: they are expected to bring innovative products to market in ever shorter timeframes while making do with limited resources. The reality, however, is often different. Developers lose valuable time administering infrastructure, while overprovisioned, unused resources strain the budget. This situation not only leads to frustration, but also inhibits innovation. The question we need to ask ourselves is: How can we shed this burden and concentrate fully on the essentials - creating added value?&lt;/p&gt;

&lt;p&gt;The role of the software developer has changed considerably in recent years. It is no longer just about writing code. Developers have to deal with a variety of additional tasks that used to be left to the operations teams.&lt;/p&gt;

&lt;p&gt;These include infrastructure management, setting up CI/CD pipelines, security monitoring and scaling applications - and these tasks often come at the expense of the actual development work. Surveys repeatedly suggest that developers spend only around 40% of their time building new features; the rest goes to administrative tasks and maintenance. As a result, innovation falls by the wayside and projects are delayed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless as the solution
&lt;/h2&gt;

&lt;p&gt;Serverless represents a new type of infrastructure management. It is not just a technical concept, but a completely new operating model that allows developers to focus on their core tasks. An illustrative example of this is Coca-Cola, which added a mobile application to its Freestyle vending machines during the COVID-19 pandemic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frsad9kaq2d3rxuf25o1w.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frsad9kaq2d3rxuf25o1w.jpg" alt="Coca-Cola Freestyle&amp;lt;br&amp;gt;
" width="800" height="500"&gt;&lt;/a&gt;&lt;em&gt;Coca-Cola Freestyle&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The Coca-Cola Freestyle machines offer consumers the opportunity to create their own individual drink from over 100 flavors. But with the pandemic came a new challenge: customers were afraid to touch the machines' touchscreens. To solve this problem and create an innovative customer experience at the same time, Coca-Cola developed an app that allows users to operate the machines touch-free via their smartphone.&lt;/p&gt;

&lt;p&gt;The app had to be developed quickly as the pandemic required an immediate solution. Coca-Cola opted for a serverless approach to save time and costs. Using services such as AWS Lambda and API Gateway, the company was able to deploy a working solution within a few weeks, and the pay-per-use model allowed Coca-Cola to pay only for actual usage.&lt;/p&gt;

&lt;p&gt;This was particularly important as it was not clear at the outset how many customers would actually use the app. Serverless ensured that no resources were wasted and no bottlenecks occurred. The speed to market and flexibility of the serverless architecture was crucial to regaining customer trust while delivering an innovative digital experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  The perfect solution for greenfield projects
&lt;/h2&gt;

&lt;p&gt;Serverless architectures are ideal for greenfield projects: They give developers the freedom to get started immediately without having to carry the burden of a complex infrastructure. Instead of investing weeks in setup and maintenance, teams can get straight down to the real work - developing features and innovations.&lt;/p&gt;

&lt;p&gt;The first advantage is obvious: cost control. With the pay-per-use model, you only pay for what you actually use. No idle time, no wasted resources. At the same time, the architecture adapts flexibly - be it to a sudden rush of new users or to times of low demand. Serverless ensures that your infrastructure scales with your project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwmqzhfckawdkoep8vlge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwmqzhfckawdkoep8vlge.png" alt="Comparison of resource allocation models: Over-provisioning, under-provisioning and pay-as-you-go" width="800" height="263"&gt;&lt;/a&gt;&lt;em&gt;Comparison of resource allocation models: Over-provisioning, under-provisioning and pay-as-you-go&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;But as is so often the case in software development, changes are inevitable. Requirements change: suddenly more users than expected, new features or unexpectedly high traffic peaks. This is exactly where serverless comes into its own - at least at the beginning. As soon as your data and metrics mature, it becomes clearer whether serverless is still optimal or whether a new architecture is necessary.&lt;/p&gt;

&lt;p&gt;And that's the crucial point: just because your requirements might change later, you shouldn't choose an oversized infrastructure from the outset. A key advantage of serverless is that it enables data-driven development of your architecture. In the initial phase of a project, there is often a lack of precise data on user behavior, load distribution and other critical factors. With serverless, you can gather initial insights without time-consuming preparations or investments in oversized infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwho2ws4t9hb4pdtci0s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwho2ws4t9hb4pdtci0s.png" alt="Monitoring of a serverless application with important information on access numbers, response times and load distribution (https://www.asserts.ai/blog/monitoring-aws-lambda/)" width="800" height="448"&gt;&lt;/a&gt;&lt;em&gt;Monitoring of a serverless application with important information on access numbers, response times and load distribution (&lt;a href="https://www.asserts.ai/blog/monitoring-aws-lambda/" rel="noopener noreferrer"&gt;https://www.asserts.ai/blog/monitoring-aws-lambda/&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As your project grows, you generate more and more valuable data and metrics. This data helps you to optimize your architecture in a targeted manner and make well-founded decisions. If it turns out that serverless is no longer optimal for your specific scenario, you can flexibly migrate to a new architecture - and reuse most of your code and logic in the process.&lt;/p&gt;

&lt;p&gt;The “KISS - Keep it Simple, Stupid” principle applies here: start simple. A lean serverless architecture not only reduces complexity, but also minimizes risk. It is better to start small and grow dynamically than to waste valuable time and resources on an oversized solution. Serverless is the perfect way to get started, giving you the flexibility to focus on the essentials while staying prepared for the future. And the best part?&lt;/p&gt;

&lt;p&gt;If your requirements change at some point and a different architecture becomes necessary, migration is often easier than you think. Most of your code and logic is retained and can be integrated into the new architecture. This is also borne out by the Amazon Prime Video audio/video monitoring service, whose team switched from a serverless to a monolithic architecture, as Allen Helton notes:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Conceptually, the high-level architecture remained the same. We still have exactly the same components as we had in the initial design (media conversion, detectors, or orchestration). This allowed us to reuse a lot of code and quickly migrate to a new architecture.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Serverless is therefore more than just a technical model - it is a way of thinking that enables teams to work faster and more agile, drive innovation and react flexibly to market changes.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>azure</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Azure Durable Task Scheduler advantages for Durable Functions</title>
      <dc:creator>Florian Lenz</dc:creator>
      <pubDate>Thu, 21 Nov 2024 21:46:30 +0000</pubDate>
      <link>https://dev.to/florianlenz/azure-durable-task-scheduler-advantages-for-durable-functions-2944</link>
      <guid>https://dev.to/florianlenz/azure-durable-task-scheduler-advantages-for-durable-functions-2944</guid>
      <description>&lt;p&gt;Managing complex, long-running processes is often a challenge in software development. Frameworks such as the Azure Durable Task Framework or Azure Durable Functions can help to orchestrate complex processes. Until now, however, they have only offered limited options for monitoring and analyzing processes. With the introduction of the Azure Durable Task Scheduler, significant progress has been made that considerably simplifies the development and monitoring of such workflows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuoepd4rppgy95d73pe85.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuoepd4rppgy95d73pe85.png" alt=" " width="624" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges in the orchestration of workflows
&lt;/h2&gt;

&lt;p&gt;The orchestration of workflows is a central component of modern software development, especially when it comes to complex, long-running processes. Although many frameworks offer solutions, there are often practical hurdles that lead developers to pursue alternative approaches.&lt;/p&gt;

&lt;h3&gt;
  
  
  Complexity of process control
&lt;/h3&gt;

&lt;p&gt;Controlling workflows quickly becomes a challenge as complexity increases. Processes must connect different services and systems and often take dependencies or conditional logic into account. Durable Functions promised to cope with this complexity, but in practice we often hit limits, especially around the traceability and control of processes.&lt;/p&gt;

&lt;p&gt;To get around this problem, I relied on &lt;strong&gt;choreography patterns&lt;/strong&gt; in almost all projects. Instead of using a central orchestration component, we had the individual services communicate with each other via event mechanisms such as &lt;strong&gt;Azure Service Bus&lt;/strong&gt; or &lt;strong&gt;Event Grid&lt;/strong&gt;. This led to greater modularity and decoupling of the services, making the processes more flexible. However, this approach also had disadvantages: without central control, we had to develop additional tools and logic to maintain an overview.&lt;/p&gt;
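&lt;p&gt;As a minimal sketch of such a choreography with the Azure.Messaging.ServiceBus SDK (the topic name and event payload are hypothetical), a service simply publishes an event and moves on; downstream services subscribe and react independently:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Publish an "OrderCreated" event; subscribers react on their own (choreography)
await using var client = new ServiceBusClient(connectionString);
ServiceBusSender sender = client.CreateSender("order-events"); // hypothetical topic

var message = new ServiceBusMessage(BinaryData.FromObjectAsJson(order))
{
    Subject = "OrderCreated" // lets subscribers filter by event type
};
await sender.SendMessageAsync(message);
&lt;/code&gt;&lt;/pre&gt;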

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcmpq808ps0zykcng479u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcmpq808ps0zykcng479u.png" alt=" " width="800" height="182"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Error handling and recovery
&lt;/h3&gt;

&lt;p&gt;Errors are one of the unavoidable challenges in workflow orchestration. Network problems, service failures or unforeseen error sources can bring any process to a standstill. Durable Functions promised a theoretical remedy here too, but in practice it was often unclear how a workflow could be cleanly restored without losing data.&lt;/p&gt;

&lt;p&gt;To overcome this challenge, we also relied on &lt;strong&gt;dead letter queues&lt;/strong&gt;. Faulty messages or events were stored in special queues, analyzed and reprocessed if necessary. Although this approach generally worked well, it required careful planning and high-maintenance implementations. It was also difficult to make the entire process traceable.&lt;/p&gt;
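&lt;p&gt;With the Azure.Messaging.ServiceBus SDK, inspecting such a dead letter queue looks roughly like this (the queue name is a placeholder):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Read from the dead-letter sub-queue of "order-events" to analyze failed messages
ServiceBusReceiver dlqReceiver = client.CreateReceiver(
    "order-events",
    new ServiceBusReceiverOptions { SubQueue = SubQueue.DeadLetter });

ServiceBusReceivedMessage failed = await dlqReceiver.ReceiveMessageAsync();
Console.WriteLine($"{failed.DeadLetterReason}: {failed.DeadLetterErrorDescription}");
// ...optionally fix the payload, resubmit it to the main queue and complete the message
&lt;/code&gt;&lt;/pre&gt;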

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2aqws0mssztqcwhsvfp1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2aqws0mssztqcwhsvfp1.png" alt=" " width="800" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Monitoring and debugging
&lt;/h3&gt;

&lt;p&gt;An often underestimated problem in workflow orchestration is monitoring. The ability to analyze running processes and quickly identify errors is crucial for maintenance and optimization. With Durable Functions, developers often felt “blind” in this respect. There was a lack of integrated dashboards or tools to gain a transparent insight into the status and progress of workflows.&lt;/p&gt;

&lt;p&gt;The challenges of orchestrating workflows are many and varied, and there is rarely a perfect solution. While Durable Functions addressed many of these problems in theory, in practice questions of transparency, control and monitoring often remained unanswered. Alternative approaches such as choreography patterns offered more insight, but led to increased development and infrastructure effort. The new &lt;strong&gt;Azure Durable Task Scheduler&lt;/strong&gt; promises to eliminate many of these weaknesses - a real game changer for long-running workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Azure Durable Task Scheduler
&lt;/h2&gt;

&lt;p&gt;With the introduction of the &lt;strong&gt;Azure Durable Task Scheduler&lt;/strong&gt;, Microsoft has taken a major step to fundamentally improve the way you work with &lt;strong&gt;Azure Durable Functions&lt;/strong&gt; and complex workflows. This new, fully Azure-managed backend provides developers with a robust platform to orchestrate long-running processes without having to deal with the previous limitations.&lt;/p&gt;

&lt;p&gt;The Azure Durable Task Scheduler is specifically designed to solve the challenges of state management, error handling and scaling for Durable Functions.&lt;/p&gt;

&lt;p&gt;With the Durable Task Scheduler, error handling is significantly simplified. The system can automatically reset processes to the last saved state so that workflows can continue to run seamlessly. This is particularly valuable for long-running processes where consistency and reliability are important. The robust recovery mechanisms reduce the effort that previously had to go into implementing custom solutions.&lt;/p&gt;
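&lt;p&gt;To illustrate what this checkpointing means in code, here is a minimal Durable Functions orchestrator sketch (the function and activity names are hypothetical). Each awaited activity result is persisted, so after a failure the orchestrator replays from the last saved state instead of starting over:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;[FunctionName("OrderWorkflow")]
public static async Task RunOrchestrator(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    var orderId = context.GetInput&amp;lt;string&amp;gt;();

    // Retry transient failures automatically: up to 3 attempts, 5 s apart
    var retry = new RetryOptions(TimeSpan.FromSeconds(5), 3);

    await context.CallActivityWithRetryAsync("ChargePayment", retry, orderId);
    // If the host crashes here, the completed "ChargePayment" step is not re-executed
    await context.CallActivityAsync("SendConfirmation", orderId);
}
&lt;/code&gt;&lt;/pre&gt;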

&lt;p&gt;By moving the storage and process logic to a specially developed backend, workflows benefit from significant performance improvements. The Task Scheduler automatically scales to cope with high loads while ensuring consistently high availability. Developers can rely on their processes remaining stable and performant regardless of the system load.&lt;/p&gt;

&lt;p&gt;One of the biggest new features of the Durable Task Scheduler is the &lt;strong&gt;Task Hub Dashboard&lt;/strong&gt;, an integrated monitoring tool that revolutionizes workflow monitoring and analysis. This dashboard enables developers to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Gain live insights&lt;/strong&gt; into running workflows, including all intermediate steps and statuses.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quickly identify errors&lt;/strong&gt; and take targeted measures to rectify them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Analyze process metrics&lt;/strong&gt; to identify bottlenecks or inefficient steps.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Where previously manual workarounds and external tools were often necessary, the dashboard provides a central point of contact for debugging and monitoring - directly integrated and easy to use.&lt;/p&gt;

&lt;p&gt;The Azure Durable Task Scheduler brings many improvements that are directly tailored to the needs of developers. It combines the advantages of a managed backend with a modern monitoring solution and a scalable architecture. For anyone who has struggled with the limitations of Durable Functions or had to develop their own solutions such as event-based choreographies, the Durable Task Scheduler offers real added value.&lt;/p&gt;

&lt;h2&gt;
  
  
  Task Hub Dashboard
&lt;/h2&gt;

&lt;p&gt;With the &lt;strong&gt;Task Hub Dashboard&lt;/strong&gt;, Microsoft has introduced a long-awaited functionality that finally gives developers the transparency they need over their workflows. While Durable Functions have always been powerful, debugging, monitoring and tracking processes have remained a challenge. The Task Hub Dashboard solves this problem with an intuitive user interface that provides insight into all running, completed or failed workflows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F07fsr9czxoo8wzz51a03.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F07fsr9czxoo8wzz51a03.png" alt=" " width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The dashboard serves as a central point of contact for everything to do with managing and analyzing Durable Functions. It provides developers with comprehensive information about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The current status of all workflows, including &lt;strong&gt;running&lt;/strong&gt;, &lt;strong&gt;completed&lt;/strong&gt; and &lt;strong&gt;failed&lt;/strong&gt; instances.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Detailed logs&lt;/strong&gt; and events documenting every step of the process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lead times&lt;/strong&gt;, bottlenecks and other important metrics for optimizing workflows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftimh7d9d6lub8jfahed5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftimh7d9d6lub8jfahed5.png" alt=" " width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The dashboard makes work much easier, especially when troubleshooting. Instead of having to fight your way through confusing logs or external tools, it provides all relevant information at a glance.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;timeline view&lt;/strong&gt; of the Task Hub Dashboard is a real highlight and sets new standards in the traceability of workflows. It displays the entire process history of an individual workflow instance as a timeline. This visualization offers several advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Chronological overview&lt;/strong&gt;: Every step, every event and every action is displayed in the order in which they were actually executed. This makes it easy for developers to follow the course of the process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Duration analysis&lt;/strong&gt;: In addition to the sequence, the timeline also shows the duration of each step, which helps to identify bottlenecks or inefficient parts of the workflow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Troubleshooting made easy&lt;/strong&gt;: Erroneous steps are highlighted directly so that developers can quickly identify where and why something went wrong.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feqwr9qtem11s2khc6rrx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feqwr9qtem11s2khc6rrx.png" alt=" " width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another function of the Task Hub Dashboard is the &lt;strong&gt;sequence diagram&lt;/strong&gt; overview. While the timeline offers a chronological perspective, the sequence diagram visualizes the &lt;strong&gt;logical sequence of interactions&lt;/strong&gt; within the workflow. This view is similar to classic UML sequence diagrams and offers numerous advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Transparency over interactions&lt;/strong&gt;: Developers can see how the different parts of the workflow - including external services or sub-processes - interact with each other.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visualization of dependencies&lt;/strong&gt;: Complex relationships between different process steps are clearly displayed, which helps with planning and optimization.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structure-level debugging&lt;/strong&gt;: If a step fails, the diagram shows not only the error, but also the affected downstream steps that depend on it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb6nepr0nw5b3fwy060u0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb6nepr0nw5b3fwy060u0.png" alt=" " width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When developing the dashboard, Microsoft took care to create a &lt;strong&gt;user-friendly interface&lt;/strong&gt; that remains clear even for complex workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With the &lt;strong&gt;Azure Durable Task Scheduler&lt;/strong&gt; and the associated &lt;strong&gt;Task Hub Dashboard&lt;/strong&gt;, Microsoft has delivered a crucial update for working with Durable Functions. These innovations directly address the weaknesses that have often led developers to avoid Durable Functions in the past or to resort to complicated workarounds.&lt;/p&gt;

&lt;p&gt;With these innovations, Azure Durable Functions is taking a big leap forward. The Durable Task Scheduler is an indispensable tool, especially for projects with complex, long-running processes or high transparency and scalability requirements.&lt;/p&gt;

&lt;p&gt;Would you like to try out the new features straight away? You can apply for early access here: &lt;a href="https://aka.ms/dts-ignite-signup" rel="noopener noreferrer"&gt;https://aka.ms/dts-ignite-signup&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.florian-lenz.io/blog/mehr-transparenz-und-kontrolle-fur-durable-functions-das-bringt-der-azure-durable-task-scheduler" rel="noopener noreferrer"&gt;https://www.florian-lenz.io/blog/mehr-transparenz-und-kontrolle-fur-durable-functions-das-bringt-der-azure-durable-task-scheduler&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://techcommunity.microsoft.com/blog/appsonazureblog/announcing-limited-early-access-of-the-durable-task-scheduler-for-azure-durable-/4286526" rel="noopener noreferrer"&gt;https://techcommunity.microsoft.com/blog/appsonazureblog/announcing-limited-early-access-of-the-durable-task-scheduler-for-azure-durable-/4286526&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://youtu.be/aJnZwU40K6A" rel="noopener noreferrer"&gt;https://youtu.be/aJnZwU40K6A&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>azure</category>
      <category>serverless</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Automated deployment slots in Azure with GitHub Actions: Testing pull requests in live environments</title>
      <dc:creator>Florian Lenz</dc:creator>
      <pubDate>Fri, 27 Sep 2024 23:22:53 +0000</pubDate>
      <link>https://dev.to/florianlenz/automated-deployment-slots-in-azure-with-github-actions-testing-pull-requests-in-live-environments-3k05</link>
      <guid>https://dev.to/florianlenz/automated-deployment-slots-in-azure-with-github-actions-testing-pull-requests-in-live-environments-3k05</guid>
      <description>&lt;p&gt;Continuous Integration and Continuous Deployment (CI/CD) are already standard in many companies. However, development teams are constantly looking for ways to work more efficiently and ensure that code is tested in a real-world environment before it goes live. The ability to test pull requests (PRs) directly in a ‘live’-like environment can significantly improve code quality and detect bugs early.&lt;/p&gt;

&lt;p&gt;Azure Deployment Slots offer an elegant solution to this by creating isolated test environments. In combination with GitHub Actions, you can automatically deploy and test PRs in these slots before they are merged into the ‘main’ branch. In this article, I'll show you how to set up PR-specific deployment slots and use GitHub Actions for this automation.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are Azure Deployment Slots?
&lt;/h3&gt;

&lt;p&gt;Azure App Service Deployment Slots are separate deployment environments within an App Service instance. They allow you to host multiple versions of your application simultaneously. Main advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Zero-downtime deployments&lt;/strong&gt;: New versions can be deployed and tested in one slot before going live.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Easy rollback&lt;/strong&gt;: In the event of problems, you can simply switch back to the previous slot.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Isolated environments&lt;/strong&gt;: Each slot has its own configurations and connection strings.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  GitHub Actions and Pull Requests
&lt;/h3&gt;

&lt;p&gt;GitHub Actions is a tool for automating software workflows directly in your GitHub repository. It allows you to create CI/CD pipelines that react to various events such as pushes, pull requests or time-based triggers. In combination with Azure, you can automate your entire development and deployment process.&lt;/p&gt;

&lt;h3&gt;
  
  
  The idea: Branch-specific deployment slots
&lt;/h3&gt;

&lt;p&gt;Imagine that every pull request automatically creates its own deployment slot. This enables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Live-like testing&lt;/strong&gt;: Test changes in an environment that is close to production.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Early detection of errors&lt;/strong&gt;: Testing in the deployment slot allows problems to be detected before they reach the main branch.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automation&lt;/strong&gt;: Slots are automatically created and removed through integration with GitHub Actions.&lt;/li&gt;
&lt;/ul&gt;
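&lt;p&gt;Under the hood, the workflow can create and remove such a slot with two Azure CLI calls; the app name, resource group and PR number below are placeholders:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Create a slot named after the PR when it is opened
az webapp deployment slot create \
  --name "$APP_NAME" \
  --resource-group "$RESOURCE_GROUP" \
  --slot "pr-$PR_NUMBER"

# Remove the slot again when the PR is closed
az webapp deployment slot delete \
  --name "$APP_NAME" \
  --resource-group "$RESOURCE_GROUP" \
  --slot "pr-$PR_NUMBER"
&lt;/code&gt;&lt;/pre&gt;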

&lt;h3&gt;
  
  
  Step-by-step guide
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. Set up workflow for pull requests
&lt;/h4&gt;

&lt;p&gt;To automatically create and manage deployment slots for each pull request, we'll set up a new GitHub Actions workflow that responds to pull request events.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Folder structure&lt;/strong&gt;: Create a .github/workflows folder in the root directory of your repository if it doesn't already exist.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create workflow file&lt;/strong&gt;: Create a new file called pull_request_deployment.yml in this folder.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configure workflow&lt;/strong&gt;: Use the following YAML code, which extends your existing pipeline to manage deployment slots for PRs.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PR Deployment CI/CD&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;opened&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;synchronize&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;reopened&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;closed&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;github.event.action != 'closed'&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;windows-latest&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set up .NET Core&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-dotnet@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;dotnet-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;8.x'&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build with dotnet&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dotnet build src/WebApplication/WebApplication.csproj --configuration Release&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dotnet publish&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dotnet publish src/WebApplication/WebApplication.csproj -c Release -o ./deploy&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Upload artifact for deployment job&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/upload-artifact@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.net-app&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./deploy&lt;/span&gt;

  &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;github.event.action != 'closed'&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;windows-latest&lt;/span&gt;
    &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;PR-slot-deployment'&lt;/span&gt;
      &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ steps.deploy-to-webapp.outputs.webapp-url }}&lt;/span&gt;
    &lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;id-token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt; &lt;span class="c1"&gt;# This is required for requesting the JWT&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Download artifact from build job&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/download-artifact@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.net-app&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Login to Azure&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;azure/login@v2&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;client-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AZURE_CLIENT_ID }}&lt;/span&gt;
          &lt;span class="na"&gt;client-secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AZURE_CLIENT_SECRET }}&lt;/span&gt;
          &lt;span class="na"&gt;tenant-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AZURE_TENANT_ID }}&lt;/span&gt;
          &lt;span class="na"&gt;subscription-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AZURE_SUBSCRIPTION_ID }}&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create Deployment Slot&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;$SLOT_NAME="pr-${{ github.event.number }}"&lt;/span&gt;
          &lt;span class="s"&gt;az webapp deployment slot create --name prdeploymentwebapp --resource-group prdeployment --slot $SLOT_NAME&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy to Azure Web App&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy-to-webapp&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;azure/webapps-deploy@v3&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;app-name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;prdeploymentwebapp'&lt;/span&gt;
          &lt;span class="na"&gt;slot-name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;pr-${{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;github.event.number&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}'&lt;/span&gt;
          &lt;span class="na"&gt;package&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;

  &lt;span class="na"&gt;cleanup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;github.event.action == 'closed'&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;windows-latest&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;PR-slot-deployment'&lt;/span&gt;
      &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ steps.deploy-to-webapp.outputs.webapp-url }}&lt;/span&gt;
    &lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;id-token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt; &lt;span class="c1"&gt;# This is required for requesting the JWT&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Login to Azure&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;azure/login@v2&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;client-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AZURE_CLIENT_ID }}&lt;/span&gt;
          &lt;span class="na"&gt;client-secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AZURE_CLIENT_SECRET }}&lt;/span&gt;
          &lt;span class="na"&gt;tenant-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AZURE_TENANT_ID }}&lt;/span&gt;
          &lt;span class="na"&gt;subscription-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AZURE_SUBSCRIPTION_ID }}&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Delete Deployment Slot&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;$SLOT_NAME="pr-${{ github.event.number }}"&lt;/span&gt;
          &lt;span class="s"&gt;az webapp deployment slot delete --name prdeploymentwebapp --resource-group prdeployment --slot $SLOT_NAME&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Explanation:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Trigger&lt;/strong&gt;: The workflow is triggered by pull request events such as opened, synchronize, reopened and closed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build Job&lt;/strong&gt;: Builds and publishes the application, uploads the artefact for the deploy job.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy Job&lt;/strong&gt;: Creates a deployment slot with the name pr-(PR number), deploys the application to this slot and provides the URL of the live environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cleanup Job&lt;/strong&gt;: Is only triggered when the pull request is closed (merged or rejected). Deletes the corresponding deployment slot.&lt;/li&gt;
&lt;/ul&gt;
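&lt;p&gt;The deploy job exposes the slot URL through the GitHub environment, but it can also be posted straight onto the pull request so reviewers find it without opening the Actions tab. A minimal sketch of such an extra step, assuming the app and slot naming from the workflow above (the step name and comment body are illustrative):&lt;/p&gt;

```yaml
# Hypothetical additional step for the deploy job: comment the preview URL on the PR.
# Assumes the job is also granted "pull-requests: write" permission.
- name: Comment preview URL on PR
  uses: actions/github-script@v7
  with:
    script: |
      const url = 'https://prdeploymentwebapp-pr-${{ github.event.number }}.azurewebsites.net';
      await github.rest.issues.createComment({
        owner: context.repo.owner,
        repo: context.repo.repo,
        issue_number: context.issue.number,
        body: `Preview environment is ready: ${url}`
      });
```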

&lt;h4&gt;
  
  
  2. Create pull request
&lt;/h4&gt;

&lt;p&gt;The pipeline starts automatically as soon as a pull request is opened or updated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvh656vkc80hudbu5hiui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvh656vkc80hudbu5hiui.png" alt="Automated provision of a PR with GitHub Actions" width="800" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Perform tests in the ‘live’ environment
&lt;/h4&gt;

&lt;p&gt;After the application has been deployed to the deployment slot, you can perform both manual and automated tests in this environment. The slot URL follows the pattern {app name}-{slot name}.azurewebsites.net; for PR 12, for example: &lt;a href="https://prdeploymentwebapp-pr-12.azurewebsites.net" rel="noopener noreferrer"&gt;https://prdeploymentwebapp-pr-12.azurewebsites.net&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxa5ml6d7k9zokookd0q2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxa5ml6d7k9zokookd0q2.png" alt="Deployment slots Overview by PR Deployment" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Manual tests:&lt;/strong&gt;&lt;br&gt;
Open the above URL in your browser and perform the necessary tests.&lt;/p&gt;
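&lt;p&gt;Manual checks can be complemented with an automated smoke test at the end of the deploy job, so a broken deployment fails the pipeline immediately. A minimal sketch, assuming the slot URL pattern shown above; probing the site root is an assumption, and a dedicated health endpoint would be preferable if one exists:&lt;/p&gt;

```yaml
# Hypothetical smoke-test step for the deploy job: fail the workflow
# if the freshly deployed slot does not respond successfully.
- name: Smoke test slot
  shell: bash
  run: |
    url="https://prdeploymentwebapp-pr-${{ github.event.number }}.azurewebsites.net"
    # Retry for up to ~2 minutes while the slot warms up.
    curl --fail --silent --show-error --retry 12 --retry-delay 10 --retry-all-errors "$url"
```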

&lt;h4&gt;
  
  
  4. Clean up the deployment slots
&lt;/h4&gt;

&lt;p&gt;After a pull request is closed, its slot is removed automatically. This ensures that no stale environments linger and consume resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn11aeim67qtox235b88w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn11aeim67qtox235b88w.png" alt="Cleanup process in the GitHub Actions" width="800" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Best practices and pitfalls
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Number of deployment slots
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost control&lt;/strong&gt;: Each slot consumes resources. Limit the number of concurrent slots, especially in large projects with many parallel pull requests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clean up&lt;/strong&gt;: Make sure that temporary slots are deleted after use to avoid unnecessary resource consumption.&lt;/li&gt;
&lt;/ul&gt;
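&lt;p&gt;If the cleanup job ever fails (for example because a runner was unavailable at the moment the PR closed), slots can be orphaned. As a safety net, a scheduled workflow can delete any slot whose pull request is no longer open. A sketch, assuming the resource names from this article and the pr-{number} naming convention; the use of the gh CLI with GH_TOKEN is an assumption about your setup:&lt;/p&gt;

```yaml
name: Cleanup stale PR slots
on:
  schedule:
    - cron: '0 3 * * *'   # nightly
jobs:
  cleanup:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
    steps:
      - uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - name: Delete slots for closed PRs
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          # List all slots that follow the pr-<number> convention.
          for slot in $(az webapp deployment slot list \
              --name prdeploymentwebapp --resource-group prdeployment \
              --query "[?starts_with(name, 'pr-')].name" -o tsv); do
            pr="${slot#pr-}"
            state=$(gh pr view "$pr" --repo "$GITHUB_REPOSITORY" --json state -q .state)
            if [ "$state" != "OPEN" ]; then
              az webapp deployment slot delete --name prdeploymentwebapp \
                --resource-group prdeployment --slot "$slot"
            fi
          done
```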

&lt;h4&gt;
  
  
  Dealing with many pull requests
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Limits in Azure&lt;/strong&gt;: Azure limits the number of slots per App Service Plan (5 slots on the Standard tier, up to 20 on Premium). Plan accordingly to avoid bottlenecks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scaling&lt;/strong&gt;: Resources could become scarce if the PR volume is high. Consider scaling the App Service Plan or optimising automated slot deletions.&lt;/li&gt;
&lt;/ul&gt;
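&lt;p&gt;To stay below the plan's slot limit, the deploy job can check the current slot count before creating a new one and fail fast with a clear message instead of a cryptic ARM error. A sketch, assuming the resource names used above and a Premium plan's 20-slot ceiling:&lt;/p&gt;

```yaml
# Hypothetical guard step to run before "Create Deployment Slot":
# abort early instead of hitting the App Service Plan's slot limit.
- name: Check slot capacity
  shell: bash
  run: |
    count=$(az webapp deployment slot list \
      --name prdeploymentwebapp --resource-group prdeployment \
      --query "length(@)" -o tsv)
    echo "Existing slots: $count"
    if [ "$count" -ge 19 ]; then   # leave headroom below the 20-slot Premium limit
      echo "Slot limit reached - close some PRs or scale the plan." >&2
      exit 1
    fi
```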

&lt;h4&gt;
  
  
  Workflow optimisation
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Parallel jobs&lt;/strong&gt;: Use the parallelism of GitHub Actions to speed up workflows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fast deployments&lt;/strong&gt;: Minimise the number of steps and optimise build processes to reduce deployment times.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated testing&lt;/strong&gt;: Automated deployment also makes it possible to run E2E tests against the slot, for example with Cypress.&lt;/li&gt;
&lt;/ul&gt;
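&lt;p&gt;For the E2E case, the slot URL can be passed straight to the test runner after deployment. A sketch using the cypress-io/github-action, assuming a Cypress project already exists in the repository (the project layout is an assumption):&lt;/p&gt;

```yaml
# Hypothetical E2E step after the deployment: run Cypress against the PR slot
# by overriding the baseUrl with the slot's URL.
- name: Run E2E tests against PR slot
  uses: cypress-io/github-action@v6
  with:
    config: baseUrl=https://prdeploymentwebapp-pr-${{ github.event.number }}.azurewebsites.net
```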

&lt;h4&gt;
  
  
  Rollback strategies
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automatic rollback&lt;/strong&gt;: Implement mechanisms to automatically roll back to the previously deployed version in the event of a failed deployment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring and alerts&lt;/strong&gt;: Monitor deployments and set up alerts to respond quickly to issues.&lt;/li&gt;
&lt;/ul&gt;
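&lt;p&gt;For production deployments that go through a staging slot, a rollback can be as simple as swapping the slots back, since a swap is reversible. A sketch, assuming the same app and resource group names as above; the staging slot name is an assumption:&lt;/p&gt;

```yaml
# Hypothetical rollback step: swapping staging and production again
# restores the previously running version.
- name: Roll back production
  if: failure()
  run: |
    az webapp deployment slot swap --name prdeploymentwebapp \
      --resource-group prdeployment --slot staging --target-slot production
```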

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;With these customisations to your GitHub Actions Pipeline, you can effectively use Azure Deployment Slots to test pull requests in isolated, live-like environments. This increases the quality of your code and significantly reduces the risk of errors in the production environment.&lt;/p&gt;

&lt;h4&gt;
  
  
  Summary of customisations:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PR-specific slots&lt;/strong&gt;: Each pull request gets its own deployment slot, which is automatically removed once the PR is complete.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated slot management&lt;/strong&gt;: Creation and deletion of slots are automated through GitHub Actions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These measures help to streamline the deployment process, increase efficiency and improve the reliability of your applications.&lt;/p&gt;

&lt;p&gt;Resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.florian-lenz.io/blog/automatisierte-deployment-slots-in-azure-mit-github-actions" rel="noopener noreferrer"&gt;https://www.florian-lenz.io/blog/automatisierte-deployment-slots-in-azure-mit-github-actions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://azure.github.io/AppService/2020/07/07/zero_to_hero_pt3.html" rel="noopener noreferrer"&gt;https://azure.github.io/AppService/2020/07/07/zero_to_hero_pt3.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/florianlenz96/pr-deployment" rel="noopener noreferrer"&gt;https://github.com/florianlenz96/pr-deployment&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>azure</category>
      <category>githubactions</category>
      <category>testing</category>
      <category>dotnet</category>
    </item>
    <item>
      <title>arc42 for your software architecture: The best choice for sustainable documentation</title>
      <dc:creator>Florian Lenz</dc:creator>
      <pubDate>Sun, 18 Aug 2024 14:34:55 +0000</pubDate>
      <link>https://dev.to/florianlenz/arc42-for-your-software-architecture-the-best-choice-for-sustainable-documentation-383p</link>
      <guid>https://dev.to/florianlenz/arc42-for-your-software-architecture-the-best-choice-for-sustainable-documentation-383p</guid>
      <description>&lt;p&gt;It is a huge challenge for teams in today's software development to design complex systems efficiently and to document their architectures clearly and sustainably. Well-structured and comprehensible documentation is often underestimated. This can lead to communication problems, maintenance issues or unexpected costs later on. This is where arc42 comes into play. This proven template for architecture documentation offers a systematic and practice-oriented solution for mastering these challenges and ensuring the long-term success of your software projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is arc42?
&lt;/h2&gt;

&lt;p&gt;arc42 is a comprehensive and field-tested template that has been specially developed for the documentation of software architectures. It offers a structured and modular approach to documenting all relevant aspects of a software architecture efficiently and comprehensibly. The template is flexibly customizable and suitable for projects of any size and complexity.&lt;/p&gt;

&lt;p&gt;arc42 was developed by experienced software architects and has been used successfully in numerous projects worldwide. It covers all important areas, from basic decisions to system contexts and detailed architecture views. Thanks to its clear structure, arc42 enables transparent communication between all parties involved and ensures that the architecture remains maintainable and expandable in the long term.&lt;/p&gt;

&lt;p&gt;With arc42, you can ensure that your software architecture is not only based on solid foundations, but is also documented in such a way that it remains understandable and accessible to all stakeholders - a decisive advantage in a constantly evolving technology world.&lt;/p&gt;

&lt;h2&gt;
  
  
  The individual areas of arc42 documentation
&lt;/h2&gt;

&lt;p&gt;arc42 divides the documentation of a software architecture into several clearly defined areas. Each of these areas has a specific purpose and helps to document the architecture comprehensively and comprehensibly. In the following, we will look at the most important areas and explain why they are essential.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncd0dz121v6rimtpnuwd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncd0dz121v6rimtpnuwd.png" alt=" " width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Introduction and objectives&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Purpose&lt;/strong&gt;: This section describes the motivation and overarching objectives of the system. It explains why the system is being developed in the first place and the key requirements it must fulfill.&lt;br&gt;
&lt;strong&gt;Importance&lt;/strong&gt;: A clear definition of the objectives is crucial to ensure that all stakeholders understand the direction of the project and that the architecture is aligned with it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcodhsmixtenu2kf2xi5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcodhsmixtenu2kf2xi5.png" alt=" " width="515" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Constraints&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Purpose&lt;/strong&gt;: Constraints are the specifications and framework conditions that restrict or influence the architecture. These can be of a technical, organizational or legal nature. Examples include certain technologies that must be used, standards that must be adhered to or legal regulations that the system must fulfill.&lt;br&gt;
&lt;strong&gt;Importance&lt;/strong&gt;: Understanding and documenting constraints is crucial, as they can limit the freedom of architectural decisions and anticipate certain design decisions. They help to define the scope of the architecture and ensure that all requirements and regulations are met.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frb2c2pdguw6w4jx8nixt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frb2c2pdguw6w4jx8nixt.png" alt=" " width="509" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Contexts &amp;amp; Scope&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Purpose&lt;/strong&gt;: This section describes the various contexts of the system, how it interacts with external systems and which external interfaces exist.&lt;br&gt;
&lt;strong&gt;Importance&lt;/strong&gt;: Understanding the system contexts is important in order to recognize dependencies on other systems and their effects on your own architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgw6uhn5fj7831jaop1z5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgw6uhn5fj7831jaop1z5.png" alt=" " width="305" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Solution strategy&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Purpose&lt;/strong&gt;: The solution strategy provides an overview of the basic technical and architectural approaches chosen to meet the requirements. It describes how the architecture will achieve the objectives and how the key challenges will be addressed.&lt;br&gt;
&lt;strong&gt;Importance&lt;/strong&gt;: A clearly formulated solution strategy is essential to ensure that all stakeholders understand and support the key architectural decisions. It serves as a guide for the more detailed elaboration of the architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Building block view&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Purpose&lt;/strong&gt;: The building block view describes the internal structure of the system by showing the most important building blocks (modules, components) and their relationships to each other.&lt;br&gt;
&lt;strong&gt;Importance&lt;/strong&gt;: This view makes it possible to clearly understand the internal structures of the system and to ensure that the architecture remains scalable and maintainable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1bpsewn8pe4ionkjdud6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1bpsewn8pe4ionkjdud6.png" alt=" " width="800" height="978"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Runtime view&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Purpose&lt;/strong&gt;: The runtime view shows how the system components interact at runtime and which communication flows exist between the components.&lt;br&gt;
&lt;strong&gt;Importance&lt;/strong&gt;: An understanding of the runtime view is essential for analyzing and optimizing system performance and identifying potential bottlenecks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2f17njx6lry7ybkla5qk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2f17njx6lry7ybkla5qk.png" alt=" " width="523" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Deployment view&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Purpose&lt;/strong&gt;: This section describes how the various components of the system are physically distributed, for example on servers, cloud instances or devices.&lt;br&gt;
&lt;strong&gt;Importance&lt;/strong&gt;: The deployment view helps to understand the infrastructure requirements and ensure that the system runs efficiently and reliably in different environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Cross-cutting concepts&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Purpose&lt;/strong&gt;: Cross-cutting concepts are overarching concerns that run through several parts of the system rather than being confined to a single component, such as security, error handling, logging, persistence, authentication and authorization, or performance optimizations.&lt;br&gt;
&lt;strong&gt;Importance&lt;/strong&gt;: The documentation of these overarching concepts is important as it ensures that these central aspects of the system are implemented consistently and uniformly. A well thought-out cross-cutting concept can significantly improve the maintainability and expandability of the system and ensure that important requirements are taken into account system-wide.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. Architecture decisions&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Purpose&lt;/strong&gt;: This section documents the important architectural decisions that were made and the reasons that led to these decisions.&lt;br&gt;
&lt;strong&gt;Importance&lt;/strong&gt;: The documentation of architectural decisions is crucial for traceability and makes it easier to make well-founded adjustments or extensions in the future.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10. Quality requirements&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Purpose&lt;/strong&gt;: The quality requirements for the system are described here, e.g. performance, security, reliability and maintainability.&lt;br&gt;
&lt;strong&gt;Importance&lt;/strong&gt;: Understanding the quality requirements is crucial to ensure that the architecture meets these requirements and that the system can be operated successfully in the long term.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;11. Risk assessment and technical debt&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Purpose&lt;/strong&gt;: This section documents the potential risks and technical debt of the system, as well as strategies to mitigate them.&lt;br&gt;
&lt;strong&gt;Importance&lt;/strong&gt;: A conscious examination of risks and technical debt enables proactive measures to be taken to minimize negative effects on the project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;12. Glossary&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Purpose&lt;/strong&gt;: The glossary contains definitions and explanations of important terms used in the project.&lt;br&gt;
&lt;strong&gt;Importance&lt;/strong&gt;: A common understanding of terms and concepts is essential to avoid misunderstandings and ensure effective communication within the team.&lt;/p&gt;

&lt;h2&gt;
  
  
  The advantages of arc42: argumentation for its use
&lt;/h2&gt;

&lt;p&gt;The use of arc42 offers numerous advantages that can significantly improve software projects. The most important reasons why arc42 should be used in every development project are listed below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Traceability and comprehensibility&lt;/strong&gt;: arc42 ensures that all architectural decisions are clearly documented and comprehensible. This enables every team member to quickly familiarize themselves with the architecture, regardless of whether they were involved from the beginning or joined later. The uniform structure of arc42 promotes a common language within the team, which makes collaboration and understanding much easier.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintainability and flexibility&lt;/strong&gt;: Software projects are constantly evolving and the architecture needs to be adapted to new requirements. A well-documented architecture ensures that these changes can be implemented efficiently and without risk. arc42 offers a flexible documentation structure that makes it possible to plan and implement architectural changes without jeopardizing existing functionality.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficient communication with stakeholders&lt;/strong&gt;: The architecture of a software project is not only relevant for the development team, but also for customers, managers and other stakeholders. arc42 makes it possible to document the architecture in a way that is understandable for non-technical stakeholders. This promotes transparency and trust in the project and facilitates the communication of decisions and progress.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Saving time and increasing efficiency&lt;/strong&gt;: Using arc42 saves a lot of time, as the template provides a proven structure and methodology for documentation. There is no need to reinvent the wheel, allowing the focus to be placed on developing and improving the software. arc42 provides the tools to make documentation efficient while ensuring high quality.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why arc42 is the best choice
&lt;/h2&gt;

&lt;p&gt;arc42 is not the only tool for architecture documentation, but it is the best choice for several reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Proven framework&lt;/strong&gt;: arc42 was developed by renowned software architects who have brought extensive practical experience to this template. It is based on best practices and has proven itself in a wide range of projects and industries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility and adaptability&lt;/strong&gt;: Regardless of whether you are working on a small start-up project or a large corporate project, arc42 is flexible enough to adapt to specific requirements. The individual modules of the template can be used or adapted as required, making it suitable for a wide variety of projects and teams.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Standardization and consistency&lt;/strong&gt;: arc42 creates a standardized documentation structure that is consistent for everyone involved. This not only facilitates the maintenance and further development of the project, but also the onboarding of new team members. Standardized documentation leads to fewer misunderstandings and promotes efficient collaboration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Expandability and future-proofing&lt;/strong&gt;: arc42 is designed to grow with the project. It enables step-by-step expansion and adaptation of the documentation, which is particularly valuable for long-term projects. This means that the architecture remains well documented and easy to maintain in the future.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Call to action: Investment in the architecture
&lt;/h2&gt;

&lt;p&gt;The decision to use arc42 in projects represents an investment in the future of software architecture. A well-documented architecture is the key to success: it not only makes the project more transparent and maintainable, but also ensures that it remains sustainable in the long term. Use arc42 to take documentation to the next level and ensure that the project is still successful and scalable years from now.&lt;/p&gt;

&lt;h2&gt;
  
  
  arc42 in action
&lt;/h2&gt;

&lt;p&gt;DokChess is a sample project that demonstrates the use of the arc42 template to document a software architecture. It is a chess application whose architecture was structured and documented with the help of arc42. The project shows how the different areas of arc42 - from building blocks to runtime views to quality requirements - can be used in a real software project to create a clear, understandable and maintainable architecture.&lt;/p&gt;

&lt;p&gt;For more information, you can view the project here: &lt;a href="https://www.dokchess.de/" rel="noopener noreferrer"&gt;https://www.dokchess.de/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;arc42 offers a structured, tried-and-tested and flexible method for documenting software architectures. The advantages are clear: traceability, maintainability, efficient communication and consistent, standardized documentation. The use of arc42 ensures that the architecture is not only optimally equipped for the current project, but also for future developments. An investment in arc42 is an investment in the sustainability and long-term success of software projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources &amp;amp; Sources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.florian-lenz.io/blog/arc42-fur-ihre-softwarearchitektur-die-beste-wahl-fur-nachhaltige-dokumentation" rel="noopener noreferrer"&gt;https://www.florian-lenz.io/blog/arc42-fur-ihre-softwarearchitektur-die-beste-wahl-fur-nachhaltige-dokumentation&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dev.to/arc42/brief-introduction-to-arc42-1c0l"&gt;https://dev.to/arc42/brief-introduction-to-arc42-1c0l&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.arc42.de/" rel="noopener noreferrer"&gt;https://www.arc42.de/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://entwickler.de/software-architektur/architekturdokumention-arc42-template" rel="noopener noreferrer"&gt;https://entwickler.de/software-architektur/architekturdokumention-arc42-template&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/BrutalHack/Arc42AzureDevOpsWiki" rel="noopener noreferrer"&gt;https://github.com/BrutalHack/Arc42AzureDevOpsWiki&lt;/a&gt;&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>architecture</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Standardized Error Messages in .NET REST APIs - Implementing RFC 7807 Problem Details</title>
      <dc:creator>Florian Lenz</dc:creator>
      <pubDate>Wed, 17 Jul 2024 10:49:45 +0000</pubDate>
      <link>https://dev.to/florianlenz/standardized-error-messages-in-net-rest-apis-implementing-rfc-7807-problem-details-1nc0</link>
      <guid>https://dev.to/florianlenz/standardized-error-messages-in-net-rest-apis-implementing-rfc-7807-problem-details-1nc0</guid>
      <description>&lt;h2&gt;
  
  
  The Silent Killer: Unhandled Exceptions
&lt;/h2&gt;

&lt;p&gt;Imagine an application fetching user data from an external API. If the API is unavailable and exceptions aren't handled, the application can crash, leading to poor user experience and frustration for developers and users alike.&lt;/p&gt;
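
&lt;p&gt;As a minimal sketch (the endpoint URL is hypothetical), an unguarded &lt;code&gt;HttpClient&lt;/code&gt; call surfaces an &lt;code&gt;HttpRequestException&lt;/code&gt; when the external API is down; catching it lets the application degrade gracefully instead of crashing:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;// Hypothetical example: fetching user data from an external API.
// Without the try/catch, an unavailable API lets HttpRequestException
// bubble up the stack and can take the whole request down with it.
using var client = new HttpClient();

try
{
    var response = await client.GetAsync("https://api.example.com/users/42");
    response.EnsureSuccessStatusCode();
    var json = await response.Content.ReadAsStringAsync();
    // ... map json to a user model ...
}
catch (HttpRequestException ex)
{
    // Fail deliberately: log the problem and return a fallback or a
    // meaningful error response instead of crashing.
    Console.Error.WriteLine($"User API unavailable: {ex.Message}");
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;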

&lt;h2&gt;
  
  
  The Importance of Good Error Messages
&lt;/h2&gt;

&lt;p&gt;Good error messages are crucial for efficient troubleshooting and for helping clients understand what went wrong. Without clear error messages, issues such as confusion, extended development cycles, poor user experience, increased support requests, and loss of trust can arise.&lt;/p&gt;

&lt;h2&gt;
  
  
  HTTP Status Codes and Error Handling
&lt;/h2&gt;

&lt;p&gt;Understanding and correctly using HTTP status codes is key to effective API error handling. They help communicate the status and nature of the error to the client, enabling targeted troubleshooting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2xx Success&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;200 OK&lt;/strong&gt;: Request was successful.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;202 Accepted&lt;/strong&gt;: Request accepted, processing not completed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;204 No Content&lt;/strong&gt;: Request was successful, but no content to return.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4xx Client Errors&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;400 Bad Request&lt;/strong&gt;: The request was invalid or cannot be processed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;401 Unauthorized&lt;/strong&gt;: Authentication is required and has failed or not been provided.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;403 Forbidden&lt;/strong&gt;: The server understands the request but refuses to authorize it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;404 Not Found&lt;/strong&gt;: The requested resource could not be found.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5xx Server Errors&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;500 Internal Server Error&lt;/strong&gt;: A generic error for unexpected server issues.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;502 Bad Gateway&lt;/strong&gt;: Invalid response from an upstream server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;503 Service Unavailable&lt;/strong&gt;: The server is currently unavailable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;504 Gateway Timeout&lt;/strong&gt;: The server didn't receive a timely response from an upstream server.&lt;/li&gt;
&lt;/ul&gt;
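
&lt;p&gt;Assuming an ASP.NET Core minimal API (the route and lookup helper are illustrative), these codes can be returned explicitly instead of letting every failure fall through as a generic 500:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;// Illustrative endpoint: choose the status code that matches the failure.
app.MapGet("/api/bookings/{id}", (int id) =&amp;gt;
{
    var booking = FindBooking(id); // hypothetical lookup
    return booking is null
        ? Results.Problem(
            title: "Entity not found",
            detail: $"The booking ID '{id}' was not found.",
            statusCode: StatusCodes.Status404NotFound)
        : Results.Ok(booking);
});
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;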

&lt;h2&gt;
  
  
  Domain-Driven Design (DDD) and Error Handling
&lt;/h2&gt;

&lt;p&gt;It's important to distinguish between domain errors (business logic) and application errors (technical problems). This distinction helps in choosing the correct status codes and clearly communicating where the problem lies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Domain Exceptions&lt;/strong&gt;&lt;br&gt;
Domain exceptions occur when business rules are violated and should typically return 4xx status codes. Examples include:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ValidationException&lt;/strong&gt;: Invalid data sent by the client.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://example.com/probs/validation"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Invalid request parameters"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"detail"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"The provided data is invalid. Please check the following fields."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"instance"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/api/bookings"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"errors"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"startDate"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"The start date must be in the future."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"endDate"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"The end date must be after the start date."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"roomNumber"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"The specified room number does not exist."&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;EntityNotFoundException&lt;/strong&gt;: The requested entity does not exist.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://example.com/probs/entity-not-found"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Entity not found"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;404&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"detail"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"The booking ID '98765' was not found."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"instance"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/api/bookings/98765"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;BusinessRuleViolationException&lt;/strong&gt;: A business rule was violated.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://example.com/probs/business-rule-violation"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Business rule violation"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;409&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"detail"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"The booking cannot be created as the room is already occupied for the specified period."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"instance"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/api/bookings"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Application Exceptions
&lt;/h2&gt;

&lt;p&gt;Application exceptions relate to technical problems or unexpected errors in the application code and should return 5xx status codes. Examples include:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TimeoutException&lt;/strong&gt;: A timeout occurred, e.g., in a database query.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://example.com/probs/timeout"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Request timeout"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;504&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"detail"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"The request timed out. Please try again later."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"instance"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/api/bookings"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-30T12:34:56Z"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;IOException&lt;/strong&gt;: An I/O error, e.g., accessing the file system.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://example.com/probs/io-error"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"I/O error"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"detail"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"An error occurred while accessing the file system. Please try again later."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"instance"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/api/files/upload"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-30T12:34:56Z"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;DatabaseException&lt;/strong&gt;: A database connection or query error.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://example.com/probs/database-error"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Database error"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"detail"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"An error occurred while connecting to the database. Please try again later."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"instance"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/api/bookings"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-30T12:34:56Z"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Why This Distinction Matters
&lt;/h2&gt;

&lt;p&gt;Distinguishing between domain and application exceptions is crucial for clear communication and efficient error handling:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Accurate Error Diagnosis&lt;/strong&gt;: Specific status codes and error types help clients understand whether the problem is on their side (4xx) or the server side (5xx).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Targeted Error Resolution&lt;/strong&gt;: Domain exceptions provide clear guidance on what inputs or business rules need adjustment. Application exceptions indicate technical issues requiring server-side fixes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improved User Experience&lt;/strong&gt;: Clear and precise error messages enable users and developers to react and resolve issues more quickly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficiency and Stability&lt;/strong&gt;: Accurate error handling improves the efficiency of development and support teams and enhances overall application stability.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Implementing ProblemDetails for Error Handling in .NET Core
&lt;/h2&gt;

&lt;p&gt;After understanding the importance of HTTP status codes and the distinction between domain and application exceptions, let's see how to implement these principles in a .NET Core application.&lt;/p&gt;
&lt;h2&gt;
  
  
  Define Domain Exceptions
&lt;/h2&gt;

&lt;p&gt;In a Domain-Driven Design (DDD) architecture, it's useful to define specific domain exceptions that inherit from a generic DomainException. These exceptions can then be processed correctly in middleware and transformed into standardized HTTP responses using the ProblemDetails class.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Define Domain Exceptions&lt;/strong&gt;&lt;br&gt;
Create a base class DomainException and specific domain exceptions that inherit from it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;abstract&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;DomainException&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Exception&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;protected&lt;/span&gt; &lt;span class="nf"&gt;DomainException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;base&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ValidationException&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;DomainException&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;IDictionary&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="k"&gt;]&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Errors&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;ValidationException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;IDictionary&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="k"&gt;]&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;errors&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;base&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;Errors&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;errors&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;EntityNotFoundException&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;DomainException&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;EntityNotFoundException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;base&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;BusinessRuleViolationException&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;DomainException&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;BusinessRuleViolationException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;base&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
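
&lt;p&gt;With these exception types in place, domain code simply throws them and leaves the HTTP mapping to the middleware in the next step. A sketch (the booking service and repository are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;public class BookingService
{
    private readonly IBookingRepository _repository; // hypothetical repository

    public BookingService(IBookingRepository repository) =&amp;gt; _repository = repository;

    public async Task&amp;lt;Booking&amp;gt; GetBookingAsync(int id)
    {
        var booking = await _repository.FindAsync(id);
        if (booking is null)
        {
            // The middleware turns this into a 404 ProblemDetails response.
            throw new EntityNotFoundException($"The booking ID '{id}' was not found.");
        }

        return booking;
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;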



&lt;p&gt;&lt;strong&gt;Step 2: Create Middleware for Error Processing&lt;/strong&gt;&lt;br&gt;
Create a middleware class that catches these domain exceptions and transforms them into ProblemDetails responses:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ExceptionMiddleware&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="n"&gt;RequestDelegate&lt;/span&gt; &lt;span class="n"&gt;_next&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="n"&gt;ILogger&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;ExceptionMiddleware&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;_logger&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;ExceptionMiddleware&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;RequestDelegate&lt;/span&gt; &lt;span class="n"&gt;next&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ILogger&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;ExceptionMiddleware&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;_next&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;next&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="n"&gt;_logger&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt; &lt;span class="nf"&gt;InvokeAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;HttpContext&lt;/span&gt; &lt;span class="n"&gt;httpContext&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;_next&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;httpContext&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;DomainException&lt;/span&gt; &lt;span class="n"&gt;ex&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;_logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;LogError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;$"A domain exception occurred: &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;ex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Message&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;HandleDomainExceptionAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;httpContext&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ex&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Exception&lt;/span&gt; &lt;span class="n"&gt;ex&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;_logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;LogError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;$"An unexpected error occurred: &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;ex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Message&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;HandleExceptionAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;httpContext&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ex&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt; &lt;span class="nf"&gt;HandleDomainExceptionAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;HttpContext&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;DomainException&lt;/span&gt; &lt;span class="n"&gt;exception&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;ProblemDetails&lt;/span&gt; &lt;span class="n"&gt;problemDetails&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;exception&lt;/span&gt; &lt;span class="k"&gt;switch&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;ValidationException&lt;/span&gt; &lt;span class="n"&gt;validationEx&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;ValidationProblemDetails&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;validationEx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Errors&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;Title&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Invalid request parameters"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;Status&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;StatusCodes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Status400BadRequest&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;Detail&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;exception&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;Instance&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Path&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="n"&gt;EntityNotFoundException&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;ProblemDetails&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;Title&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Entity not found"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;Status&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;StatusCodes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Status404NotFound&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;Detail&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;exception&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;Instance&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Path&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="n"&gt;BusinessRuleViolationException&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;ProblemDetails&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;Title&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Business rule violation"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;Status&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;StatusCodes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Status409Conflict&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;Detail&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;exception&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;Instance&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Path&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;ProblemDetails&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;Title&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Domain error"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;Status&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;StatusCodes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Status400BadRequest&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;Detail&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;exception&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;Instance&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Path&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;};&lt;/span&gt;

        &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ContentType&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"application/problem+json"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;StatusCode&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;problemDetails&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Status&lt;/span&gt; &lt;span class="p"&gt;??&lt;/span&gt; &lt;span class="n"&gt;StatusCodes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Status400BadRequest&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;WriteAsJsonAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;problemDetails&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt; &lt;span class="nf"&gt;HandleExceptionAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;HttpContext&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Exception&lt;/span&gt; &lt;span class="n"&gt;exception&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;problemDetails&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;ProblemDetails&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;Title&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"An unexpected error occurred"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;Status&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;StatusCodes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Status500InternalServerError&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;Detail&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;exception&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;Instance&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Path&lt;/span&gt;
        &lt;span class="p"&gt;};&lt;/span&gt;

        &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ContentType&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"application/problem+json"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;StatusCode&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;StatusCodes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Status500InternalServerError&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;WriteAsJsonAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;problemDetails&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Register Middleware&lt;/strong&gt;&lt;br&gt;
Register the middleware in your Startup or Program file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;Configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;IApplicationBuilder&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;IWebHostEnvironment&lt;/span&gt; &lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UseMiddleware&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;ExceptionMiddleware&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;();&lt;/span&gt;

    &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;UseRouting&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;UseEndpoints&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;endpoints&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;endpoints&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;MapControllers&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
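&lt;p&gt;If you are on .NET 6 or later with the minimal hosting model, there is no Startup class; the equivalent registration in Program.cs looks like this (a sketch, assuming a controller-based API as above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

var app = builder.Build();

// Register the exception middleware first so it wraps the entire pipeline
app.UseMiddleware&amp;lt;ExceptionMiddleware&amp;gt;();

app.MapControllers();
app.Run();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;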



&lt;p&gt;By following these steps, you can implement standardized error messages in your .NET Core application, improving both developer and user experience by providing clear and actionable error information. This approach not only enhances communication but also aligns with the principles of Domain-Driven Design (DDD) and ensures your application adheres to the RFC 7807 Problem Details specification.&lt;/p&gt;
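&lt;p&gt;With everything in place, a request that triggers a &lt;code&gt;BusinessRuleViolationException&lt;/code&gt; receives a &lt;code&gt;409&lt;/code&gt; response with &lt;code&gt;Content-Type: application/problem+json&lt;/code&gt;, along these lines (the detail message and path are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "title": "Business rule violation",
  "status": 409,
  "detail": "An order cannot be cancelled after it has shipped.",
  "instance": "/api/orders/42/cancel"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;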

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.florian-lenz.io/blog/einheitliche-fehlermeldungen-in-rest-apis-implementierung-von-rfc-7807-problem-details" rel="noopener noreferrer"&gt;Blog&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>dotnet</category>
      <category>programming</category>
      <category>api</category>
      <category>aspdotnet</category>
    </item>
    <item>
      <title>Why serverless? Advantages and disadvantages of serverless computing explained</title>
      <dc:creator>Florian Lenz</dc:creator>
      <pubDate>Wed, 03 Jul 2024 16:02:40 +0000</pubDate>
      <link>https://dev.to/florianlenz/why-serverless-advantages-and-disadvantages-of-serverless-computing-explained-2m1m</link>
      <guid>https://dev.to/florianlenz/why-serverless-advantages-and-disadvantages-of-serverless-computing-explained-2m1m</guid>
      <description>&lt;p&gt;Nowadays, companies want to improve their IT infrastructure. One option is serverless. This allows developers to create and operate applications without having to worry about the server infrastructure.&lt;/p&gt;

&lt;p&gt;Well-known services such as Azure Functions from Microsoft Azure and AWS Lambda from Amazon Web Services (AWS) are examples of how developers can run their application code serverless. However, these services are only part of what serverless offers. In addition to the ability to run code, databases, message brokers and API gateways can also be run serverless.&lt;/p&gt;

&lt;p&gt;Serverless is an umbrella term for services that do not require their own server administration, scale automatically and are billed on a usage basis. These features make serverless an attractive option for companies that want to optimize costs, increase scalability and shorten time-to-market.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is serverless?
&lt;/h2&gt;

&lt;p&gt;Serverless is a term that can be misunderstood. Serverless does not mean that there are no more servers. There are still servers on which your applications run. The difference is that you no longer have to take care of the servers. As a developer, you don't have to buy, deploy or maintain a physical server. You write your code, deploy it and the cloud provider takes care of the rest.&lt;/p&gt;

&lt;p&gt;And that rest can be a lot. If you currently run your applications on-premise, you may be familiar with the following situations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need new hardware and have to order it.&lt;/li&gt;
&lt;li&gt;The hardware needs to be installed.&lt;/li&gt;
&lt;li&gt;You have to make updates and keep the servers secure.&lt;/li&gt;
&lt;li&gt;If a server goes down, you need to know what to do.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This list could go on. With serverless, all of these operational tasks are handled for you.&lt;/p&gt;

&lt;p&gt;Serverless means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;No server management:&lt;/strong&gt; you no longer have to provision or maintain servers.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Automatic scaling:&lt;/strong&gt; resources are adjusted automatically so that there is always enough capacity available.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Usage-based billing:&lt;/strong&gt; you only pay for the resources you actually use.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These features have advantages and disadvantages. Suppose an HTTP endpoint is called only ten times a month. Running it serverless is clearly the better option: you pay for exactly those ten calls, which is far cheaper than keeping a server running all year round.&lt;/p&gt;

&lt;p&gt;The pay-as-you-go cost model is not always the best solution. If your requests are constant and highly predictable, it may be cheaper to run a fixed number of servers.&lt;/p&gt;
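&lt;p&gt;To make this trade-off concrete, here is a deliberately simplified cost sketch. All numbers are made-up placeholders, not Azure list prices:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;// All prices below are illustrative assumptions, not real Azure rates.
double pricePerExecution = 0.0000002;   // assumed pay-per-call rate
int callsPerMonth = 10;
double serverlessCost = pricePerExecution * callsPerMonth; // fractions of a cent

double alwaysOnServerPerMonth = 30.0;   // assumed flat rate for a small VM

// With only ten calls a month, pay-as-you-go is effectively free,
// while the flat-rate server costs the same regardless of traffic.
Console.WriteLine(serverlessCost &amp;lt; alwaysOnServerPerMonth); // True
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Flip the numbers around, with millions of steady requests per month, and the flat-rate option can win.&lt;/p&gt;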

&lt;p&gt;Whether serverless makes sense for your application depends on your specific needs and usage patterns. In later sections, I will explain how to do the evaluation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparison of on-premise vs. IaaS vs. CaaS vs. PaaS vs. FaaS
&lt;/h2&gt;

&lt;p&gt;Switching from traditional on-premise models to modern cloud solutions can be a big improvement for many companies. To better understand the different models, here is a comparison:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcu8ztkysd9ois82ztjm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcu8ztkysd9ois82ztjm.png" alt=" " width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  On-Premise
&lt;/h2&gt;

&lt;p&gt;With this model, the server is located in the company's own data center. All tasks, from hardware procurement, installation and maintenance to scaling and security updates, are the responsibility of the company. This costs a lot of money and requires many employees.&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure as a Service (IaaS)
&lt;/h2&gt;

&lt;p&gt;The first step into the cloud. Here, the cloud provider takes care of the hardware and virtualization. You no longer have to order or install physical hardware. Instead, you can set up virtual machines with just a few clicks. However, you still have to manage the operating system, scaling and application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Container as a Service (CaaS)
&lt;/h2&gt;

&lt;p&gt;Container as a Service is a further development of IaaS. The cloud provider takes over the management of the container orchestration system, such as Docker or Kubernetes. This makes it easier to manage and scale containerized applications, but the company still has to maintain the container infrastructure and applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Platform as a Service (PaaS)
&lt;/h2&gt;

&lt;p&gt;The cloud provider also takes care of the operating system and the runtime environment. Developers only have to take care of the application itself, along with its scaling and configuration. PaaS automates much of the operational work and relieves the IT department, while still leaving you considerable control where you need it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Functions as a Service (FaaS)
&lt;/h2&gt;

&lt;p&gt;With FaaS, the cloud provider manages the entire infrastructure; you only take care of the application code and functions. Scaling happens automatically, up or down as demand requires, and billing is based on actual usage (pay as you go). This makes FaaS particularly well suited to applications with irregular or unpredictable loads.&lt;/p&gt;

&lt;p&gt;Each model has its advantages and is suitable for different requirements and usage patterns. Serverless or FaaS is good if you are looking for a flexible, affordable and fast solution. Then you can focus on development and business value instead of worrying about infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Load distribution: a decisive factor for the choice of serverless
&lt;/h2&gt;

&lt;p&gt;Serverless is a sensible architecture if the load distribution is right.&lt;/p&gt;

&lt;p&gt;Imagine an application with an even load from 7 am to 7 pm. Here, serverless makes little sense: the load is constant and predictable, so automatic scaling adds no value, and per-execution billing is likely to cost more than a fixed-size deployment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjaywzughg400bdsqpjw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjaywzughg400bdsqpjw.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The situation is different for applications with unpredictable loads. Let's take an application that receives many requests at different times. Sometimes there are few requests, sometimes many. This pattern shows that serverless could be a good solution. Serverless automatically adapts to the current demand. You only pay for the resources you use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Filxywkobfeq9wc79nihg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Filxywkobfeq9wc79nihg.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You need to understand what your application requires and how it is used; a careful analysis of these usage patterns shows whether serverless is the right choice.&lt;/p&gt;

&lt;p&gt;If your application has a stable and predictable load, a traditional cloud solution might be better. For irregular and unpredictable loads, serverless is better because it adjusts automatically and is cheaper.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Serverless - an innovative solution with clear advantages and some challenges
&lt;/h2&gt;

&lt;p&gt;Serverless is a technology that makes the management of servers superfluous. It ensures that applications can be developed and provided faster and more cost-effectively.&lt;/p&gt;

&lt;p&gt;Serverless has the following advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;No server management:&lt;/strong&gt; developers can concentrate on the code without having to worry about the infrastructure.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Automatic scaling:&lt;/strong&gt; serverless applications scale automatically as more or fewer users access them, which is particularly practical when user numbers fluctuate greatly.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cost optimization:&lt;/strong&gt; you only pay for the resources you use, which is particularly cost-efficient when usage is low.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Shorter time-to-market:&lt;/strong&gt; developers can concentrate on building new features instead of managing infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Serverless also has disadvantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Vendor lock-in:&lt;/strong&gt; you are dependent on one provider, and switching later can be difficult.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Less control:&lt;/strong&gt; companies have less control over the hardware and operating system.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cold starts:&lt;/strong&gt; after periods of inactivity, instances are scaled down to zero, so the next invocation takes longer to start. Some providers already offer options to mitigate this.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Serverless is particularly suitable for applications whose usage fluctuates strongly. Automatic scaling and usage-based billing are very advantageous in such scenarios. For applications with a stable and predictable load, traditional cloud models such as IaaS or PaaS are often more economical.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get the Serverless Cheatsheet now!
&lt;/h2&gt;

&lt;p&gt;Are you ready to utilize the full potential of your software? Our cheatsheet gives you an overview of the differences between IaaS, PaaS and FaaS and helps you find the best solution for your next application. Don't miss out on this valuable resource to help you make informed decisions and optimize your IT infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpalv9n9r11wtarlg7mc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpalv9n9r11wtarlg7mc.png" alt=" " width="800" height="580"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.florian-lenz.io/blog/was-ist-serverless" rel="noopener noreferrer"&gt;Was ist Serverless&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://youtu.be/OhK5FX5PJyc" rel="noopener noreferrer"&gt;Was ist Serverless? | Azure Serverless Computing einfach erklärt&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>serverless</category>
      <category>azure</category>
      <category>softwaredevelopment</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Future-proof software development with the Azure Serverless Modulith</title>
      <dc:creator>Florian Lenz</dc:creator>
      <pubDate>Mon, 10 Jun 2024 20:39:36 +0000</pubDate>
      <link>https://dev.to/florianlenz/future-proof-software-development-with-the-azure-serverless-modulith-12bl</link>
      <guid>https://dev.to/florianlenz/future-proof-software-development-with-the-azure-serverless-modulith-12bl</guid>
      <description>&lt;p&gt;When developing modern software solutions, architects and developers are faced with the challenge of designing systems that are scalable, maintainable and cost-effective. Traditionally, monolithic architectures offer the advantage of simplicity in development and deployment, but often reach their limits in terms of scalability and maintainability. Microservices solve many of these problems, but bring their own challenges such as increased complexity and difficult coordination.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a monolith?
&lt;/h2&gt;

&lt;p&gt;Monolithic architectures are characterized by the fact that all components of an application are closely linked and provided as a single unit. This can simplify development and deployment, but leads to difficulties in scalability and maintainability as the application grows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Modularity in the monolith
&lt;/h2&gt;

&lt;p&gt;Modularity allows a monolith to be divided into clearly delineated modules. These modules can be developed and maintained independently of each other, which improves maintainability and enables more targeted scaling of individual modules.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Serverless Modulith: an innovative solution
&lt;/h2&gt;

&lt;p&gt;The Serverless Modulith combines the advantages of modularity with the strengths of serverless computing. Each module is provided as an independent serverless function, which offers several advantages:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: Each module can be scaled independently, based on its specific requirements and load.&lt;br&gt;
&lt;strong&gt;Cost efficiency&lt;/strong&gt;: Thanks to usage-based billing, companies only pay for the computing time they actually use.&lt;br&gt;
&lt;strong&gt;Reduced complexity&lt;/strong&gt;: Developers can focus on the business logic as the cloud provider manages the infrastructure.&lt;br&gt;
&lt;strong&gt;Faster time-to-market&lt;/strong&gt;: Individual modules can be developed and deployed independently, which speeds up the introduction of new functions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture overview
&lt;/h2&gt;

&lt;p&gt;A serverless modulith with Azure Functions consists of a structured and modularized application. Each functional unit is provided as an independent serverless function. Here is an example of such a structure:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;API Gateway&lt;/strong&gt;: An Azure API Management Gateway serves as a central point for all incoming requests and forwards them to the corresponding Azure Functions. It handles authentication, authorization and rate limiting.&lt;br&gt;
&lt;strong&gt;Logic&lt;/strong&gt;: Each module of the application is implemented as a separate Azure Function. These modules can perform tasks such as data processing, API endpoints, background tasks or event-driven processes.&lt;br&gt;
&lt;strong&gt;Communication&lt;/strong&gt;: The modules communicate with each other via Azure Service Bus or Azure Event Grid to enable loose coupling and asynchronous processing.&lt;br&gt;
&lt;strong&gt;Database&lt;/strong&gt;: Azure Cosmos DB, Azure SQL Database or Azure Table Storage can be used as central databases that the individual modules access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1z67hwkcc76avoftuga.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1z67hwkcc76avoftuga.jpg" alt=" " width="617" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges when using serverless moduliths
&lt;/h2&gt;

&lt;p&gt;Although the serverless modulith offers many advantages, there are also challenges that need to be considered:&lt;/p&gt;

&lt;h3&gt;
  
  
  Vendor lock-in
&lt;/h3&gt;

&lt;p&gt;When using cloud-based serverless services such as Azure Functions, AWS Lambda or Google Cloud Functions, a certain degree of vendor lock-in is unavoidable. Architecture and implementation depend heavily on the specific services and APIs of the chosen cloud provider. However, it is important to emphasize that vendor lock-in is not inherently bad. The use of standard software solutions such as Office 365 or Teams also leads to vendor lock-in. It is crucial that companies carefully consider whether this dependency could be problematic in the future.&lt;/p&gt;

&lt;p&gt;One approach to minimizing risk is to use hybrid solutions such as Docker containers in a serverless environment on Azure. These containers can be operated in different cloud environments or even on-premises, which increases flexibility and reduces the risk of lock-in.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cold starts
&lt;/h3&gt;

&lt;p&gt;Cold starts are another common problem with serverless architectures. These occur when functions are reactivated after a period of inactivity and cause additional latency. Modern hosting plans such as the Azure Functions Premium Plan or AWS Lambda Provisioned Concurrency offer solutions that significantly reduce this problem. These plans make it possible to keep a certain number of instances warm, which minimizes request latency and improves response times.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The Serverless Modulith is an innovative solution for modern software development requirements. It combines the advantages of modularity and serverless computing and enables companies to work more flexibly, cost-effectively and scalably. At the same time, the complexity of infrastructure management is reduced. Despite the challenges, such as vendor lock-in and cold starts, the advantages outweigh the disadvantages, especially if suitable strategies are implemented to overcome these challenges.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.florian-lenz.io/blog/serverless-modular-monolithen-der-wegbereiter-fuer-agile-entwicklungen" rel="noopener noreferrer"&gt;Azure Serverless Modulith&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=5kbJSH-dKKo" rel="noopener noreferrer"&gt;Azure Serverless Modulith Video&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>serverless</category>
      <category>azure</category>
      <category>azurefunctions</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>How to create a globally distributed Azure App incredibly easy - high availability and low latency included!</title>
      <dc:creator>Florian Lenz</dc:creator>
      <pubDate>Mon, 20 May 2024 11:13:41 +0000</pubDate>
      <link>https://dev.to/florianlenz/how-to-create-a-globally-distributed-azure-app-incredibly-easy-high-availability-and-low-latency-included-1f09</link>
      <guid>https://dev.to/florianlenz/how-to-create-a-globally-distributed-azure-app-incredibly-easy-high-availability-and-low-latency-included-1f09</guid>
      <description>&lt;p&gt;With digitalization and globalization, where users are distributed worldwide, it is essential to develop applications that are both fast and reliable, regardless of where the user is located. A globally distributed Azure app enables exactly that: it ensures that your application is not only available anywhere in the world, but also offers low latency and fast response times. In this article, I show how such an app can be deployed incredibly easily and what benefits this brings.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why a globally distributed application makes sense
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Low latency:&lt;/strong&gt; If an application is used by users on different continents, the physical distance can lead to high latency, which affects the user experience. Distributing the application across multiple Azure regions ensures that data is processed as close to the user as possible, significantly reducing latency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High availability:&lt;/strong&gt; A globally distributed architecture increases the application's reliability. If one region fails or is overloaded, data traffic can be automatically redirected to other regions. This ensures that the application remains available at all times, which is particularly important for business-critical applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Low costs:&lt;/strong&gt; Thanks to Azure Functions and other serverless services, the costs for a globally distributed application can be controlled and optimized. Azure Functions enables automatic scaling of resources based on actual demand, so you only pay for the resources you use. This results in an efficient cost structure that makes it possible to operate a globally distributed application without incurring significantly higher costs compared to an application in a single region. The pay-as-you-go model of Azure Functions ensures that costs remain low, even if the application is distributed globally.&lt;/p&gt;

&lt;p&gt;By using Azure services such as Azure Cosmos DB, Azure Functions and Azure Front Door, these benefits can be easily realized and applications can be optimally tailored to the global needs of users.&lt;/p&gt;

&lt;h3&gt;
  
  
  Basic concepts and tools for a globally distributed Azure app
&lt;/h3&gt;

&lt;p&gt;Before a globally distributed Azure app can be deployed, it is important to understand the basic concepts and tools involved. Here are the essential Azure services used for global distribution and optimization of an application:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Azure Cosmos DB:&lt;/strong&gt; Azure Cosmos DB is a globally distributed, multi-model database service that can be seamlessly replicated across multiple regions. It offers guaranteed low latency and high availability, as well as the ability to automatically replicate data and keep it consistent. This is particularly important for applications that need to access real-time data worldwide.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Azure Functions:&lt;/strong&gt; Azure Functions is a serverless service that makes it possible to scale applications quickly and efficiently. With Azure Functions, you only pay for the resources you use, which minimizes operating costs. Automatic scaling ensures that the application runs smoothly even under heavy traffic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Azure Front Door:&lt;/strong&gt; Azure Front Door provides global load balancing and accelerated application delivery by acting as a global HTTP/HTTPS load-balancing service. It optimizes traffic, increases resiliency and ensures fast response times regardless of where users are located. Azure Front Door also improves security with built-in DDoS protection and Web Application Firewall (WAF) capabilities.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fols6odhlqedcdcwbc7tb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fols6odhlqedcdcwbc7tb.png" alt=" " width="800" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These services work together to ensure that the application is globally available, fast and cost-efficient. The next section describes in detail how these tools can be configured and used in a few steps to deploy a globally distributed Azure app.&lt;/p&gt;

&lt;h2&gt;
  
  
  Steps to deploy a globally distributed Azure app
&lt;/h2&gt;

&lt;p&gt;The following steps are necessary to deploy a globally distributed Azure app.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create and configure the Cosmos DB
&lt;/h3&gt;

&lt;p&gt;To lay the foundation for a globally distributed Azure app, setting up and configuring Azure Cosmos DB is a crucial step. Start by logging into the Azure portal and creating a new Azure Cosmos DB instance. Choose the API model that best suits the requirements of your application - be it SQL, MongoDB or Cassandra.&lt;/p&gt;

&lt;p&gt;Once the instance has been created, it is important to configure global replication. You do this by adding the geographical regions in which your data should be replicated. This configuration ensures that your data is available worldwide and stored close to your users, which significantly reduces latency. In addition, the multi-region writes feature enables write operations in several regions, ensuring high availability and consistency of the data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh740yepatv9rdxfyuvvp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh740yepatv9rdxfyuvvp.png" alt=" " width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the Azure Cosmos DB settings, you will find options for managing the consistency levels and configuring the replication strategies. Here you can select the desired level of consistency that best suits the requirements of your application - from strong consistency to eventual consistency.&lt;/p&gt;

&lt;p&gt;A major advantage of Azure Cosmos DB is the ability to add additional regions after the initial deployment. The Azure Cosmos DB instance can be opened via the Azure portal and the "Add regions" option can be selected in the "Replication" area. After selecting the desired regions, the process is completed by clicking on "Save". Azure Cosmos DB replicates the data to the new regions automatically and without downtime. This allows the database's reach and availability to be expanded flexibly and seamlessly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50vtlgw7f57on1ufe27u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50vtlgw7f57on1ufe27u.png" alt=" " width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;
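&lt;p&gt;The portal steps above can also be scripted. The following Azure CLI sketch creates an account replicated to two regions and later adds a third; the account name, resource group and regions are placeholder assumptions, not values from a real deployment.&lt;/p&gt;

```shell
# Sketch: Cosmos DB account replicated to two regions (placeholder names).
az cosmosdb create \
  --name my-global-cosmos \
  --resource-group my-rg \
  --locations regionName=westeurope failoverPriority=0 \
  --locations regionName=eastus failoverPriority=1 \
  --enable-multiple-write-locations true

# Add another region later -- Cosmos DB replicates it without downtime.
# Note: "az cosmosdb update" expects the full list of locations, not a delta.
az cosmosdb update \
  --name my-global-cosmos \
  --resource-group my-rg \
  --locations regionName=westeurope failoverPriority=0 \
  --locations regionName=eastus failoverPriority=1 \
  --locations regionName=southeastasia failoverPriority=2
```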

&lt;h3&gt;
  
  
  Setup and configuration of Azure Functions
&lt;/h3&gt;

&lt;p&gt;After setting up Azure Cosmos DB, the next step is to configure Azure Functions to use serverless functions that can scale automatically and interact efficiently with globally distributed data.&lt;/p&gt;

&lt;p&gt;First, a new Azure Functions app is created in the Azure portal. A serverless plan (Consumption Plan) that supports automatic scaling is selected. A separate Azure Functions app is created for each desired region to ensure that the functions are executed close to the users and therefore offer low latency.&lt;/p&gt;

&lt;p&gt;Once the Functions apps have been created in the respective regions, the desired functionality can be implemented. This ensures that the functions interact efficiently with Azure Cosmos DB to perform data operations. The code for the functions can be written directly in the portal or deployed from a local development environment.&lt;/p&gt;

&lt;p&gt;Once the functions are implemented, they must be deployed to all Functions apps created in the respective regions. This ensures that the application is available in every region and that the benefits of low latency and high availability can be utilized.&lt;/p&gt;

&lt;p&gt;Azure Functions uses automatic scaling to ensure that the application runs smoothly even with high data traffic. The serverless architecture means that you only pay for the resources you actually use, which keeps operating costs low.&lt;/p&gt;
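&lt;p&gt;A minimal CLI sketch of the per-region setup, assuming placeholder names, a .NET isolated runtime and pre-existing storage accounts (one per region):&lt;/p&gt;

```shell
# Sketch: one Consumption-plan Functions app per region (placeholder names).
# The runtime choice and the storage accounts are assumptions; the storage
# accounts (e.g. "mystoragewesteurope") must already exist.
for REGION in westeurope eastus; do
  az functionapp create \
    --name "my-func-$REGION" \
    --resource-group my-rg \
    --consumption-plan-location "$REGION" \
    --runtime dotnet-isolated \
    --functions-version 4 \
    --storage-account "mystorage$REGION"
done
```

&lt;p&gt;The same function code is then deployed to each of these apps, for example from a CI/CD pipeline, so every region serves identical functionality.&lt;/p&gt;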

&lt;h3&gt;
  
  
  Configuration of the Azure Front Door
&lt;/h3&gt;

&lt;p&gt;After setting up Azure Cosmos DB and Azure Functions, the next step is to configure Azure Front Door to ensure global load balancing and optimization of application deployment.&lt;/p&gt;

&lt;p&gt;A new Azure Front Door instance is created in the Azure portal. The Azure Functions endpoints created are added as backend pools. In the Azure Front Door settings, load balancing methods can be configured to distribute traffic based on the geographical location of users. This ensures that user requests are routed to the closest or most available data center, minimizing latency and improving the user experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcl9za6jniozsynuqao08.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcl9za6jniozsynuqao08.png" alt=" " width="800" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Routing rules are set up to define specific requirements and priorities for data traffic. This includes routing requests to specific backend pools based on URL paths or other criteria.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjxilq207yxj90u7h1f3e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjxilq207yxj90u7h1f3e.png" alt=" " width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In addition, security features such as the Web Application Firewall (WAF) can be activated to protect the application from threats. Azure Front Door also offers integrated DDoS protection mechanisms that help secure the application from attacks and ensure availability.&lt;/p&gt;

&lt;p&gt;Once all configurations are complete, Azure Front Door should be tested to verify that traffic is distributed correctly and that the application is reachable in all regions. With this configuration, the global reach and performance of the Azure app can be fully utilized.&lt;/p&gt;
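&lt;p&gt;Scripted, the Front Door setup looks roughly like this. All names, the health-probe path and the regions are placeholders, and the flags reflect the "az afd" command group for Front Door Standard; exact flag names can vary between CLI versions.&lt;/p&gt;

```shell
# Sketch: Front Door Standard profile routing to the regional Functions apps.
az afd profile create --profile-name my-frontdoor --resource-group my-rg \
  --sku Standard_AzureFrontDoor

az afd endpoint create --endpoint-name my-app --profile-name my-frontdoor \
  --resource-group my-rg

# Origin group with health probes; latency-based routing is the default.
az afd origin-group create --origin-group-name functions \
  --profile-name my-frontdoor --resource-group my-rg \
  --probe-request-type GET --probe-protocol Https --probe-path /api/health \
  --sample-size 4 --successful-samples-required 3 \
  --additional-latency-in-milliseconds 50

# One origin per regional Functions app (placeholder hostnames).
for REGION in westeurope eastus; do
  az afd origin create --origin-name "func-$REGION" \
    --origin-group-name functions \
    --profile-name my-frontdoor --resource-group my-rg \
    --host-name "my-func-$REGION.azurewebsites.net" \
    --origin-host-header "my-func-$REGION.azurewebsites.net" \
    --priority 1 --weight 1000
done

# Route all HTTPS traffic on the endpoint to the origin group.
az afd route create --route-name default --profile-name my-frontdoor \
  --resource-group my-rg --endpoint-name my-app --origin-group functions \
  --supported-protocols Https --forwarding-protocol HttpsOnly \
  --https-redirect Enabled --link-to-default-domain Enabled
```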

&lt;h3&gt;
  
  
  Security configuration for Azure Functions
&lt;/h3&gt;

&lt;p&gt;To ensure that only traffic from Azure Front Door is allowed and thus security mechanisms cannot be bypassed, appropriate settings must be made in Azure Functions. This configuration increases the security of the application by ensuring that all requests are routed through Azure Front Door and checked.&lt;/p&gt;

&lt;p&gt;Open the Azure Functions app in the Azure portal and navigate to "Networking" -&amp;gt; "Public network access" in the settings. There, configure an access restriction by adding a rule that only allows traffic from Azure Front Door. Azure Front Door uses a specific set of IP addresses that is updated regularly; instead of entering all of them manually, select the rule type "Service Tag" and the service tag "AzureFrontDoor.Backend". This ensures that only traffic arriving via Azure Front Door reaches the Azure Function. For additional protection, check the X-Azure-FDID header: Azure Front Door sets this header to the ID of your Front Door profile, so validating it ensures that only traffic from your own instance is allowed through.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkp51rc26ore5v3goykim.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkp51rc26ore5v3goykim.png" alt=" " width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These measures help to protect the application from direct access by only allowing validated and verified requests. This ensures that security features such as the Web Application Firewall (WAF) and DDoS protection mechanisms configured in Azure Front Door cannot be bypassed and the application remains protected from potential threats.&lt;/p&gt;
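&lt;p&gt;The same restriction can be applied per app via the CLI. App and rule names are placeholders, and the GUID must be replaced with the "Front Door ID" shown on your own profile's overview page:&lt;/p&gt;

```shell
# Sketch: allow only Front Door traffic, pinned to one specific profile
# via the X-Azure-FDID header (placeholder names and GUID).
az functionapp config access-restriction add \
  --name my-func-westeurope \
  --resource-group my-rg \
  --rule-name AllowFrontDoor \
  --action Allow \
  --priority 100 \
  --service-tag AzureFrontDoor.Backend \
  --http-header x-azure-fdid=00000000-0000-0000-0000-000000000000
```

&lt;p&gt;Repeat this for every regional Functions app so that no region can be reached directly.&lt;/p&gt;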

&lt;h3&gt;
  
  
  Price considerations
&lt;/h3&gt;

&lt;p&gt;One of the most attractive features of a globally distributed Azure app is its cost efficiency. Even with a high number of requests (for example, 100 million globally distributed requests per month), the total monthly cost for such an application is only around 353 USD. This estimate includes the use of Azure Cosmos DB, Azure Functions and Azure Front Door.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4z4ovjbeu4n3avzxz5a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4z4ovjbeu4n3avzxz5a.png" alt=" " width="800" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Azure Functions offers a serverless billing model where you only pay for the resources you actually use. This means that costs are dynamically adjusted to demand, avoiding unnecessary expenditure. Azure Cosmos DB charges for usage based on the operations performed and the amount of storage required, while Azure Front Door takes care of managing global traffic and optimizing application delivery without incurring additional infrastructure costs.&lt;/p&gt;

&lt;p&gt;The ability to automatically scale and efficiently utilize resources contributes significantly to cost efficiency. By only paying for actual consumption, companies can achieve high performance and global availability at a fraction of the cost of traditional infrastructure models. This price flexibility and transparency make Azure an excellent choice for deploying globally distributed applications.&lt;/p&gt;
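&lt;p&gt;To make the pay-per-use effect concrete, here is a back-of-the-envelope calculation of the Functions execution charge alone. The rates (0.20 USD per million executions, with the first million free) are assumptions based on the consumption plan's published pricing at the time of writing; always check the current Azure pricing pages.&lt;/p&gt;

```shell
# Rough Azure Functions execution charge for 100M requests/month.
# Rates are assumptions -- verify against the current pricing page.
awk 'BEGIN {
  requests = 100000000            # monthly requests
  free     = 1000000              # free execution grant (assumed)
  rate     = 0.20 / 1000000      # USD per execution (assumed)
  printf "execution charge: %.2f USD\n", (requests - free) * rate
}'
```

&lt;p&gt;Executions are only a small slice of the roughly 353 USD total; most of it comes from Cosmos DB request units and storage plus Front Door request and egress charges, which scale with usage in the same pay-as-you-go fashion.&lt;/p&gt;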

&lt;h2&gt;
  
  
  Best practices and tips for optimization
&lt;/h2&gt;

&lt;p&gt;In order to exploit the full potential of a globally distributed Azure app, some best practices and tips for optimization and management should be followed:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Select consistency model:&lt;/strong&gt; Select the most appropriate consistency model for your application in Azure Cosmos DB. Strong consistency ensures data integrity but can increase latency; eventual consistency offers higher availability and lower latency, but allows temporary data inconsistencies.&lt;/p&gt;
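&lt;p&gt;The account's default consistency level can be changed at any time, for example to Session, which is a common middle ground for globally distributed apps. Account and resource-group names below are placeholders:&lt;/p&gt;

```shell
# Sketch: relax the account default from Strong to Session consistency.
az cosmosdb update \
  --name my-global-cosmos \
  --resource-group my-rg \
  --default-consistency-level Session
```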

&lt;p&gt;&lt;strong&gt;Efficient data partitioning:&lt;/strong&gt; Ensure that your data is partitioned efficiently. Good partitioning in Azure Cosmos DB improves scalability and performance by preventing certain partitions from being overloaded.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Caching strategies:&lt;/strong&gt; Implement caching strategies to speed up frequent data accesses and reduce the load on the database. Azure Redis Cache can be a useful addition here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use serverless architectures:&lt;/strong&gt; Take advantage of serverless architectures like Azure Functions to scale automatically and save costs. Make sure to design functions so that they are short and precise in order to minimize execution costs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Optimize load balancing:&lt;/strong&gt; Configure Azure Front Door to optimally distribute traffic. Use geo-routing and other load balancing algorithms to achieve the best performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security measures:&lt;/strong&gt; Ensure that your application is protected by appropriate security measures. Enable the Web Application Firewall (WAF) in Azure Front Door and use Azure Security Center to detect and remediate potential threats.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automated scaling:&lt;/strong&gt; Use the automatic scaling of Azure Functions and Azure Cosmos DB to dynamically adapt resources to current demand. This helps to optimize costs while ensuring high availability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Regular review and adjustment:&lt;/strong&gt; Carry out regular performance reviews and adjust configurations if necessary. Regularly review metrics and reports from Azure Monitor and Application Insights to identify bottlenecks and optimization opportunities.&lt;/p&gt;

&lt;p&gt;By implementing these best practices and tips, the performance and efficiency of the globally distributed Azure App can be significantly improved. Continuous optimization and management ensures that the application meets changing requirements and functions optimally.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Deploying a globally distributed Azure app offers significant benefits in terms of performance, availability and cost efficiency. By using Azure Cosmos DB, Azure Functions and Azure Front Door, an application can be deployed quickly and securely worldwide. With just a few configurations, new regions can be added and security mechanisms implemented to ensure the protection of the application. In addition, the application remains cost-efficient, even with high data traffic. The combination of these powerful Azure services makes it possible to meet the demands of a globalized and digitized world while providing an optimal user experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=9j22DodPimw" rel="noopener noreferrer"&gt;Azure Function global verteilt bereitstellen | Serverless global distributed Azure App&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://florian-lenz.io/blog/global-distributed-azure-app" rel="noopener noreferrer"&gt;Deploy a globally distributed Azure app incredibly easily – high availability and low latency included!&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://azure.microsoft.com/en-us/blog/how-to-build-globally-distributed-applications-with-azure-cosmos-db-and-pulumi/" rel="noopener noreferrer"&gt;How to build globally distributed applications with Azure Cosmos DB and Pulumi&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/florianlenz96/GlobalDistributedAzureFunctions" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>azure</category>
      <category>cloud</category>
      <category>serverless</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
