<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: evdbogaard</title>
    <description>The latest articles on DEV Community by evdbogaard (@evdbogaard).</description>
    <link>https://dev.to/evdbogaard</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F552393%2F4cd066f1-ef33-4172-97a6-cfbf726cf088.png</url>
      <title>DEV Community: evdbogaard</title>
      <link>https://dev.to/evdbogaard</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/evdbogaard"/>
    <language>en</language>
    <item>
      <title>Running MLflow in Azure Container Apps: Lessons from our setup</title>
      <dc:creator>evdbogaard</dc:creator>
      <pubDate>Tue, 30 Sep 2025 07:18:39 +0000</pubDate>
      <link>https://dev.to/evdbogaard/running-mlflow-in-azure-container-apps-lessons-from-our-setup-58jf</link>
      <guid>https://dev.to/evdbogaard/running-mlflow-in-azure-container-apps-lessons-from-our-setup-58jf</guid>
      <description>&lt;p&gt;We have been doing more and more work with prompting, and we wanted a way to keep track of which prompts were used, how many tokens they consumed, and what the resulting output was. Just as important, the data needed to be available to everyone on the team so we could all learn from the results. To support this, we decided to set up an MLflow server.&lt;/p&gt;

&lt;p&gt;MLflow is an open-source platform for managing the machine learning lifecycle. While it is often used for tracking experiments and storing models, we mainly used it for tracking our prompt runs and related metadata. At first, one team member ran MLflow locally in a Docker container, which worked fine as long as only one person needed it. But as the project grew this was no longer sustainable. We wanted a setup that was reliable, shareable, and preserved our experiment history. Our main goals were to put MLflow online and connect it to a SQL database for persistent storage. Our first thought was simple: take the same Docker image and run it inside an Azure Container App. Easy, right? Unfortunately, we hit some difficulties along the way.&lt;/p&gt;

&lt;p&gt;In this blog, we'll walk through the initial &lt;code&gt;docker run&lt;/code&gt; command we used locally, all the way to the final version that worked in Azure. Along the way, we'll explain the hurdles we encountered and how we solved them.&lt;/p&gt;

&lt;h2&gt;Running locally&lt;/h2&gt;

&lt;p&gt;The command to run MLflow locally was:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 5000:5000 ghcr.io/mlflow/mlflow mlflow server &lt;span class="nt"&gt;--host&lt;/span&gt; 0.0.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The most important part of the command is &lt;code&gt;--host 0.0.0.0&lt;/code&gt;. By default, the MLflow server only accepts connections from the local machine; the &lt;code&gt;--host&lt;/code&gt; argument makes it accept connections from other machines as well. The &lt;code&gt;-p 5000:5000&lt;/code&gt; maps port 5000 in the container to port 5000 on the host.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;mlflow server&lt;/code&gt; part is the startup command for the image; without it, the container starts no process at all.&lt;/p&gt;

&lt;p&gt;Running MLflow like this meant all data was stored locally in the container's filesystem. This was a problem, because as soon as the container was deleted, all stored data was lost.&lt;/p&gt;

&lt;h2&gt;Artifact and backend store&lt;/h2&gt;

&lt;p&gt;As stated before, all data was stored in a simple folder structure. With only one or two people using MLflow this could work, as changes were appended to existing files. However, there is no file locking to prevent multiple people from writing simultaneously, so the more people used MLflow, the higher the chance of corrupting these files.&lt;/p&gt;

&lt;p&gt;To solve this, MLflow can be configured with external stores, so its data is kept safely outside the container.&lt;/p&gt;

&lt;p&gt;MLflow has two types of stores to preserve information about runs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Backend store: stores metadata about runs (IDs, start and end times, parameters, metrics, tags, logs, etc.)&lt;/li&gt;
&lt;li&gt;Artifact store: stores artifacts for each run (model weights, images, logs, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Setting up the database store&lt;/h2&gt;

&lt;p&gt;MLflow supports a wide range of databases for the backend store. As we were already using Azure SQL Database, we decided to use that for MLflow as well.&lt;/p&gt;

&lt;p&gt;To tell MLflow to use a database we added the following argument to the Docker run command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nt"&gt;--backend-store-uri&lt;/span&gt; &lt;span class="s2"&gt;"mssql+pyodbc://{dbUser}:{dbPassword}@{dbServer}.database.windows.net/{dbName}?driver=ODBC+Driver+18+for+SQL+Server&amp;amp;Encrypt=yes&amp;amp;TrustServerCertificate=no&amp;amp;Connection+Timeout=30"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
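&lt;p&gt;One thing to watch out for with this URI: if the user name or password contains characters such as an at-sign or semicolon, they have to be percent-encoded or the URI will be parsed incorrectly. A minimal sketch of assembling the URI safely in Python (all values are placeholders, not our real setup; the remaining query parameters such as Encrypt and TrustServerCertificate are appended the same way, separated by ampersands):&lt;/p&gt;

```python
from urllib.parse import quote_plus

# Placeholder credentials, for illustration only.
db_user = "mlflow_admin"
db_password = "p@ssw0rd;1"  # contains characters that must be percent-encoded
db_server = "myserver"
db_name = "mlflowdb"

params = "driver=ODBC+Driver+18+for+SQL+Server"
uri = (
    f"mssql+pyodbc://{quote_plus(db_user)}:{quote_plus(db_password)}"
    f"@{db_server}.database.windows.net/{db_name}?{params}"
)
print(uri)
```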



&lt;p&gt;When we added this argument, MLflow failed to start because the base image did not have all the required dependencies for MSSQL. To solve this, we created our own &lt;code&gt;Dockerfile&lt;/code&gt; and installed the missing dependencies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; ghcr.io/mlflow/mlflow:latest&lt;/span&gt;

&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="s"&gt; root&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    curl gnupg2

&lt;span class="k"&gt;RUN &lt;/span&gt;curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    curl https://packages.microsoft.com/config/debian/11/prod.list &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /etc/apt/sources.list.d/mssql-release.list &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nv"&gt;ACCEPT_EULA&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Y apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; msodbcsql18

&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-cache-dir&lt;/span&gt; pyodbc

&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="s"&gt; 1000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensured both &lt;code&gt;msodbcsql18&lt;/code&gt; and &lt;code&gt;pyodbc&lt;/code&gt; were installed. With these, MLflow started successfully, but as explained earlier, we still needed an artifact store, or artifacts would keep being written to the container's filesystem and lost when the container was replaced.&lt;/p&gt;

&lt;h2&gt;Setting up the artifact store&lt;/h2&gt;

&lt;p&gt;MLflow supports multiple artifact storage types, such as Amazon S3, Azure Blob Storage, and Google Cloud Storage. Since we were most familiar with Azure, we used Azure Blob Storage.&lt;/p&gt;

&lt;p&gt;To point MLflow to Azure Blob Storage, we used the following argument:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nt"&gt;--artifacts-destination&lt;/span&gt; wasbs://&lt;span class="o"&gt;{&lt;/span&gt;blobContainerName&lt;span class="o"&gt;}&lt;/span&gt;@&lt;span class="o"&gt;{&lt;/span&gt;storageAccountName&lt;span class="o"&gt;}&lt;/span&gt;.blob.core.windows.net/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
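&lt;p&gt;The &lt;code&gt;wasbs&lt;/code&gt; URI packs two pieces of information into one string: the blob container name before the at-sign and the storage account in the hostname. A small sketch that builds such a URI and takes it apart again (the resource names are placeholders):&lt;/p&gt;

```python
from urllib.parse import urlparse

# Placeholder resource names, for illustration only.
container = "mlflow-artifacts"
account = "mystorageacct"
dest = f"wasbs://{container}@{account}.blob.core.windows.net/"

parsed = urlparse(dest)
# urlparse exposes the container as the "username" part of the netloc
# and the account's blob endpoint as the hostname.
assert parsed.scheme == "wasbs"
assert parsed.username == container
assert parsed.hostname == f"{account}.blob.core.windows.net"
print(dest)
```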



&lt;p&gt;For authentication, there were two options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connection string: by passing an environment variable to the container, MLflow automatically uses it to connect to the correct Storage Account (&lt;code&gt;-e AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=https;AccountName={storageAccountName};AccountKey={storageAccountKey};EndpointSuffix=core.windows.net"&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Role-Based Access Control (RBAC): our preferred option, since we have been moving away from using keys and connection strings as much as possible in Azure&lt;/li&gt;
&lt;/ul&gt;
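&lt;p&gt;The two options are mutually exclusive in practice: when the connection string variable is present it takes precedence, and only without it does the identity-based (RBAC) path apply. A simplified sketch of that selection logic, written out for illustration (this is not MLflow's actual code):&lt;/p&gt;

```python
import os

def pick_auth_mode(env):
    """Illustrative only: which of the two auth options above applies.
    With a connection string in the environment, key-based auth wins;
    otherwise azure-identity and RBAC role assignments take over."""
    if "AZURE_STORAGE_CONNECTION_STRING" in env:
        return "connection-string"
    return "rbac"

print(pick_auth_mode(os.environ))
```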

&lt;p&gt;To use RBAC, we removed the connection string and added two Python packages in our Dockerfile:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;azure-storage-blob&lt;/code&gt;: required for both RBAC and connection strings&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;azure-identity&lt;/code&gt;: required for RBAC
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-cache-dir&lt;/span&gt; pyodbc azure-storage-blob azure-identity
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Bringing it to Azure Container Apps&lt;/h2&gt;

&lt;p&gt;Up to this point, everything we did was still running with a local Docker command. The full command looked like this (&lt;code&gt;evdbmlflow&lt;/code&gt; was our locally built image):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 5000:5000 evdbmlflow:latest mlflow server &lt;span class="nt"&gt;--host&lt;/span&gt; 0.0.0.0 &lt;span class="nt"&gt;--backend-store-uri&lt;/span&gt; &lt;span class="s2"&gt;"mssql+pyodbc://{dbUser}:{dbPassword}@{dbServer}.database.windows.net/{dbName}?driver=ODBC+Driver+18+for+SQL+Server&amp;amp;Encrypt=yes&amp;amp;TrustServerCertificate=no&amp;amp;Connection+Timeout=30"&lt;/span&gt; &lt;span class="nt"&gt;--artifacts-destination&lt;/span&gt; wasbs://&lt;span class="o"&gt;{&lt;/span&gt;blobContainerName&lt;span class="o"&gt;}&lt;/span&gt;@&lt;span class="o"&gt;{&lt;/span&gt;storageAccountName&lt;span class="o"&gt;}&lt;/span&gt;.blob.core.windows.net/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To get everything running in Azure Container Apps, we took the following steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Build and push the image&lt;/strong&gt;:&lt;br&gt;
  We created our Dockerfile and pushed the image to an Azure Container Registry (ACR).&lt;br&gt;
&lt;strong&gt;2. Provision supporting resources&lt;/strong&gt;:&lt;br&gt;
  We deployed the required Azure services: Azure SQL Database, Azure Blob Storage, Log Analytics, Container App Environment, Container App, and role assignments (RBAC). This gave us a backend to write our metadata and artifacts to, with permissions set so the container had access to everything it needed.&lt;br&gt;
&lt;strong&gt;3. Deploy the Container App&lt;/strong&gt;: Finally, we configured the Container App to use our custom image and added the same startup command and arguments we used for our local container, so it could connect to Blob Storage and SQL.&lt;/p&gt;

&lt;p&gt;Instead of clicking through the portal, we provisioned everything using Bicep with a few CLI commands. The full setup is available in this GitHub repository: &lt;a href="https://github.com/evdbogaard/blog-mlflow-azure" rel="noopener noreferrer"&gt;https://github.com/evdbogaard/blog-mlflow-azure&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;acr.bicep&lt;/code&gt;: Used to create the container registry&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Dockerfile&lt;/code&gt;: The full &lt;code&gt;Dockerfile&lt;/code&gt; used&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;dockercommands.sh&lt;/code&gt;: CLI commands used to log in to the registry, build the image, and push it&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;infra.bicep&lt;/code&gt;: Provisioning of all the supporting resources and the container app itself.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You now have an MLflow server up and running inside an Azure Container App. In the Bicep templates we kept the resource configurations simple, but in production you would want to tighten security and optimize costs.&lt;/p&gt;
&lt;h2&gt;MLflow tracking server behavior in Azure Container Apps&lt;/h2&gt;

&lt;p&gt;While we had MLflow running in the cloud, we noticed that in some cases artifacts were still not logged. After investigation, we traced the problem back to our client-side Python code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;mlflow&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dotenv&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;load_dotenv&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;AzureChatOpenAI&lt;/span&gt;

&lt;span class="nf"&gt;load_dotenv&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;mlflow&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_experiment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;My Test Experiment&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;mlflow&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;langchain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;autolog&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;AzureChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AZURE_OPENAI_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;azure_endpoint&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AZURE_OPENAI_ENDPOINT&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;deployment_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AZURE_OPENAI_DEPLOYMENT_NAME&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;api_version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2024-12-01-preview&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What is MLflow?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When we ran the code locally, both traces and artifacts appeared in MLflow. Running it in a local container also worked. But once we moved the container to Azure Container Apps, artifacts stopped working.&lt;/p&gt;

&lt;p&gt;It turned out that Azure Container Apps injected a number of environment variables. One of them was &lt;code&gt;OTEL_EXPORTER_OTLP_ENDPOINT&lt;/code&gt;. When this was set, MLflow switched from its default behavior and attempted to send traces to an OpenTelemetry Collector. We could not find a way to prevent this variable from being injected.&lt;/p&gt;

&lt;p&gt;Our workaround was to unset it explicitly before importing MLflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;OTEL_EXPORTER_OTLP_ENDPOINT&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
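&lt;p&gt;Since the variable has to be gone before &lt;code&gt;mlflow&lt;/code&gt; is imported, the cleanup belongs at the very top of the entry point. A small sketch of how this could be wrapped in a helper (the helper name is our own; the &lt;code&gt;import mlflow&lt;/code&gt; is commented out so the snippet runs without MLflow installed):&lt;/p&gt;

```python
import os

def scrub_otel_vars(env=os.environ):
    """Drop the endpoint injected by Azure Container Apps so MLflow
    falls back to its default tracing behavior. Returns True if the
    variable was present. Call this before importing mlflow."""
    removed = env.pop("OTEL_EXPORTER_OTLP_ENDPOINT", None)
    return removed is not None

scrub_otel_vars()
# import mlflow  # safe to import only after the scrub
```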



&lt;p&gt;With this change, behavior was consistent between local and cloud runs. When we opened the MLflow UI, we could see the traces showing up correctly, including detailed breakdowns of the prompts sent to the LLM, the responses, token usage, and more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqovy440m62fmdbeslyf9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqovy440m62fmdbeslyf9.png" alt=" " width="800" height="153"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv5z85nsewzyblrlq3pam.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv5z85nsewzyblrlq3pam.png" alt=" " width="800" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Resources&lt;/h2&gt;

&lt;p&gt;If you found the content of this blog useful, here are some resources for further reading:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/machine-learning/concept-mlflow?view=azureml-api-2" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/azure/machine-learning/concept-mlflow?view=azureml-api-2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://mlflow.org/docs/latest/ml/tracking/server/" rel="noopener noreferrer"&gt;https://mlflow.org/docs/latest/ml/tracking/server/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://mlflow.org/docs/latest/ml/tracking/artifact-stores/" rel="noopener noreferrer"&gt;https://mlflow.org/docs/latest/ml/tracking/artifact-stores/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://mlflow.org/docs/latest/ml/tracking/backend-stores/" rel="noopener noreferrer"&gt;https://mlflow.org/docs/latest/ml/tracking/backend-stores/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.mlflow.org/docs/2.17.2/llms/tracing/index.html#using-opentelemetry-collector-for-exporting-traces" rel="noopener noreferrer"&gt;https://www.mlflow.org/docs/2.17.2/llms/tracing/index.html#using-opentelemetry-collector-for-exporting-traces&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>azure</category>
      <category>mlflow</category>
      <category>containerapps</category>
    </item>
    <item>
      <title>Getting structured JSON from LLMs in C#</title>
      <dc:creator>evdbogaard</dc:creator>
      <pubDate>Fri, 25 Jul 2025 11:26:08 +0000</pubDate>
      <link>https://dev.to/evdbogaard/getting-structured-json-from-llms-in-c-25i4</link>
      <guid>https://dev.to/evdbogaard/getting-structured-json-from-llms-in-c-25i4</guid>
      <description>&lt;p&gt;Lately we've been trying to get more hands on with AI and started working on a project that uses a large language model (LLM) to extract specific information from documents. In order to use the output of the prompt in our code we needed the model to return JSON and deserialize that into a structured type.&lt;/p&gt;

&lt;p&gt;While this worked well in many cases, we ran into issues where the generated JSON didn't match our expected structure and failed to deserialize. In this blog we'll look at our initial implementation and some possible solutions we found to prevent the deserialization issues.&lt;/p&gt;

&lt;p&gt;We began by having the LLM extract information and return it to us as a JSON string. That string was then deserialized into a C# object for further use. For most documents, this approach worked just fine. However, we noticed that in some cases the model would hallucinate and generate either properties that didn't exist or types we weren't expecting. This led to deserialization failures, leaving us with no data for those documents. Unfortunately, tweaking the temperature parameter didn't prevent this from happening.&lt;/p&gt;
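&lt;p&gt;The failure mode itself is language-independent: an extra, invented property or a wrong type makes strict deserialization fall over. A toy illustration of the kind of mismatch we saw, sketched here in Python for brevity (our real code is C#, shown below):&lt;/p&gt;

```python
import json

EXPECTED_KEYS = {"personNames", "addresses"}

def validate(payload):
    """Toy check mimicking strict deserialization: reject invented
    properties and wrong types. Illustrative only."""
    data = json.loads(payload)
    unexpected = set(data) - EXPECTED_KEYS
    if unexpected:
        raise ValueError(f"unexpected properties: {sorted(unexpected)}")
    if not isinstance(data.get("personNames", []), list):
        raise ValueError("personNames must be a list")
    return data

good = '{"personNames": ["John Doe"], "addresses": []}'
bad = '{"personNames": "John Doe", "hallucinated": true}'

validate(good)        # passes
# validate(bad) would raise ValueError
```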

&lt;h2&gt;JSON mode&lt;/h2&gt;

&lt;p&gt;Our first approach was to use JSON mode. This setting tells the LLM to return a valid JSON object, but it doesn’t enforce the structure beyond that. Here’s a basic code snippet showing how we implemented it. To keep the focus on how JSON mode is used we’ve stripped out the prompt and other details from the code snippet that felt unrelated to the problem.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;azureOpenAIClient&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;AzureOpenAIClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Uri&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;endpoint&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;AzureCliCredential&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;chatClient&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;azureOpenAIClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GetChatClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;deploymentName&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="n"&gt;ChatCompletionOptions&lt;/span&gt; &lt;span class="n"&gt;options&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Omitted out other options...&lt;/span&gt;
    &lt;span class="n"&gt;ResponseFormat&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ChatResponseFormat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;CreateJsonObjectFormat&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;ChatMessage&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;messages&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;SystemChatMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="c1"&gt;// The prompt telling what information to retrieve from the document&lt;/span&gt;
    &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;AssistantChatMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;example&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="c1"&gt;// JSON example of how the output should look&lt;/span&gt;
    &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;UserChatMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;documentContent&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// Content of the document to process&lt;/span&gt;
&lt;span class="p"&gt;];&lt;/span&gt;

&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;completion&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;chatClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;CompleteChatAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;JsonSerializer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Deserialize&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;OutputModel&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;(&lt;/span&gt;&lt;span class="n"&gt;completion&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;First&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To increase the chances of getting a valid JSON response we did two things. First, we used &lt;code&gt;ChatResponseFormat.CreateJsonObjectFormat()&lt;/code&gt; to tell the model we expect JSON as the output. Together with this setting, you also have to state in your prompt that JSON should be generated. Second, we included an &lt;code&gt;AssistantChatMessage&lt;/code&gt; containing a valid JSON example of the expected output. It's filled with dummy data and serialized to a string before being passed to the model.&lt;/p&gt;

&lt;p&gt;The code to do that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="nf"&gt;CreateExample&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;responseModel&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;OutputModel&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;PersonNames&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="s"&gt;"John Doe"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="s"&gt;"Doe, J."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="s"&gt;"Doe, John"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="n"&gt;Addresses&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="k"&gt;new&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;Street&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"123 Maple Street"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;City&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Springfield"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;PostalCode&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"62704"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Country&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"USA"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="k"&gt;new&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;Street&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"456 Queen St"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;City&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Toronto"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="k"&gt;new&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;Street&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"789 Rue de Rivoli"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;City&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Paris"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Country&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"France"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="k"&gt;new&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;Street&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"10 Downing Street"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;City&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"London"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;PostalCode&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"SW1A 2AA"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Country&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"UK"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="k"&gt;new&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;City&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Cupertino"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;PostalCode&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"95014"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Country&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"USA"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;JsonSerializer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Serialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;responseModel&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;PropertyNameCaseInsensitive&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;record&lt;/span&gt; &lt;span class="nc"&gt;OutputModel&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="n"&gt;PersonNames&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;AddressModel&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="n"&gt;Addresses&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;record&lt;/span&gt; &lt;span class="nc"&gt;AddressModel&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;Street&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;City&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;PostalCode&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;Country&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most of the time the LLM generated output in the shape we expected. For example:&lt;br&gt;
&lt;code&gt;{"PersonNames":["Henk"],"Addresses":[{"City":"Amsterdam","Country":"Netherlands"}]}&lt;/code&gt;&lt;br&gt;
Unfortunately, on some occasions the output came back in a different shape, like this:&lt;br&gt;
&lt;code&gt;{"PersonNames": ["Henk"], "Addresses": ["Amsterdam"]}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In that case, the &lt;code&gt;Addresses&lt;/code&gt; property was a string array instead of an array of &lt;code&gt;AddressModel&lt;/code&gt;, which led to deserialization errors.&lt;/p&gt;
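&lt;p&gt;A minimal sketch of that failure mode (assuming &lt;code&gt;System.Text.Json&lt;/code&gt; and a hypothetical &lt;code&gt;response&lt;/code&gt; variable holding the raw LLM output):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;try
{
    var output = JsonSerializer.Deserialize&amp;lt;OutputModel&amp;gt;(response);
}
catch (JsonException)
{
    // Thrown for the second shape: "Amsterdam" is a plain string,
    // not an object that maps to AddressModel
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;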
&lt;h2&gt;
  
  
  Structured outputs
&lt;/h2&gt;

&lt;p&gt;Newer models introduced structured outputs, an improvement on JSON mode that solves this exact problem. While JSON mode only guarantees valid JSON, structured outputs go further by enforcing a specific schema.&lt;/p&gt;

&lt;p&gt;To use this, we swapped &lt;code&gt;CreateJsonObjectFormat()&lt;/code&gt; for &lt;code&gt;CreateJsonSchemaFormat()&lt;/code&gt; and provided a JSON schema based on our expected model. Since .NET doesn't generate JSON schemas by default, we used the &lt;code&gt;NJsonSchema&lt;/code&gt; NuGet package for the example code here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// dotnet add package NJsonSchema&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;schema&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;JsonSchema&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;FromType&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;OutputModel&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;();&lt;/span&gt;
&lt;span class="n"&gt;ChatCompletionOptions&lt;/span&gt; &lt;span class="n"&gt;options&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Omitted out other options...&lt;/span&gt;
    &lt;span class="n"&gt;ResponseFormat&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ChatResponseFormat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;CreateJsonSchemaFormat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;jsonSchemaFormatName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;nameof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;OutputModel&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="n"&gt;jsonSchema&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;BinaryData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;FromString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ToJson&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By using structured outputs we can now trust the LLM to always return a JSON string we can safely deserialize back in our code.&lt;/p&gt;
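&lt;p&gt;With the schema enforced, the completion content can go straight into the deserializer (a minimal sketch; &lt;code&gt;completion&lt;/code&gt; is assumed to be the chat completion result returned by the client call):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;var json = completion.Content[0].Text;
var output = JsonSerializer.Deserialize&amp;lt;OutputModel&amp;gt;(json);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;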

&lt;h3&gt;
  
  
  Using Semantic Kernel
&lt;/h3&gt;

&lt;p&gt;A different way to call LLMs in C# is Semantic Kernel, which provides extra abstractions that make working with LLMs easier in code. It also has better support for structured outputs, as it no longer requires manual JSON schema generation.&lt;/p&gt;

&lt;p&gt;Here's what that code looks like when rewritten for Semantic Kernel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;kernel&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Kernel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;CreateBuilder&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddAzureOpenAIChatCompletion&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;deploymentName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;deploymentName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;endpoint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;endpoint&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;credentials&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;AzureCliCredential&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Build&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;chatClient&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;kernel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetRequiredService&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IChatCompletionService&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;();&lt;/span&gt;

&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;messages&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;ChatHistory&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddSystemMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddAssistantMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;example&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddUserMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;documentContent&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="n"&gt;OpenAIPromptExecutionSettings&lt;/span&gt; &lt;span class="n"&gt;settings&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;ResponseFormat&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;typeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;OutputModel&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;completion&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;chatClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GetChatMessageContentAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;settings&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By passing &lt;code&gt;typeof(OutputModel)&lt;/code&gt; into the &lt;code&gt;OpenAIPromptExecutionSettings&lt;/code&gt;, Semantic Kernel automatically handled schema generation for us.&lt;/p&gt;

&lt;h2&gt;
  
  
  Validating outputs
&lt;/h2&gt;

&lt;p&gt;Structured outputs worked great, but not all models we used supported them. In those cases, we had to fall back on a more manual approach. We built a custom &lt;code&gt;JsonConverter&lt;/code&gt; to validate and deserialize each property individually. If one value failed, we skipped it and kept the rest. This meant we would still lose some data, but we could display most of the generated output instead of nothing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;internal&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;OutputModelConverter&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;JsonConverter&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;OutputModelConverter&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="n"&gt;OutputModelModel&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nf"&gt;Read&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;ref&lt;/span&gt; &lt;span class="n"&gt;Utf8JsonReader&lt;/span&gt; &lt;span class="n"&gt;reader&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Type&lt;/span&gt; &lt;span class="n"&gt;typeToConvert&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;JsonSerializerOptions&lt;/span&gt; &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;OutputModelModel&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

        &lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;var&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;JsonDocument&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ParseValue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;ref&lt;/span&gt; &lt;span class="n"&gt;reader&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

        &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;root&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RootElement&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;PersonNames&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;TryGetStringArray&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"PersonNames"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;??&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
        &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Addresses&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;TryGetObjectArray&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;AddressModel&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;(&lt;/span&gt;&lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"Addresses"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;??&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;Write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Utf8JsonWriter&lt;/span&gt; &lt;span class="n"&gt;writer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;NerResponseModel&lt;/span&gt; &lt;span class="k"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;JsonSerializerOptions&lt;/span&gt; &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;JsonSerializer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Serialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;writer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[]?&lt;/span&gt; &lt;span class="nf"&gt;TryGetStringArray&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;JsonElement&lt;/span&gt; &lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;propertyName&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;TryGetProperty&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;propertyName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;out&lt;/span&gt; &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;property&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;property&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ValueKind&lt;/span&gt; &lt;span class="p"&gt;==&lt;/span&gt; &lt;span class="n"&gt;JsonValueKind&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Array&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;property&lt;/span&gt;
                    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;EnumerateArray&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
                    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Where&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ValueKind&lt;/span&gt; &lt;span class="p"&gt;==&lt;/span&gt; &lt;span class="n"&gt;JsonValueKind&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Select&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GetString&lt;/span&gt;&lt;span class="p"&gt;()!)&lt;/span&gt;
                    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ToArray&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;catch&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="c1"&gt;// ...logging which property failed&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="p"&gt;[]?&lt;/span&gt; &lt;span class="n"&gt;TryGetObjectArray&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;(&lt;/span&gt;&lt;span class="n"&gt;JsonElement&lt;/span&gt; &lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;propertyName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;JsonSerializerOptions&lt;/span&gt; &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;TryGetProperty&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;propertyName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;out&lt;/span&gt; &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;property&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;property&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ValueKind&lt;/span&gt; &lt;span class="p"&gt;==&lt;/span&gt; &lt;span class="n"&gt;JsonValueKind&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Array&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;list&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;();&lt;/span&gt;
            &lt;span class="k"&gt;foreach&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="n"&gt;property&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;EnumerateArray&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;try&lt;/span&gt;
                &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;obj&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Deserialize&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;(&lt;/span&gt;&lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
                    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;obj&lt;/span&gt; &lt;span class="p"&gt;!=&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                        &lt;span class="n"&gt;list&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;obj&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
                &lt;span class="k"&gt;catch&lt;/span&gt;
                &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="c1"&gt;// ...logging which property failed&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;list&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Count&lt;/span&gt; &lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="p"&gt;[..&lt;/span&gt; &lt;span class="n"&gt;list&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;If you want to dive deeper into the topics discussed in this post, here are a few helpful links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://platform.openai.com/docs/guides/structured-outputs?api-mode=responses#json-mode" rel="noopener noreferrer"&gt;https://platform.openai.com/docs/guides/structured-outputs?api-mode=responses#json-mode&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://platform.openai.com/docs/guides/structured-outputs?api-mode=responses" rel="noopener noreferrer"&gt;https://platform.openai.com/docs/guides/structured-outputs?api-mode=responses&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://devblogs.microsoft.com/semantic-kernel/using-json-schema-for-structured-output-in-net-for-openai-models/" rel="noopener noreferrer"&gt;https://devblogs.microsoft.com/semantic-kernel/using-json-schema-for-structured-output-in-net-for-openai-models/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/dotnet/standard/serialization/system-text-json/converters-how-to" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/dotnet/standard/serialization/system-text-json/converters-how-to&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/RicoSuter/NJsonSchema" rel="noopener noreferrer"&gt;https://github.com/RicoSuter/NJsonSchema&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>csharp</category>
      <category>structuredoutputs</category>
      <category>json</category>
      <category>ai</category>
    </item>
    <item>
      <title>Handling Nullable References with deserialization in .NET 9</title>
      <dc:creator>evdbogaard</dc:creator>
      <pubDate>Tue, 31 Dec 2024 07:26:34 +0000</pubDate>
      <link>https://dev.to/evdbogaard/handling-nullable-references-with-deserialization-in-net-9-1b57</link>
      <guid>https://dev.to/evdbogaard/handling-nullable-references-with-deserialization-in-net-9-1b57</guid>
      <description>&lt;p&gt;.NET 9 introduced many features, including long-awaited improvements for JSON deserialization. These updates enhance handling nullable types, which helps avoid common issues like &lt;code&gt;NullReferenceException&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Over the years, C# has provided tools to manage nullability, such as enabling nullable annotations and using the &lt;code&gt;required&lt;/code&gt; modifier. However, deserialization often bypassed these safeguards. Let’s explore how .NET 9 addresses this problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Nullable reference types
&lt;/h2&gt;

&lt;p&gt;Before C# 8.0, &lt;code&gt;NullReferenceException&lt;/code&gt; errors were common, especially with uninitialized strings. A string without a value defaults to &lt;code&gt;null&lt;/code&gt;, creating potential runtime issues. Developers often assigned default values to avoid exceptions, but those workarounds made it hard to tell whether a value was intentionally empty or simply never set.&lt;/p&gt;

&lt;p&gt;With C# 8.0, nullable reference types were introduced to improve safety by detecting null references. Enabling this feature requires adding &lt;code&gt;&amp;lt;Nullable&amp;gt;enable&amp;lt;/Nullable&amp;gt;&lt;/code&gt; to your project file.&lt;/p&gt;
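&lt;p&gt;In the project file this looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&amp;lt;PropertyGroup&amp;gt;
  &amp;lt;Nullable&amp;gt;enable&amp;lt;/Nullable&amp;gt;
&amp;lt;/PropertyGroup&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;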

&lt;p&gt;Once enabled, the compiler issues warnings about potentially uninitialized properties. For example, the class below triggers warnings for &lt;code&gt;FirstName&lt;/code&gt; and &lt;code&gt;LastName&lt;/code&gt;, as there is no constructor to initialize them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Person&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;FirstName&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;LastName&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To resolve these warnings, you can provide a default value or mark the property as optional:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Person&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;FirstName&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Default value ensures it's never null&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="n"&gt;LastName&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="c1"&gt;// Optional, can still be null&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The &lt;code&gt;required&lt;/code&gt; modifier
&lt;/h2&gt;

&lt;p&gt;Setting default values isn't always ideal. In C# 11, the &lt;code&gt;required&lt;/code&gt; modifier was introduced, which lets us say "you cannot create a new object of this class without at least setting these properties". It removes the need for a default value: if a &lt;code&gt;required&lt;/code&gt; property isn't set, the code won't compile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Person&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;required&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;FirstName&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;required&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;LastName&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// --- Example&lt;/span&gt;

&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;person&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;Person&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;FirstName&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"John"&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt; &lt;span class="c1"&gt;// Compilation error: 'LastName' is required&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This modifier ensures objects are properly initialized.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem with deserialization
&lt;/h2&gt;

&lt;p&gt;By default, System.Text.Json ignores nullability when deserializing objects. Consider the example below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Person&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;required&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;FirstName&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;required&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;LastName&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// ---&lt;/span&gt;

&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"{\"firstName\":\"John\", \"lastName\":null}"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;person&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;JsonSerializer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Deserialize&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Person&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;(&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;JsonSerializerOptions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Web&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;Console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;WriteLine&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;$"Person: &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;FirstName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;LastName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// Output: "Person: John "&lt;/span&gt;
&lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;LastName&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ToUpper&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="c1"&gt;// Throws NullReferenceException&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The deserializer creates a &lt;code&gt;Person&lt;/code&gt; object with &lt;code&gt;LastName&lt;/code&gt; set to &lt;code&gt;null&lt;/code&gt;, even though this would not be possible with direct initialization. This behavior undermines the protections provided by nullable reference types and &lt;code&gt;required&lt;/code&gt; modifiers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Respecting Nullable Annotations in .NET 9
&lt;/h2&gt;

&lt;p&gt;.NET 9 addresses this issue with the &lt;code&gt;RespectNullableAnnotationsDefault&lt;/code&gt; feature. This opt-in feature enforces nullability rules during deserialization, ensuring invalid objects aren’t created. Since it introduces breaking changes, it’s disabled by default but recommended for new projects.&lt;/p&gt;

&lt;p&gt;Here's the updated behavior:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Person&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;required&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;FirstName&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;required&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;LastName&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// ---&lt;/span&gt;

&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"{\"firstName\":\"John\", \"lastName\":null}"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;person&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;JsonSerializer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Deserialize&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Person&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;(&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;JsonSerializerOptions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Web&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// Throws JsonException: "The property or field 'lastName' on type 'Person' doesn't allow setting null values"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Enabling &lt;code&gt;RespectNullableAnnotationsDefault&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;To enable this feature globally, add the following to your project file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;ItemGroup&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;RuntimeHostConfigurationOption&lt;/span&gt; &lt;span class="n"&gt;Include&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"System.Text.Json.Serialization.RespectNullableAnnotationsDefault"&lt;/span&gt; &lt;span class="n"&gt;Value&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"true"&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;ItemGroup&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, enable it for specific calls:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="n"&gt;JsonSerializerOptions&lt;/span&gt; &lt;span class="n"&gt;options&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;RespectNullableAnnotations&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="n"&gt;JsonSerializer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Deserialize&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Person&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;(&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With .NET 9's &lt;code&gt;RespectNullableAnnotationsDefault&lt;/code&gt; option, developers can better enforce property validity during deserialization. These improvements complement existing nullability tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;If you want to read more about the changes to System.Text.Json in .NET 9 or nullable references in general, please check out the following links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://devblogs.microsoft.com/dotnet/system-text-json-in-dotnet-9/" rel="noopener noreferrer"&gt;https://devblogs.microsoft.com/dotnet/system-text-json-in-dotnet-9/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/dotnet/csharp/nullable-references" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/dotnet/csharp/nullable-references&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>dotnet</category>
      <category>csharp</category>
      <category>json</category>
    </item>
    <item>
      <title>Using versioning with Bicep Registry</title>
      <dc:creator>evdbogaard</dc:creator>
      <pubDate>Wed, 15 Feb 2023 07:44:27 +0000</pubDate>
      <link>https://dev.to/evdbogaard/using-versioning-with-bicep-registry-168d</link>
      <guid>https://dev.to/evdbogaard/using-versioning-with-bicep-registry-168d</guid>
      <description>&lt;p&gt;One of the great things about Bicep is that it allows you to split it up in smaller modules that can be easily referenced from another Bicep file. This increases readability of your files and also allows for easier reuse of these modules. When you want to reference the same module in different repositories there are a couple of ways to do this. One of them is by using a Bicep Registry. For this you can use &lt;a href="https://azure.microsoft.com/en-us/products/container-registry" rel="noopener noreferrer"&gt;Azure Container Registry&lt;/a&gt; which next to container images also accepts Bicep files. To later on reference them in a Bicep file you can use the following url &lt;code&gt;br:evdbregistry.azurecr.io/bicep/modules/mymodule:v1&lt;/code&gt;.&lt;br&gt;
The first part is url of the specific registry and the module path. You end with a tag which points to a specific version of that module inside the registry.&lt;/p&gt;

&lt;p&gt;At the company I work for, we also started using a Bicep Registry and wanted versioning for all files that are put into it. To handle this, we set up a new repository and pipeline. However, there were two main problems we needed to tackle.&lt;/p&gt;

&lt;p&gt;These issues were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Which tags to use when pushing to the registry?&lt;/li&gt;
&lt;li&gt;How to push only the files that changed into the registry, instead of everything all the time?&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Which tag to pick?
&lt;/h2&gt;

&lt;p&gt;When you push a Bicep module to a registry you need to supply a tag. An easy tagging scheme is to track the version of each module (e.g. v1, v2). The version of a specific file can be stored inside the Bicep module itself with the help of the &lt;code&gt;metadata&lt;/code&gt; keyword.&lt;/p&gt;

&lt;p&gt;When a Bicep file is built into an ARM JSON file, some extra data is added in the metadata section. This contains the version of Bicep used and a template hash. You can add to this section using the &lt;code&gt;metadata&lt;/code&gt; keyword, for example: &lt;code&gt;metadata version = 'v1'&lt;/code&gt;.&lt;br&gt;
When you build your Bicep file, you can see the metadata added in the JSON file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "metadata": {
    "_generator": {
      "name": "bicep",
      "version": "0.14.6.61914",
      "templateHash": "7937327168356229929"
    },
    "version": "v1"
  },
  "resources": []
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
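
&lt;p&gt;For reference, a minimal module carrying this version metadata could look as follows (a sketch; the storage account resource and parameter are made up for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// mymodule.bicep
metadata version = 'v1'

param storageAccountName string

resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: storageAccountName
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;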



&lt;h2&gt;
  
  
  Finding changed files
&lt;/h2&gt;

&lt;p&gt;With the version info we could in theory write every file to the container registry on every run, but doing so for files that didn't even change is a big waste.&lt;br&gt;
To determine which files have changed, I use the following git command: &lt;code&gt;git diff --name-only --diff-filter=d HEAD^ HEAD&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This lists the differences between the current commit (HEAD) and its parent (HEAD^). Because of &lt;code&gt;--name-only&lt;/code&gt;, the output is stripped down to only the names of the files that changed. The &lt;code&gt;--diff-filter=d&lt;/code&gt; option excludes deleted files, since we don't want to push those to the registry: they no longer exist.&lt;/p&gt;
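
&lt;p&gt;As an illustration, running the command after a commit that modified two modules and deleted another file could print something like this (file names are made up; note the deleted file is not listed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git diff --name-only --diff-filter=d HEAD^ HEAD
modules/storageAccount.bicep
modules/keyVault.bicep
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;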

&lt;p&gt;&lt;strong&gt;note:&lt;/strong&gt; Azure DevOps pipelines use a shallow fetch by default as an optimization. For this command to work you need a fetch depth of at least 2. To change this, go to your pipeline -&amp;gt; Edit -&amp;gt; Triggers -&amp;gt; YAML -&amp;gt; Get sources -&amp;gt; Shallow fetch&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9kifqa8mailuc5ao0u1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9kifqa8mailuc5ao0u1.png" alt="Shallow fetch" width="800" height="689"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Putting everything in YAML
&lt;/h2&gt;

&lt;p&gt;The pipeline consists out of two stages. The first stage will find all changed files, build them, and put them into an artifact. The second stage will get the artifact and push the files one by one to the registry.&lt;br&gt;
Each stage has its own PowerShell script to do this.&lt;/p&gt;
&lt;h3&gt;
  
  
  Build stage
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$changedFiles = git diff --name-only --diff-filter=d HEAD^ HEAD
$artifactsDirectory = "$(Build.ArtifactStagingDirectory)"
foreach($fileName in $changedFiles)
{
  Write-Host "Found $fileName"
  if (!$fileName.EndsWith(".bicep"))
  {
    continue
  }

  $file = Get-ChildItem $fileName
  az bicep build -f $fileName --outfile "${artifactsDirectory}/$($file.BaseName).json"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;First we get all files that changed since the last commit. From this list we filter out anything that isn't a Bicep file; there are other files in the repository (like the pipeline YAML itself) that don't need to be pushed to the registry. Finally, we use &lt;code&gt;az bicep build&lt;/code&gt; to generate a JSON file. This is a good check that the Bicep file itself is valid, and in the next stage we need to read the JSON file to get the version information out of it.&lt;br&gt;
It doesn't matter that we already converted the Bicep files to JSON in this stage: &lt;code&gt;az bicep publish&lt;/code&gt; accepts both JSON and Bicep files.&lt;br&gt;
All built files are placed together in the artifact staging directory and published as an artifact in the final step of this stage.&lt;/p&gt;
&lt;h3&gt;
  
  
  Publish stage
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$files = Get-ChildItem $(Pipeline.Workspace) -Filter "*.json" -R

foreach ($file in $files)
  {
    $fileBaseName = $file.BaseName
    $jsonFile = Get-Content $file.FullName | Out-String | ConvertFrom-Json
    $version = $jsonFile.metadata.version

    Write-Host "Pushing to registry ${fileBaseName}:${version}"
    az bicep publish --file $file.FullName --target "br:evdbregistry.azurecr.io/bicep/modules/${fileBaseName}:${version}"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In this stage the artifact is downloaded and we load the files one by one. As they are JSON files, we can easily parse them and pick the version information from the metadata block. Finally, we call &lt;code&gt;az bicep publish&lt;/code&gt; with the necessary parameters to push the module into the registry.&lt;/p&gt;

&lt;p&gt;Full yaml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

stages:
  - stage: build
    jobs:
      - job: generate_artifact
        displayName: "Generate artifacts"
        steps:
        - task: PowerShell@2
          displayName: Copy changed files
          inputs:
            targetType: 'inline'
            script: |
              $changedFiles = git diff --name-only --diff-filter=d HEAD^ HEAD
              $artifactsDirectory = "$(Build.ArtifactStagingDirectory)"
              foreach($fileName in $changedFiles)
              {
                Write-Host "Found $fileName"
                if (!$fileName.EndsWith(".bicep"))
                {
                  continue
                }

                $file = Get-ChildItem $fileName
                az bicep build -f $fileName --outfile "${artifactsDirectory}/$($file.BaseName).json"
              }

        - publish: $(Build.ArtifactStagingDirectory)
          artifact: $(Build.BuildNumber)

  - stage: publish
    displayName: "Publish to container registry"
    dependsOn: build
    condition: succeeded()
    jobs:
      - deployment: publish_to_registry
        displayName: Publish to registry
        environment: 'BicepEnv'
        strategy:
          runOnce:
            deploy:
              steps:
              - task: AzureCLI@2
                displayName: 'Publish to registry'
                inputs:
                  azureSubscription: 'Bicep ARM'
                  scriptType: 'pscore'
                  scriptLocation: 'inlineScript'
                  inlineScript: |
                    $files = Get-ChildItem $(Pipeline.Workspace) -Filter "*.json" -R

                    foreach ($file in $files)
                    {
                      $fileBaseName = $file.BaseName
                      $jsonFile = Get-Content $file.FullName | Out-String | ConvertFrom-Json
                      $version = $jsonFile.metadata.version

                      Write-Host "Pushing to registry ${fileBaseName}:${version}"
                      az bicep publish --file $file.FullName --target "br:evdbregistry.azurecr.io/bicep/modules/${fileBaseName}:${version}"
                    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Automatic version number
&lt;/h2&gt;

&lt;p&gt;What I don't like about the solution above is that it still requires manually updating version numbers. If you're not careful and update a file without bumping the version number, the pipeline will simply overwrite the version stored inside the registry.&lt;/p&gt;

&lt;p&gt;If you want an automatic version number you can use the pipeline variable &lt;code&gt;$(Build.BuildNumber)&lt;/code&gt;. This will generate a string that looks like &lt;code&gt;20230101.1&lt;/code&gt;. This guarantees that the tag is always unique as each build has a new number.&lt;/p&gt;
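
&lt;p&gt;With that approach the publish script no longer needs to read the metadata; it can tag with the build number directly (a sketch based on the earlier publish script):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$files = Get-ChildItem $(Pipeline.Workspace) -Filter "*.json" -R
$version = "$(Build.BuildNumber)"

foreach ($file in $files)
{
    $fileBaseName = $file.BaseName

    Write-Host "Pushing to registry ${fileBaseName}:${version}"
    az bicep publish --file $file.FullName --target "br:evdbregistry.azurecr.io/bicep/modules/${fileBaseName}:${version}"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;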

&lt;p&gt;The downside of this approach is that there is no clear way to know which tags exist without looking into the registry itself. So it's up to you to decide what you like best: fully automatic tags, or more predictable version numbers that need to be updated manually.&lt;/p&gt;

</description>
      <category>web3</category>
      <category>blockchain</category>
    </item>
    <item>
      <title>.NET 7 Rate Limiting</title>
      <dc:creator>evdbogaard</dc:creator>
      <pubDate>Tue, 01 Nov 2022 08:03:53 +0000</pubDate>
      <link>https://dev.to/evdbogaard/net-7-rate-limiting-4k97</link>
      <guid>https://dev.to/evdbogaard/net-7-rate-limiting-4k97</guid>
      <description>&lt;h2&gt;
  
  
  What is rate limiting?
&lt;/h2&gt;

&lt;p&gt;With rate limiting we control how often a specific resource can be accessed. If you know, for example, that a resource can handle x requests per minute, you can block any request that goes over that limit, so the resource isn't overwhelmed and doesn't start throwing errors.&lt;/p&gt;

&lt;h2&gt;
  
  
  Rate limiting algorithms
&lt;/h2&gt;

&lt;p&gt;.NET 7 includes a few different algorithms for us to use. If these don't do what you want, it's always possible to create your own: the &lt;code&gt;System.Threading.RateLimiting&lt;/code&gt; namespace contains the abstract classes &lt;code&gt;RateLimiter&lt;/code&gt; and &lt;code&gt;ReplenishingRateLimiter&lt;/code&gt;, which can be used as the base for a custom implementation.&lt;/p&gt;

&lt;p&gt;At their core, all rate limiters have a &lt;code&gt;PermitLimit&lt;/code&gt;, &lt;code&gt;QueueLimit&lt;/code&gt;, and &lt;code&gt;QueueProcessingOrder&lt;/code&gt;. Depending on the specific rate limiter, there are additional options you can set.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;PermitLimit&lt;/code&gt; holds the number of permits a given rate limiter can hand out. When there are no permits left, new requests are either put in a queue or rejected.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;QueueLimit&lt;/code&gt; holds the number of requests that can be queued when there are no permits left. When a new permit becomes available, it is first given to a request already waiting in the queue; based on the &lt;code&gt;QueueProcessingOrder&lt;/code&gt;, it goes to either the oldest or the newest request in the queue. If there are no permits and the queue is already full, a request is rejected immediately.&lt;/p&gt;
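
&lt;p&gt;For example, a limiter configured with these shared options could hand out at most 10 permits at once and queue up to 5 waiting requests, oldest first (a sketch; the numbers are made up):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var limiter = new ConcurrencyLimiter(new ConcurrencyLimiterOptions()
    {
        PermitLimit = 10,   // at most 10 permits handed out at the same time
        QueueLimit = 5,     // up to 5 requests can wait for a permit
        QueueProcessingOrder = QueueProcessingOrder.OldestFirst // queued requests are served oldest first
    });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;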

&lt;h3&gt;
  
  
  ConcurrencyLimiter
&lt;/h3&gt;

&lt;p&gt;The number of permits given to this limiter tells it how many requests it can handle at the same time. When a request comes in, a permit is handed out, and when the request is done the permit is returned to the limiter.&lt;/p&gt;

&lt;h3&gt;
  
  
  FixedWindowRateLimiter
&lt;/h3&gt;

&lt;p&gt;For this limiter a time window needs to be specified. Within that window it can handle the specified number of requests. Unlike the &lt;code&gt;ConcurrencyLimiter&lt;/code&gt;, permits are not given back when a request completes. Only after the time window expires is the full set of permits reset, allowing new requests to be handled.&lt;/p&gt;

&lt;h3&gt;
  
  
  SlidingWindowRateLimiter
&lt;/h3&gt;

&lt;p&gt;Similar to the &lt;code&gt;FixedWindowRateLimiter&lt;/code&gt;, but the time window is divided into segments. Each segment keeps track of how many permits were handed out in that segment, and they replenish separately instead of only after the entire time window has passed.&lt;br&gt;
For example, take a time window of 1 hour split into 3 segments of 20 minutes. In the first segment 10 permits are used, in the second segment 5, and in the third segment 15. In total 30 permits are used. After 20 minutes the first segment is dropped and those 10 permits become available again.&lt;/p&gt;
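
&lt;p&gt;The scenario above could be configured like this (a sketch; the segment count is set via &lt;code&gt;SegmentsPerWindow&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var limiter = new SlidingWindowRateLimiter(new SlidingWindowRateLimiterOptions()
    {
        PermitLimit = 30,               // 30 permits over the full window
        Window = TimeSpan.FromHours(1), // total window of 1 hour
        SegmentsPerWindow = 3,          // 3 segments of 20 minutes each
        AutoReplenishment = true,
        QueueProcessingOrder = QueueProcessingOrder.OldestFirst,
        QueueLimit = 0
    });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;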
&lt;h3&gt;
  
  
  TokenBucketRateLimiter
&lt;/h3&gt;

&lt;p&gt;Unlike the other limiters, this one uses tokens instead of permits. As with the &lt;code&gt;FixedWindowRateLimiter&lt;/code&gt;, a token is not given back once it is used. Instead, tokens are replenished on a schedule: every time the replenishment period expires, a predetermined number of tokens is added back, never exceeding the specified maximum.&lt;/p&gt;
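
&lt;p&gt;A bucket holding at most 100 tokens that gets 10 tokens back every 30 seconds could be sketched as (the numbers are made up):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var limiter = new TokenBucketRateLimiter(new TokenBucketRateLimiterOptions()
    {
        TokenLimit = 100,                               // maximum number of tokens in the bucket
        TokensPerPeriod = 10,                           // tokens added back each period
        ReplenishmentPeriod = TimeSpan.FromSeconds(30), // how often tokens are replenished
        AutoReplenishment = true,
        QueueProcessingOrder = QueueProcessingOrder.OldestFirst,
        QueueLimit = 0
    });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;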
&lt;h2&gt;
  
  
  How to use
&lt;/h2&gt;

&lt;p&gt;There are two packages we need in order to use rate limiting. The first is &lt;code&gt;System.Threading.RateLimiting&lt;/code&gt;. This package holds most of the rate limiting logic and the four algorithms described above.&lt;br&gt;
The second package is &lt;code&gt;Microsoft.AspNetCore.RateLimiting&lt;/code&gt;, which has everything we need to add rate limiting to the middleware.&lt;/p&gt;

&lt;p&gt;Each rate limiter has an &lt;code&gt;AcquireAsync&lt;/code&gt; and an &lt;code&gt;AttemptAcquire&lt;/code&gt; method for asynchronous or synchronous acquisition of a permit. When no permits are available but there is still room in the queue, &lt;code&gt;AcquireAsync&lt;/code&gt; waits until a permit becomes available.&lt;br&gt;
You get a &lt;code&gt;RateLimitLease&lt;/code&gt; back which tells you whether a permit was actually acquired. It is your own responsibility to check this and take action if no permit was available. The lease implements &lt;code&gt;IDisposable&lt;/code&gt;, so it's important to call &lt;code&gt;Dispose()&lt;/code&gt; when you are done (or use a &lt;code&gt;using&lt;/code&gt; statement to take care of it for you), as some rate limiters need it to know a permit has been given back and can be reused.&lt;/p&gt;

&lt;p&gt;Below you can find some sample code of how this would look:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private static readonly ConcurrencyLimiter _limiter = new(new ConcurrencyLimiterOptions()
    {
        PermitLimit = 2,
        QueueProcessingOrder = QueueProcessingOrder.OldestFirst,
        QueueLimit = 0
    });

public async Task Get()
    {
        using var lease = await _limiter.AcquireAsync();

        if (!lease.IsAcquired)
            throw new Exception("No permits available");

        // Continue with your code
        ...
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Middleware
&lt;/h3&gt;

&lt;p&gt;Having to call &lt;code&gt;AcquireAsync&lt;/code&gt; and check the response in every method is a bit cumbersome, so luckily for us it's possible to add rate limiters through middleware.&lt;/p&gt;

&lt;p&gt;In Program.cs you can call &lt;code&gt;app.UseRateLimiter()&lt;/code&gt;, which takes a &lt;code&gt;RateLimiterOptions&lt;/code&gt; as parameter. There are three things you can set here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;RejectionStatusCode&lt;/code&gt;: The HTTP status code that is used when a request is rejected by the rate limiter. By default this is 503.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;OnRejected&lt;/code&gt;: A &lt;code&gt;Func&lt;/code&gt; delegate that is called when a request is rejected. It receives both the &lt;code&gt;HttpContext&lt;/code&gt; and the &lt;code&gt;RateLimitLease&lt;/code&gt; of the current request.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GlobalLimiter&lt;/code&gt;: A limiter that is executed for every request that passes through the middleware. If multiple limiters are set up for a request, the &lt;code&gt;GlobalLimiter&lt;/code&gt; is always called first.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Besides these options there are also some helper methods to easily create policies. For each rate limiting algorithm described above there is a specific Add helper method, like &lt;code&gt;AddConcurrencyLimiter&lt;/code&gt;. Each helper method takes a string as its first argument, which is the name of the policy, and a second argument with the options for that specific rate limiter.&lt;/p&gt;

&lt;p&gt;There is also an &lt;code&gt;AddPolicy&lt;/code&gt; method which can be used for custom created policies.&lt;/p&gt;
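
&lt;p&gt;Putting this together in Program.cs could look like the following sketch. The policy name matches the attribute example below, but the limits are made up, and the exact registration API changed between .NET 7 previews, so check the docs for your version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.UseRateLimiter(new RateLimiterOptions()
    {
        RejectionStatusCode = 429 // use 429 Too Many Requests instead of the default 503
    }
    .AddConcurrencyLimiter("policyName", options =&amp;gt;
    {
        options.PermitLimit = 10;
        options.QueueLimit = 5;
        options.QueueProcessingOrder = QueueProcessingOrder.OldestFirst;
    }));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;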

&lt;p&gt;To use a policy you can apply the &lt;code&gt;EnableRateLimitingAttribute&lt;/code&gt;. It behaves similarly to, for example, role-based authorization: you can put the attribute on a controller or on an individual method. The &lt;code&gt;DisableRateLimiting&lt;/code&gt; attribute can be used to exempt a method or controller from rate limiting.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ApiController]
[Route("[controller]")]
[EnableRateLimiting("policyName")]
public class WeatherForecastController : ControllerBase
{ ... }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  PartitionedRateLimiter
&lt;/h3&gt;

&lt;p&gt;The rate limiters we've seen so far cannot tell requests apart and handle everything the same. In practice, however, we probably want different behavior depending on whether someone is logged in, or we want to limit requests per IP address. For this the &lt;code&gt;PartitionedRateLimiter&lt;/code&gt; exists. It essentially creates a rate limiter per partition. You can partition on anything you want, but if you look at the &lt;code&gt;GlobalLimiter&lt;/code&gt; we saw earlier, you'll see it uses &lt;code&gt;HttpContext&lt;/code&gt;. From that context we can determine which value to use for the partition. To create one we can use a static factory method; an example can be seen below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PartitionedRateLimiter.Create&amp;lt;HttpContext, string&amp;gt;(httpContext =&amp;gt; RateLimitPartition.GetFixedWindowLimiter(httpContext.User.Identity.Name, options =&amp;gt; new()
    {
        AutoReplenishment = true,
        PermitLimit = 100,
        QueueLimit = 0,
        QueueProcessingOrder = QueueProcessingOrder.OldestFirst,
        Window = TimeSpan.FromHours(1)
    }))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example we create a rate limiter that allows 100 requests per hour. It is keyed on &lt;code&gt;User.Identity.Name&lt;/code&gt;, so two different users can each make 100 requests before their requests get rejected.&lt;br&gt;
Now let's say that besides the 100 requests per hour we want to limit the total to 1000 requests per day. This can be done with the &lt;code&gt;CreateChained&lt;/code&gt; helper method. You can chain multiple partitioned rate limiters together like this, and they don't even need to use the same partition key. For example, you can rate limit on a user id and an IP address at the same time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PartitionedRateLimiter.CreateChained(
    PartitionedRateLimiter.Create&amp;lt;HttpContext, string&amp;gt;(httpContext =&amp;gt;
 RateLimitPartition.GetFixedWindowLimiter(httpContext.User.Identity.Name!, options =&amp;gt; new()
    {
        AutoReplenishment = true,
        PermitLimit = 100,
        Window = TimeSpan.FromHours(1)
    })),
    PartitionedRateLimiter.Create&amp;lt;HttpContext, string&amp;gt;(httpContext =&amp;gt;
                RateLimitPartition.GetFixedWindowLimiter(httpContext.User.Identity?.Name!, options =&amp;gt; new()
    {
        AutoReplenishment = true,
        PermitLimit = 1000,
        Window = TimeSpan.FromDays(1)
    })));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Custom policies
&lt;/h2&gt;

&lt;p&gt;In case you want a more advanced policy you can implement a custom one using the &lt;code&gt;IRateLimiterPolicy&amp;lt;T&amp;gt;&lt;/code&gt; interface. There are two members you need to implement: &lt;code&gt;OnRejected&lt;/code&gt;, which behaves like the one used inside &lt;code&gt;UseRateLimiter&lt;/code&gt;, and &lt;code&gt;GetPartition&lt;/code&gt;, which returns a specific &lt;code&gt;RateLimitPartition&lt;/code&gt;. Below is a small example where an authenticated user is not limited, but anonymous requests are limited per IP address.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class MyRateLimiterPolicy : IRateLimiterPolicy&amp;lt;string&amp;gt;
{
    public Func&amp;lt;OnRejectedContext, CancellationToken, ValueTask&amp;gt;? OnRejected { get; } = (context, cancellationToken) =&amp;gt;
    {
        context.HttpContext.Response.StatusCode = 429;
        return new();
    };

    public RateLimitPartition&amp;lt;string&amp;gt; GetPartition(HttpContext httpContext)
    {
        if (httpContext.User.Identity!.IsAuthenticated)
            return RateLimitPartition.GetNoLimiter(httpContext.User.Identity.Name!);

        return RateLimitPartition.GetFixedWindowLimiter(httpContext.Connection.RemoteIpAddress!.ToString(), options =&amp;gt; new()
        {
            PermitLimit = 100,
            Window = TimeSpan.FromHours(1)
        });
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
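&lt;p&gt;To use a custom policy it still needs to be registered and attached to an endpoint. As a small sketch, assuming the minimal hosting model (the policy name &lt;code&gt;my-policy&lt;/code&gt; and the endpoint are just examples):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;builder.Services.AddRateLimiter(options =&amp;gt;
{
    // Register the custom policy under a name of our own choosing
    options.AddPolicy&amp;lt;string, MyRateLimiterPolicy&amp;gt;("my-policy");
});

var app = builder.Build();
app.UseRateLimiter();

// Attach the policy to a specific endpoint
app.MapGet("/limited", () =&amp;gt; "Hello").RequireRateLimiting("my-policy");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;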



&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://devblogs.microsoft.com/dotnet/announcing-rate-limiting-for-dotnet/"&gt;https://devblogs.microsoft.com/dotnet/announcing-rate-limiting-for-dotnet/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>ratelimit</category>
      <category>csharp</category>
    </item>
    <item>
      <title>Health checks in ASP.Net Core web API</title>
      <dc:creator>evdbogaard</dc:creator>
      <pubDate>Tue, 14 Sep 2021 06:00:08 +0000</pubDate>
      <link>https://dev.to/evdbogaard/health-checks-in-asp-net-core-web-api-1n44</link>
      <guid>https://dev.to/evdbogaard/health-checks-in-asp-net-core-web-api-1n44</guid>
      <description>&lt;p&gt;When you have an API running in the cloud it is important to know how healthy it is and if it might experience issues by itself or other services it relies on.&lt;br&gt;
To help you with that health checks were added tot ASP.Net Core to allow near-real-time monitoring of information about the state of your system. With only a few lines of code you can enable it in your own API.&lt;br&gt;
In your &lt;code&gt;ConfigureServices&lt;/code&gt; method add the following line &lt;code&gt;services.AddHealthChecks&lt;/code&gt; and in the &lt;code&gt;Configure&lt;/code&gt; method under the &lt;code&gt;endpoints.MapControllers();&lt;/code&gt; line add &lt;code&gt;endpoints.MapHealthChecks("/hc");&lt;/code&gt;. Congratulations you now have created and endpoint that shows the status of your API. If you run your code and go to that url you can see it reports &lt;code&gt;Healthy&lt;/code&gt;. As you haven't added any checks it doesn't really have any value yet, but the basis is set. Now let's look on how to add some checks to it.&lt;/p&gt;
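&lt;p&gt;Put together, the relevant parts of &lt;code&gt;Startup.cs&lt;/code&gt; look roughly like this (a sketch; the template contains more middleware than shown here):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    services.AddHealthChecks();
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseRouting();
    app.UseEndpoints(endpoints =&amp;gt;
    {
        endpoints.MapControllers();
        endpoints.MapHealthChecks("/hc"); // the health endpoint
    });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;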
&lt;h2&gt;
  
  
  Custom health checks
&lt;/h2&gt;

&lt;p&gt;You can add multiple health checks that need to be verified. Fortunately for us there is already a nice GitHub repository available with all kinds of health checks in it. You can check it out &lt;a href="https://github.com/Xabaril/AspNetCore.Diagnostics.HealthChecks" rel="noopener noreferrer"&gt;here&lt;/a&gt;. There are nuget packages for different kinds of services like databases, service bus, storage, keyvault, etc.&lt;br&gt;
Although it's nice these packages are already available, there will be cases where you need to write your own custom health check. For this the &lt;code&gt;IHealthCheck&lt;/code&gt; interface was created.&lt;br&gt;
Create a new class that implements that interface and you have your own custom health check. Then add &lt;code&gt;AddCheck&amp;lt;MyCustomHealthCheck&amp;gt;()&lt;/code&gt; after the &lt;code&gt;AddHealthChecks()&lt;/code&gt; call in your &lt;code&gt;ConfigureServices&lt;/code&gt; method (the API is fluent), and the health check is automatically executed when the endpoint is called.&lt;/p&gt;
&lt;h2&gt;
  
  
  Demo project
&lt;/h2&gt;

&lt;p&gt;To make the above text a little more visual, let's show it all through a small demo project. We're going to create a new dotnet web api project, add a check for CosmosDb, and also create a custom check through which we can fake an unhealthy app.&lt;br&gt;
So first let's create a new project. I prefer the command line for this, but there are of course other options.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the command line type: &lt;code&gt;dotnet new webapi -o Demo.HealthCheck.Api&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Open the newly created project in your favorite editor&lt;/li&gt;
&lt;li&gt;In &lt;code&gt;Startup.cs&lt;/code&gt; find &lt;code&gt;ConfigureServices&lt;/code&gt; and at the end of the method add &lt;code&gt;services.AddHealthChecks();&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;In the same file find the line &lt;code&gt;endpoints.MapControllers();&lt;/code&gt; and under it add &lt;code&gt;endpoints.MapHealthChecks("/hc");&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Run the code and verify that the endpoint &lt;code&gt;/hc&lt;/code&gt; exists and returns &lt;code&gt;Healthy&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this we have the basic setup working. It's now time to add some actual checks. For demo purposes I created a CosmosDb account in the Azure portal. As this is not a blog post on CosmosDb I assume you create one yourself or simply omit this check from your own project. The idea behind it is the same for other services like Sql Server, MySql, Postgres, etc. You supply a connection string and the check tries to connect. If it succeeds everything is good, else &lt;code&gt;Unhealthy&lt;/code&gt; is returned.&lt;/p&gt;

&lt;p&gt;There are nuget packages available for different services. For CosmosDb we need to type the following in the command line: &lt;code&gt;dotnet add package AspNetCore.HealthChecks.CosmosDb&lt;/code&gt;.&lt;br&gt;
Now that we have the package installed we can alter our &lt;code&gt;AddHealthChecks()&lt;/code&gt; line to include a check for CosmosDb by adding &lt;code&gt;AddCosmosDb()&lt;/code&gt;. It asks for a connection string and you can optionally also pass in a database name. The health check tries to connect to the database and reports whether it experiences issues. Your code should look similar to the example below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services.AddHealthChecks()
    .AddCosmosDb(cosmosDbConnString, "master");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can try it out and pass in a non-existing database name or a wrong connection string and go to the &lt;code&gt;/hc&lt;/code&gt; endpoint. It shows &lt;code&gt;Unhealthy&lt;/code&gt; in those cases.&lt;/p&gt;

&lt;p&gt;Now let's create a custom check to implement ourselves. This can be anything you want. For demo purposes we're going to create a singleton class that holds a boolean called &lt;code&gt;Healthy&lt;/code&gt;. If that boolean is true the check responds with healthy, else it returns unhealthy.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new class called &lt;code&gt;HealthService.cs&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Inside that class add a &lt;code&gt;bool&lt;/code&gt; property called &lt;code&gt;Healthy&lt;/code&gt;, make it &lt;code&gt;true&lt;/code&gt; by default.&lt;/li&gt;
&lt;li&gt;In &lt;code&gt;Startup.cs&lt;/code&gt; register the class as a singleton: &lt;code&gt;services.AddSingleton&amp;lt;HealthService&amp;gt;();&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Create a new class called &lt;code&gt;CustomCheck.cs&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Have the class implement the &lt;code&gt;IHealthCheck&lt;/code&gt; interface
&lt;/li&gt;
&lt;li&gt;Inject the &lt;code&gt;HealthService&lt;/code&gt; we just created and have the check return &lt;code&gt;Healthy&lt;/code&gt; when the boolean is true and &lt;code&gt;Unhealthy&lt;/code&gt; otherwise.&lt;/li&gt;
&lt;li&gt;In &lt;code&gt;Startup.cs&lt;/code&gt; append &lt;code&gt;AddCheck&amp;lt;CustomCheck&amp;gt;(nameof(CustomCheck));&lt;/code&gt; to the other health checks
&lt;/li&gt;
&lt;/ul&gt;
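&lt;p&gt;The &lt;code&gt;HealthService&lt;/code&gt; itself can stay as small as this sketch; it holds nothing but the flag:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class HealthService
{
    // Toggle this flag to make the custom check report Unhealthy
    public bool Healthy { get; set; } = true;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;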

&lt;p&gt;The resulting code looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class CustomCheck : IHealthCheck
    {
        readonly HealthService _healthService;

        public CustomCheck(HealthService healthService) =&amp;gt; _healthService = healthService;

        public Task&amp;lt;HealthCheckResult&amp;gt; CheckHealthAsync(HealthCheckContext context, CancellationToken cancellationToken = default)
            =&amp;gt; _healthService.Healthy
                ? Task.FromResult(HealthCheckResult.Healthy())
                : Task.FromResult(HealthCheckResult.Unhealthy());
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Startup.cs
services
    .AddSingleton&amp;lt;HealthService&amp;gt;()
    .AddHealthChecks()
    .AddCosmosDb(cosmosDbConnString, "master")
    .AddCheck&amp;lt;CustomCheck&amp;gt;(nameof(CustomCheck));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Some things of interest. The &lt;code&gt;HealthCheckResult&lt;/code&gt;s we return in our custom check come from static factory methods. These can be given a description as parameter to make it clearer why a specific check failed. Secondly, when adding our custom check we have to supply a name. This name is used internally to identify the check. We didn't supply one with the CosmosDb check as the nuget package implementation already sets a default.&lt;/p&gt;

&lt;p&gt;Besides the name we can supply other parameters. We can set a timeout for how long a check may take, and add tags to better identify checks or even filter out checks with certain tags. &lt;/p&gt;
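&lt;p&gt;As a sketch of those extra parameters (the tag name &lt;code&gt;custom&lt;/code&gt; and the five second timeout are example values):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services.AddHealthChecks()
    .AddCheck&amp;lt;CustomCheck&amp;gt;(
        nameof(CustomCheck),
        failureStatus: HealthStatus.Unhealthy,
        tags: new[] { "custom" },
        timeout: TimeSpan.FromSeconds(5));

// Checks can be filtered by tag when mapping the endpoint
endpoints.MapHealthChecks("/hc", new HealthCheckOptions
{
    Predicate = check =&amp;gt; !check.Tags.Contains("custom")
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;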

&lt;p&gt;If we run the code now it still shows &lt;code&gt;Healthy&lt;/code&gt;, but when you toggle the bool to false it will show &lt;code&gt;Unhealthy&lt;/code&gt;. Right now the only way to do this is by changing the default and restarting the application, but if you add an endpoint that updates that value, changes can be seen on a refresh of the page.&lt;/p&gt;
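&lt;p&gt;Such an endpoint could look like the sketch below; the controller name and route are hypothetical:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ApiController]
[Route("health-toggle")]
public class HealthToggleController : ControllerBase
{
    readonly HealthService _healthService;

    public HealthToggleController(HealthService healthService) =&amp;gt; _healthService = healthService;

    // Flip the flag so the custom check changes its answer on the next /hc call
    [HttpPost]
    public IActionResult Toggle()
    {
        _healthService.Healthy = !_healthService.Healthy;
        return Ok(_healthService.Healthy);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;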

&lt;h2&gt;
  
  
  UI
&lt;/h2&gt;

&lt;p&gt;Besides a simple text answer there is also a lightweight graphical UI provided that makes things more visual. It comes with its own nuget package and, similar to the normal health checks, can be added with only a few lines of code.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the command line type &lt;code&gt;dotnet add package AspNetCore.HealthChecks.UI&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;In the command line type &lt;code&gt;dotnet add package AspNetCore.HealthChecks.UI.InMemory.Storage&lt;/code&gt; (There are different storage options for the UI like Sql, Postgres, etc. They all have their own nuget package. For the demo we use the simple InMemory storage)&lt;/li&gt;
&lt;li&gt;In the command line type &lt;code&gt;dotnet add package AspNetCore.HealthChecks.UI.Client&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Add the following line to &lt;code&gt;Startup.cs&lt;/code&gt;: &lt;code&gt;services.AddHealthChecksUI().AddInMemoryStorage();&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Update your &lt;code&gt;UseEndpoints()&lt;/code&gt; code in &lt;code&gt;Startup.cs&lt;/code&gt; to the code below:
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;endpoints.MapControllers();
endpoints.MapHealthChecks("/hc", new HealthCheckOptions
{
    ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
});
endpoints.MapHealthChecksUI(options =&amp;gt; options.UIPath = "/hc-ui");
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;ResponseWriter&lt;/code&gt; change is needed for the UI to understand how to display everything. Before, the health endpoint only returned a single word. Now it returns a json object which the UI reads and knows how to display. You can see this when you go to the health endpoint yourself.&lt;/p&gt;

&lt;p&gt;If you run the project and go to &lt;code&gt;/hc-ui&lt;/code&gt; you see an empty status screen. We set up the code for the UI, but forgot to tell it which checks to look at. Let's do that now. Update your &lt;code&gt;AddHealthChecksUI&lt;/code&gt; code to the following:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services
    .AddHealthChecksUI(options =&amp;gt;
    {
        options.AddHealthCheckEndpoint("Healthcheck API", "/hc");
    })
    .AddInMemoryStorage();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We supply it with the health endpoint of this API and give it a name as well. If we look at our UI again we see the following:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnqrp7j7s5x9oj7tsyfk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnqrp7j7s5x9oj7tsyfk.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Write some code to update the bool we used for the custom check and see how the UI is updated. Or rotate the key used for your CosmosDb connection and see that the checks update correctly. You can inspect the details of each check and see when it failed.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F217qdml7xdff8cie0c8h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F217qdml7xdff8cie0c8h.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;appsettings.json&lt;/code&gt; you can provide configuration for the UI to look at multiple APIs, so you can group the results of different APIs in a single place.&lt;/p&gt;
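&lt;p&gt;Such a configuration block could look like the sketch below; the name and url are examples:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"HealthChecksUI": {
  "HealthChecks": [
    {
      "Name": "Healthcheck API",
      "Uri": "https://localhost:5001/hc"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;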

&lt;h2&gt;
  
  
  To the cloud
&lt;/h2&gt;

&lt;p&gt;It is good to have checks and be able to output the data, but we don't want to manually check the dashboard every time to see if there is a problem.&lt;br&gt;
Luckily for us Azure provides some features that automatically check the health endpoint of a web app and can take an unhealthy instance out of the load balancer until it's healthy again, or even restart or replace it.&lt;/p&gt;

&lt;p&gt;In the Azure portal under App service we can set up a Health check.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihb6h63cbx5683ynlzxh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihb6h63cbx5683ynlzxh.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
There is only one option to set initially, which is &lt;code&gt;Enable&lt;/code&gt;. Once you select that option two extra options become available. The first one is the path to your health endpoint, which is &lt;code&gt;/hc&lt;/code&gt; in our case. The second is the time an app can be unhealthy before it is removed from the load balancer. Once you save your changes, a call to the health endpoint is made every minute.&lt;/p&gt;

&lt;p&gt;Azure only looks at the HTTP response the page gives. If the response is in the 2XX range the instance is considered healthy, else it is shown as degraded or unhealthy. It also doesn't follow redirects. If your webapp settings allow http, Azure performs the check over http. If the app is set to HTTPS only, Azure uses https to check the endpoint. In case your health checks return an HTTP 307 response, make sure you set your webapp to only use https or remove the redirect from your code (it's in there by default).&lt;/p&gt;
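&lt;p&gt;The redirect mentioned above comes from the &lt;code&gt;UseHttpsRedirection&lt;/code&gt; middleware the default webapi template adds in &lt;code&gt;Configure&lt;/code&gt;; commenting it out is enough to stop the HTTP 307 responses (a sketch of the relevant lines):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // app.UseHttpsRedirection(); // removed so the health check is reachable over http
    app.UseRouting();
    app.UseEndpoints(endpoints =&amp;gt;
    {
        endpoints.MapControllers();
        endpoints.MapHealthChecks("/hc");
    });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;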

&lt;p&gt;There are two important configuration values you can set for your webapp.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;WEBSITE_HEALTHCHECK_MAXPINGFAILURES&lt;/code&gt;: How many failed requests are needed before the instance is removed from the load balancer (this is also set by the slider in the portal)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT&lt;/code&gt;: The maximum percentage of instances that will be excluded from the load balancer at the same time. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When an instance is removed from the load balancer Azure keeps checking the health endpoint to see if it returns to being healthy. If this takes too long it will try restarting the underlying VM, and if the instance is unhealthy for over an hour it will be replaced. The instance still counts towards your total instances for scaling rules; it is only removed from the load balancer. When scaling happens Azure first checks the health endpoint to make sure the new instance is working correctly before adding it.&lt;/p&gt;

&lt;p&gt;Besides the automatic removal of an instance from the load balancer you can also setup alerting rules to get notified when a certain percentage of your instances is in an unhealthy state. &lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;This is just a small view of what can be done with health checks in ASP.NET Core. In case you are interested, feel free to check out the resources below, which go into the subject matter a bit deeper.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/monitor-app-health" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/monitor-app-health&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/aspnet/core/host-and-deploy/health-check" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/aspnet/core/host-and-deploy/health-check&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Xabaril/AspNetCore.Diagnostics.HealthChecks" rel="noopener noreferrer"&gt;https://github.com/Xabaril/AspNetCore.Diagnostics.HealthChecks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/app-service/monitor-instances-health-check" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/azure/app-service/monitor-instances-health-check&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The demo project used can be found &lt;a href="https://github.com/evdbogaard/demo-healthcheck-api" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aspnetcore</category>
      <category>healthchecks</category>
      <category>azure</category>
    </item>
    <item>
      <title>Bicep vs ARM templates</title>
      <dc:creator>evdbogaard</dc:creator>
      <pubDate>Sun, 30 May 2021 19:34:29 +0000</pubDate>
      <link>https://dev.to/evdbogaard/bicep-vs-arm-templates-bf9</link>
      <guid>https://dev.to/evdbogaard/bicep-vs-arm-templates-bf9</guid>
      <description>&lt;p&gt;For a while now Bicep v0.3 is out. Prior to this version it was already a great tool to use, but since v0.3 it is fully capable to do the same as an ARM template. So, what is it and why would you want to use it?&lt;/p&gt;

&lt;p&gt;Infrastructure as code solutions allow you to declare how your infrastructure should look in a file (e.g. json). When set up the way you want, it becomes easier to add new resources and make sure they have the same settings as others. Especially for resources you only add once a year it can be easy for a developer to forget the exact settings used the last time. Instead of looking them up they are already documented in the file, and simply adding a name to an array makes sure the resource gets created correctly. This also lowers the chance of human error, which is always nice.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is Bicep?
&lt;/h1&gt;

&lt;p&gt;Bicep is a Domain Specific Language (DSL) that allows declarative deployment of Azure resources. Everything you can do with an ARM template you can also do with Bicep and as soon as a new resource is added to Azure it is immediately supported by Bicep.&lt;/p&gt;

&lt;h1&gt;
  
  
  Azure Resource Manager (ARM) templates
&lt;/h1&gt;

&lt;p&gt;As Bicep is closely linked to ARM templates you can't really talk about one without mentioning the other. They both allow you to do exactly the same thing, but when you start working with ARM templates you quickly notice some drawbacks. Let's start by comparing some ARM and Bicep templates with each other.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "subscriptionId": {
            "type": "string",
            "defaultValue": "[subscription().id]"
        },
        "name": {
            "type": "string",
            "defaultValue": "demo-bicep-webapp"
        },
        "location": {
            "type": "string",
            "defaultValue": "[resourceGroup().location]"
        },
        "serverFarmResourceGroup": {
            "type": "string",
            "defaultValue": "[resourceGroup().name]"
        },
        "sku": {
            "type": "string",
            "defaultValue": "Free"
        },
        "skuCode": {
            "type": "string",
            "defaultValue": "F1"
        }
    },
    "resources": [
        {
            "apiVersion": "2018-11-01",
            "name": "[parameters('name')]",
            "type": "Microsoft.Web/sites",
            "location": "[parameters('location')]",
            "dependsOn": [
                "[concat('Microsoft.Web/serverfarms/', parameters('name'))]"
            ],
            "properties": {
                "name": "[parameters('name')]",
                "siteConfig": {
                    "metadata": [
                        {
                            "name": "CURRENT_STACK",
                            "value": "dotnetcore"
                        }
                    ]
                },
                "serverFarmId": "[concat('/subscriptions/', parameters('subscriptionId'),'/resourcegroups/', parameters('serverFarmResourceGroup'), '/providers/Microsoft.Web/serverfarms/', parameters('name'))]"
            }
        },
        {
            "apiVersion": "2018-11-01",
            "name": "[parameters('name')]",
            "type": "Microsoft.Web/serverfarms",
            "location": "[parameters('location')]",
            "properties": {
                "name": "[parameters('name')]"
            },
            "sku": {
                "Tier": "[parameters('sku')]",
                "Name": "[parameters('skuCode')]"
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Above you see a small ARM template that creates an App Service Plan and a Web App. As you can see there is a lot of boilerplate code to get everything working. Each value for a parameter needs to be on a new line. You have to figure out yourself which resources depend on each other and add them to the correct &lt;code&gt;dependsOn&lt;/code&gt; block. And you need a pretty elaborate &lt;code&gt;concat&lt;/code&gt; to get the Web App with the &lt;code&gt;serverFarmId&lt;/code&gt; of the App Service Plan you create in the same template.&lt;/p&gt;

&lt;p&gt;Below you see the same template, but written in Bicep. The first thing you might notice is that it's only half the size of the ARM one. Parameters are more compact as the type, name, and default value can all be written on a single line. Secondly, Bicep is smart enough to figure out whether resources depend on each other. We use &lt;code&gt;appServicePlan&lt;/code&gt; inside the &lt;code&gt;webApp&lt;/code&gt; resource, which lets Bicep know it first needs to deploy &lt;code&gt;appServicePlan&lt;/code&gt;; it automatically adds the &lt;code&gt;dependsOn&lt;/code&gt; part when the Bicep file is converted to an ARM template.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;param name string = 'demo-bicep-webapp'
param location string = resourceGroup().location
param sku string = 'Free'
param skuCode string = 'F1'

resource webApp 'Microsoft.Web/sites@2018-11-01' = {
  name: name
  location: location
  properties: {
    name: name
    siteConfig: {
      metadata: [
        {
          name: 'CURRENT_STACK'
          value: 'dotnetcore'
        }
      ]
    }
    serverFarmId: appServicePlan.id
  }
}

resource appServicePlan 'Microsoft.Web/serverfarms@2018-11-01' = {
  name: name
  location: location
  properties: {
    name: name
  }
  sku: {
    Tier: sku
    Name: skuCode
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Modules
&lt;/h1&gt;

&lt;p&gt;With the small examples used above it is easy to see what is going on. However, when you're deploying your entire infrastructure this way a single file quickly becomes too big.&lt;br&gt;
ARM has linked templates to deal with this problem: a separate file you link to which holds part of your deployment. Unfortunately a linked template cannot be a local file. The file must be accessible through a url, for example a link to a storage account or a public repository like GitHub.&lt;/p&gt;

&lt;p&gt;To use it add a deployment to your resources array with the link to the template and a link to the parameters file if available. Below is a small example of how this looks, taken from Microsoft docs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"resources": [
  {
    "type": "Microsoft.Resources/deployments",
    "apiVersion": "2020-10-01",
    "name": "linkedTemplate",
    "properties": {
      "mode": "Incremental",
      "templateLink": {
        "uri": "https://mystorageaccount.blob.core.windows.net/AzureTemplates/newStorageAccount.json",
        "contentVersion": "1.0.0.0"
      },
      "parametersLink": {
        "uri": "https://mystorageaccount.blob.core.windows.net/AzureTemplates/newStorageAccount.parameters.json",
        "contentVersion": "1.0.0.0"
      }
    }
  }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With Bicep they introduced modules. When converted to an ARM template a module translates into a nested template. A nested template is similar to a linked template, with the difference that instead of an external link, the template is written in the same file. This generates a big ARM template that isn't friendly to read, but as you work from Bicep you don't need to make changes to the ARM template directly.&lt;/p&gt;

&lt;p&gt;Each file saved with a bicep extension can be used as a module. From another bicep file, use the &lt;code&gt;module&lt;/code&gt; keyword and supply the name and params it needs. The name is only for the deployment and must be unique within the deployment itself. The params are the same params the file you point to uses. Modules also support a scope, which allows you to deploy the module in a different resource group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module webapp './webapp.bicep' = {
  name: 'webappDeploy'
  params: {
    name: 'demo-bicep-webapp'
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  CLI commands
&lt;/h1&gt;

&lt;p&gt;With the introduction of Bicep v0.3 it has been integrated into the Azure CLI. Previously you had to install Bicep yourself, but now if you have the Azure CLI installed you can run all the same commands starting with &lt;code&gt;az bicep&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The command you'll probably use the most is &lt;code&gt;az bicep build&lt;/code&gt;. It takes a single &lt;code&gt;--file&lt;/code&gt; as input argument and converts it to an ARM template with the json extension. By default the output has the same name as the input file, but you can give it a different name with the &lt;code&gt;--outfile&lt;/code&gt; argument. Prior to v0.3 it was possible to build multiple files at once, but since the integration with the Azure CLI it seems only one file can be built at a time. If someone knows of a way to build multiple files at once, I'd be happy to learn it.&lt;/p&gt;

&lt;p&gt;Besides building a Bicep file there is also a command to deploy your file to a resource group or subscription. This command previously only accepted an ARM template, but now it can handle Bicep files directly. In the background it first converts them to an ARM template and continues the deployment with that template.&lt;br&gt;
The commands for this are in the &lt;code&gt;az deployment&lt;/code&gt; group. You choose either &lt;code&gt;sub&lt;/code&gt; or &lt;code&gt;group&lt;/code&gt; depending on whether you want to deploy to a subscription or a resource group. &lt;code&gt;create&lt;/code&gt; is used to start a new deployment. Furthermore there are arguments to point to the bicep file, which can be a local file or a URL to a file, and to add a parameters file if necessary. The full command could look like this: &lt;code&gt;az deployment group create -g demo-rg --template-file webapp.bicep --parameters webapp.parameters.dev.json&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;There is another argument you can pass along with the above command: &lt;code&gt;--confirm-with-what-if&lt;/code&gt;, or simply &lt;code&gt;-c&lt;/code&gt;. This argument checks what your current deployment would change in your resource group or subscription and shows a list of the changes it will make. You then need to confirm that you want those changes before it applies them.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2rrxbzhq55ibneady8f4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2rrxbzhq55ibneady8f4.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Decompile an ARM template
&lt;/h1&gt;

&lt;p&gt;As Bicep is new compared to ARM templates there is a good chance you already have multiple templates. To make the step towards Bicep easier they added a command to convert an ARM template to Bicep: &lt;code&gt;az bicep decompile&lt;/code&gt;. It takes a json file as input and tries its best to turn it into Bicep. If we take the ARM template from the beginning, it converts to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;param subscriptionId string = subscription().id
param name string = 'demo-bicep-webapp'
param location string = resourceGroup().location
param serverFarmResourceGroup string = resourceGroup().name
param sku string = 'Free'
param skuCode string = 'F1'

resource name_resource 'Microsoft.Web/sites@2018-11-01' = {
  name: name
  location: location
  properties: {
    name: name
    siteConfig: {
      metadata: [
        {
          name: 'CURRENT_STACK'
          value: 'dotnetcore'
        }
      ]
    }
    serverFarmId: '/subscriptions/${subscriptionId}/resourcegroups/${serverFarmResourceGroup}/providers/Microsoft.Web/serverfarms/${name}'
  }
  dependsOn: [
    Microsoft_Web_serverfarms_name
  ]
}

resource Microsoft_Web_serverfarms_name 'Microsoft.Web/serverfarms@2018-11-01' = {
  name: name
  location: location
  properties: {
    name: name
  }
  sku: {
    Tier: sku
    Name: skuCode
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see it's not perfect: the &lt;code&gt;serverFarmId&lt;/code&gt; can be simplified, which would also eliminate the need for the &lt;code&gt;dependsOn&lt;/code&gt;, but it is a great start if you have multiple ARM templates and want to move to Bicep.&lt;/p&gt;
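
&lt;p&gt;For illustration, a hand-cleaned version of the decompiled template could look something like this. Referencing the server farm resource directly gives Bicep an implicit dependency, so the explicit &lt;code&gt;dependsOn&lt;/code&gt; and the manually built resource ID both disappear (this is a sketch, not verified compiler output):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;param name string = 'demo-bicep-webapp'
param location string = resourceGroup().location
param sku string = 'Free'
param skuCode string = 'F1'

resource serverFarm 'Microsoft.Web/serverfarms@2018-11-01' = {
  name: name
  location: location
  sku: {
    tier: sku
    name: skuCode
  }
}

resource webApp 'Microsoft.Web/sites@2018-11-01' = {
  name: name
  location: location
  properties: {
    // Referencing serverFarm.id creates an implicit dependency on the plan.
    serverFarmId: serverFarm.id
    siteConfig: {
      metadata: [
        {
          name: 'CURRENT_STACK'
          value: 'dotnetcore'
        }
      ]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;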

&lt;p&gt;There is more Bicep can do. As stated earlier, everything that is possible in an ARM template is also possible in Bicep, with the exception of some known limitations (&lt;a href="https://github.com/Azure/bicep#known-limitations" rel="noopener noreferrer"&gt;https://github.com/Azure/bicep#known-limitations&lt;/a&gt;). Bicep is actively being worked on, so the list gets shorter over time. The modules, the reduced boilerplate, and the CLI integration are already enough for me to choose Bicep over ARM templates, but be sure to check it out yourself to see what more it supports.&lt;/p&gt;

&lt;h1&gt;
  
  
  References
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/Azure/bicep" rel="noopener noreferrer"&gt;https://github.com/Azure/bicep&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>azure</category>
      <category>bicep</category>
      <category>arm</category>
    </item>
    <item>
      <title>Branch policies in Azure Repos</title>
      <dc:creator>evdbogaard</dc:creator>
      <pubDate>Tue, 27 Apr 2021 19:39:33 +0000</pubDate>
      <link>https://dev.to/evdbogaard/branch-policies-in-azure-repos-11c5</link>
      <guid>https://dev.to/evdbogaard/branch-policies-in-azure-repos-11c5</guid>
<description>&lt;p&gt;Hi, I've been wanting to start writing blog posts for a while, but I always found excuses not to do it. Now I had a small thing to figure out for work and thought: why not give it a go? I hope you find it interesting :)&lt;/p&gt;

&lt;h1&gt;
  
  
  What are Azure Repos?
&lt;/h1&gt;

&lt;p&gt;When you have an Azure DevOps account you get access to Azure Repos. It is a collection of version control tools that help with code management. There are two types of version control offered within Azure Repos: Git and Team Foundation Version Control (TFVC).&lt;/p&gt;

&lt;p&gt;Personally I only have experience with Git repositories, so I'll focus on those in this post. With an Azure DevOps account you can create both private and public Git repositories for free.&lt;/p&gt;

&lt;h1&gt;
  
  
  What are branch policies?
&lt;/h1&gt;

&lt;p&gt;Branch policies help you protect important branches and enforce code quality. When a branch has a policy set on it, it is no longer possible to commit to it directly; all changes need to go through a Pull Request (PR).&lt;br&gt;
There are different kinds of policies that can be set up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Require a minimum number of reviewers&lt;/li&gt;
&lt;li&gt;Check for linked work items&lt;/li&gt;
&lt;li&gt;Check for comment resolution&lt;/li&gt;
&lt;li&gt;Limit merge types&lt;/li&gt;
&lt;li&gt;Build validation&lt;/li&gt;
&lt;li&gt;Status checks&lt;/li&gt;
&lt;li&gt;Automatically include code reviewers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I won't go into depth about what each policy does. Most names are self-explanatory, and in case you want to read up on them I suggest reading &lt;a href="https://docs.microsoft.com/en-us/azure/devops/repos/git/branch-policies?view=azure-devops" rel="noopener noreferrer"&gt;this article&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Setting policies through Azure DevOps
&lt;/h2&gt;

&lt;p&gt;The easiest way to set branch policies is through the DevOps website. There are two ways to set them: create a branch policy for all repositories at once, or set one per repository.&lt;/p&gt;
&lt;h3&gt;
  
  
  All repositories branch policy
&lt;/h3&gt;

&lt;p&gt;Go to &lt;code&gt;Project Settings&lt;/code&gt; -&amp;gt; &lt;code&gt;Repositories&lt;/code&gt; -&amp;gt; &lt;code&gt;Policies&lt;/code&gt;&lt;br&gt;
At the bottom of the page you see all project-wide branch policies currently active. By clicking on the Plus icon you can select which branch to protect and enable the policies you need.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbe9cmszcwco3xztz4ze4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbe9cmszcwco3xztz4ze4.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When activating a specific policy the page shows all options you can set for that policy. Some have more options than others. Changes are immediately saved and don't require a confirmation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F96frjknnmugxd3psn315.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F96frjknnmugxd3psn315.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Specific repository branch policy
&lt;/h3&gt;

&lt;p&gt;Go to &lt;code&gt;Project Settings&lt;/code&gt; -&amp;gt; &lt;code&gt;Repositories&lt;/code&gt; -&amp;gt; Select the repository -&amp;gt; &lt;code&gt;Policies&lt;/code&gt; -&amp;gt; Select the branch&lt;br&gt;
You now see a page that shows the policies set for this specific branch/repo combination. Setting or changing a policy works the same as previously described. If there is a project-wide branch policy set you can also see it here, but not alter it. You will see a small information message after the policy itself.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw6hkco5zv5j366e0jak9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw6hkco5zv5j366e0jak9.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Automate setting policies with Azure CLI
&lt;/h2&gt;

&lt;p&gt;As with most things in Azure, there are multiple ways to do it. If you want to set the same policies on multiple repositories or branches, but cannot use the project-wide branch policies, then automation is your friend.&lt;br&gt;
I had a problem similar to this and want to show how I solved it.&lt;/p&gt;

&lt;p&gt;What I wanted to do was add policies to the &lt;code&gt;main&lt;/code&gt; branch of multiple repositories; however, some repositories didn't need these rules. I also wanted to make sure that running the script multiple times would not result in errors or duplicate policies.&lt;/p&gt;

&lt;p&gt;Here is the code I came up with (PowerShell):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$orgUrl = "URL_OF_ORGANIZATION" # https://dev.azure.com/OrgName
$project = "NAME_OF_PROJECT"

$repositories = @("Repository", "Names", "To", "Set", "Policies", "For")
$branch = "NAME_OF_BRANCH"
$approversId = "ID_OF_GROUP_OR_USER"

foreach($repo in $repositories) {
    $repoId = (az repos show --org $orgUrl -p $project --repository $repo --query id -o tsv)
    echo "$repo has id: $repoId"

    $currentPolicies = (az repos policy list --org $orgUrl -p $project --repository-id $repoId --branch $branch --query [].type.displayName -o tsv)

    if ($currentPolicies -eq $null -Or !$currentPolicies.Contains("Minimum number of reviewers")) {
        echo "Creating minimum number of reviewers policy for $repo"
        az repos policy approver-count create --org $orgUrl -p $project --branch $branch --repository-id $repoId --allow-downvotes false --blocking true --creator-vote-counts true --enabled true --minimum-approver-count 1 --reset-on-source-push false -o none
    } else {
        echo "$repo already has minimum number of reviewers policy"
    }

    if ($currentPolicies -eq $null -Or !$currentPolicies.Contains("Comment requirements")) {
        echo "Creating comment requirements policy for $repo"
        az repos policy comment-required create --org $orgUrl -p $project --branch $branch --repository-id $repoId --blocking true --enabled true -o none
    } else {
        echo "$repo already has comment requirements policy"
    }

    if ($currentPolicies -eq $null -Or !$currentPolicies.Contains("Work item linking")) {
        echo "Creating work item linking policy for $repo"
        az repos policy work-item-linking create --org $orgUrl -p $project --branch $branch --repository-id $repoId --blocking true --enabled true -o none
    } else {
        echo "$repo already has work item linking policy"
    }

    if ($currentPolicies -eq $null -Or !$currentPolicies.Contains("Required reviewers")) {
        echo "Creating required reviewers policy for $repo"
        az repos policy required-reviewer create --org $orgUrl -p $project --branch $branch --repository-id $repoId --blocking true --enabled true --message "PR Approvers" --required-reviewer-ids $approversId -o none
    } else {
        echo "$repo already has required reviewers policy"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's look at it step by step. First of all, &lt;code&gt;az repos&lt;/code&gt; commands need an organization URL and a project name. These can be set as defaults with the &lt;code&gt;az devops configure&lt;/code&gt; command (for example &lt;code&gt;az devops configure --defaults organization=https://dev.azure.com/OrgName project=MyProject&lt;/code&gt;); if they are not set, they need to be passed with each command.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;$approversId&lt;/code&gt; variable holds the ID of a group I created in Azure DevOps. I obtained the ID with this command: &lt;code&gt;az devops team list --org $orgUrl -p $project --query '[].{name:name, id:id}'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;After that I create an array with the repository names and set the branch I want to operate on. We then loop through each repository and fetch its ID and the policies already active for that repo/branch combination.&lt;/p&gt;

&lt;p&gt;Each policy has a unique ID and display name. I used the display names here to keep the script understandable. For each policy we want to set, we check whether its display name is already in the list of active policies and skip it if so.&lt;/p&gt;

&lt;h3&gt;
  
  
  Further improvements
&lt;/h3&gt;

&lt;p&gt;The current code is really basic, but it does its job. We can improve it further by limiting the number of calls it needs to make; especially with lots of repositories it can take a while to run.&lt;br&gt;
I'm also not a fan of the repeated &lt;code&gt;Contains&lt;/code&gt; checks; if someone knows a more efficient way to check, please share it.&lt;br&gt;
Finally, if a policy is already set we simply ignore it, even if it has a different configuration. Besides the &lt;code&gt;create&lt;/code&gt; commands there are also &lt;code&gt;update&lt;/code&gt; variants, so we probably need some extra checking to make sure existing policies are updated correctly.&lt;/p&gt;
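
&lt;p&gt;One way to cut down on the repetition is to map each policy display name to the command that creates it and loop over that table. PowerShell's &lt;code&gt;-notcontains&lt;/code&gt; operator also treats a &lt;code&gt;$null&lt;/code&gt; or scalar left-hand side as a collection, so the explicit null check can go. This is an untested sketch reusing the variables from the script above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Map each policy display name to the command that creates it.
$policyCommands = @{
    "Minimum number of reviewers" = { az repos policy approver-count create --org $orgUrl -p $project --branch $branch --repository-id $repoId --allow-downvotes false --blocking true --creator-vote-counts true --enabled true --minimum-approver-count 1 --reset-on-source-push false -o none }
    "Comment requirements" = { az repos policy comment-required create --org $orgUrl -p $project --branch $branch --repository-id $repoId --blocking true --enabled true -o none }
    "Work item linking" = { az repos policy work-item-linking create --org $orgUrl -p $project --branch $branch --repository-id $repoId --blocking true --enabled true -o none }
    "Required reviewers" = { az repos policy required-reviewer create --org $orgUrl -p $project --branch $branch --repository-id $repoId --blocking true --enabled true --message "PR Approvers" --required-reviewer-ids $approversId -o none }
}

foreach ($policyName in $policyCommands.Keys) {
    # -notcontains does an exact element match and handles $null gracefully,
    # so the separate $null check from the original script is not needed.
    if ($currentPolicies -notcontains $policyName) {
        echo "Creating $policyName policy for $repo"
        &amp; $policyCommands[$policyName]
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;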

&lt;h1&gt;
  
  
  Further reading
&lt;/h1&gt;

&lt;p&gt;If this sounds interesting to you, make sure to check out the following links for more information.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/devops/repos/get-started/what-is-repos?view=azure-devops" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/azure/devops/repos/get-started/what-is-repos?view=azure-devops&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/devops/repos/git/branch-policies-overview?view=azure-devops" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/azure/devops/repos/git/branch-policies-overview?view=azure-devops&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/cli/azure/repos/policy?view=azure-cli-latest" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/cli/azure/repos/policy?view=azure-cli-latest&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>azure</category>
    </item>
  </channel>
</rss>
