<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kamil Mrzygłód</title>
    <description>The latest articles on DEV Community by Kamil Mrzygłód (@kamil-mrzyglod).</description>
    <link>https://dev.to/kamil-mrzyglod</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3878338%2F24ad974c-562d-49aa-bf2a-2151942319bf.jpeg</url>
      <title>DEV Community: Kamil Mrzygłód</title>
      <link>https://dev.to/kamil-mrzyglod</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kamil-mrzyglod"/>
    <language>en</language>
    <item>
      <title>Topaz vs Azurite: what actually works locally and what doesn't</title>
      <dc:creator>Kamil Mrzygłód</dc:creator>
      <pubDate>Tue, 05 May 2026 11:59:04 +0000</pubDate>
      <link>https://dev.to/kamil-mrzyglod/topaz-vs-azurite-what-actually-works-locally-and-what-doesnt-231p</link>
      <guid>https://dev.to/kamil-mrzyglod/topaz-vs-azurite-what-actually-works-locally-and-what-doesnt-231p</guid>
      <description>&lt;p&gt;If you have ever written a line of Azure code on a laptop, you have used Azurite. It is the official local emulator for Azure Storage, ships in every Visual Studio install, and runs unchanged in tens of thousands of CI pipelines. For Storage-only workloads it is an excellent tool — Microsoft maintains it, Azure SDKs target it, and the parity with the real Azure Storage REST API is strong.&lt;/p&gt;

&lt;p&gt;The problem is that real applications stop at Azure Storage roughly never. The moment you reach for a secret in Key Vault, publish a message to Service Bus, push an image to a Container Registry, or want a &lt;code&gt;DefaultAzureCredential&lt;/code&gt; chain that does not silently fall back to interactive browser auth, Azurite has nothing to offer. You are left bolting together a Service Bus emulator from a community Docker image, mocking the Key Vault SDK in tests, and hoping that the way your CI fakes Entra tokens does not drift away from how production behaves.&lt;/p&gt;

&lt;p&gt;Topaz is a single .NET 8 binary (moving to .NET 10 when version 1.3 ships) that emulates Azure Storage, Key Vault, Service Bus, Event Hubs, Container Registry, Managed Identity, RBAC, ARM, and a working Entra ID layer in one process. This post is an honest comparison between the two, focused on what developers who already know Azurite actually run into.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Run the Azure SDK, Azure CLI, Terraform, and &lt;code&gt;docker push&lt;/code&gt; against a local emulator — no Azure subscription, no service principal, no cloud charges.&lt;/p&gt;


&lt;pre class="highlight shell"&gt;&lt;code&gt;brew tap thecloudtheory/topaz &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; brew &lt;span class="nb"&gt;install &lt;/span&gt;topaz &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; topaz-host   &lt;span class="c"&gt;# macOS&lt;/span&gt;
curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://raw.githubusercontent.com/TheCloudTheory/Topaz/main/install/get-topaz.sh | bash   &lt;span class="c"&gt;# Linux&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;a href="https://topaz.thecloudtheory.com/docs/intro" rel="noopener noreferrer"&gt;Getting started →&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Where Topaz and Azurite agree
&lt;/h2&gt;

&lt;p&gt;It is worth being clear about this up front: for Azure Storage, Azurite is good. Anyone telling you otherwise is selling something. Topaz does not exist because Azurite is bad at Storage — it exists because Azurite is &lt;em&gt;only&lt;/em&gt; Storage, and most real Azure applications are not.&lt;/p&gt;

&lt;p&gt;For Blob, Queue, and Table data plane operations, the two emulators are at parity on nearly every case that matters for day-to-day development:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Storage feature&lt;/th&gt;
&lt;th&gt;Topaz&lt;/th&gt;
&lt;th&gt;Azurite&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Blob basic operations (put, get, delete, head, list)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Blob metadata, container metadata&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Block blobs (&lt;code&gt;Put Block&lt;/code&gt;, &lt;code&gt;Put Block List&lt;/code&gt;, &lt;code&gt;Get Block List&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Page blobs (&lt;code&gt;Put Page&lt;/code&gt;, &lt;code&gt;Get Page Ranges&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Container ACLs and stored access policies&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Container and blob leases (acquire / renew / change / release / break)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Blob copy and snapshots&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Table create / delete / query / entity CRUD&lt;/td&gt;
&lt;td&gt;Yes (stable)&lt;/td&gt;
&lt;td&gt;Yes (preview)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Table ACL (stored access policies)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Queue messages (enqueue, dequeue, peek, update, delete, clear)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Queue ACL&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RA-GRS secondary endpoints&lt;/td&gt;
&lt;td&gt;Partial (roadmap)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The two practical Storage gaps in Topaz today are Account / Service SAS token validation on the data plane (the ARM endpoints that &lt;em&gt;generate&lt;/em&gt; SAS strings exist; the data plane providers do not yet recognise the &lt;code&gt;?sv=...&amp;amp;sig=...&lt;/code&gt; parameters in incoming requests) and full RA-GRS secondary endpoint emulation. Both are scheduled for the v1.4-beta milestone. More on that further down.&lt;/p&gt;

&lt;p&gt;The Storage feature that Azurite still does not have is &lt;em&gt;multiple named storage accounts as a first-class concept&lt;/em&gt;. Azurite's default account is &lt;code&gt;devstoreaccount1&lt;/code&gt;. Adding more requires editing your hosts file, exporting &lt;code&gt;AZURITE_ACCOUNTS&lt;/code&gt; with &lt;code&gt;name:key1:key2;name:key1:key2&lt;/code&gt;, and restarting the emulator. Topaz has a real ARM control plane: &lt;code&gt;az storage account create --name sa-orders --resource-group rg-local&lt;/code&gt; works the same way it works against Azure, registers a DNS entry automatically, and gives you a real connection string you can paste into Azure Storage Explorer or hand to Terraform.&lt;/p&gt;
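&lt;p&gt;That connection string is the standard Azure format: semicolon-separated &lt;code&gt;Key=Value&lt;/code&gt; pairs. A minimal sketch of parsing one (the account name, key, and endpoint below are made up for illustration, not Topaz defaults):&lt;/p&gt;

```python
# Minimal parser for Azure-format storage connection strings.
# The account name, key, and endpoint here are illustrative, not real values.
def parse_connection_string(cs):
    """Split 'Key=Value;Key=Value' into a dict (values may contain '=')."""
    pairs = (part.split("=", 1) for part in cs.split(";") if part)
    return {k: v for k, v in pairs}

cs = (
    "DefaultEndpointsProtocol=https;"
    "AccountName=saorders;"
    "AccountKey=b3JkZXJzLWtleQ==;"
    "BlobEndpoint=https://saorders.blob.topaz.local.dev/"
)
settings = parse_connection_string(cs)
print(settings["AccountName"])   # saorders
```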

&lt;h2&gt;
  
  
  Where Azurite stops: Key Vault
&lt;/h2&gt;

&lt;p&gt;This is the largest single delta between the two emulators, and it is the reason most developers eventually try to replace Azurite.&lt;/p&gt;

&lt;p&gt;Azurite has no Key Vault. Not a partial one, not a stubbed one — none. The standard workarounds are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mock the &lt;code&gt;SecretClient&lt;/code&gt; in tests (loses any coverage of the actual auth layer).&lt;/li&gt;
&lt;li&gt;Read secrets from &lt;code&gt;appsettings.Development.json&lt;/code&gt; instead (creates a code-path divergence between local and production).&lt;/li&gt;
&lt;li&gt;Spin up a real Key Vault per developer (works, but now CI needs Azure credentials and the developer onboarding doc has a "before you can run tests, ask the platform team for..." section).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Key Vault is the most completely emulated service Topaz ships. The control plane covers the full vault lifecycle — create, update, soft-delete, list deleted, recover, purge, name availability, access policy management — and the data plane runs on its own port (&lt;code&gt;8898&lt;/code&gt;) reachable at &lt;code&gt;https://&amp;lt;vault-name&amp;gt;.vault.topaz.local.dev:8898/&lt;/code&gt;, which is the URL pattern the Azure SDK already constructs.&lt;/p&gt;

&lt;p&gt;What works today on the data plane:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Surface&lt;/th&gt;
&lt;th&gt;Operations&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Secrets&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Set, Get (by name and version), List, Update, Delete, Get Versions, Backup, Restore, full soft-delete surface (Get Deleted, List Deleted, Recover, Purge)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Keys&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Create, Import (RSA + EC), Get, List, Update, Delete, Backup, Restore, Rotate, Get/Update Rotation Policy, full soft-delete surface, &lt;code&gt;encrypt&lt;/code&gt;, &lt;code&gt;decrypt&lt;/code&gt;, &lt;code&gt;sign&lt;/code&gt;, &lt;code&gt;verify&lt;/code&gt;, &lt;code&gt;wrapKey&lt;/code&gt;, &lt;code&gt;unwrapKey&lt;/code&gt;, &lt;code&gt;release&lt;/code&gt;, &lt;code&gt;Get Random Bytes&lt;/code&gt;, &lt;code&gt;Get Key Attestation&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Certificates&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Not yet — full surface scheduled for v1.3-beta&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The soft-delete and recovery surface deserves a specific call-out because it is what most teams accidentally depend on. If your application catches &lt;code&gt;KeyVaultErrorException&lt;/code&gt; on a soft-deleted secret, recovers it, and continues — that code path is unreachable in any other local emulator. Topaz exercises it end-to-end. Tokens are real signed JWTs. The vault URL works with &lt;code&gt;DefaultAzureCredential&lt;/code&gt; exactly as in production.&lt;/p&gt;
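&lt;p&gt;The lifecycle in question is a small state machine. A toy in-memory model of the semantics, as an illustration of what the recovery path exercises rather than anything resembling Topaz's implementation:&lt;/p&gt;

```python
# Toy model of Key Vault secret soft-delete semantics:
# delete moves a secret into a recoverable "deleted" state,
# recover brings it back, purge removes it permanently.
class ToyVault:
    def __init__(self):
        self.active = {}
        self.deleted = {}

    def set_secret(self, name, value):
        self.active[name] = value

    def delete_secret(self, name):
        self.deleted[name] = self.active.pop(name)

    def recover_secret(self, name):
        self.active[name] = self.deleted.pop(name)

    def purge_secret(self, name):
        del self.deleted[name]  # gone for good

v = ToyVault()
v.set_secret("db-password", "hunter2")
v.delete_secret("db-password")
v.recover_secret("db-password")   # the code path mocks never exercise
print(v.active["db-password"])    # hunter2
```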

&lt;p&gt;What is not there yet is the certificates data plane (CRUD, soft-delete surface, issuers, contacts, merge — all scheduled for v1.3-beta). If your application reads certificates from Key Vault — for example, an ASP.NET Core app loading its TLS cert via the Key Vault provider — that path is not yet covered. For secrets and keys, including the cryptographic operations, it is.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Azurite stops: Service Bus
&lt;/h2&gt;

&lt;p&gt;Service Bus is the second most common reason teams outgrow Azurite. The community workarounds — RabbitMQ, ActiveMQ, in-memory test doubles — all break the same invariant: the AMQP wire format is not the same, so any code that reaches into broker-specific behaviour (dead-letter queues, peek-lock vs receive-and-delete, deferred messages, session state) drifts away from Service Bus reality.&lt;/p&gt;

&lt;p&gt;Topaz runs a real AMQP 1.0 broker on ports &lt;code&gt;8889&lt;/code&gt; (plain) and &lt;code&gt;5671&lt;/code&gt; (AMQP/TLS), implemented natively in the host process. The Azure Service Bus SDK connects to it without modification. MassTransit, NServiceBus, and any other library that speaks AMQP work because they are speaking AMQP, not because Topaz pretends to be Service Bus through a thin shim.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Surface&lt;/th&gt;
&lt;th&gt;What works&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Namespaces (control plane)&lt;/td&gt;
&lt;td&gt;Create / Update / Delete / Get / List By Resource Group&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Queues (control plane)&lt;/td&gt;
&lt;td&gt;Create / Update / Delete / Get / List By Namespace&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Topics (control plane)&lt;/td&gt;
&lt;td&gt;Create / Update / Delete / Get / List By Namespace&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Subscriptions&lt;/td&gt;
&lt;td&gt;Create / Update / Delete / Get (via AMQP management node)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Messaging (data plane, AMQP)&lt;/td&gt;
&lt;td&gt;Send / Receive on queues and topics, Complete / Abandon / Dead-letter&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AMQP entity management&lt;/td&gt;
&lt;td&gt;Create / Get / Delete queues, topics, subscriptions — used by MassTransit&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Specifically &lt;em&gt;not&lt;/em&gt; implemented yet: subscription rule management (filters / actions), namespace authorization rules, list-keys / regenerate-keys, disaster recovery configs, migration configs, private endpoints. For most application development workflows this is a non-issue; if you are building tooling that programmatically rotates Service Bus SAS keys or manages DR pairing, real Azure is still the right target for those tests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Azurite stops: Container Registry
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;docker push myregistry.azurecr.io/myapp:latest&lt;/code&gt; against a local emulator is the kind of thing that historically required either a real ACR (and therefore Azure credentials in CI) or running a generic Docker registry and pretending it is ACR (works for &lt;code&gt;pull&lt;/code&gt;, breaks the moment anything touches the ARM control plane or the ACR-specific OAuth2 exchange).&lt;/p&gt;

&lt;p&gt;Topaz emulates both layers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Control plane&lt;/strong&gt; (ARM): create, update, delete, list registries; manage admin credentials; toggle &lt;code&gt;adminUserEnabled&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data plane&lt;/strong&gt; (OCI Distribution Spec, port &lt;code&gt;8892&lt;/code&gt;): manifest CRUD, blob upload (&lt;code&gt;POST&lt;/code&gt; / &lt;code&gt;PATCH&lt;/code&gt; / &lt;code&gt;PUT&lt;/code&gt; chunked uploads), blob download and existence checks, repository catalog, tag listing, and the ACR-specific &lt;code&gt;/oauth2/exchange&lt;/code&gt; endpoint that makes &lt;code&gt;az acr login&lt;/code&gt; work.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The full authentication flow — &lt;code&gt;GET /v2/&lt;/code&gt; returning a &lt;code&gt;Www-Authenticate&lt;/code&gt; challenge, &lt;code&gt;POST /oauth2/exchange&lt;/code&gt; swapping the Entra token for an ACR refresh token, and the bearer token round-trip — is implemented end-to-end. There is a &lt;a href="https://topaz.thecloudtheory.com/blog/acr-data-plane" rel="noopener noreferrer"&gt;separate post on how this works&lt;/a&gt; that goes into the design tradeoffs.&lt;/p&gt;
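&lt;p&gt;The challenge itself is ordinary HTTP. A sketch of the first step a registry client performs, parsing the &lt;code&gt;Www-Authenticate&lt;/code&gt; header; the header value below is a representative example, not captured from Topaz:&lt;/p&gt;

```python
import re

# Parse a 'Www-Authenticate: Bearer realm="...",service="..."' challenge
# into a dict, the way a docker/OCI client does before requesting a token.
# The realm and service values below are made-up examples.
def parse_bearer_challenge(header):
    scheme, _, params = header.partition(" ")
    assert scheme == "Bearer"
    return dict(re.findall(r'(\w+)="([^"]*)"', params))

challenge = (
    'Bearer realm="https://myregistry.cr.topaz.local.dev:8892/oauth2/exchange",'
    'service="myregistry.cr.topaz.local.dev"'
)
print(parse_bearer_challenge(challenge)["service"])
```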

&lt;p&gt;What this unlocks: you can run &lt;code&gt;docker push&lt;/code&gt;, &lt;code&gt;helm push&lt;/code&gt;, and any OCI-compliant client against Topaz without modifying anything in your build pipeline. The registry hostname follows real Azure conventions (&lt;code&gt;myregistry.cr.topaz.local.dev:8892&lt;/code&gt;). Image promotion workflows, CI builds that publish images, and Terraform configurations that create ACRs all work locally. &lt;code&gt;docker pull&lt;/code&gt; and end-to-end image pull-through is on the roadmap.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Azurite stops: Entra ID and Managed Identity
&lt;/h2&gt;

&lt;p&gt;This is the silent one. Azurite does not need an identity layer because it accepts SharedKey for authentication and falls back to "no auth" when nothing is provided. The moment your application uses &lt;code&gt;DefaultAzureCredential&lt;/code&gt; — which is the Microsoft-recommended pattern for everything except Storage — Azurite cannot help. Your local code either talks to a real Entra tenant (forcing every developer onto a corporate identity) or you replace &lt;code&gt;DefaultAzureCredential&lt;/code&gt; with a custom test double (creating a code-path divergence with production).&lt;/p&gt;

&lt;p&gt;Topaz ships a working Entra ID emulation layer. A local tenant (&lt;code&gt;topaz.local.dev&lt;/code&gt;, tenant ID &lt;code&gt;50717675-3E5E-4A1E-8CB5-C62D8BE8CA48&lt;/code&gt;) is provisioned at startup with a built-in superadmin account. Every token is a real, signed JWT — same format Azure issues, same claims layout, signed with HMAC-SHA256, one-hour lifetime. The OIDC discovery endpoint (&lt;code&gt;/.well-known/openid-configuration&lt;/code&gt;) is fully functional, both &lt;code&gt;/organizations/&lt;/code&gt; and &lt;code&gt;/{tenantId}/&lt;/code&gt; variants are served, and four grant types are supported: &lt;code&gt;client_credentials&lt;/code&gt;, &lt;code&gt;password&lt;/code&gt;, &lt;code&gt;authorization_code&lt;/code&gt;, and &lt;code&gt;refresh_token&lt;/code&gt;.&lt;/p&gt;
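&lt;p&gt;The token shape described above is easy to sketch with the standard library, since HS256 is just HMAC-SHA256 over the base64url-encoded header and payload. The secret and claim names below are illustrative, not Topaz's actual values:&lt;/p&gt;

```python
import base64, hashlib, hmac, json, time

# Sketch of an HS256 (HMAC-SHA256) signed JWT with a one-hour lifetime,
# as described above. The signing secret and claim set are illustrative.
def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def mint_token(secret, tenant_id):
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    now = int(time.time())
    payload = b64url(json.dumps(
        {"tid": tenant_id, "iat": now, "exp": now + 3600}).encode())
    signing_input = header + b"." + payload
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify_token(secret, token):
    signing_input, _, sig = token.rpartition(".")
    expected = b64url(hmac.new(secret, signing_input.encode(),
                               hashlib.sha256).digest()).decode()
    return hmac.compare_digest(sig, expected)

token = mint_token(b"local-dev-secret", "50717675-3E5E-4A1E-8CB5-C62D8BE8CA48")
print(verify_token(b"local-dev-secret", token))   # True
```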

&lt;p&gt;Tied to that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Microsoft Graph API&lt;/strong&gt; for users, applications, service principals, and groups — enough to script a full identity setup.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Managed Identity&lt;/strong&gt; — user-assigned and system-assigned, including federated identity credentials. The same &lt;code&gt;DefaultAzureCredential&lt;/code&gt; chain that works in production works locally, no special credential type required.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RBAC&lt;/strong&gt; — role assignments and role definitions at any ARM scope, with the standard built-in roles (Owner, Contributor, Reader, plus the service-specific data-plane roles like &lt;code&gt;Storage Blob Data Contributor&lt;/code&gt;) pre-loaded.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the difference between "I can read a blob locally" and "I can run my entire authn/authz path locally". The latter catches a class of bug — token audience mismatches, role assignment drift, scope errors — that mocks never catch.&lt;/p&gt;

&lt;p&gt;There is a &lt;a href="https://topaz.thecloudtheory.com/blog/entra-id-emulation" rel="noopener noreferrer"&gt;dedicated post on the Entra emulation layer&lt;/a&gt; if you want the design details.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Azurite stops: ARM, Terraform, and the Azure CLI
&lt;/h2&gt;

&lt;p&gt;Azurite has no ARM control plane, which means there is no &lt;code&gt;az group create&lt;/code&gt;, no &lt;code&gt;azurerm_resource_group&lt;/code&gt;, no Bicep deployment, no way to express the resources your application needs in the same infrastructure-as-code language you use in production. Local development and production end up using two different definitions of "what infrastructure exists".&lt;/p&gt;

&lt;p&gt;Topaz has a working ARM emulation. The Resource Manager port (&lt;code&gt;8899&lt;/code&gt;) accepts &lt;code&gt;az&lt;/code&gt; and &lt;code&gt;azurerm&lt;/code&gt; provider traffic, exposes a metadata document at the discovery endpoint that points every Azure API URL at the local emulator, and accepts ARM template / Bicep deployments end-to-end. The same &lt;code&gt;terraform apply&lt;/code&gt; that creates resources in Azure can create them in Topaz with one provider setting:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"azurerm"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;features&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
  &lt;span class="nx"&gt;metadata_host&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"topaz.local.dev:8899"&lt;/span&gt;
  &lt;span class="nx"&gt;resource_provider_registrations&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"none"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is the entire integration. &lt;code&gt;metadata_host&lt;/code&gt; redirects endpoint discovery; the AzureRM provider then constructs every subsequent URL — Storage, Key Vault, Service Bus, Container Registry — from Topaz's metadata document. Detail in the &lt;a href="https://topaz.thecloudtheory.com/blog/terraform-local-azure-no-subscription" rel="noopener noreferrer"&gt;Terraform integration post&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The Azure CLI works the same way: register Topaz as a cloud environment with &lt;code&gt;az cloud register&lt;/code&gt;, switch to it, and &lt;code&gt;az login&lt;/code&gt; issues a token from the local Entra layer. From there, &lt;code&gt;az keyvault secret set&lt;/code&gt;, &lt;code&gt;az servicebus queue create&lt;/code&gt;, &lt;code&gt;az acr login&lt;/code&gt;, &lt;code&gt;az group deployment create&lt;/code&gt; all work as they do against Azure. Azurite supports &lt;code&gt;az storage&lt;/code&gt; and only &lt;code&gt;az storage&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Azurite still does better
&lt;/h2&gt;

&lt;p&gt;A genuinely honest post needs this section. Azurite is not strictly worse — it is narrower, more mature in its narrow scope, and the right call for some workloads.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Microsoft maintenance.&lt;/strong&gt; Azurite is shipped and supported by Microsoft. Topaz is open source and maintained by an independent team. If your organisation requires vendor-supported software for local development tooling, that distinction matters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RA-GRS secondary endpoints.&lt;/strong&gt; Azurite emulates the read-access geo-redundant secondary URL pattern today. Topaz has partial support; full secondary endpoint semantics — secondary DNS hostnames, &lt;code&gt;GeoReplicationStats&lt;/code&gt; payloads, and read-only enforcement — are scheduled for v1.4-beta.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The &lt;code&gt;UseDevelopmentStorage=true&lt;/code&gt; shortcut.&lt;/strong&gt; The Azure SDK expands &lt;code&gt;UseDevelopmentStorage=true&lt;/code&gt; to Azurite's well-known &lt;code&gt;devstoreaccount1&lt;/code&gt; endpoints and key. Topaz uses real connection strings — the same format Azure issues — which is more flexible but loses the one-line shortcut.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visual Studio integration.&lt;/strong&gt; Azurite ships with Visual Studio and Storage Explorer has a built-in "Local Emulator" entry. Topaz works with Storage Explorer, but you connect via a connection string rather than the dedicated emulator option.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maturity.&lt;/strong&gt; Azurite has been in production CI pipelines for years. Topaz is still in beta (every milestone mentioned in this post is a 1.x-beta) and is moving fast; that is also the reason this post needs to list which surfaces are not yet covered.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your application uses only Storage, a single account is enough, and you do not need ARM or Terraform integration locally, Azurite is genuinely the simpler choice and there is no reason to switch.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is coming for Storage in Topaz
&lt;/h2&gt;

&lt;p&gt;The Storage roadmap is publicly tracked in &lt;a href="https://github.com/TheCloudTheory/Topaz/blob/main/BACKLOG.md" rel="noopener noreferrer"&gt;&lt;code&gt;BACKLOG.md&lt;/code&gt;&lt;/a&gt; and mirrored on the &lt;a href="https://topaz.thecloudtheory.com/docs/roadmap" rel="noopener noreferrer"&gt;website roadmap&lt;/a&gt;. The near-term Storage work in flight:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;v1.4-beta — SAS token validation on the data plane.&lt;/strong&gt;&lt;br&gt;
The control plane already generates SAS tokens via &lt;code&gt;ListAccountSas&lt;/code&gt; and &lt;code&gt;ListServiceSas&lt;/code&gt;. The data-plane security providers (Blob, Queue, Table) currently only recognise the &lt;code&gt;Authorization:&lt;/code&gt; header — incoming requests with &lt;code&gt;?sv=...&amp;amp;sig=...&lt;/code&gt; query strings hit the missing-header path and return &lt;code&gt;401&lt;/code&gt;. The work in progress adds:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Account SAS validation: detects &lt;code&gt;sv&lt;/code&gt;, &lt;code&gt;ss&lt;/code&gt;, &lt;code&gt;srt&lt;/code&gt;, &lt;code&gt;sp&lt;/code&gt;, &lt;code&gt;se&lt;/code&gt;, &lt;code&gt;st&lt;/code&gt;, &lt;code&gt;spr&lt;/code&gt;, &lt;code&gt;sip&lt;/code&gt;, &lt;code&gt;sig&lt;/code&gt;, builds the canonical Account SAS string-to-sign, HMAC-SHA256 against the account key, validates expiry / service / resource type / HTTP method.&lt;/li&gt;
&lt;li&gt;Service SAS validation: per-service string-to-sign, including the canonicalized resource and the response-header overrides (&lt;code&gt;rscc&lt;/code&gt;, &lt;code&gt;rscd&lt;/code&gt;, etc.), with stored access policy lookup when &lt;code&gt;si=&amp;lt;policyId&amp;gt;&lt;/code&gt; is present.&lt;/li&gt;
&lt;li&gt;Stored access policy enforcement: the Container ACL / Queue ACL / Table ACL endpoints already round-trip &lt;code&gt;&amp;lt;SignedIdentifiers&amp;gt;&lt;/code&gt; XML to disk; v1.4 wires them into the SAS validation path so revoking a named policy actually revokes the tokens that reference it.&lt;/li&gt;
&lt;/ul&gt;
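&lt;p&gt;The signature step itself is the textbook construction: build the canonical string-to-sign, HMAC-SHA256 it with the decoded account key, base64 the digest. A sketch of the Account SAS case (field order follows the documented pre-encryption-scope Account SAS layout; the account key is made up):&lt;/p&gt;

```python
import base64, hashlib, hmac

# Sketch of Account SAS signing: join the signed fields in canonical order,
# HMAC-SHA256 with the base64-decoded account key, base64-encode the digest.
# Field order per the pre-2020 nine-field Account SAS string-to-sign;
# the account name and key are made up.
def account_sas_signature(account, key_b64, sp, ss, srt, st, se,
                          sip="", spr="https", sv="2019-12-12"):
    string_to_sign = "\n".join(
        [account, sp, ss, srt, st, se, sip, spr, sv]) + "\n"
    key = base64.b64decode(key_b64)
    digest = hmac.new(key, string_to_sign.encode(), hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

sig = account_sas_signature(
    account="saorders",
    key_b64=base64.b64encode(b"not-a-real-key").decode(),
    sp="rl", ss="b", srt="co",
    st="2026-05-05T00:00:00Z", se="2026-05-06T00:00:00Z")
print(sig)
```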

&lt;p&gt;&lt;strong&gt;v1.4-beta — anonymous / public-access reads for Blob containers.&lt;/strong&gt;&lt;br&gt;
Real Azure allows containers created with &lt;code&gt;x-ms-blob-public-access: container&lt;/code&gt; (list + read) or &lt;code&gt;blob&lt;/code&gt; (read only) to permit unauthenticated reads. Topaz currently rejects every request without an &lt;code&gt;Authorization&lt;/code&gt; header. The fix is to store the public-access level on the container, look it up in the security provider when no auth is present, and permit the request when the level allows the operation.&lt;/p&gt;
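&lt;p&gt;That lookup amounts to a small table. A sketch, with hypothetical operation names:&lt;/p&gt;

```python
# Sketch of the public-access gate described above: map a container's
# stored access level to the operations an unauthenticated caller may run.
# The operation names are hypothetical labels, not API identifiers.
ALLOWED = {
    "container": {"read_blob", "list_blobs"},
    "blob": {"read_blob"},
    None: set(),  # no public access: every anonymous request is rejected
}

def anonymous_request_allowed(public_access_level, operation):
    return operation in ALLOWED[public_access_level]

print(anonymous_request_allowed("blob", "read_blob"))    # True
print(anonymous_request_allowed("blob", "list_blobs"))   # False
print(anonymous_request_allowed(None, "read_blob"))      # False
```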

&lt;p&gt;&lt;strong&gt;v1.4-beta — RA-GRS secondary endpoint semantics.&lt;/strong&gt;&lt;br&gt;
Secondary DNS hostnames (&lt;code&gt;{accountName}-secondary.*&lt;/code&gt;), &lt;code&gt;secondaryEndpoints.blob/.table/.queue/.file&lt;/code&gt; in the storage account ARM response, the &lt;code&gt;?restype=service&amp;amp;comp=stats&lt;/code&gt; endpoint returning a realistic &lt;code&gt;GeoReplicationStats&lt;/code&gt; payload, and read-only enforcement that returns &lt;code&gt;403 WriteOperationNotSupportedOnSecondary&lt;/code&gt; on mutating requests against a secondary endpoint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;v1.5-beta — User Delegation SAS for Blob.&lt;/strong&gt;&lt;br&gt;
The Entra-derived SAS variant. Two coordinated pieces: an ARM endpoint that mints a user delegation key bounded by &lt;code&gt;(start, expiry, oid, tid)&lt;/code&gt;, and Blob data-plane validation that recomputes the key from the stored fields and validates the signed token. This is the only SAS variant that needs the local Entra layer to be coherent with the storage layer — which is part of why Topaz is built as one process rather than five.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;v1.6-beta — unified data-plane port.&lt;/strong&gt;&lt;br&gt;
Real Azure exposes Blob / Table / Queue / File on port 443 with subdomain-based routing (&lt;code&gt;{account}.blob.core.windows.net&lt;/code&gt;, etc.). Topaz currently uses separate ports per sub-service (8891 / 8890 / 8893 / 8894) for routing simplicity. Some Azure CLI / SDK code paths construct storage URLs via &lt;code&gt;get_account_url()&lt;/code&gt;, which builds a single &lt;code&gt;https://&lt;/code&gt; URL from the cloud-suffix &lt;code&gt;storage_endpoint&lt;/code&gt; — encoding only one port. Consolidating onto a single port behind subdomain routing fixes that and makes the local URL pattern identical to production.&lt;/p&gt;

&lt;p&gt;These are all in the &lt;a href="https://github.com/TheCloudTheory/Topaz/blob/main/BACKLOG.md" rel="noopener noreferrer"&gt;public backlog&lt;/a&gt; with milestone labels — there is no roadmap kept in someone's head.&lt;/p&gt;
&lt;h2&gt;
  
  
  Developer experience: where the differences compound
&lt;/h2&gt;

&lt;p&gt;The feature comparison above is the headline. The day-to-day experience matters more.&lt;/p&gt;
&lt;h3&gt;
  
  
  Single binary, single process, single working directory
&lt;/h3&gt;

&lt;p&gt;Azurite is one process for Storage. Replicating Topaz's surface with separate emulators means orchestrating Azurite + a Service Bus emulator + a registry + a mock identity server, all writing to different directories, all logging in different formats, all started from different scripts. Every team that goes down this path eventually writes a Docker Compose file that nobody is happy with.&lt;/p&gt;

&lt;p&gt;Topaz is one binary or one Docker image. State lives in &lt;code&gt;.topaz/&lt;/code&gt; next to your project. Stop the host, the state is preserved. Start it again, the state comes back. Delete the directory, the slate is clean. There is one log stream and one health check (&lt;code&gt;GET /health&lt;/code&gt;).&lt;/p&gt;
&lt;h3&gt;
  
  
  MCP server for AI tooling
&lt;/h3&gt;

&lt;p&gt;Topaz ships an MCP server (&lt;code&gt;Topaz.MCP&lt;/code&gt;) that exposes the host as a set of tools any MCP-compatible client can call. The intended use case is GitHub Copilot in VS Code — drop a &lt;code&gt;.vscode/mcp.json&lt;/code&gt; in your workspace, point it at the MCP binary, and Copilot can run &lt;code&gt;RunTopazAsContainer&lt;/code&gt;, &lt;code&gt;CreateSubscription&lt;/code&gt;, &lt;code&gt;CreateResourceGroup&lt;/code&gt;, &lt;code&gt;CreateKeyVault&lt;/code&gt;, &lt;code&gt;CreateStorageAccount&lt;/code&gt;, &lt;code&gt;CreateServiceBusNamespace&lt;/code&gt;, &lt;code&gt;CreateContainerRegistry&lt;/code&gt;, and a &lt;code&gt;GetConnectionStrings&lt;/code&gt; tool that returns ready-to-paste connection strings for everything it just provisioned.&lt;/p&gt;

&lt;p&gt;In practice this means you can describe a local environment in natural language:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Start Topaz. Create a subscription, a resource group &lt;code&gt;rg-dev&lt;/code&gt; in &lt;code&gt;westeurope&lt;/code&gt;, then provision a Storage account, a Service Bus namespace with a queue named &lt;code&gt;orders&lt;/code&gt;, and a Key Vault with a secret &lt;code&gt;db-password&lt;/code&gt;. Output the connection strings.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;…and the assistant runs the whole sequence. There is also a &lt;code&gt;GetTopazStatus&lt;/code&gt; diagnostics tool for the case where a setup fails partway through and you want to know which ports are bound. Full details in the &lt;a href="https://topaz.thecloudtheory.com/docs/mcp-server" rel="noopener noreferrer"&gt;MCP server documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Azurite has no equivalent. There is no MCP integration, no programmatic way for an AI assistant to provision a Storage account, no notion of provisioning at all — the tool exists at the data plane and stops there.&lt;/p&gt;
&lt;h3&gt;
  
  
  Editor and IDE integration
&lt;/h3&gt;

&lt;p&gt;GitHub Copilot in VS Code works through the MCP server. The Topaz CLI (&lt;code&gt;topaz&lt;/code&gt;) is a thin client over the host process — every command is &lt;code&gt;topaz &amp;lt;verb&amp;gt;&lt;/code&gt; with consistent JSON output, which means it scripts cleanly from a terminal, a shell hook, a Makefile, or a CI step. A native VS Code extension is on the roadmap; for the moment, the MCP path covers the AI-assisted workflows and the CLI covers everything else.&lt;/p&gt;
&lt;h3&gt;
  
  
  Scaling out: CI and Docker
&lt;/h3&gt;

&lt;p&gt;Topaz publishes a Docker image (&lt;code&gt;topaz/host&lt;/code&gt;) that runs as a sidecar in any CI environment. The pattern in GitHub Actions is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;topaz&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;topaz/host:latest&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;8899:8899&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;8898:8898&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;8891:8891&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;8889:8889&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;8892:8892&lt;/span&gt;
    &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;--health-cmd="curl -f http://localhost:8899/health || exit 1"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then &lt;code&gt;terraform apply&lt;/code&gt; against &lt;code&gt;metadata_host = "topaz:8899"&lt;/code&gt; from any step. No Azure credentials in the pipeline secrets, no rate-limited subscription, no per-run cost. The same image runs as a Testcontainer inside .NET integration tests via the Topaz Testcontainers helper. The &lt;a href="https://topaz.thecloudtheory.com/docs/ecosystem/ci-cd" rel="noopener noreferrer"&gt;CI/CD integration guide&lt;/a&gt; covers GitHub Actions and Azure DevOps.&lt;/p&gt;
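&lt;p&gt;The provider side of that step is just the usual &lt;code&gt;azurerm&lt;/code&gt; block with discovery repointed — a minimal sketch; any remaining provider arguments follow the Terraform integration guide:&lt;/p&gt;

```terraform
provider "azurerm" {
  features {}

  # Point ARM discovery at the Topaz sidecar instead of management.azure.com
  metadata_host = "topaz:8899"
}
```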

&lt;p&gt;Azurite has a Docker image too. The difference is what the image emulates — Topaz's image covers the same surface this post described (Storage + Key Vault + Service Bus + Event Hubs + ACR + Entra + ARM); Azurite's covers Storage. If your CI only needs Storage, that is fine. If it needs more, Azurite forces you back to the multi-emulator orchestration problem.&lt;/p&gt;

&lt;h3&gt;
  
  
  ASP.NET Core integration
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;AddTopaz()&lt;/code&gt; is an extension method on &lt;code&gt;IServiceCollection&lt;/code&gt; that provisions local Azure infrastructure at application startup — declaratively, in the same &lt;code&gt;Program.cs&lt;/code&gt; where you wire up DI. Spin up a resource group, a Service Bus namespace, a Key Vault with seed secrets, a Storage account, all in code, all conditional on environment so the same &lt;code&gt;Program.cs&lt;/code&gt; runs unchanged in production. Detail in the &lt;a href="https://topaz.thecloudtheory.com/docs/ecosystem/aspnet-core" rel="noopener noreferrer"&gt;ASP.NET Core integration guide&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to keep Azurite
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Your application uses only Azure Storage and a single account is sufficient.&lt;/li&gt;
&lt;li&gt;You need RA-GRS secondary endpoint emulation today.&lt;/li&gt;
&lt;li&gt;You need a Microsoft-supported emulator with vendor backing.&lt;/li&gt;
&lt;li&gt;Your toolchain is built around Azurite and migration is not worth the effort.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  When to switch to Topaz
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Your application uses any service beyond Storage — Key Vault, Service Bus, Event Hubs, Container Registry, Managed Identity, RBAC.&lt;/li&gt;
&lt;li&gt;You need multiple named storage accounts in local or CI environments without manual hosts file edits.&lt;/li&gt;
&lt;li&gt;You want a single process to replace multiple emulators and the Docker Compose file that holds them together.&lt;/li&gt;
&lt;li&gt;You use Terraform with the &lt;code&gt;azurerm&lt;/code&gt; provider and want a local target for &lt;code&gt;terraform apply&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;You want the full Azure CLI (&lt;code&gt;az keyvault&lt;/code&gt;, &lt;code&gt;az servicebus&lt;/code&gt;, &lt;code&gt;az acr&lt;/code&gt;, &lt;code&gt;az deployment&lt;/code&gt;) to work locally, not just &lt;code&gt;az storage&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;You want ARM-level resource management (resource groups, subscriptions, ARM templates, Bicep) in CI without a real subscription.&lt;/li&gt;
&lt;li&gt;You want &lt;code&gt;DefaultAzureCredential&lt;/code&gt; to work end-to-end locally without code-path divergence from production.&lt;/li&gt;
&lt;li&gt;You want AI-assisted environment provisioning through MCP.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Migrating
&lt;/h2&gt;

&lt;p&gt;For Storage specifically, Topaz implements the same data-plane APIs Azurite does. Point your existing Azure SDK clients at Topaz's endpoints and they connect without code changes — only the endpoint hostname, port, and credentials change. The one item to check during migration is authentication: Topaz always enforces SharedKey signatures on Table and Queue requests, so any request that Azurite silently accepted with a missing or invalid signature will be rejected. This is intentional — it is the same behaviour real Azure has, and catching the divergence locally is the whole point.&lt;/p&gt;
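&lt;p&gt;For connection-string-based clients, the change is confined to the endpoint and credentials. A sketch of what that can look like — the host and port follow the pattern of Topaz's table endpoint, but treat the exact format as an assumption and check the Topaz docs; the key is whatever &lt;code&gt;listKeys&lt;/code&gt; returns against Topaz:&lt;/p&gt;

```text
DefaultEndpointsProtocol=https;AccountName=mystorageacct;AccountKey=<key from listKeys>;TableEndpoint=https://mystorageacct.table.storage.topaz.local.dev:8890/
```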

&lt;p&gt;Beyond Storage, the &lt;a href="https://topaz.thecloudtheory.com/docs/api-coverage/" rel="noopener noreferrer"&gt;API coverage docs&lt;/a&gt; list which operations are implemented per service. If you hit something that is not yet supported, &lt;a href="https://github.com/TheCloudTheory/Topaz/issues" rel="noopener noreferrer"&gt;open an issue&lt;/a&gt; — the backlog is publicly tracked and feedback shapes priorities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Azurite is a good Storage emulator. Topaz is a Storage emulator that also covers Key Vault, Service Bus, Event Hubs, Container Registry, Managed Identity, RBAC, ARM, and Entra ID — in one binary, with one log stream, one working directory, and one Docker image. The Storage parity is essentially complete; the gaps that remain (SAS validation, RA-GRS, public-access reads) are scheduled and tracked in the open backlog.&lt;/p&gt;

&lt;p&gt;If your application is Storage-only, stay on Azurite. If it is anything else — and most real Azure applications are — Topaz exists to remove the orchestration tax of running five different local emulators that were never designed to work together.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>emulator</category>
    </item>
    <item>
      <title>Two days chasing a SharedKey signature mismatch: fixing azurerm_storage_table_entity in Topaz</title>
      <dc:creator>Kamil Mrzygłód</dc:creator>
      <pubDate>Thu, 30 Apr 2026 10:42:05 +0000</pubDate>
      <link>https://dev.to/kamil-mrzyglod/two-days-chasing-a-sharedkey-signature-mismatch-fixing-azurermstoragetableentity-in-topaz-15ag</link>
      <guid>https://dev.to/kamil-mrzyglod/two-days-chasing-a-sharedkey-signature-mismatch-fixing-azurermstoragetableentity-in-topaz-15ag</guid>
      <description>&lt;p&gt;Some bugs announce themselves loudly. A null pointer in a hot path, a missing route that returns 404 to every request — the kind of thing that fails immediately and points straight at the cause. Others are quieter. They let most of the stack work correctly and only reveal themselves at the intersection of two independently correct but mutually incompatible assumptions. The &lt;code&gt;azurerm_storage_table_entity&lt;/code&gt; failure was the second kind.&lt;/p&gt;

&lt;p&gt;This post is an account of a two-day investigation into a persistent &lt;code&gt;401 Unauthorized&lt;/code&gt; response on Terraform table entity operations, the four separate bugs we found along the way, and how pairing with GitHub Copilot shaped the investigation. The fix touched authentication, HTTP routing, upsert semantics, and stream lifecycle — each uncovered only after the previous one was resolved.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You can read more about Topaz here: &lt;a href="https://topaz.thecloudtheory.com/" rel="noopener noreferrer"&gt;https://topaz.thecloudtheory.com/&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The starting point
&lt;/h2&gt;

&lt;p&gt;Topaz already had table storage support: accounts, tables, entity insert and query. The &lt;code&gt;azurerm_storage_table&lt;/code&gt; Terraform resource worked. What did not work was &lt;code&gt;azurerm_storage_table_entity&lt;/code&gt;. Any Terraform run that tried to create a table entity failed immediately:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: creating Entity (Partition Key "pk1" / Row Key "rk1" / ...):
  executing request: unexpected status 401 (401 Unauthorized) with EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The 401 was puzzling because the same storage account, created by the same Terraform run, authenticated correctly for every ARM-level operation. Listing keys worked. Creating the table worked. Only the data-plane entity operation failed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The investigation begins: is the key the problem?
&lt;/h2&gt;

&lt;p&gt;The first hypothesis was key mismatch. Terraform calls &lt;code&gt;listKeys&lt;/code&gt; to get a storage account key and then uses it to sign data-plane requests with HMAC-SHA256. If Topaz returned a different key than it used to verify the signature, every request would fail.&lt;/p&gt;

&lt;p&gt;To confirm or rule this out, we added diagnostic logging across the authentication path. The &lt;code&gt;TableStorageSecurityProvider&lt;/code&gt; was modified to emit the full stored keys (as base64 and as raw bytes in hex), the received signature, the computed signature for both keys, the full &lt;code&gt;Authorization&lt;/code&gt; header, every request header with its raw bytes, and the exact string-to-sign in hex. &lt;code&gt;ListStorageAccountKeysEndpoint&lt;/code&gt; was modified to log a prefix of both keys on every call, so we could correlate what Terraform received with what Topaz verified against.&lt;/p&gt;

&lt;p&gt;The first finding was that the keys were stable. Terraform's &lt;code&gt;listKeys&lt;/code&gt; calls (there were four of them per apply — the provider is aggressive about refreshing credentials) consistently received the same key1 prefix. The key logged at verification time matched. The key mismatch hypothesis was eliminated.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ruling out Topaz restarts
&lt;/h2&gt;

&lt;p&gt;One suspicious pattern appeared in the logs: at a certain point during a test run, ARM requests started returning &lt;code&gt;no route to host&lt;/code&gt; on port 8899. This looked like the Topaz container crashing and restarting mid-test — which would regenerate the storage keys and make Terraform's cached key stale.&lt;/p&gt;

&lt;p&gt;We checked the Docker container lifecycle carefully. The logs showed two AMQP listener errors at startup (a known benign issue with port reuse in the Docker test environment) but no crash or restart. The &lt;code&gt;no route to host&lt;/code&gt; was a red herring — it appeared after Terraform had already failed and was in its cleanup phase, not before. Cross-referencing the &lt;code&gt;listKeys&lt;/code&gt; timestamps with the entity request timestamps confirmed: the key prefix was identical across all four calls in the same test run. No restart, no key regeneration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Narrowing the scope: the isolated test
&lt;/h2&gt;

&lt;p&gt;At this point we were spending time waiting for the full storage batch test to complete — roughly thirty minutes per run because Terraform retries failing requests with exponential backoff before giving up. To eliminate noise from other resources in the same test, we created a minimal isolated scenario: a single resource group, a single storage account, a single table, and a single entity. Nothing else.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_storage_table_entity"&lt;/span&gt; &lt;span class="s2"&gt;"entity"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;storage_table_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_storage_table&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tbl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;partition_key&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"pk1"&lt;/span&gt;
  &lt;span class="nx"&gt;row_key&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"rk1"&lt;/span&gt;
  &lt;span class="nx"&gt;entity&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"isolated-entity"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This reduced the iteration cycle and made the logs dramatically easier to read.&lt;/p&gt;

&lt;h2&gt;
  
  
  The actual bug: URL encoding
&lt;/h2&gt;

&lt;p&gt;With a clean log, the problem became visible. Here is what Topaz logged as the string-to-sign it computed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;GET\n\n\nThu, 30 Apr 2026 06:43:27 GMT\n/tfisoentityacct/isoentities(PartitionKey='pk1',%20RowKey='rk1')
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the TF debug log showed the URL Terraform actually sent the request to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;GET https://tfisoentityacct.table.storage.topaz.local.dev:8890/isoentities%28PartitionKey=%27pk1%27,%20RowKey=%27rk1%27%29
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The entity key lookup path contains parentheses and single quotes. Terraform's go-azure-sdk encodes those: &lt;code&gt;(&lt;/code&gt; → &lt;code&gt;%28&lt;/code&gt;, &lt;code&gt;'&lt;/code&gt; → &lt;code&gt;%27&lt;/code&gt;. It signs the request using the raw encoded URL. ASP.NET Core receives the request and makes the decoded path available through &lt;code&gt;HttpRequest.Path&lt;/code&gt;. When Topaz called &lt;code&gt;ToString()&lt;/code&gt; on that path, it got the decoded form — &lt;code&gt;(PartitionKey='pk1',&lt;/code&gt; — which is not what was signed. The HMAC computation was based on a different string than what the client used.&lt;/p&gt;
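&lt;p&gt;The mismatch is easy to demonstrate outside the emulator. A minimal Python sketch (illustrative key and account name, not Topaz code) signs the same request twice — once with the raw wire path, once with the decoded path ASP.NET Core exposes:&lt;/p&gt;

```python
import base64
import hashlib
import hmac

# Illustrative account key — not a real credential.
key = base64.b64decode("c2VjcmV0LWFjY291bnQta2V5")
date = "Thu, 30 Apr 2026 06:43:27 GMT"

def sign(path: str) -> str:
    # Table-service SharedKey string-to-sign:
    # VERB \n Content-MD5 \n Content-Type \n Date \n CanonicalizedResource
    string_to_sign = f"GET\n\n\n{date}\n/tfisoentityacct{path}"
    mac = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256)
    return base64.b64encode(mac.digest()).decode()

raw_wire = "/isoentities%28PartitionKey=%27pk1%27,%20RowKey=%27rk1%27%29"
decoded  = "/isoentities(PartitionKey='pk1', RowKey='rk1')"

print(sign(raw_wire) == sign(decoded))  # False — different bytes, different HMAC
```

&lt;p&gt;Terraform signed the first form; Topaz verified the second. Same key, same date, different canonicalized resource — hence the 401.&lt;/p&gt;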

&lt;p&gt;The fix was to read the raw request target before ASP.NET Core's decoding step using &lt;code&gt;IHttpRequestFeature.RawTarget&lt;/code&gt;, which preserves the wire-format path exactly as the client sent it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;rawTarget&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Features&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Get&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IHttpRequestFeature&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;()?.&lt;/span&gt;&lt;span class="n"&gt;RawTarget&lt;/span&gt;
                &lt;span class="p"&gt;??&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Value&lt;/span&gt;
                &lt;span class="p"&gt;??&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Empty&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;queryIndex&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;rawTarget&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;IndexOf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sc"&gt;'?'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;rawPath&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;queryIndex&lt;/span&gt; &lt;span class="p"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="n"&gt;rawTarget&lt;/span&gt;&lt;span class="p"&gt;[..&lt;/span&gt;&lt;span class="n"&gt;queryIndex&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;rawTarget&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This was a one-line conceptual fix that required updating the &lt;code&gt;IsRequestAuthorized&lt;/code&gt; call in all fourteen table endpoint classes. The base class &lt;code&gt;TableDataPlaneEndpointBase&lt;/code&gt; was refactored to accept an &lt;code&gt;HttpContext&lt;/code&gt; and extract the raw path internally, so all call sites became a single-argument call.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bug two: the MERGE verb
&lt;/h2&gt;

&lt;p&gt;With auth fixed, the next run reached Terraform's actual entity creation step — and hit a new error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Request MERGE /isoentities(PartitionKey='pk1',%20RowKey='rk1') has no corresponding endpoint assigned.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Azure Table Storage REST API uses HTTP &lt;code&gt;MERGE&lt;/code&gt; for Insert-or-Merge operations. Topaz's &lt;code&gt;InsertOrMergeTableEntityEndpoint&lt;/code&gt; only declared &lt;code&gt;POST&lt;/code&gt; in its &lt;code&gt;Endpoints&lt;/code&gt; array:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="n"&gt;Endpoints&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;@"POST /^.*?\(PartitionKey='.*?',(%20|\s)?RowKey='.*?'\)$"&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Terraform's go-azure-sdk uses &lt;code&gt;MERGE&lt;/code&gt; as the verb. The fix was straightforward — add &lt;code&gt;MERGE&lt;/code&gt; to the array:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="n"&gt;Endpoints&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="s"&gt;@"POST /^.*?\(PartitionKey='.*?',(%20|\s)?RowKey='.*?'\)$"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s"&gt;@"MERGE /^.*?\(PartitionKey='.*?',(%20|\s)?RowKey='.*?'\)$"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;];&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Bug three: Insert-or-Merge semantics
&lt;/h2&gt;

&lt;p&gt;With routing fixed, the next run returned &lt;code&gt;404 Not Found&lt;/code&gt;. The &lt;code&gt;HandleUpdateEntityRequest&lt;/code&gt; path calls &lt;code&gt;DataPlane.UpdateEntity&lt;/code&gt;, which throws &lt;code&gt;EntityNotFoundException&lt;/code&gt; when the entity file does not exist on disk:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;File&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Exists&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;entityPath&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;==&lt;/span&gt; &lt;span class="k"&gt;false&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;EntityNotFoundException&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For a &lt;code&gt;PUT&lt;/code&gt; (Replace) or &lt;code&gt;PATCH&lt;/code&gt; (Merge), throwing here is correct — updating a non-existent entity should fail. But &lt;code&gt;MERGE&lt;/code&gt; in Table Storage semantics means &lt;em&gt;Insert-or-Merge&lt;/em&gt;: create the entity if it does not exist, merge the properties if it does. The existing code had no path for that case.&lt;/p&gt;
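&lt;p&gt;The required contract can be sketched with a plain dictionary — an illustration of the semantics only, not Topaz's storage layer:&lt;/p&gt;

```python
def insert_or_merge(table, key, props):
    """Insert-or-Merge: create the entity if absent, merge properties if present."""
    merged = dict(table.get(key, {}))  # start from the existing entity, if any
    merged.update(props)               # incoming properties win on conflict
    table[key] = merged

table = {}
insert_or_merge(table, ("pk1", "rk1"), {"name": "isolated-entity"})  # insert path
insert_or_merge(table, ("pk1", "rk1"), {"status": "active"})         # merge path
print(table[("pk1", "rk1")])
# {'name': 'isolated-entity', 'status': 'active'}
```

&lt;p&gt;A Replace-style update, by contrast, must fail on the first call — which is exactly what &lt;code&gt;UpdateEntity&lt;/code&gt; was doing for every verb.&lt;/p&gt;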

&lt;p&gt;We added an &lt;code&gt;UpsertEntity&lt;/code&gt; method to &lt;code&gt;TableServiceDataPlane&lt;/code&gt; that writes a new entity unconditionally, and a &lt;code&gt;upsert&lt;/code&gt; parameter to &lt;code&gt;HandleUpdateEntityRequest&lt;/code&gt; that catches &lt;code&gt;EntityNotFoundException&lt;/code&gt; and falls through to the upsert path:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;EntityNotFoundException&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;when&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;upsert&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;buffered&lt;/span&gt;&lt;span class="p"&gt;!.&lt;/span&gt;&lt;span class="n"&gt;Position&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;DataPlane&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;UpsertEntity&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buffered&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;subscriptionIdentifier&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;resourceGroupIdentifier&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;tableName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;storageAccountName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;partitionKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;rowKey&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;StatusCode&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;HttpStatusCode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NoContent&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Bug four: the disposed stream
&lt;/h2&gt;

&lt;p&gt;That &lt;code&gt;buffered!.Position = 0&lt;/code&gt; line is significant. The first implementation of the upsert fallback passed the original &lt;code&gt;input&lt;/code&gt; stream directly to &lt;code&gt;UpsertEntity&lt;/code&gt;. The next test run returned &lt;code&gt;500 Internal Server Error&lt;/code&gt; with &lt;code&gt;Cannot access a disposed object&lt;/code&gt; in the Topaz logs.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;UpdateEntity&lt;/code&gt; reads the body with a &lt;code&gt;StreamReader&lt;/code&gt; before throwing &lt;code&gt;EntityNotFoundException&lt;/code&gt;. &lt;code&gt;StreamReader&lt;/code&gt; disposes its underlying stream by default when it is disposed — which happens when the &lt;code&gt;using var sr&lt;/code&gt; block exits after the throw. By the time the &lt;code&gt;catch (EntityNotFoundException) when (upsert)&lt;/code&gt; block ran, the stream was already closed.&lt;/p&gt;
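&lt;p&gt;The ownership pattern is not specific to .NET. Python's &lt;code&gt;io.TextIOWrapper&lt;/code&gt; defaults the same way (its &lt;code&gt;detach()&lt;/code&gt; is the rough analogue of &lt;code&gt;leaveOpen: true&lt;/code&gt;), which makes for a compact illustration:&lt;/p&gt;

```python
import io

body = io.BytesIO(b'{"name": "isolated-entity"}')
reader = io.TextIOWrapper(body)  # a text reader wrapping the byte stream
reader.read()
reader.close()                   # closing the wrapper closes the wrapped stream too

print(body.closed)  # True — any later read raises ValueError
```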

&lt;p&gt;The fix was to buffer the request body into a &lt;code&gt;MemoryStream&lt;/code&gt; before entering the &lt;code&gt;UpdateEntity&lt;/code&gt; call when &lt;code&gt;upsert&lt;/code&gt; is true, and use &lt;code&gt;leaveOpen: true&lt;/code&gt; in &lt;code&gt;UpdateEntity&lt;/code&gt;'s &lt;code&gt;StreamReader&lt;/code&gt; to avoid closing the memory stream on an unexpected throw:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="n"&gt;MemoryStream&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="n"&gt;buffered&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;upsert&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;buffered&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;MemoryStream&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;CopyTo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buffered&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;buffered&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Position&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;input&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;buffered&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The test assertion
&lt;/h2&gt;

&lt;p&gt;After all four bugs were fixed, Terraform reported &lt;code&gt;Apply complete! Resources: 4 added, 0 changed, 0 destroyed.&lt;/code&gt; The test itself still failed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;System.InvalidOperationException : The node must be of type 'JsonValue'.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;terraform output -json&lt;/code&gt; response wraps each output in an envelope: &lt;code&gt;{ "partition_key": { "value": "pk1", "type": "string" } }&lt;/code&gt;. The initial test assertion called &lt;code&gt;.GetValue&amp;lt;string&amp;gt;()&lt;/code&gt; directly on &lt;code&gt;outputs["partition_key"]&lt;/code&gt;, which is a &lt;code&gt;JsonObject&lt;/code&gt;, not a &lt;code&gt;JsonValue&lt;/code&gt;. The fix was a one-line change — navigate to &lt;code&gt;["value"]&lt;/code&gt; first, matching every other test in the suite.&lt;/p&gt;
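&lt;p&gt;The same envelope navigation, sketched in Python terms for anyone outside the .NET test suite:&lt;/p&gt;

```python
import json

# The envelope `terraform output -json` returns for each output
raw = '{"partition_key": {"value": "pk1", "type": "string"}}'
outputs = json.loads(raw)

# outputs["partition_key"] is an object, not a scalar —
# navigate to "value" before reading the string:
value = outputs["partition_key"]["value"]
print(value)  # pk1
```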

&lt;h2&gt;
  
  
  The role of Copilot in the investigation
&lt;/h2&gt;

&lt;p&gt;Copilot assisted throughout both days: adding the diagnostic logging, generating the isolated Terraform scenario and test class, and suggesting fixes as each root cause was identified. The investigation benefited from being able to express hypotheses in natural language — "could the key bytes differ even if the base64 strings match?" led immediately to logging the raw bytes in hex — and from having a second reader of the logs who could cross-reference the wire trace in the TF debug log against the canonicalization logic in the Topaz source.&lt;/p&gt;

&lt;p&gt;The most valuable contribution was narrowing the investigation down to the URL encoding issue. The logs included a &lt;code&gt;DecodedPathSTS&lt;/code&gt; variant that computed the signature using &lt;code&gt;Uri.UnescapeDataString(absolutePath)&lt;/code&gt;. When that also failed to match, it confirmed the issue was not simply percent-encoded spaces — it was the full encoding of the path characters. Tracing &lt;code&gt;IHttpRequestFeature.RawTarget&lt;/code&gt; was the correct escape hatch once the root cause was understood.&lt;/p&gt;

&lt;h2&gt;
  
  
  What changed
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Area&lt;/th&gt;
&lt;th&gt;Change&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;TableStorageSecurityProvider&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Signs against &lt;code&gt;IHttpRequestFeature.RawTarget&lt;/code&gt; (raw wire path) instead of &lt;code&gt;HttpRequest.Path&lt;/code&gt; (decoded)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;InsertOrMergeTableEntityEndpoint&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Added &lt;code&gt;MERGE&lt;/code&gt; verb alongside &lt;code&gt;POST&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;TableServiceDataPlane&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Added &lt;code&gt;UpsertEntity&lt;/code&gt;; &lt;code&gt;UpdateEntity&lt;/code&gt; uses &lt;code&gt;leaveOpen: true&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;TableDataPlaneEndpointBase&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;HandleUpdateEntityRequest&lt;/code&gt; accepts &lt;code&gt;upsert&lt;/code&gt; flag; buffers body into &lt;code&gt;MemoryStream&lt;/code&gt; for fallback&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;All 14 table endpoints&lt;/td&gt;
&lt;td&gt;Updated to pass &lt;code&gt;HttpContext&lt;/code&gt; to &lt;code&gt;IsRequestAuthorized&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The storage API coverage page now marks &lt;code&gt;azurerm_storage_table_entity&lt;/code&gt; create, read, and delete as implemented. The full Terraform scenario — resource group, storage account, table, entity — runs end-to-end in roughly two minutes with no manual steps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Takeaway
&lt;/h2&gt;

&lt;p&gt;Each of the four bugs was independently plausible and independently fixable. But they were invisible until the previous one was resolved. You cannot see that &lt;code&gt;MERGE&lt;/code&gt; is unrouted if you never get past the 401. You cannot see the upsert semantics gap if &lt;code&gt;MERGE&lt;/code&gt; returns 404 before reaching the data plane. You cannot see the disposed stream if the upsert path is never exercised. The debugging process was necessarily sequential — each fix revealed the next layer.&lt;/p&gt;

&lt;p&gt;The URL encoding issue is worth calling out specifically. The Azure Table Storage SharedKey algorithm requires signing the &lt;em&gt;canonicalized resource&lt;/em&gt;, which is derived from the request URL. Whether that URL is in its raw percent-encoded form or its decoded form is not a detail the specification makes obvious. ASP.NET Core's routing infrastructure silently decodes the path before most code ever sees it. &lt;code&gt;IHttpRequestFeature.RawTarget&lt;/code&gt; is the correct place to read the unmodified wire path, and it is not the first thing you reach for. Getting there required enough diagnostic signal to rule out every other explanation first.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>terraform</category>
      <category>emulator</category>
      <category>devops</category>
    </item>
    <item>
      <title>Running Terraform against Azure locally, without a subscription</title>
      <dc:creator>Kamil Mrzygłód</dc:creator>
      <pubDate>Tue, 14 Apr 2026 10:16:23 +0000</pubDate>
      <link>https://dev.to/kamil-mrzyglod/running-terraform-against-azure-locally-without-a-subscription-1lpf</link>
      <guid>https://dev.to/kamil-mrzyglod/running-terraform-against-azure-locally-without-a-subscription-1lpf</guid>
      <description>&lt;p&gt;&lt;em&gt;This posts explains how Terraform is integrated with Topaz - Azure emulator for local development and testing. You can check it &lt;a href="https://topaz.thecloudtheory.com/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Every Terraform workflow that targets Azure needs the same things before it can do anything useful: an Azure subscription, a service principal or user account with the right permissions, and a network path to the Azure APIs. In a team setting you also need to make sure those credentials are available wherever &lt;code&gt;terraform apply&lt;/code&gt; runs — local machines, CI agents, staging pipelines. The feedback loop is slow, and the blast radius for a misconfigured apply is real.&lt;/p&gt;

&lt;p&gt;Topaz removes all of that. The same &lt;code&gt;terraform apply&lt;/code&gt; that would create resources in Azure can instead create them in a local emulator, with no subscription, no credentials to rotate, and no cloud charges. This post explains how the integration works and how to set it up.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the standard AzureRM provider works at all
&lt;/h2&gt;

&lt;p&gt;The key insight is that the AzureRM provider does not have Azure's API endpoints hardcoded. When it initialises, it fetches a metadata document from a discovery endpoint that describes where each Azure API lives. In a normal setup, that discovery endpoint is the Azure Resource Manager metadata endpoint at &lt;code&gt;management.azure.com&lt;/code&gt;. Once the provider has that document, it constructs every subsequent request URL from it.&lt;/p&gt;
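
&lt;p&gt;You can observe the discovery step directly. The sketch below fetches the same metadata document the provider consumes — the &lt;code&gt;api-version&lt;/code&gt; value is one real Azure accepts, but treat it as an assumption and check which versions Topaz actually serves:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# ask the emulator where each Azure API lives, exactly as the provider does
curl "https://topaz.local.dev:8899/metadata/endpoints?api-version=2022-09-01"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;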

&lt;p&gt;The &lt;code&gt;metadata_host&lt;/code&gt; setting exists precisely to point that discovery step somewhere else. Set it to Topaz's ARM port, and the provider fetches Topaz's metadata document instead. That document points every API URL — authentication, resource management, Key Vault, Storage — at the local emulator. The provider never knows it is not talking to Azure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"azurerm"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;features&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
  &lt;span class="nx"&gt;metadata_host&lt;/span&gt;                    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"topaz.local.dev:8899"&lt;/span&gt;
  &lt;span class="nx"&gt;resource_provider_registrations&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"none"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two settings are doing the work here. &lt;code&gt;metadata_host&lt;/code&gt; redirects endpoint discovery to Topaz. &lt;code&gt;resource_provider_registrations = "none"&lt;/code&gt; tells the provider not to attempt registering resource providers on startup — Topaz does not emulate the full registration flow, and it is not needed for local development anyway.&lt;/p&gt;

&lt;h2&gt;
  
  
  DNS setup
&lt;/h2&gt;

&lt;p&gt;The hostname &lt;code&gt;topaz.local.dev&lt;/code&gt; needs to resolve to &lt;code&gt;127.0.0.1&lt;/code&gt; on your machine. Topaz ships install scripts that configure this using dnsmasq, which handles the wildcard subdomains that services like Container Registry depend on. The getting started guide covers installation and the one-time DNS and certificate setup.&lt;/p&gt;
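
&lt;p&gt;For reference, the wildcard resolution those scripts configure boils down to a single dnsmasq rule. The file path below is a typical Linux location and an assumption — it may differ on your system:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# resolve topaz.local.dev and every subdomain of it to localhost
echo 'address=/topaz.local.dev/127.0.0.1' | sudo tee /etc/dnsmasq.d/topaz.conf
sudo systemctl restart dnsmasq
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;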

&lt;p&gt;For containerised environments, the same configuration can be handled at the container network level — the Terraform integration guide covers that path as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  Authentication
&lt;/h2&gt;

&lt;p&gt;AzureRM needs a credential to authenticate against whatever it thinks Azure is. With Topaz, that means authenticating against Topaz's local Entra ID emulation layer, which responds to the same OAuth2 endpoints the real Azure uses.&lt;/p&gt;

&lt;p&gt;The simplest approach for local development is to use the Azure CLI configured for Topaz's local cloud. Once configured, &lt;code&gt;az login&lt;/code&gt; issues a token from the local Entra layer, and the AzureRM provider picks it up through &lt;code&gt;DefaultAzureCredential&lt;/code&gt; exactly as it would in production. No service principal, no environment variables, no secrets.&lt;/p&gt;
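
&lt;p&gt;As a sketch of what "configured for Topaz's local cloud" means in practice: the Azure CLI supports registering custom clouds. The cloud name below is arbitrary, and the exact endpoints Topaz expects you to register are the ones documented in its getting started guide — the resource manager URL here simply mirrors the &lt;code&gt;metadata_host&lt;/code&gt; above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# register the emulator as a custom Azure CLI cloud, then log in against it
az cloud register --name TopazLocal \
    --endpoint-resource-manager "https://topaz.local.dev:8899"
az cloud set --name TopazLocal
az login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;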

&lt;p&gt;A fixed subscription ID avoids drift between runs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;topaz start &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--default-subscription&lt;/span&gt; 00000000-0000-0000-0000-000000000001 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--log-level&lt;/span&gt; Information
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pair that with:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;export ARM_SUBSCRIPTION_ID=00000000-0000-0000-0000-000000000001&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;With both in place, every &lt;code&gt;terraform plan&lt;/code&gt; and &lt;code&gt;apply&lt;/code&gt; targets the same subscription, the same resource IDs, and the same state — which matters when you want your CI runs to behave identically to your laptop.&lt;/p&gt;

&lt;h2&gt;
  
  
  A complete example
&lt;/h2&gt;

&lt;p&gt;With Topaz running, this is all it takes to create a resource group and a Key Vault locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;azurerm&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/azurerm"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"= 4.67.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"azurerm"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;features&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
  &lt;span class="nx"&gt;metadata_host&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"topaz.local.dev:8899"&lt;/span&gt;
  &lt;span class="nx"&gt;resource_provider_registrations&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"none"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_resource_group"&lt;/span&gt; &lt;span class="s2"&gt;"example"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"rg-local"&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"westeurope"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_key_vault"&lt;/span&gt; &lt;span class="s2"&gt;"example"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"kv-local"&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;
  &lt;span class="nx"&gt;resource_group_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;tenant_id&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"50717675-3e5e-4a1e-8cb5-c62d8be8ca48"&lt;/span&gt;
  &lt;span class="nx"&gt;sku_name&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"standard"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init
terraform apply &lt;span class="nt"&gt;-auto-approve&lt;/span&gt;
terraform destroy &lt;span class="nt"&gt;-auto-approve&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The tenant ID above is Topaz's built-in local tenant — the same one &lt;code&gt;az login&lt;/code&gt; uses when configured for the local cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  AzAPI provider
&lt;/h2&gt;

&lt;p&gt;If your Terraform configuration uses &lt;code&gt;azapi&lt;/code&gt; resources alongside &lt;code&gt;azurerm&lt;/code&gt;, the setup is straightforward. Keep the &lt;code&gt;azurerm&lt;/code&gt; provider configured for Topaz metadata and add the &lt;code&gt;azapi&lt;/code&gt; provider declaration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;azurerm&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/azurerm"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"= 4.67.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nx"&gt;azapi&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Azure/azapi"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"azurerm"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;features&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
  &lt;span class="nx"&gt;metadata_host&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"topaz.local.dev:8899"&lt;/span&gt;
  &lt;span class="nx"&gt;resource_provider_registrations&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"none"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"azapi"&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because &lt;code&gt;azapi&lt;/code&gt; inherits its endpoint configuration from the same environment, and Topaz's metadata document covers the resource management endpoints that &lt;code&gt;azapi&lt;/code&gt; targets, no additional configuration is needed.&lt;/p&gt;
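
&lt;p&gt;As an illustration — assuming azapi 2.x syntax, where &lt;code&gt;body&lt;/code&gt; is a plain HCL object, and reusing the resource group from the earlier example — an &lt;code&gt;azapi_resource&lt;/code&gt; deploys against Topaz the same way:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;# a storage account created through the raw ARM API surface
resource "azapi_resource" "example" {
  type      = "Microsoft.Storage/storageAccounts@2023-01-01"
  name      = "stlocalexample"
  location  = azurerm_resource_group.example.location
  parent_id = azurerm_resource_group.example.id

  body = {
    kind = "StorageV2"
    sku  = { name = "Standard_LRS" }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;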

&lt;h2&gt;
  
  
  Using it in CI
&lt;/h2&gt;

&lt;p&gt;The setup above works identically in a CI pipeline. Run Topaz as a service container or a background step, set &lt;code&gt;ARM_SUBSCRIPTION_ID&lt;/code&gt;, and run &lt;code&gt;terraform apply&lt;/code&gt; as normal. No Azure credentials in the pipeline, no cost per run, no rate limiting from the Azure APIs. The CI/CD integration guide has ready-to-use examples for GitHub Actions and Azure DevOps.&lt;/p&gt;
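
&lt;p&gt;Stripped to its essentials, the pipeline script is just the local workflow. This sketch assumes Topaz is already running as a service container or background step:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# no Azure credentials anywhere in the pipeline
export ARM_SUBSCRIPTION_ID=00000000-0000-0000-0000-000000000001
terraform init -input=false
terraform apply -auto-approve -input=false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;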

&lt;h2&gt;
  
  
  What works today
&lt;/h2&gt;

&lt;p&gt;Topaz currently supports Terraform workflows for Azure Storage, Key Vault, Service Bus, Event Hubs, Container Registry, and Resource Manager operations including resource groups and ARM template deployments. The API coverage docs list which operations are implemented per service.&lt;/p&gt;

&lt;p&gt;Not every AzureRM resource type is emulated yet. If you hit a resource that Topaz does not support, the provider will return a 404 or an unsupported operation error. Check the API coverage page for current status, and open an issue if something you need is missing.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>terraform</category>
      <category>testing</category>
    </item>
  </channel>
</rss>
