When you run a mobile API (like we do at libelo), you quickly reach a point where you want to understand how people are actually using it. Not just error counts or latency histograms, but the real detail: which operations are called, from which platform, with what location data, at what time. We log all requests with Azure Application Insights, but that is geared toward troubleshooting and comes with sampling risks and a limited retention window.
In our backend, I added an extra option: write every incoming API request directly to Azure Table Storage, straight from the APIM policy, in a fire-and-forget way. The cost is minimal, the data is ours to query whenever we like, and no extra latency is added to API responses. Most importantly, we keep this data for further analytics and platform usage insights.
This post walks through how to set it up.
Why Azure Table Storage
Table Storage gives you cheap, durable, schemaless rows with a partition and row key. That is all you need for an audit log. You can partition by date so that querying a single day is fast, you can set a retention policy to purge old data automatically, and you can query it directly from Azure Storage Explorer or export it to Power BI without any extra infrastructure.
The catch is that you need to give APIM a way to authenticate against the Table Storage endpoint. The cleanest option for write-only access from a policy is a Shared Access Signature (SAS) token stored as a Named Value. More on that below.
The send-one-way-request policy
APIM has a policy called send-one-way-request that issues an HTTP request and does not wait for a response before continuing. This is exactly what we want here. The client calling your API never experiences the round-trip to Table Storage, and the policy does not block the outbound flow.
You can place this policy in both the <outbound> section (after a successful response) and the <on-error> section (when the backend or the policy itself fails). Covering both sections means you get a complete picture of every request, regardless of outcome.
Step by step
Setting up Table Storage
First, you need a storage account and a table. Use an existing one if you have it. Otherwise, create a standard StorageV2 account with Standard_LRS redundancy in the same region as your APIM instance.
Inside the storage account, create a table. In the Azure Portal, navigate to your storage account, open the Tables blade, and click + Table. Give it a name like apirequests. That is the table name you will reference in the policy URL.
Generating the SAS token
APIM needs write access to insert new rows. Rather than giving it a full account key, you create a SAS token scoped to the table service with only the permissions it needs.
In the Azure Portal, navigate to your storage account and open the Shared access signature blade. Configure the following settings:
- Allowed services: Table only
- Allowed resource types: Object (this covers individual table entities)
- Allowed permissions: Add (this is the only permission APIM needs to insert rows)
- Expiry: Set this to something practical for your rotation cadence. One year is common for internal tooling, but make sure you have a process to rotate it before it expires.
- Allowed protocols: HTTPS only
Click Generate SAS and connection string. Copy the value in the SAS token field. It starts with sv= and will look something like this:
sv=2022-11-02&ss=t&srt=o&sp=a&se=2026-12-31T00:00:00Z&st=2025-01-01T00:00:00Z&spr=https&sig=<signature>
You will use this value in the next step.
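As a quick sanity check, the token can be decomposed: it is just a URL query string, and each claim maps to one of the settings above. A minimal sketch in Python, using the placeholder example token from above:

```python
from urllib.parse import parse_qs

# The SAS token from the portal is just a query string. Parsing it makes
# the individual claims visible (this is the placeholder example token).
sas = ("sv=2022-11-02&ss=t&srt=o&sp=a"
       "&se=2026-12-31T00:00:00Z&st=2025-01-01T00:00:00Z"
       "&spr=https&sig=placeholder")

parts = {k: v[0] for k, v in parse_qs(sas).items()}
print(parts["ss"])   # "t" -> Table service only
print(parts["srt"])  # "o" -> Object resource type
print(parts["sp"])   # "a" -> Add permission only
print(parts["se"])   # expiry timestamp
```

If any of these claims are missing or wrong, the insert calls from APIM will fail silently (fire-and-forget), so it pays to verify them before wiring up the policy.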
Storing the SAS token as a Named Value in APIM
APIM Named Values are the right place to store configuration that policies reference but that should not be hardcoded in the policy XML. They also support Key Vault references if you want to manage the secret lifecycle there.
In the Azure Portal, navigate to your APIM instance and open the Named values blade. Click + Add and fill in the following:
- Name: tablestorage-sas
- Display name: tablestorage-sas
- Type: Secret
- Value: the SAS token you generated above
Save it. The policy will reference this value as {{tablestorage-sas}}.
Creating a policy fragment
The tracking logic needs to run in two places: the <outbound> section fires after a successful response, and the <on-error> section fires when something goes wrong. Obviously, we don't want to duplicate the entire send-one-way-request block. APIM solves this with policy fragments: reusable snippets that you define once and reference by name from any policy.
The only structural difference between the success and error cases is the ErrorReason field. context.LastError?.Reason is safe to evaluate in both contexts: in the <outbound> path there is no last error, so the expression evaluates to null, and the ?? "" fallback in the body turns it into an empty string. That means the fragment body is identical for both cases, and you do not need any branching inside it.
In the Azure Portal, navigate to your APIM instance, open the APIs section, and click Policy fragments in the left menu. Click + Create and give the fragment the name track-api-request. Paste the following XML as the fragment body:
<fragment>
<send-one-way-request mode="new" timeout="10">
<set-url>https://<your-storage-account>.table.core.windows.net/apirequests?{{tablestorage-sas}}</set-url>
<set-method>POST</set-method>
<set-header name="Content-Type" exists-action="override">
<value>application/json</value>
</set-header>
<set-header name="Accept" exists-action="override">
<value>application/json;odata=nometadata</value>
</set-header>
<set-header name="x-ms-version" exists-action="override">
<value>2019-02-02</value>
</set-header>
<set-body>
@{
var now = DateTime.UtcNow;
var requestId = context.RequestId;
var rowKey = (DateTime.MaxValue.Ticks - now.Ticks).ToString("D19")
+ "_" + requestId;
return new JObject(
new JProperty("PartitionKey", now.ToString("yyyy-MM-dd")),
new JProperty("RowKey", rowKey),
new JProperty("Timestamp", now),
new JProperty("OperationId", requestId),
new JProperty("Operation", context.Operation.Name),
new JProperty("Path", context.Request.Url.Path),
new JProperty("Query", context.Request.Url.QueryString),
new JProperty("UserId", context.Request.Headers.GetValueOrDefault("app-user-id","")),
new JProperty("UserName", context.Request.Headers.GetValueOrDefault("app-user-name","")),
new JProperty("Lat", context.Request.Headers.GetValueOrDefault("x-lat","")),
new JProperty("Lon", context.Request.Headers.GetValueOrDefault("x-lon","")),
new JProperty("Language", context.Request.Headers.GetValueOrDefault("Accept-Language","")),
new JProperty("Platform", context.Request.Headers.GetValueOrDefault("x-platform","")),
new JProperty("Status", context.Response?.StatusCode ?? 0),
new JProperty("Ip", context.Request.IpAddress),
new JProperty("ErrorReason", context.LastError?.Reason ?? "")
).ToString();
}
</set-body>
</send-one-way-request>
</fragment>
A few things worth pointing out in the body expression.
The RowKey.
The row key is constructed by subtracting the current ticks from DateTime.MaxValue.Ticks and zero-padding the result to 19 digits, then appending the request ID. Table Storage sorts rows alphabetically within a partition, so this descending tick value means that when you open a day's partition in Storage Explorer or query it via the REST API, the most recent requests appear at the top. Without this trick you would have to sort results in the client.
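The same arithmetic can be sketched outside the policy. This mirrors the ticks computation in Python (the two tick constants are the standard .NET values for the Unix epoch and DateTime.MaxValue):

```python
from datetime import datetime, timezone

# .NET ticks are 100-ns intervals since 0001-01-01. UNIX_EPOCH_TICKS converts
# a Unix timestamp to ticks; DATETIME_MAX_TICKS mirrors DateTime.MaxValue.Ticks.
TICKS_PER_SECOND = 10_000_000
UNIX_EPOCH_TICKS = 621_355_968_000_000_000
DATETIME_MAX_TICKS = 3_155_378_975_999_999_999

def row_key(ts: datetime, request_id: str) -> str:
    ticks = UNIX_EPOCH_TICKS + int(ts.timestamp() * TICKS_PER_SECOND)
    # Zero-pad to 19 digits so string order matches numeric order.
    return f"{DATETIME_MAX_TICKS - ticks:019d}_{request_id}"

earlier = datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc)
later = datetime(2025, 1, 1, 12, 5, tzinfo=timezone.utc)

# Later requests produce lexicographically *smaller* keys, so they sort first.
assert row_key(later, "b") < row_key(earlier, "a")
```

The zero-padding is what makes the trick work: Table Storage compares row keys as strings, and without a fixed width a shorter number would sort before a longer one regardless of value.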
The PartitionKey.
Using the date in yyyy-MM-dd format keeps each day's data in its own partition. Queries scoped to a single day will only scan that partition, which keeps them fast even when the table accumulates months of data.
The fields.
The policy captures the APIM operation name, the full path and query string, user identity fields that were extracted from the JWT earlier in the inbound policy, device location and platform headers sent by the client, the HTTP status code from the response, and the client IP address. The ErrorReason field is populated from context.LastError?.Reason, which returns an empty string on the success path and the actual failure reason on the error path.
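To make the field list concrete, here is a sketch of the JSON entity one request would produce. All values below are made-up placeholders, not real traffic:

```python
import json
from datetime import datetime, timezone

# Sketch of the body the fragment POSTs for one request. Field names match
# the policy expression above; every value here is a hypothetical placeholder.
now = datetime(2025, 3, 14, 9, 30, tzinfo=timezone.utc)
entity = {
    "PartitionKey": now.strftime("%Y-%m-%d"),     # one partition per day
    "RowKey": "<descending-ticks>_<request-id>",  # see the RowKey note above
    "Operation": "GetNearbyStations",             # hypothetical operation name
    "Path": "/v1/stations/nearby",
    "Platform": "ios",
    "Status": 200,
    "ErrorReason": "",                            # empty on the success path
}
body = json.dumps(entity)
```

Table Storage infers property types from the JSON, so no schema needs to exist up front; adding a new field to the policy body simply starts populating a new column.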
The Accept header.
The value application/json;odata=nometadata tells the Table Storage REST API to omit OData metadata from the response. An Insert Entity call echoes the inserted entity back in its response body, so this keeps that echo minimal. Since the request is fire-and-forget the response is discarded anyway, but it is good practice and avoids any surprises if you later add response handling.
The APIM policy
With the fragment in place, the policy itself becomes lightweight. Both sections reference the fragment by name using <include-fragment>, and APIM expands it at runtime:
<outbound>
<base />
<include-fragment fragment-id="track-api-request" />
</outbound>
<on-error>
<base />
<include-fragment fragment-id="track-api-request" />
</on-error>
This is the policy you attach at the API or product level. The fragment handles all the tracking logic, and any future change to what gets recorded only needs to happen in one place.
Deploying with Bicep
If you are managing your APIM configuration with Bicep (or azd), the policy fragment maps to a Microsoft.ApiManagement/service/policyFragments child resource. Define the fragment body as a variable, substitute the storage account name at deploy time, and declare the resource as a dependency of any API that uses it:
var trackApiRequestFragment = '''
<fragment>
<send-one-way-request mode="new" timeout="10">
<set-url>https://__STORAGE_ACCOUNT_NAME__.table.core.windows.net/apirequests?{{tablestorage-sas}}</set-url>
...
</send-one-way-request>
</fragment>
'''
resource apimService 'Microsoft.ApiManagement/service@2023-05-01-preview' existing = {
name: apiManagementName
}
resource trackApiRequestPolicyFragment 'Microsoft.ApiManagement/service/policyFragments@2023-05-01-preview' = {
name: 'track-api-request'
parent: apimService
properties: {
description: 'Tracks every API request to Table Storage in a fire-and-forget way'
format: 'rawxml'
value: replace(trackApiRequestFragment, '__STORAGE_ACCOUNT_NAME__', storageAccount.name)
}
}
The main API policy then only needs the two <include-fragment> references, and the storage account name placeholder is resolved without touching the policy string itself. The Named Value holding the SAS token can also be linked to an Azure Key Vault secret. The {{tablestorage-sas}} placeholder in the fragment simply needs to match the name of the Named Value you created in the portal.
What you end up with
Once this is running, every API request that passes through APIM writes a row to the apirequests table. You can open Azure Storage Explorer and browse by date partition, export a day's worth of data to CSV, or query directly via the Table Storage REST API. For more structured analysis you can connect the table to Power BI or Azure Data Factory.
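As a sketch of the REST option, this builds the query URL for a single day's partition. The account name and date are placeholders; you would append your SAS token and send the x-ms-version and Accept headers shown earlier:

```python
from urllib.parse import quote

# Builds an OData $filter query scoped to one day's partition.
# Account name and date are hypothetical; append "&<sas-token>" before use.
account = "mystorageaccount"
day = "2025-03-14"
odata_filter = quote(f"PartitionKey eq '{day}'")
url = (f"https://{account}.table.core.windows.net/apirequests"
       f"?$filter={odata_filter}")
print(url)
```

Because the filter hits the partition key exactly, the service scans only that day's partition rather than the whole table.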
The setup has been running in production on libelo without any noticeable overhead on API response times, and the cost for the storage itself is negligible at consumer app scale. It gives a lightweight, always-on audit trail that complements Application Insights rather than replacing it, and it is easy to extend by adding fields to the JSON body whenever you need to track something new.
The main thing to watch out for is the SAS token expiry. Set a calendar reminder to rotate it before the expiry date, or move the secret into Key Vault and reference it from the Named Value so that rotation becomes a Key Vault operation rather than a manual portal step.
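A small script can read the expiry straight out of the token's se= claim and warn ahead of time. A sketch, assuming the token string is available to a scheduled job (the token below is a placeholder):

```python
from datetime import datetime, timezone
from urllib.parse import parse_qs

# Reads the expiry (se=) claim out of a SAS token and reports the days left,
# so a scheduled job can alert before the token lapses. Placeholder token.
sas = "sv=2022-11-02&ss=t&sp=a&se=2026-12-31T00:00:00Z&sig=placeholder"

expiry = datetime.fromisoformat(
    parse_qs(sas)["se"][0].replace("Z", "+00:00"))
days_left = (expiry - datetime.now(timezone.utc)).days
if days_left < 30:
    print(f"SAS token expires in {days_left} days -- rotate it")
```

Because inserts fail silently in a fire-and-forget policy, an expired token means rows quietly stop appearing, which makes proactive alerting worth the few lines.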
As a result, we now have a nice overview of the requests on our own platform.
