If you're running Logic Apps Standard with Application Insights enabled, there's a good chance you're paying more than you need to. Getting this right means crossing multiple doc pages, filtering out configurations that silently do nothing, and eventually landing on a setting most people never reach.
I ran a controlled experiment across three configurations over 36 hours to find out exactly what works — and what doesn't. Here's what the data showed.
## The Problem
Logic Apps Standard emits a lot of telemetry. Every workflow run, every action, every outbound HTTP call — all of it flows into Application Insights by default. At scale, that adds up fast.
The official docs tell you to look at host.json for telemetry control, but the Logic Apps Standard reference page doesn't enumerate the applicationInsights properties — it defers to the Azure Functions host.json reference. That's actually correct (Logic Apps Standard runs on the Functions v4 host), but it leaves a gap where you're piecing together settings from two different doc pages, community blogs, and Stack Overflow answers.
## The Experiment
I set up a Logic Apps Standard instance (logicappshub, West Europe) with a timer workflow firing every 5 minutes. Each run made 7 parallel HTTP calls — a mix of valid endpoints, 404s, 500s, DNS failures, and a call that cascaded into a second stateful workflow. This generated realistic, high-volume telemetry similar to what you'd see in a busy integration environment.
I then ran three phases (compared over equal 9-hour windows):
| Phase | Schema | Sampling | Config |
|---|---|---|---|
| 1 — Baseline | v1 (default) | Off | Bare host.json |
| 2 — Full config | v2 | On | Sampling + dependency tracking off |
| 3 — Schema only | v2 | Off | v2 schema only, nothing else |
All three phases used the same workflow, same infrastructure, same traffic pattern. The only variable was host.json.
## The Numbers

### Billable ingestion — equal 9-hour windows per phase
To ensure a fair comparison, each phase was queried over an identical 9-hour window:
| Phase | Window (UTC) |
|---|---|
| Phase 1 | 2026-05-08 20:00 → 2026-05-09 05:00 |
| Phase 2 | 2026-05-09 10:00 → 2026-05-09 19:00 |
| Phase 3 | 2026-05-09 22:30 → 2026-05-10 07:30 |
| Data Type | Phase 1 (v1, no sampling) | Phase 2 (v2 + sampling) | Phase 3 (v2, no sampling) |
|---|---|---|---|
| AppTraces | 0.135 GB | 0.004 GB | 0.003 GB |
| AppDependencies | 0.101 GB | 0.000 GB | 0.004 GB |
| AppMetrics | 0.049 GB | 0.010 GB | 0.007 GB |
| AppRequests | 0.018 GB | 0.039 GB | 0.039 GB |
| AppPerformanceCounters | 0.006 GB | 0.004 GB | 0.002 GB |
| AppExceptions | 0.000 GB | 0.001 GB | 0.001 GB |
| Total | 0.309 GB | 0.058 GB | 0.056 GB |
| vs baseline | — | -81% | -82% |
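The percentage figures follow directly from the per-phase totals. A quick sanity check of that arithmetic, using the values from the table:

```python
# Sanity-check the "vs baseline" percentages from the ingestion table.
# Values are the per-phase billable totals (GB) reported above.
baseline = 0.309   # Phase 1: v1 schema, no sampling
phase2 = 0.058     # Phase 2: v2 schema + sampling
phase3 = 0.056     # Phase 3: v2 schema only

def reduction_pct(phase_gb: float, baseline_gb: float) -> int:
    """Percentage reduction versus baseline, rounded to a whole percent."""
    return round((1 - phase_gb / baseline_gb) * 100)

print(reduction_pct(phase2, baseline))  # → 81
print(reduction_pct(phase3, baseline))  # → 82
```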
Phase 2 and Phase 3 are essentially identical — 0.058 GB vs 0.056 GB — despite Phase 3 having no sampling at all.
That's the headline finding.
## Finding 1: v2 Schema Does Most of the Work
Switching Runtime.ApplicationInsightTelemetryVersion from v1 (the default) to v2 reduced ingestion by ~80% on its own. No sampling required.
v2 is the GA-recommended telemetry schema and emits fewer duplicate rows across the Traces and Requests tables. The difference in practice was dramatic:
| Item Type | Phase 1 (v1) | Phase 3 (v2, no sampling) |
|---|---|---|
| traces per 5-min window | 414–2,328 | 6–12 |
| dependencies per 5-min window | 452–3,385 | 28–45 |
Traces dropped from thousands per window to single digits. Dependencies dropped by two orders of magnitude. Just from a schema flag.
The config:

```json
{
  "extensions": {
    "workflow": {
      "Settings": {
        "Runtime.ApplicationInsightTelemetryVersion": "v2"
      }
    }
  }
}
```
One setting. No trade-offs. Do this first.
## Finding 2: enableDependencyTracking: false Eliminates AppDependencies Completely
AppDependencies was 33% of total ingestion in Phase 1 (0.101 GB). Phase 3 (v2 schema alone) brought it down to 0.004 GB. Disabling dependency tracking entirely dropped it to zero.
```json
"applicationInsights": {
  "enableDependencyTracking": false
}
```
The trade-off: You lose all HTTP call detail in App Insights — no URL, no duration, no status code per outbound call. If you need that data for day-to-day monitoring, keep it on. If you're only looking at run-level success/failure, turn it off.
Important: This setting must be a sibling of samplingSettings, not nested inside it. Several community blog posts (AzureTechInsider among them) show it nested inside samplingSettings. The Functions host silently ignores unknown properties inside samplingSettings, so the setting does nothing — which is why you see "I disabled dependency tracking and it didn't work" reports. The correct placement:
```json
"applicationInsights": {
  "samplingSettings": { "isEnabled": true, "maxTelemetryItemsPerSecond": 5 },
  "enableDependencyTracking": false  // ← sibling, not inside samplingSettings
}
```
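Because the misplacement fails silently, it can be worth linting host.json before deployment. A minimal sketch of such a check (my own helper, not part of any Azure tooling):

```python
import json

def check_dependency_tracking_placement(host_json: str) -> list[str]:
    """Warn if enableDependencyTracking is nested inside samplingSettings,
    where the Functions host silently ignores it."""
    cfg = json.loads(host_json)
    ai = cfg.get("logging", {}).get("applicationInsights", {})
    problems = []
    if "enableDependencyTracking" in ai.get("samplingSettings", {}):
        problems.append(
            "enableDependencyTracking is nested inside samplingSettings "
            "and will be ignored; move it up one level."
        )
    return problems

# A host.json fragment with the common mistake:
bad = '''{
  "logging": {
    "applicationInsights": {
      "samplingSettings": { "isEnabled": true, "enableDependencyTracking": false }
    }
  }
}'''
print(check_dependency_tracking_placement(bad))  # one warning
```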
## Finding 3: Sampling Adds Only Marginal Extra Reduction
Phase 2 (v2 + sampling) vs Phase 3 (v2, no sampling): 0.058 GB vs 0.056 GB. A difference of just 0.002 GB — essentially the same cost.
Sampling does reduce AppTraces and AppRequests slightly further, but the cost is real: detailed action-level logs get dropped. You'll know a workflow run failed, but you may not know which action failed or what the input was. That makes root cause analysis harder.
If you do enable sampling, always exclude exceptions:
```json
"samplingSettings": {
  "isEnabled": true,
  "maxTelemetryItemsPerSecond": 5,
  "excludedTypes": "Exception"
}
```
This ensures errors are never dropped. I confirmed in Phase 2 that exceptions bypass sampling correctly — 43–77 exceptions per 5-minute window were captured consistently despite the sampling cap.
Warning: Don't touch initialSamplingPercentage, minSamplingPercentage, or maxSamplingPercentage unless you know what you're doing. A community report (Azure/logicapps Discussion #682) showed setting those values together caused all telemetry to vanish from App Insights entirely. Stick to maxTelemetryItemsPerSecond and excludedTypes.
## Finding 4: Some Telemetry Bypasses Sampling — By Design
AppRequests actually increased as a percentage of total ingestion after enabling sampling. In Phase 1 it was 6% of total; in Phase 2 it was 67%.
This isn't a bug. Logic Apps Standard emits certain records — Host.Results for /flowruns, Host.Workflow for /flowhistories — directly from the Logic Apps extension rather than through the Functions host's logger pipeline. Those records carry no LogLevel property and are not subject to samplingSettings or logLevel filters.
Treat samplingSettings as controlling ~80–90% of telemetry, not 100%. There's currently no host.json knob to suppress these extension-emitted records.
## Finding 5: enablePerformanceCountersCollection: false Had Minimal Effect
I expected this to eliminate AppPerformanceCounters entirely. Instead it went from 0.007 GB → 0.004 GB — a real but marginal reduction. Not a significant cost lever in this environment. Your mileage may vary depending on scale.
## Finding 6: Disabling Dependency Tracking Changes How Errors Appear
In Phase 1, failed HTTP calls (404, 500, DNS failures) appeared as failed dependencies. In Phase 2 with dependency tracking disabled, those same failures appeared as exceptions instead.
This is worth knowing before you make the change. If you have dashboards or alerts based on AppDependencies failure rates, those will need updating. The failure data is still there — it's just in AppExceptions now.
## The Right host.json — Ordered by Impact
Apply these changes one at a time so you can measure each one's effect.
### Step 1 — Switch to v2 schema (do this first, always)
```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
    "version": "[1.*, 2.0.0)"
  },
  "extensions": {
    "workflow": {
      "Settings": {
        "Runtime.ApplicationInsightTelemetryVersion": "v2"
      }
    }
  }
}
```
Expected result: ~80% ingestion reduction. No troubleshooting impact.
### Step 2 — Disable dependency tracking if you don't need per-call detail
```json
"applicationInsights": {
  "enableDependencyTracking": false
}
```
Expected result: AppDependencies drops to zero. Trade-off: no HTTP call detail.
### Step 3 — Add sampling only if further reduction is needed
```json
"applicationInsights": {
  "samplingSettings": {
    "isEnabled": true,
    "maxTelemetryItemsPerSecond": 5,
    "excludedTypes": "Exception"
  },
  "enableDependencyTracking": false,
  "enablePerformanceCountersCollection": false,
  "enableLiveMetrics": true
}
```
Expected result: additional ~10% reduction. Trade-off: action-level traces may be dropped.
### The complete cost-control host.json
```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
    "version": "[1.*, 2.0.0)"
  },
  "logging": {
    "logLevel": {
      "default": "Warning",
      "Workflow.Host": "Warning",
      "Workflow.Jobs": "Warning",
      "Workflow.Runtime": "Warning",
      "Workflow.Operations.Runs": "Information",
      "Workflow.Operations.Actions": "Information",
      "Workflow.Operations.Triggers": "Information",
      "Host.Aggregator": "Error"
    },
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 5,
        "excludedTypes": "Exception"
      },
      "enableDependencyTracking": false,
      "enablePerformanceCountersCollection": false,
      "enableLiveMetrics": true
    }
  },
  "extensions": {
    "workflow": {
      "Settings": {
        "Runtime.ApplicationInsightTelemetryVersion": "v2"
      }
    }
  }
}
```
## Troubleshooting Without Redeployment
When you need full telemetry to debug an issue, disable sampling instantly via an app setting — no redeployment needed:
```
AzureFunctionsJobHost__logging__applicationInsights__samplingSettings__isEnabled = false
```
Set it in the Azure portal, reproduce the issue, capture what you need, then remove it. The setting overrides host.json at runtime because Logic Apps Standard uses the standard ASP.NET Core configuration system where environment variables take precedence.
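The app-setting name is just the host.json path with `__` separators and the `AzureFunctionsJobHost` prefix. A small helper illustrating the convention (my own sketch, not part of any Azure SDK):

```python
def to_app_setting(host_json_path: str, value: str) -> str:
    """Convert a host.json config path (colon-separated, ASP.NET Core style)
    to the equivalent Azure app setting that overrides it at runtime."""
    return f"AzureFunctionsJobHost__{host_json_path.replace(':', '__')} = {value}"

print(to_app_setting(
    "logging:applicationInsights:samplingSettings:isEnabled", "false"))
# → AzureFunctionsJobHost__logging__applicationInsights__samplingSettings__isEnabled = false
```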
## Now What — 3 Queries for the Trade-offs
### 1. After disabling dependency tracking, find failed outbound calls in AppExceptions
Failed HTTP calls (404, 500, DNS errors) move from AppDependencies to AppExceptions when enableDependencyTracking: false. Update any existing failure alerts to look here instead:
```kusto
exceptions
| where timestamp > ago(1h)
| project timestamp, outerMessage, type, operation_Id
| order by timestamp desc
```
### 2. Confirm failed runs are always captured despite sampling
Run-level records bypass samplingSettings — you should always see failed runs even with aggressive sampling on:
```kusto
requests
| where timestamp > ago(1h)
| where success == false
| project timestamp, name, resultCode, duration, operation_Id
| order by timestamp desc
```
### 3. Measure your actual cost reduction (Log Analytics)
Allow 1–2 hours after a config change for the Usage table to aggregate, then compare before and after:

```kusto
Usage
| where TimeGenerated > ago(6h)
| where IsBillable == true
| summarize IngestedGB = round(sum(Quantity) / 1024, 3) by DataType
| order by IngestedGB desc
```
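To turn GB saved into money saved, multiply by your workspace's ingestion rate. A rough extrapolation sketch (the ~$2.30/GB pay-as-you-go rate is an assumption; check the Azure Monitor pricing page for your region and tier):

```python
# Rough monthly-cost extrapolation from a measured window.
# ASSUMPTION: ~$2.30/GB pay-as-you-go Log Analytics ingestion; varies by region/tier.
PRICE_PER_GB = 2.30

def monthly_cost(gb_in_window: float, window_hours: float,
                 price: float = PRICE_PER_GB) -> float:
    """Extrapolate a window's billable ingestion to a 30-day month."""
    gb_per_month = gb_in_window / window_hours * 24 * 30
    return round(gb_per_month * price, 2)

# Phase 1 vs Phase 3 over the 9-hour windows measured above:
print(monthly_cost(0.309, 9))  # → 56.86 (baseline)
print(monthly_cost(0.056, 9))  # → 10.3  (v2 schema only)
```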
## Summary
| Change | Expected reduction | Risk |
|---|---|---|
| Switch to v2 schema | ~80% | None |
| enableDependencyTracking: false | Eliminates AppDependencies | Lose per-call HTTP detail |
| Add adaptive sampling | Additional ~10% | May lose action trace detail |
The surprise finding: adaptive sampling is the least impactful of the three. Most teams should stop at Step 1 (v2 schema) and only go further if their bill still warrants it. v2 schema alone cuts 80% of ingestion with zero observability trade-off — that's the change to make today.