Eelco Los

Why Your ASP.NET Core LogLevel 'Warning' Still Sends Information Logs to Application Insights

I first noticed this pattern while helping a teammate trim noisy telemetry costs. We had set the global logging level in an ASP.NET Core app to Warning. We redeployed. Yet the Azure Portal continued to show a steady stream of Information traces arriving from the same service. It felt like the platform was ignoring us. It wasn’t. We were ignoring a subtle layering rule.

The puzzle

Configuration (simplified):

"Logging": {
  "LogLevel": {
    "Default": "Warning"
  },
  "ApplicationInsights": {
    "LogLevel": {
      "Default": "Information"
    }
  }
}

Everyone expected Warning to be the minimum log level. Application Insights kept receiving Information entries. The instinct is to blame the SDK. The real cause is that provider-specific configuration lowers the threshold for that provider alone. The Application Insights logger never re-applies a minimum; it faithfully forwards whatever the Microsoft.Extensions.Logging infrastructure lets through. That infrastructure had already been told: for the provider whose alias is ApplicationInsights (AI for short), allow Information.
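That alias is not something you invent in appsettings; it comes from the provider type itself. Paraphrased from the SDK source, with members and secondary interfaces omitted:

[ProviderAlias("ApplicationInsights")]
public class ApplicationInsightsLoggerProvider : ILoggerProvider
{
    // CreateLogger, Dispose, ... (omitted)
}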

How the layers actually line up

Think of the journey of a log:

  1. Your code calls logger.LogInformation("User logged in").
  2. The generic logging infrastructure consults all configured filters. A global default says Warning. A provider override for ApplicationInsights says Information. Because the target is that provider, Information is allowed.
  3. The Application Insights logger receives the entry. It calls its own IsEnabled. That only checks two things: the level is not None and telemetry has not been globally disabled.
  4. The logger maps LogLevel.Information to SeverityLevel.Information and sends a TraceTelemetry.
  5. Optional sampling or processors may still drop it downstream, but cost has already been incurred in your process and often in ingestion.

No secret minimum. No hidden widening. Just configuration precedence.
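
If it helps to see the two competing rules expressed in code, here is a minimal sketch of roughly the same configuration built programmatically; the provider-specific filter wins for the AI provider because it is the more specific rule:

// Sketch: roughly the programmatic equivalent of the appsettings fragment above.
builder.Logging.SetMinimumLevel(LogLevel.Warning);      // global default
builder.Logging.AddFilter<
    Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>(
    category: null, level: LogLevel.Information);       // provider-specific override wins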

The confirmation in code

IsEnabled inside the AI logger:

public bool IsEnabled(LogLevel logLevel)
{
    return logLevel != LogLevel.None && this.telemetryClient.IsEnabled();
}

Level mapping (translation only):

private static SeverityLevel GetSeverityLevel(LogLevel logLevel)
{
    switch (logLevel)
    {
        case LogLevel.Critical: return SeverityLevel.Critical;
        case LogLevel.Error: return SeverityLevel.Error;
        case LogLevel.Warning: return SeverityLevel.Warning;
        case LogLevel.Information: return SeverityLevel.Information;
        case LogLevel.Debug:
        case LogLevel.Trace:
        default: return SeverityLevel.Verbose;
    }
}

TelemetryClient.IsEnabled():

public bool IsEnabled()
{
    return !this.configuration.DisableTelemetry;
}

So if telemetry is not disabled, everything that passed filtering is shipped.
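
Putting the pieces together, the provider's write path looks roughly like this, in simplified and paraphrased form (the real SDK also handles exceptions, scopes, and custom properties):

// Simplified, paraphrased sketch of the provider's Log method; not the exact SDK code.
public void Log<TState>(LogLevel logLevel, EventId eventId, TState state,
    Exception? exception, Func<TState, Exception?, string> formatter)
{
    if (!this.IsEnabled(logLevel))   // only checks for None and DisableTelemetry
    {
        return;
    }

    var trace = new TraceTelemetry(formatter(state, exception), GetSeverityLevel(logLevel));
    this.telemetryClient.TrackTrace(trace);
}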

Where teams stumble

A provider override left behind by a template or a copy-paste. A category override intended for console logs but applied broadly. Sampling that creates the illusion filtering is working because only some Information entries survive. All of these hide the fact that the AI provider was handed the lower level in the first place.

Making your minimum level actually stick

Remove the provider override if you do not want it:

"Logging": {
  "LogLevel": {
    "Default": "Warning"
  }
}

Or explicitly align it so future contributors see intent:

"Logging": {
  "LogLevel": {
    "Default": "Warning"
  },
  "ApplicationInsights": {
    "LogLevel": {
      "Default": "Warning"
    }
  }
}

If you prefer a programmatic assertion:

builder.Logging.AddFilter<
    Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>(
    category: string.Empty,
    level: LogLevel.Warning);

Fine-grained category tuning still works:

builder.Logging
    .AddFilter("Microsoft", LogLevel.Warning)
    .AddFilter("MyApp.NoisyComponent", LogLevel.Error)
    .AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>(
        "MyApp.Important", LogLevel.Information);

If you must drop lower-severity traces after they enter the pipeline (for example, during a temporary diagnostic burst), a telemetry processor can discard them, but remember you are still paying the cost of generating them:

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Drops trace telemetry below Warning after it has already been generated in-process.
public class MinimumSeverityProcessor : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;
    public MinimumSeverityProcessor(ITelemetryProcessor next) => _next = next;

    public void Process(ITelemetry item)
    {
        // Discard Verbose and Information traces; everything else continues down the chain.
        if (item is TraceTelemetry tt &&
            tt.SeverityLevel.HasValue &&
            tt.SeverityLevel < SeverityLevel.Warning)
        {
            return;
        }
        _next.Process(item);
    }
}

Registering it with the telemetry pipeline (the exact pattern varies by SDK version) wires it in. Use this sparingly.
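
In ASP.NET Core, one common registration shape is the AddApplicationInsightsTelemetryProcessor extension from the Microsoft.ApplicationInsights.AspNetCore package:

// Register the processor; it is appended to the default telemetry processor chain.
builder.Services.AddApplicationInsightsTelemetry();
builder.Services.AddApplicationInsightsTelemetryProcessor<MinimumSeverityProcessor>();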

How to verify instead of assuming

Add console logging side by side. Emit one log at every level:

logger.LogTrace("T");
logger.LogDebug("D");
logger.LogInformation("I");
logger.LogWarning("W");
logger.LogError("E");
logger.LogCritical("C");

In the Azure Logs query editor:

traces
| where message in ("T","D","I","W","E","C")
| project timestamp, message, severityLevel
| order by timestamp desc

If you see Information rows and you did not intend to, search your configuration for a provider or category override before blaming sampling or the SDK.
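
A quick way to do that search is to dump the merged Logging section at startup and look for an ApplicationInsights key. A minimal sketch, assuming a minimal-API app where app is the WebApplication returned by builder.Build(); diagnostic aid only, remove it once the stray override is found:

// Print the effective Logging configuration with all sources merged.
foreach (var kvp in app.Configuration.GetSection("Logging").AsEnumerable())
{
    if (kvp.Value is not null)
    {
        Console.WriteLine($"{kvp.Key} = {kvp.Value}");
    }
}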

A minimal repro and its repair

Broken:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddApplicationInsightsTelemetry();

builder.Logging.ClearProviders();
builder.Logging.AddApplicationInsights();
builder.Logging.AddConsole();

builder.Configuration.AddInMemoryCollection(new Dictionary<string,string?>
{
    ["Logging:LogLevel:Default"] = "Warning",
    ["Logging:ApplicationInsights:LogLevel:Default"] = "Information"] // silent lowering
});

var app = builder.Build();

app.MapGet("/", (ILogger<Program> log) =>
{
    log.LogInformation("This still goes to AI");
    log.LogWarning("This also goes to AI");
    return "Hi";
});

app.Run();

Repaired:

builder.Configuration.AddInMemoryCollection(new Dictionary<string,string?>
{
    ["Logging:LogLevel:Default"] = "Warning"
});

Or restate the intended log level explicitly via AddFilter, as shown earlier.

Useful quick snippets

An appsettings template:

{
  "ApplicationInsights": {
    "ConnectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Warning",
      "MyApp.ImportantArea": "Information"
    }
  }
}

A runtime toggle (handy for experiments, not production best practice):

app.MapGet("/toggle-ai", (TelemetryConfiguration cfg) =>
{
    cfg.DisableTelemetry = !cfg.DisableTelemetry;
    return $"Telemetry enabled: {!cfg.DisableTelemetry}";
});

When a provider override is the right choice

You might intentionally gather richer telemetry centrally while keeping local console lean. You might temporarily elevate verbosity during an incident. You might selectively keep a high value category at Information in AI while leaving everything else at Warning. All valid, provided the difference is deliberate and documented.

Final pre-deploy pass

Read through your combined logging configuration. Search for ApplicationInsights under Logging. Confirm any category lines that lower severity truly need to. Confirm sampling configuration matches what you expect. Issue a burst of test logs and verify what surfaces in the portal.

Further reading

Application Insights .NET SDK repository:

https://github.com/microsoft/ApplicationInsights-dotnet

Official logging docs:

https://learn.microsoft.com/aspnet/core/fundamentals/logging/

Sampling guidance:

https://learn.microsoft.com/azure/azure-monitor/app/sampling

Closing

Nothing magical forced those Information traces through. Configuration precedence invited them. Once you internalize the path, debugging level mismatches becomes quick and boring. That is exactly what you want in an observability foundation.

What are your experiences? Let me know in the comments below.
