
Daniel Pepuho

Posted on • Originally published at danielcristho.site

Exploring Logging in Caddy

“At some point, I realized that ‘it works’ is not enough. I need to know how it works too.”

When something slows down or fails, the question is no longer “is the service running?” but:

- Which component handled this request?

- How long did it take?

- Did the proxy retry or switch upstreams?

Without clear answers, even a simple issue can turn into hours of debugging 😆. This is where logging stops being a checkbox feature and starts becoming a core part of system design.


Why Aren't Default Logs Enough?

By default, Caddy already gives you access logs. They show incoming requests, response status codes, and basic metadata. For simple setups, that might be sufficient.

But once a system grows even slightly, those logs start to fall short.

The first issue is noise. Default logs tend to mix everything together: health checks, internal probes, real user traffic. Important signals get buried under routine requests.

The second issue is missing context. When a request goes through a reverse proxy with load balancing, the most interesting part is often not the request itself, but what happens behind the scenes:

- How long did the upstream take to respond?

- Was there a retry or a fallback?

Without this information, debugging becomes guesswork. A 502 response tells you that something failed, but not where or why.

This post explores how to make Caddy logs actually useful, especially in a Dockerized setup with reverse proxying and load-balanced Go APIs.

Understanding Caddy's Logging Model

Like most web servers, Caddy separates access logs from error logs. Access logs tell you what happened to a request. Error logs tell you why something went wrong.

1. Access Logs

Access logs describe the lifecycle of an incoming request:

- who made the request

- what was requested

- how the server responded

- how long it took

This is where most operational insights come from. When properly configured, access logs can tell you:

- which upstream handled a request

- how traffic is distributed

- where latency is introduced

2. Error Logs

Error logs focus on failures and exceptional cases:

- upstream connection errors

- timeouts

- TLS or protocol issues

These logs are usually quieter, but they become critical when things break. They complement access logs by explaining why a request failed, not just that it failed.

Enabling JSON Access Logs

For this setup, I’m using a simple Go application as the backend worker. Each instance responds with its hostname, making it easy to see how requests are distributed by the load balancer.

func handler(w http.ResponseWriter, r *http.Request) {
    hostname, _ := os.Hostname()
    fmt.Fprintf(w, "Hello from %s\n", hostname)
}
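For reference, here is a minimal runnable version of that worker. It's a sketch of the idea rather than the exact code from the original project; the only assumption baked in is that it listens on port 8081, the upstream port used in the Caddyfile later on.

package main

import (
    "fmt"
    "log"
    "net/http"
    "os"
)

// handler replies with the container hostname so you can see which
// worker the load balancer picked for each request.
func handler(w http.ResponseWriter, r *http.Request) {
    hostname, _ := os.Hostname()
    fmt.Fprintf(w, "Hello from %s\n", hostname)
}

func main() {
    http.HandleFunc("/", handler)
    // 8081 matches the upstream port in the Caddyfile examples below.
    log.Fatal(http.ListenAndServe(":8081", nil))
}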

The full load-balancing setup (Caddy in front of three Go workers running in Docker) has already been covered in a previous post.
If you need the full context, you can find it here:

https://danielcristho.site/blog/go-caddy-load-balancer#the-project-setup

Logging in a Docker-Based Load Balancer

When running Caddy inside Docker, the most practical logging strategy is to write logs to stdout and let Docker handle log collection.

When you first run the stack:

$ docker compose up -d --build 

$ docker logs load_balancer

You’ll mostly see logs like:

{"level":"info","ts":1765883971.5864136,"msg":"maxprocs: Leaving GOMAXPROCS=4: CPU quota undefined"}
{"level":"info","ts":1765883971.586685,"msg":"GOMEMLIMIT is updated","package":"github.com/KimMachineGun/automemlimit/memlimit","GOMEMLIMIT":16911289958,"previous":9223372036854775807}
{"level":"info","ts":1765883971.586744,"msg":"using config from file","file":"/etc/caddy/Caddyfile"}
{"level":"info","ts":1765883971.5883265,"msg":"adapted config to JSON","adapter":"caddyfile"}
{"level":"warn","ts":1765883971.5883417,"msg":"Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":2}
{"level":"info","ts":1765883971.5897346,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"warn","ts":1765883971.5899262,"logger":"http.auto_https","msg":"server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server","server_name":"srv0","http_port":80}
{"level":"warn","ts":1765883971.5901978,"logger":"http","msg":"HTTP/2 skipped because it requires TLS","network":"tcp","addr":":80"}
{"level":"warn","ts":1765883971.590206,"logger":"http","msg":"HTTP/3 skipped because it requires TLS","network":"tcp","addr":":80"}
{"level":"info","ts":1765883971.5902092,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
{"level":"info","ts":1765883971.5905173,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc00070c200"}
{"level":"info","ts":1765883971.5906706,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1765883971.591785,"msg":"serving initial configuration"}
{"level":"info","ts":1765883971.5926607,"logger":"tls","msg":"cleaning storage unit","storage":"FileStorage:/data/caddy"}
{"level":"info","ts":1765883971.5949264,"logger":"tls","msg":"finished cleaning storage units"}

These are startup and internal runtime logs, not HTTP access logs. Access logs only appear after real HTTP traffic flows through Caddy.

To get JSON access logs, enable JSON logging globally in the Caddyfile and add a log block to the site that writes to stdout, like this:

{
    log {
        level INFO
        format json
    }
}

:80 {
    log {
        output stdout
        format json
    }

    reverse_proxy worker_1:8081 worker_2:8081 worker_3:8081 {
        lb_policy random
        health_uri /
        health_interval 3s
    }
}

At this point:

- logs are emitted to stdout

- Docker captures them automatically

Once the stack is running, send a few requests:

curl http://localhost:8082
curl http://localhost:8082
curl http://localhost:8082

You should see responses like:

Hello from worker_1
Hello from worker_2
Hello from worker_3

Now check the logs again:

{"level":"info","ts":1765884414.2170205,"logger":"tls","msg":"cleaning storage unit","storage":"FileStorage:/data/caddy"}
{"level":"info","ts":1765884414.2187994,"logger":"tls","msg":"finished cleaning storage units"}
{"level":"info","ts":1765884423.2440372,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_ip":"172.21.0.1","remote_port":"51346","client_ip":"172.21.0.1","proto":"HTTP/1.1","method":"GET","host":"localhost:8082","uri":"/","headers":{"Sec-Fetch-Mode":["navigate"],"Sec-Gpc":["1"],"Connection":["keep-alive"],"Priority":["u=0, i"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"],"Sec-Fetch-Dest":["document"],"Sec-Fetch-Site":["none"],"User-Agent":["Mozilla/5.0 (X11; Linux x86_64; rv:145.0) Gecko/20100101 Firefox/145.0"],"Sec-Fetch-User":["?1"],"Accept-Language":["en-US,en;q=0.5"],"Accept-Encoding":["gzip, deflate, br, zstd"],"Upgrade-Insecure-Requests":["1"]}},"bytes_read":0,"user_id":"","duration":0.000733253,"size":20,"status":200,"resp_headers":{"Content-Length":["20"],"Via":["1.1 Caddy"],"Content-Type":["text/plain; charset=utf-8"],"Date":["Tue, 16 Dec 2025 11:27:03 GMT"]}}
{"level":"info","ts":1765884423.7748682,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_ip":"172.21.0.1","remote_port":"51346","client_ip":"172.21.0.1","proto":"HTTP/1.1","method":"GET","host":"localhost:8082","uri":"/","headers":{"User-Agent":["Mozilla/5.0 (X11; Linux x86_64; rv:145.0) Gecko/20100101 Firefox/145.0"],"Sec-Fetch-User":["?1"],"Sec-Gpc":["1"],"Upgrade-Insecure-Requests":["1"],"Sec-Fetch-Dest":["document"],"Sec-Fetch-Site":["none"],"Accept-Language":["en-US,en;q=0.5"],"Accept-Encoding":["gzip, deflate, br, zstd"],"Priority":["u=0, i"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"],"Connection":["keep-alive"],"Sec-Fetch-Mode":["navigate"]}},"bytes_read":0,"user_id":"","duration":0.000783659,"size":20,"status":200,"resp_headers":{"Via":["1.1 Caddy"],"Date":["Tue, 16 Dec 2025 11:27:03 GMT"],"Content-Length":["20"],"Content-Type":["text/plain; charset=utf-8"]}}
{"level":"info","ts":1765884424.1060758,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_ip":"172.21.0.1","remote_port":"51346","client_ip":"172.21.0.1","proto":"HTTP/1.1","method":"GET","host":"localhost:8082","uri":"/","headers":{"Upgrade-Insecure-Requests":["1"],"Sec-Fetch-Site":["none"],"Accept-Language":["en-US,en;q=0.5"],"Sec-Gpc":["1"],"Sec-Fetch-Dest":["document"],"Sec-Fetch-User":["?1"],"User-Agent":["Mozilla/5.0 (X11; Linux x86_64; rv:145.0) Gecko/20100101 Firefox/145.0"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"],"Accept-Encoding":["gzip, deflate, br, zstd"],"Sec-Fetch-Mode":["navigate"],"Priority":["u=0, i"],"Connection":["keep-alive"]}},"bytes_read":0,"user_id":"","duration":0.002117571,"size":20,"status":200,"resp_headers":{"Via":["1.1 Caddy"],"Date":["Tue, 16 Dec 2025 11:27:04 GMT"],"Content-Length":["20"],"Content-Type":["text/plain; charset=utf-8"]}}
{"level":"info","ts":1765884424.456112,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_ip":"172.21.0.1","remote_port":"51346","client_ip":"172.21.0.1","proto":"HTTP/1.1","method":"GET","host":"localhost:8082","uri":"/","headers":{"Connection":["keep-alive"],"Upgrade-Insecure-Requests":["1"],"Sec-Fetch-Site":["none"],"Sec-Fetch-User":["?1"],"Priority":["u=0, i"],"Accept-Language":["en-US,en;q=0.5"],"Sec-Fetch-Mode":["navigate"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"],"Accept-Encoding":["gzip, deflate, br, zstd"],"Sec-Fetch-Dest":["document"],"User-Agent":["Mozilla/5.0 (X11; Linux x86_64; rv:145.0) Gecko/20100101 Firefox/145.0"],"Sec-Gpc":["1"]}},"bytes_read":0,"user_id":"","duration":0.000959646,"size":20,"status":200,"resp_headers":{"Content-Type":["text/plain; charset=utf-8"],"Via":["1.1 Caddy"],"Date":["Tue, 16 Dec 2025 11:27:04 GMT"],"Content-Length":["20"]}}
{"level":"info","ts":1765884440.3606126,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_ip":"172.21.0.1","remote_port":"49074","client_ip":"172.21.0.1","proto":"HTTP/1.1","method":"GET","host":"localhost:8082","uri":"/","headers":{"User-Agent":["curl/7.81.0"],"Accept":["*/*"]}},"bytes_read":0,"user_id":"","duration":0.000425884,"size":20,"status":200,"resp_headers":{"Date":["Tue, 16 Dec 2025 11:27:20 GMT"],"Content-Length":["20"],"Content-Type":["text/plain; charset=utf-8"],"Via":["1.1 Caddy"]}}
{"level":"info","ts":1765884442.148407,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_ip":"172.21.0.1","remote_port":"49090","client_ip":"172.21.0.1","proto":"HTTP/1.1","method":"GET","host":"localhost:8082","uri":"/","headers":{"Accept":["*/*"],"User-Agent":["curl/7.81.0"]}},"bytes_read":0,"user_id":"","duration":0.000592974,"size":20,"status":200,"resp_headers":{"Via":["1.1 Caddy"],"Date":["Tue, 16 Dec 2025 11:27:22 GMT"],"Content-Length":["20"],"Content-Type":["text/plain; charset=utf-8"]}}
{"level":"info","ts":1765884442.6677225,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_ip":"172.21.0.1","remote_port":"49102","client_ip":"172.21.0.1","proto":"HTTP/1.1","method":"GET","host":"localhost:8082","uri":"/","headers":{"User-Agent":["curl/7.81.0"],"Accept":["*/*"]}},"bytes_read":0,"user_id":"","duration":0.000432397,"size":20,"status":200,"resp_headers":{"Via":["1.1 Caddy"],"Content-Length":["20"],"Content-Type":["text/plain; charset=utf-8"],"Date":["Tue, 16 Dec 2025 11:27:22 GMT"]}}
{"level":"info","ts":1765884443.2411807,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_ip":"172.21.0.1","remote_port":"49106","client_ip":"172.21.0.1","proto":"HTTP/1.1","method":"GET","host":"localhost:8082","uri":"/","headers":{"Accept":["*/*"],"User-Agent":["curl/7.81.0"]}},"bytes_read":0,"user_id":"","duration":0.000496529,"size":20,"status":200,"resp_headers":{"Content-Type":["text/plain; charset=utf-8"],"Date":["Tue, 16 Dec 2025 11:27:23 GMT"],"Via":["1.1 Caddy"],"Content-Length":["20"]}}

To make the logs more readable, you can use jq:

docker logs -f load_balancer 2>&1 | jq -R 'fromjson? | {level, logger, msg, request: {method: .request.method, uri: .request.uri}, status, duration}'
...

{
  "level": "info",
  "logger": "tls",
  "msg": "finished cleaning storage units",
  "request": {
    "method": null,
    "uri": null
  },
  "status": null,
  "duration": null
}
{
  "level": "info",
  "logger": "http.log.access.log0",
  "msg": "handled request",
  "request": {
    "method": "GET",
    "uri": "/"
  },
  "status": 200,
  "duration": 0.000733253
}

...
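jq is handy for ad-hoc inspection, but since the backend in this setup is already written in Go, the same filtering can be done with a small program. This is only a sketch: the JSON field names match the access log entries above (including the http.log.access.log0 logger name, which can differ if you name your logs differently), and reading from stdin is simply my assumption about how you would pipe docker logs into it.

package main

import (
    "bufio"
    "encoding/json"
    "fmt"
    "os"
)

// accessLog captures only the fields we care about from Caddy's JSON access logs.
type accessLog struct {
    Logger   string  `json:"logger"`
    Status   int     `json:"status"`
    Duration float64 `json:"duration"` // seconds by default
    Request  struct {
        Method string `json:"method"`
        URI    string `json:"uri"`
    } `json:"request"`
}

func main() {
    scanner := bufio.NewScanner(os.Stdin)
    scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // access log lines can be long
    for scanner.Scan() {
        var entry accessLog
        if err := json.Unmarshal(scanner.Bytes(), &entry); err != nil {
            continue // skip anything that isn't a JSON log line
        }
        // Keep only HTTP access logs, dropping startup and TLS entries
        // instead of printing them with null fields.
        if entry.Logger != "http.log.access.log0" {
            continue
        }
        fmt.Printf("%s %s -> %d (%.1f ms)\n",
            entry.Request.Method, entry.Request.URI, entry.Status, entry.Duration*1000)
    }
}

Piping docker logs -f load_balancer 2>&1 into it prints one compact line per request, without the null-filled entries that showed up in the jq output.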

Which Upstream Handled This Request?

Using Debug Logs to Expose Load Balancer Decisions

Earlier, we saw that standard access logs don’t explicitly show which backend handled a request. Caddy can expose upstream selection details, just not in access logs. The information lives in debug-level logs, specifically inside the reverse proxy handler.

By enabling debug mode, Caddy reveals how routing decisions are made internally.

{
    log {
        level DEBUG
        format json
    }
}

In this mode, a single request produces a debug entry like the one below; the "dial":"worker_3:8081" field shows which upstream handled the request:

{"level":"debug","logger":"http.handlers.reverse_proxy","msg":"selected upstream","dial":"worker_3:8081","total_upstreams":3}

Simulating Failure (Error Logs)

So far, everything looks fine. Requests succeed and the access logs show no anomalies.

Real systems rarely fail all at once. They fail partially: one backend goes down, a health check starts failing, or a connection becomes unstable. In this setup, stopping one worker is enough:

$ docker stop worker_2

Try hitting the URL a few more times:

$ curl http://localhost:8082
Hello from worker_3
Hello from worker_1
Hello from worker_3

From the client’s perspective, nothing breaks. Requests are still served, and responses keep coming back from the remaining healthy workers. This is exactly what we expect from a load balancer.

Now check the logs again and you'll see an unhealthy upstream:

{"level":"info","ts":1765992098.8407533,"logger":"http.handlers.reverse_proxy.health_checker.active","msg":"HTTP request failed","host":"worker_2:8081","error":"Get \"http://worker_2:8081/\": dial tcp: lookup worker_2 on 127.0.0.11:53: no such host"}
{"level":"info","ts":1765992101.6597724,"logger":"http.handlers.reverse_proxy.health_checker.active","msg":"HTTP request failed","host":"worker_2:8081","error":"Get \"http://worker_2:8081/\": dial tcp: lookup worker_2 on 127.0.0.11:53: no such host"}
{"level":"info","ts":1765992104.640808,"logger":"http.handlers.reverse_proxy.health_checker.active","msg":"HTTP request failed","host":"worker_2:8081","error":"Get \"http://worker_2:8081/\": dial tcp: lookup worker_2 on 127.0.0.11:53: no such host"}

This message is emitted by Caddy’s active health checker. It indicates that:

- worker_2 is no longer reachable

- the failure happens during health check probes

Customizing Log Output

Up to this point, all examples use JSON access logs. They are structured, machine-friendly, and easy to filter.

Caddy allows you to control how logs are encoded, not just what gets logged. This is useful when the goal shifts from processing logs to reading them.

Console Logs

Besides json, Caddy also provides a console encoder. It formats logs in a more friendly way while still preserving structure.

{
    log {
        level INFO
        format json
    }
}

:80 {
    log {
        output stdout
        format console
    }

    reverse_proxy worker_1:8081 worker_2:8081 worker_3:8081 {
        lb_policy random
        health_uri /
        health_interval 3s
    }
}
$ docker restart load_balancer 

After restarting the container, access logs will look like this:

$ docker logs load_balancer
2025/12/17 17:31:44.788 INFO    http.log.access.log0    handled request {"request": {"remote_ip": "172.21.0.1", "remote_port": "36598", "client_ip": "172.21.0.1", "proto": "HTTP/1.1", "method": "GET", "host": "localhost:8082", "uri": "/", "headers": {"User-Agent": ["curl/7.81.0"], "Accept": ["*/*"]}}, "bytes_read": 0, "user_id": "", "duration": 0.001321876, "size": 20, "status": 200, "resp_headers": {"Date": ["Wed, 17 Dec 2025 17:31:44 GMT"], "Content-Length": ["20"], "Content-Type": ["text/plain; charset=utf-8"], "Via": ["1.1 Caddy"]}}

Choosing the Right Format

There is no universally “correct” log format.

- JSON works best when logs are consumed by tools or pipelines.

- Console works better when you are actively watching logs and debugging in real time.

Shaping Logs Without Changing Their Meaning

Caddy also allows fine-grained control over how log fields are named and formatted. This is not about adding new information, but about making existing information easier to work with.

For example, you can:

- rename common fields

- control timestamp formats

- standardize duration units

- normalize log levels

... 
{
    log {
        output stdout
        format json {
            time_format     "2006-01-02 15:04:05 MST"
            time_local
            duration_format "ms"
            level_format    "upper"
        }
    }
}

Here’s the updated access log after applying the custom JSON format:

{"level":"debug","ts":1765993007.632117,"logger":"http.handlers.reverse_proxy","msg":"selected upstream","dial":"worker_1:8081","total_upstreams":3}
{"level":"debug","ts":1765993007.6339025,"logger":"http.handlers.reverse_proxy","msg":"upstream roundtrip","upstream":"worker_1:8081","duration":0.001633852,"request":{"remote_ip":"172.21.0.1","remote_port":"36378","client_ip":"172.21.0.1","proto":"HTTP/1.1","method":"GET","host":"localhost:8082","uri":"/","headers":{"X-Forwarded-For":["172.21.0.1"],"X-Forwarded-Proto":["http"],"X-Forwarded-Host":["localhost:8082"],"Via":["1.1 Caddy"],"User-Agent":["curl/7.81.0"],"Accept":["*/*"]}},"headers":{"Date":["Wed, 17 Dec 2025 17:36:47 GMT"],"Content-Length":["20"],"Content-Type":["text/plain; charset=utf-8"]},"status":200}
{"level":"INFO","ts":"2025-12-17 17:36:47 UTC","logger":"http.log.access.log0","msg":"handled request","request":{"remote_ip":"172.21.0.1","remote_port":"36378","client_ip":"172.21.0.1","proto":"HTTP/1.1","method":"GET","host":"localhost:8082","uri":"/","headers":{"User-Agent":["curl/7.81.0"],"Accept":["*/*"]}},"bytes_read":0,"user_id":"","duration":2,"size":20,"status":200,"resp_headers":{"Via":["1.1 Caddy"],"Date":["Wed, 17 Dec 2025 17:36:47 GMT"],"Content-Length":["20"],"Content-Type":["text/plain; charset=utf-8"]}}

What Changed?

The structure of the log is still JSON, but key fields are now normalized:

- level is uppercase (INFO)

- timestamps are human-readable and in local time

- duration is expressed in milliseconds instead of fractional seconds

This makes the log easier to read during debugging.

Reducing Noise and Risk in Logs

Once logs are readable and structured, the next concern is volume and exposure. Not every field in an access log is equally useful, and some of them are better left out entirely.

Caddy allows selective filtering at the log level, making it possible to reduce noise without changing how requests are handled.

Reducing Noise by Removing Fields

Access logs often include fields that are technically correct but not relevant, like request headers.

If headers are not part of your debugging workflow, you can remove them:

...
log {
    format filter {
        request delete
        wrap json
    }
}

With the request field removed entirely, access logs become significantly smaller:

{"level":"info","msg":"handled request","duration":0.001026852,"size":20,"status":200}

In other cases, it’s enough to remove only specific nested fields:

...
log {
    format filter {
        request>headers>Authorization delete
        resp_headers>Server delete
        wrap json
    }
}

Instead of deleting the entire request, selectively removing fields gives a better balance; the harmless headers still show up in the log:

"headers": {"User-Agent": ["curl/7.81.0"], "Accept": ["*/*"]}

Obscuring Sensitive Information

Some fields should not be stored in plain text at all. Rather than relying on downstream redaction, Caddy can obscure sensitive data at the source:

log {
    format filter {
        request>client_ip ip_mask {
            ipv4 24
            ipv6 56
        }
        wrap json
    }
}

With IP masking enabled, the change is subtle but important:

{"level":"info","ts":1765994419.4105134,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_ip":"172.21.0.1","remote_port":"58562","client_ip":"172.21.0.0","proto":"HTTP/1.1","method":"GET","host":"localhost:8082","uri":"/","headers":{"Accept":["*/*"],"User-Agent":["curl/7.81.0"]}},"bytes_read":0,"user_id":"","duration":0.00146917,"size":20,"status":200,"resp_headers":{"Content-Type":["text/plain; charset=utf-8"],"Date":["Wed, 17 Dec 2025 18:00:19 GMT"],"Via":["1.1 Caddy"],"Content-Length":["20"]}}
"client_ip": "172.21.0.0"

The request remains traceable across logs, but the exact client address is no longer exposed.
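If the ipv4 24 value looks arbitrary, it's just a CIDR prefix length: the first 24 bits of the address are kept and the host bits are zeroed. As a small illustration of the arithmetic (not of Caddy's internals), the equivalent in Go:

package main

import (
    "fmt"
    "net"
)

func main() {
    ip := net.ParseIP("172.21.0.1")
    // A /24 mask keeps the network part and zeroes the host part.
    masked := ip.Mask(net.CIDRMask(24, 32))
    fmt.Println(masked) // 172.21.0.0
}

The ipv6 56 setting works the same way, just with a longer prefix on a 128-bit address.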

Filtering is not about producing minimal logs. It’s about producing intentional logs.

I only apply filters after understanding:

- which fields I actually use

- which ones are never read

TL;DR

- Enabling logs is easy. Making them useful takes effort.

- Access logs show what happened to a request.

- Debug logs show how requests are routed.

- Error logs show what fails behind the scenes.

- Filtering and formatting reduce noise without hiding behavior.

Aight, that's all. Thank you for taking the time to read this post.

Feel free to give me feedback, tips, or a different perspective. I’d love to hear yours and continue the discussion.

Happy logging!🧐

References:

- Caddy Documentation: log

- Caddy Documentation: How Logging Works
