<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: knspar</title>
    <description>The latest articles on DEV Community by knspar (@knspar).</description>
    <link>https://dev.to/knspar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3924957%2Fd9d73b2c-4076-483e-921b-94eb7f5983ca.png</url>
      <title>DEV Community: knspar</title>
      <link>https://dev.to/knspar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/knspar"/>
    <language>en</language>
    <item>
      <title>TCP Observability for Microservices (Part II)</title>
      <dc:creator>knspar</dc:creator>
      <pubDate>Tue, 12 May 2026 09:37:44 +0000</pubDate>
      <link>https://dev.to/knspar/tcp-observability-for-microservices-part-ii-h49</link>
      <guid>https://dev.to/knspar/tcp-observability-for-microservices-part-ii-h49</guid>
      <description>&lt;p&gt;In a microservices architecture, application performance is not determined solely by how fast your code executes. It is equally dependent on the health of the network plumbing that connects your services. When an API call traverses a mesh of dozens of independent services, the lifecycle of TCP connections, from establishment to teardown, becomes a critical pillar of observability.&lt;/p&gt;

&lt;p&gt;The most damaging issues in these environments belong to a class of &lt;strong&gt;"invisible problems"&lt;/strong&gt;: performance degradation, connection pool exhaustion, and tail latency. Their impact often remains hidden until system pressure increases, at which point they trigger cascading failures. Key metrics such as &lt;strong&gt;connection idle time&lt;/strong&gt; and &lt;strong&gt;termination origin&lt;/strong&gt; (did the client or server close first?) are often the difference between a five-minute fix and a week-long debugging nightmare.&lt;/p&gt;

&lt;p&gt;When we talk about connection pool exhaustion or tail latency, we are not merely talking about application code. We are talking about &lt;strong&gt;TCP state management&lt;/strong&gt;.&lt;/p&gt;
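&lt;p&gt;To make "TCP state management" concrete, the following sketch tallies connection states from the output of &lt;code&gt;ss -tan&lt;/code&gt; (the function name and sample output are ours, for illustration). A long tail of &lt;code&gt;TIME-WAIT&lt;/code&gt; or &lt;code&gt;CLOSE-WAIT&lt;/code&gt; entries is often the first visible symptom of connection churn or pool exhaustion:&lt;/p&gt;

```python
from collections import Counter

def count_tcp_states(ss_output: str) -> Counter:
    """Tally TCP connection states from `ss -tan` output (state is column 1)."""
    states = Counter()
    for line in ss_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if fields:
            states[fields[0]] += 1
    return states

# Canned sample; in practice, feed in the real command's output.
sample = """State      Recv-Q Send-Q Local Address:Port  Peer Address:Port
ESTAB      0      0      127.0.0.1:8080      127.0.0.1:56768
TIME-WAIT  0      0      127.0.0.1:8080      127.0.0.1:56770
TIME-WAIT  0      0      127.0.0.1:8080      127.0.0.1:56772
CLOSE-WAIT 0      0      127.0.0.1:8080      127.0.0.1:56774"""
```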




&lt;h2&gt;
  
  
  Why Application-Level Monitoring Isn't Enough
&lt;/h2&gt;

&lt;p&gt;HTTP status codes are useful for spotting surface-level application errors, but they often mask deeper transport and timing failures in distributed systems. In persistent protocols and modern microservices, many decisive performance signals do not live in the response code; they live in the &lt;strong&gt;transport flow&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  WebSockets: Managing Persistence
&lt;/h3&gt;

&lt;p&gt;WebSockets are designed to be long-lived, yet without TCP-level monitoring they can degrade into &lt;strong&gt;"zombie" connections&lt;/strong&gt;: sockets that are technically open but no longer transmitting meaningful data.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Idle Time:&lt;/strong&gt; Continuous monitoring reveals whether a connection has been idle for too long. Excessive idle time often leads to silent drops by load balancers, firewalls, or NAT gateways long before the application notices.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Termination Origin:&lt;/strong&gt; Distinguishing whether the client or server closed the connection is vital. It helps you diagnose whether a frontend app is crashing, a backend service is hitting a timeout, or an intermediary is forcefully resetting the flow.&lt;/li&gt;
&lt;/ul&gt;
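&lt;p&gt;If intermediaries are dropping idle sockets, one mitigation is to enable kernel-level TCP keepalive probes so the connection never looks dead to middleboxes. A minimal sketch, assuming Linux socket option names (the helper name and timing values are ours):&lt;/p&gt;

```python
import socket

def enable_keepalive(sock: socket.socket,
                     idle: int = 60, interval: int = 10, count: int = 5) -> None:
    """Ask the kernel to probe an idle connection before middleboxes drop it."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # The fine-grained knobs are platform-specific; the names below are Linux's.
    if hasattr(socket, "TCP_KEEPIDLE"):   # seconds idle before the first probe
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):  # seconds between probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):    # failed probes before declaring death
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)
```

&lt;p&gt;Keepalive probes should fire well before the shortest idle timeout along the path, which is exactly what idle-time monitoring helps you measure.&lt;/p&gt;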

&lt;h3&gt;
  
  
  HTTP/2: Multiplexing and Stream Management
&lt;/h3&gt;

&lt;p&gt;The most efficient pattern for HTTP messaging is the reuse of a single connection for multiple exchanges—whether via standard request-response cycles or pipelined requests. This principle is even more central to HTTP/2, which employs multiplexing to handle multiple concurrent streams over a single TCP connection.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Keep-Alives and PINGs:&lt;/strong&gt; HTTP/2 uses PING frames to maintain connection liveness and measure latency. TCP-level monitoring helps confirm whether long-lived connections are truly stable or being pruned by intermediaries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Connection Reuse:&lt;/strong&gt; Analyzing TCP behavior ensures that clients are actually reusing existing connections instead of repeatedly opening new ones, which would negate one of HTTP/2's main performance benefits.&lt;/li&gt;
&lt;/ul&gt;
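&lt;p&gt;Connection reuse can also be verified from the client side. A self-contained sketch (the throwaway server and names are ours) issues two HTTP/1.1 requests over one &lt;code&gt;http.client.HTTPConnection&lt;/code&gt; and checks that the underlying socket object never changed, i.e. no second handshake occurred:&lt;/p&gt;

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"           # HTTP/1.1 defaults to keep-alive
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, fmt, *args):      # keep the demo output quiet
        pass

def demo_reuse() -> tuple:
    server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
    conn.request("GET", "/")
    r1 = conn.getresponse()
    r1.read()                               # drain so the socket can be reused
    first_sock = conn.sock
    conn.request("GET", "/")                # second request, same connection
    r2 = conn.getresponse()
    r2.read()
    reused = conn.sock is first_sock        # True means no new TCP handshake
    conn.close()
    server.shutdown()
    return r1.status, r2.status, reused
```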




&lt;h2&gt;
  
  
  The Three Pillars of Connection Health
&lt;/h2&gt;

&lt;p&gt;To achieve reliable, high-performance systems, your observability stack must be able to answer three fundamental questions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Connection Utilization&lt;/strong&gt;: Is the connection being efficiently reused across multiple requests, or is churn degrading performance?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Termination Source&lt;/strong&gt;: Did the client or the server initiate the shutdown?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Closure Timing&lt;/strong&gt;: When was the connection closed, and did it happen earlier than expected?&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Hands-On: Tracking TCP with Justniffer
&lt;/h2&gt;

&lt;p&gt;One powerful tool for this level of visibility is &lt;strong&gt;&lt;a href="https://github.com/onotelli/justniffer" rel="noopener noreferrer"&gt;Justniffer&lt;/a&gt;&lt;/strong&gt;. It captures network traffic and produces logs focused on connection lifecycle events, making it useful for diagnosing transport-layer behavior without application instrumentation.&lt;/p&gt;

&lt;p&gt;You can use the following command to track connection timing, idle periods, and the side that closed the connection:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;justniffer &lt;span class="nt"&gt;-i&lt;/span&gt; lo &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s2"&gt;"%request.timestamp %dest.ip %dest.port &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
    c=%connection.time i=%idle.time.0 req=%request.time resp=%response.time &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
    i2=%idle.time.1 s=%session.time %connection %close.originator &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
    req_h=%request.header.connection resp_h=%response.header.connection"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Field Reference
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Field&lt;/th&gt;
&lt;th&gt;Meaning&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;c&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;TCP connection setup/handshake time&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;i&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Idle time before the request&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;req&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Request transfer time&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;resp&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Response transfer time&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;i2&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Idle time after the response&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;s&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Total session time&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;connection&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Connection state (&lt;code&gt;start&lt;/code&gt;, &lt;code&gt;continue&lt;/code&gt;, &lt;code&gt;last&lt;/code&gt;, &lt;code&gt;unique&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;close.originator&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Which side closed the connection (&lt;code&gt;client&lt;/code&gt;, &lt;code&gt;server&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;req_h&lt;/code&gt; / &lt;code&gt;resp_h&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;The &lt;code&gt;Connection&lt;/code&gt; header values&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
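&lt;p&gt;When you want metrics rather than raw lines, this log format is easy to post-process. A small parser sketch (function and key names are ours); &lt;code&gt;-&lt;/code&gt; values, which justniffer emits for fields it cannot attribute to a given line, are mapped to &lt;code&gt;None&lt;/code&gt;:&lt;/p&gt;

```python
def parse_justniffer_line(line: str) -> dict:
    """Parse one line of the log format above into a dict.

    Bare tokens after the key=value metrics are the connection state and
    the close originator, in that order.
    """
    tokens = line.split()
    record = {"timestamp": tokens[0] + " " + tokens[1],
              "dest_ip": tokens[2], "dest_port": int(tokens[3])}
    bare = []
    for token in tokens[4:]:
        if "=" in token:
            key, _, value = token.partition("=")
            try:
                record[key] = float(value)   # timing fields
            except ValueError:
                record[key] = None if value == "-" else value
        else:
            bare.append(None if token == "-" else token)
    record["connection"], record["close_originator"] = bare
    return record
```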




&lt;h3&gt;
  
  
  Scenario 1: Requests Over Non-Persistent Connections
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2026-05-12 08:57:54.651504 127.0.0.1 8080 c=0.000070 i=0.000074 req=0.000000 resp=0.005880 i2=0.000376 s=0.006400 unique server req_h=close resp_h=close
2026-05-12 08:57:56.172010 127.0.0.1 8080 c=0.000074 i=0.000073 req=0.000000 resp=0.005958 i2=0.000448 s=0.006553 unique server req_h=close resp_h=close
2026-05-12 08:57:56.891523 127.0.0.1 8080 c=0.000049 i=0.000055 req=0.000000 resp=0.006007 i2=0.000379 s=0.006490 unique server req_h=close resp_h=close
2026-05-12 08:57:58.791617 127.0.0.1 8080 c=0.000067 i=0.000056 req=0.000000 resp=0.005404 i2=0.000367 s=0.005885 unique server req_h=close resp_h=close
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What this tells us:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Connection churn:&lt;/strong&gt; Every request is marked &lt;code&gt;unique&lt;/code&gt;, meaning a brand-new TCP connection is established and torn down for each HTTP request.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero reuse:&lt;/strong&gt; The &lt;code&gt;c&lt;/code&gt; value (connection setup time) is present on every line, confirming repeated TCP handshakes. This is inefficient in high-throughput paths.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Explicit closure:&lt;/strong&gt; Both request and response headers specify &lt;code&gt;Connection: close&lt;/code&gt;, and the log shows the &lt;strong&gt;server&lt;/strong&gt; as the closure originator (&lt;code&gt;server&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance impact:&lt;/strong&gt; Individual request durations are low, but repeated setup/teardown adds avoidable overhead that can inflate tail latency under load.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Scenario 2: Keep-Alive Requested, But No Reuse Observed
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2026-05-12 09:10:23.468279 127.0.0.1 8080 c=0.000088 i=0.000068 req=0.000000 resp=0.005478 i2=0.000459 s=0.006093 unique client req_h=keep-alive resp_h=-
2026-05-12 09:10:24.766013 127.0.0.1 8080 c=0.000046 i=0.000060 req=0.000000 resp=0.005507 i2=0.000818 s=0.006431 unique client req_h=keep-alive resp_h=-
2026-05-12 09:10:26.701355 127.0.0.1 8080 c=0.000079 i=0.000066 req=0.000000 resp=0.005523 i2=0.000399 s=0.006067 unique client req_h=keep-alive resp_h=-
2026-05-12 09:10:27.285707 127.0.0.1 8080 c=0.000070 i=0.000063 req=0.000000 resp=0.005881 i2=0.000490 s=0.006504 unique client req_h=keep-alive resp_h=-
2026-05-12 09:10:28.261914 127.0.0.1 8080 c=0.000081 i=0.000078 req=0.000000 resp=0.006131 i2=0.000628 s=0.006918 unique client req_h=keep-alive resp_h=-
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What this tells us:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Observed mismatch:&lt;/strong&gt; The client requests &lt;code&gt;Connection: keep-alive&lt;/code&gt; (&lt;code&gt;req_h=keep-alive&lt;/code&gt;), yet every entry is still &lt;code&gt;unique&lt;/code&gt;, so the connection is not reused.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Important protocol nuance:&lt;/strong&gt; &lt;code&gt;resp_h=-&lt;/code&gt; means no explicit &lt;code&gt;Connection&lt;/code&gt; header was captured in the response. In HTTP/1.1, that does &lt;strong&gt;not&lt;/strong&gt; automatically mean keep-alive was refused, because persistence is the default unless &lt;code&gt;Connection: close&lt;/code&gt; is sent.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Client-side closure:&lt;/strong&gt; The &lt;strong&gt;client&lt;/strong&gt; closes each connection. This can happen when the client does not pool connections, intentionally closes after each request, or uses short idle/age limits.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Likely causes to verify:&lt;/strong&gt; client library behavior, reverse proxy connection policy, protocol version negotiation (HTTP/1.0 vs HTTP/1.1), and intermediary timeouts.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Scenario 3: Requests Over a Functional Keep-Alive Connection
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2026-05-12 09:11:22.933485 127.0.0.1 8080 c=0.000065 i=0.000071 req=0.000000 resp=0.006105 i2=1.410593 s=0.006241 start - req_h=keep-alive resp_h=-
2026-05-12 09:11:24.350183 127.0.0.1 8080 c=- i=- req=0.000000 resp=0.005272 i2=1.940395 s=1.422106 continue - req_h=keep-alive resp_h=-
2026-05-12 09:11:26.295850 127.0.0.1 8080 c=- i=- req=0.000000 resp=0.006187 i2=1.887490 s=3.368688 continue - req_h=keep-alive resp_h=-
2026-05-12 09:11:28.189527 127.0.0.1 8080 c=- i=- req=0.000000 resp=0.005752 i2=0.466600 s=5.261930 continue - req_h=keep-alive resp_h=-
2026-05-12 09:11:28.661879 127.0.0.1 8080 c=- i=- req=0.000000 resp=0.005523 i2=1.417703 s=7.151756 last client req_h=keep-alive resp_h=-
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What this tells us:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The &lt;code&gt;start&lt;/code&gt; entry:&lt;/strong&gt; The first request bears the TCP handshake cost (&lt;code&gt;c=0.000065&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Effective reuse:&lt;/strong&gt; The &lt;code&gt;continue&lt;/code&gt; entries show &lt;code&gt;c=-&lt;/code&gt;, confirming that subsequent requests reused the same TCP connection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Idle time analysis:&lt;/strong&gt; The &lt;code&gt;i2&lt;/code&gt; values (post-response idle time) show roughly 0.47 to 1.94 seconds of wait time between requests. This is reasonable, but it should remain below intermediary idle timeouts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Session-time consistency check:&lt;/strong&gt; The final &lt;code&gt;s=7.151756&lt;/code&gt; is consistent with the accumulated session timeline: the 5.73-second span between the first and last request timestamps, plus the final response time (0.005523 s) and post-response idle (1.417703 s), sums to roughly 7.15 seconds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The &lt;code&gt;last&lt;/code&gt; entry:&lt;/strong&gt; The final line marks session end (&lt;code&gt;last&lt;/code&gt;), and the &lt;strong&gt;client&lt;/strong&gt; initiated closure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Diagnosis:&lt;/strong&gt; This pattern is typically healthy keep-alive reuse followed by normal client-side shutdown.&lt;/li&gt;
&lt;/ul&gt;
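&lt;p&gt;The session-time figure is worth checking by hand at least once. Using the timestamps from the log above:&lt;/p&gt;

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"
first = datetime.strptime("2026-05-12 09:11:22.933485", FMT)
last = datetime.strptime("2026-05-12 09:11:28.661879", FMT)

# Span between the first and last request timestamps...
span = (last - first).total_seconds()
# ...plus the final response transfer and the final post-response idle:
total = span + 0.005523 + 1.417703
# total lands within a fraction of a millisecond of the logged s=7.151756
```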




&lt;h2&gt;
  
  
  From Logs to Action: A Troubleshooting Matrix
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Pattern&lt;/th&gt;
&lt;th&gt;Observation&lt;/th&gt;
&lt;th&gt;Likely Cause&lt;/th&gt;
&lt;th&gt;Action&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Connection Churn&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Every request is &lt;code&gt;unique&lt;/code&gt; with high &lt;code&gt;c&lt;/code&gt; values&lt;/td&gt;
&lt;td&gt;Keep-alive disabled on client or server&lt;/td&gt;
&lt;td&gt;Enable &lt;code&gt;keep-alive&lt;/code&gt; in both application and proxy configs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;No Observed Reuse&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Client requests keep-alive but connection remains &lt;code&gt;unique&lt;/code&gt; and client closes&lt;/td&gt;
&lt;td&gt;Client not pooling, short client timeout, intermediary policy, or protocol mismatch&lt;/td&gt;
&lt;td&gt;Verify client pool settings, HTTP version, and proxy timeout/connection policy&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
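&lt;p&gt;Once log lines are parsed into records, the matrix above can drive a first-pass automatic classifier. A sketch, assuming each record is a dict with &lt;code&gt;connection&lt;/code&gt;, &lt;code&gt;close_originator&lt;/code&gt; and &lt;code&gt;req_h&lt;/code&gt; keys (the key names and messages are ours):&lt;/p&gt;

```python
def classify(records: list) -> str:
    """Map a batch of parsed connection records to a troubleshooting pattern."""
    all_unique = all(r["connection"] == "unique" for r in records)
    wants_keepalive = any(r.get("req_h") == "keep-alive" for r in records)
    client_closes = all(r["close_originator"] == "client" for r in records)
    if all_unique and not wants_keepalive:
        return "connection churn: enable keep-alive on both ends"
    if all_unique and wants_keepalive and client_closes:
        return "no observed reuse: check client pooling and proxy policy"
    return "healthy or unclassified"
```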




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;TCP observability bridges the gap between vague complaints like "the network is slow" and precise conclusions such as "connections are not being reused because the client pool closes them after 5 seconds." Application-level metrics tell you &lt;em&gt;that&lt;/em&gt; a request failed; transport-layer observability tells you &lt;em&gt;why&lt;/em&gt; the underlying path behaved that way.&lt;/p&gt;

&lt;p&gt;By tracking idle times, handshake costs, and termination origins with tools like Justniffer, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Optimize keep-alive strategies to reduce connection churn.&lt;/li&gt;
&lt;li&gt;Validate that your HTTP/2 clients are actually reusing long-lived connections.&lt;/li&gt;
&lt;li&gt;Ensure WebSockets remain alive and are not silently dropped by intermediaries.&lt;/li&gt;
&lt;li&gt;Tune client and server pool configurations based on measured connection lifecycles instead of guesswork.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In modern distributed systems, if you are not observing the transport layer, you are flying blind.&lt;/p&gt;




</description>
      <category>webdev</category>
      <category>devops</category>
      <category>performance</category>
      <category>networking</category>
    </item>
    <item>
      <title>Capturing Critical Traffic in Microservices</title>
      <dc:creator>knspar</dc:creator>
      <pubDate>Mon, 11 May 2026 13:53:13 +0000</pubDate>
      <link>https://dev.to/knspar/capturing-critical-traffic-in-microservices-305h</link>
      <guid>https://dev.to/knspar/capturing-critical-traffic-in-microservices-305h</guid>
      <description>&lt;p&gt;As architectures shift toward microservices, &lt;strong&gt;visibility&lt;/strong&gt; becomes a primary challenge. Logging is often the first line of defense, yet traditional approaches frequently miss the mark during critical system moments.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Challenge with REST APIs
&lt;/h2&gt;

&lt;p&gt;When important data is tucked inside various headers and request bodies, logging every full exchange becomes far too verbose. Developers must constantly balance &lt;strong&gt;missing data&lt;/strong&gt; against &lt;strong&gt;drowning in it&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That's why we need to &lt;strong&gt;capture traffic "just when needed"&lt;/strong&gt;. However, typical network sniffers won't help if you're looking for HTTP headers or SMTP commands. You need a tool that can &lt;strong&gt;reconstruct the TCP flow&lt;/strong&gt; - not just capture fragmented packets.&lt;/p&gt;

&lt;p&gt;We will introduce &lt;strong&gt;&lt;a href="https://github.com/onotelli/justniffer" rel="noopener noreferrer"&gt;justniffer&lt;/a&gt;&lt;/strong&gt; - a powerful tool that traces TCP traffic by rebuilding the stream flow. This lets us inspect application-layer protocols without having to reassemble TCP streams ourselves in post-processing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example: Tracing Application Content
&lt;/h2&gt;

&lt;p&gt;Let's look at a few scenarios where a TCP sniffer proves useful, starting with capturing HTTP headers from a web application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Capturing HTTP Headers
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;justniffer &lt;span class="nt"&gt;-i&lt;/span&gt; lo &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"port 8080"&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; %request.header
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output (truncated for brevity):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="nf"&gt;GET&lt;/span&gt; &lt;span class="nn"&gt;/static/custom.css&lt;/span&gt; &lt;span class="k"&gt;HTTP&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="m"&gt;1.1&lt;/span&gt;
&lt;span class="na"&gt;Host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost:8080&lt;/span&gt;
&lt;span class="na"&gt;Connection&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;keep-alive&lt;/span&gt;
&lt;span class="na"&gt;Origin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://localhost:8080&lt;/span&gt;
&lt;span class="na"&gt;sec-ch-ua-platform&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"Linux"&lt;/span&gt;
&lt;span class="na"&gt;User-Agent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/147.0.0.0 Safari/537.36&lt;/span&gt;
&lt;span class="s"&gt;...&lt;/span&gt;
&lt;span class="na"&gt;Cookie&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_gcl_au=...; token=eyJhbG...&lt;/span&gt;
&lt;span class="na"&gt;If-None-Match&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"0dfdc2d3d075cb813ea2bf73ed8854a8"&lt;/span&gt;
&lt;span class="na"&gt;If-Modified-Since&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Mon, 11 May 2026 10:43:01 GMT&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Adding Connection Information
&lt;/h3&gt;

&lt;p&gt;Now let's enhance the output with connection details: IP addresses and ports.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;justniffer &lt;span class="nt"&gt;-i&lt;/span&gt; lo &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"port 8080"&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; %request.timestamp %source.ip:%source.port %dest.ip:%dest.port %newline%request.header
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;2026-05-11 15:05:37.179793 127.0.0.1:56768 127.0.0.1:8080

&lt;/span&gt;&lt;span class="nf"&gt;GET&lt;/span&gt; &lt;span class="nn"&gt;/api/v1/chats/all/tags&lt;/span&gt; &lt;span class="k"&gt;HTTP&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="m"&gt;1.1&lt;/span&gt;
&lt;span class="na"&gt;Host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost:8080&lt;/span&gt;
&lt;span class="na"&gt;authorization&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Bearer eyJhbG...&lt;/span&gt;
&lt;span class="na"&gt;Cookie&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_gcl_au=...; token=...&lt;/span&gt;
&lt;span class="s"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Filtering for a Specific Header (e.g., Cookie)
&lt;/h3&gt;

&lt;p&gt;You can pipe the output to &lt;code&gt;grep&lt;/code&gt; to focus on a particular header:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;justniffer &lt;span class="nt"&gt;-i&lt;/span&gt; lo &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"port 8080"&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; INFO %request.timestamp %source.ip:%source.port %dest.ip:%dest.port %newline%request.header | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; &lt;span class="s2"&gt;"Cookie|INFO"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;INFO 2026-05-11 15:17:38.819638 127.0.0.1:49308 127.0.0.1:8080 
Cookie: _gcl_au=1.1.1932929889.1770841665; _ga=GA1.1.555273468.1770841666; ... token=eyJhbG...
INFO 2026-05-11 15:17:42.960283 127.0.0.1:49308 127.0.0.1:8080 
Cookie: _gcl_au=1.1.1932929889.1770841665; _ga=GA1.1.555273468.1770841666; ... token=eyJhbG...
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Logging Both Request and Response
&lt;/h3&gt;

&lt;p&gt;To capture the full conversation (request headers + response headers and body), use &lt;code&gt;%response&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;justniffer &lt;span class="nt"&gt;-i&lt;/span&gt; lo &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"port 8080"&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; INFO %request.timestamp %source.ip:%source.port %dest.ip:%dest.port %newline%response
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output (request part shown; response body may be binary):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;INFO 2026-05-11 15:25:40.585069 127.0.0.1:54076 127.0.0.1:8080 
GET /api/v1/functions/ HTTP/1.1.
Host: localhost:8080.
authorization: Bearer eyJhbG...
...
POST /api/v1/tasks/active/chats HTTP/1.1.
Content-Length: 2276.
...
{"chat_ids":["30022d80-0f5a-..."]}
INFO 2026-05-11 15:26:30.382447 127.0.0.1:54030 127.0.0.1:8080 
..iE..[C....B/.   # ← binary response body (e.g., gzipped or protobuf)
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Response bodies are often compressed or binary. For text responses (JSON, XML, plain text), &lt;code&gt;justniffer&lt;/code&gt; will show them cleanly. For binary data, you'll see raw bytes - consider using &lt;code&gt;%response.header&lt;/code&gt; alone if you only need headers.&lt;/p&gt;
&lt;/blockquote&gt;
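&lt;p&gt;When a captured body looks binary, it is often just gzipped text. A short helper (the function name is ours) can tell the difference by checking the gzip magic bytes before giving up:&lt;/p&gt;

```python
import gzip

def decode_body(raw: bytes):
    """Best-effort decode of a captured HTTP response body."""
    if raw[:2] == b"\x1f\x8b":        # gzip magic bytes
        raw = gzip.decompress(raw)
    try:
        return raw.decode("utf-8")    # JSON, XML, plain text
    except UnicodeDecodeError:
        return raw                    # genuinely binary: hand back the bytes
```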

&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;p&gt;With &lt;code&gt;justniffer&lt;/code&gt;, you get application-level insight without overwhelming log volumes. By selectively capturing headers, filtering with &lt;code&gt;grep&lt;/code&gt;, or tracing full request-response pairs, you can debug tricky issues in microservices, such as authentication failures or malformed payloads, without drowning in packet dumps.&lt;/p&gt;

</description>
      <category>networking</category>
      <category>api</category>
      <category>opensource</category>
      <category>linux</category>
    </item>
  </channel>
</rss>
