<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jan</title>
    <description>The latest articles on DEV Community by Jan (@janwiesner).</description>
    <link>https://dev.to/janwiesner</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2322981%2Fdc54f37c-b414-4c81-bc13-02507f321b34.jpg</url>
      <title>DEV Community: Jan</title>
      <link>https://dev.to/janwiesner</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/janwiesner"/>
    <language>en</language>
    <item>
      <title>Hybrid Chatbots and REST API: Custom Development vs. Available Solutions</title>
      <dc:creator>Jan</dc:creator>
      <pubDate>Thu, 13 Mar 2025 10:07:43 +0000</pubDate>
      <link>https://dev.to/janwiesner/hybrid-chatbots-and-rest-api-custom-development-vs-available-solutions-58hc</link>
      <guid>https://dev.to/janwiesner/hybrid-chatbots-and-rest-api-custom-development-vs-available-solutions-58hc</guid>
      <description>&lt;p&gt;Introduction&lt;br&gt;
In today's fast-paced technological landscape, hybrid chatbots are becoming increasingly important tools for communication and automation. These systems combine small language models with vector databases, enabling efficient processing and retrieval of information. A crucial element for successful integration of these components is REST API, which acts as an intermediary between:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Language models (e.g., LLaMA 2, Mistral) that process natural language and generate responses.&lt;/li&gt;
&lt;li&gt;Vector databases (e.g., Pinecone, Weaviate, Qdrant) that store and retrieve data based on vector representations.&lt;/li&gt;
&lt;li&gt;A chatbot frontend (web or mobile application) that facilitates user interaction.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The REST API receives user queries, routes them to the appropriate components, retrieves relevant information, and returns it to the user. This process ensures smooth and effective communication within the system.&lt;/p&gt;
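&lt;p&gt;The flow just described can be sketched end to end. The following is a toy, self-contained example: &lt;code&gt;embed_query&lt;/code&gt; and &lt;code&gt;generate_answer&lt;/code&gt; are hypothetical stubs (a real system would call an embedding model and LLaMA 2 / Mistral), and a small in-memory list plays the role of the vector database:&lt;/p&gt;

```python
import math

# Toy vector "database": (text, embedding) pairs. A real system would use
# Pinecone, Weaviate, or Qdrant; the 3-dim vectors here are illustrative only.
DOCS = [
    ("Reset your password in account settings.", [0.9, 0.1, 0.0]),
    ("Shipping takes 3-5 business days.",        [0.1, 0.9, 0.1]),
]

def embed_query(query):
    # Hypothetical stub: a real system would call an embedding model here.
    q = query.lower()
    return [float("password" in q), float("shipping" in q), 0.0]

def cosine(a, b):
    # Cosine similarity, the usual distance measure for vector retrieval.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, top_k=1):
    # The vector-database step: nearest neighbours by cosine similarity.
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

def generate_answer(query, context):
    # Hypothetical stub: a real system would prompt the language model here.
    return f"Based on: {context[0]}"

def handle_query(query):
    # The REST API's job: route the query through both components.
    return generate_answer(query, retrieve(embed_query(query)))

print(handle_query("How do I reset my password?"))
```

&lt;p&gt;Each stub marks the seam where a real model or a managed service would plug in.&lt;/p&gt;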

&lt;p&gt;Options: Custom REST API vs. Available Solutions&lt;/p&gt;

&lt;p&gt;Custom REST API Development&lt;/p&gt;

&lt;p&gt;Developing a custom REST API offers several advantages that can be crucial for complex or specific projects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Customizability: A custom API can be tailored precisely to the needs of your system, allowing optimization for specific use cases and requirements.&lt;/li&gt;
&lt;li&gt;Security: Full control over data and its processing enables the implementation of custom security measures to protect sensitive information.&lt;/li&gt;
&lt;li&gt;Flexibility: Easier integration with other services or specific features that may be critical for your application.&lt;/li&gt;
&lt;li&gt;Performance Optimization: A custom API can be tuned for maximum performance and efficiency, which matters for demanding applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Disadvantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Time-Consuming: Developing and testing a custom API can be time-consuming and resource-intensive.&lt;/li&gt;
&lt;li&gt;Maintenance Costs: Regular updates and code management can be costly in both money and time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using Available APIs&lt;/p&gt;

&lt;p&gt;Using existing APIs, such as the OpenAI API, the Hugging Face Inference API, or services from Pinecone, can be a suitable solution for projects that require rapid deployment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Quick Deployment: Existing APIs are ready to use almost immediately, speeding up development and deployment of the application.&lt;/li&gt;
&lt;li&gt;Reduced Technical Burden: No need to manage backend infrastructure, reducing technical overhead and allowing focus on the core application.&lt;/li&gt;
&lt;li&gt;Support and Documentation: Detailed documentation and support from providers facilitate integration and problem-solving.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Disadvantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dependency on External Providers: Your solution is tied to the availability and terms of the API, which can be a risk in case of changes or service outages.&lt;/li&gt;
&lt;li&gt;Cost: Some APIs can be expensive, especially at higher query volumes, increasing operational costs.&lt;/li&gt;
&lt;li&gt;Limited Customization: Functionality is determined by the API provider, which may limit customization options.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Decision Factors&lt;/p&gt;

&lt;p&gt;The decision between custom development and using existing APIs depends on several key factors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Project Size: For smaller projects or prototypes, existing APIs are the better fit, allowing quick deployment and reduced initial costs.&lt;/li&gt;
&lt;li&gt;Budget and Resources: If financial or technical resources are limited, it may be better to use available solutions.&lt;/li&gt;
&lt;li&gt;Need for Control and Flexibility: Larger projects that require a high degree of control and flexibility justify investing in a custom REST API.&lt;/li&gt;
&lt;li&gt;Long-Term Goals: If the goal is to reduce costs and gain full control over the system in the long term, it is worth considering developing a custom API.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Conclusion: Tailored Hybrid Chatbots&lt;/p&gt;

&lt;p&gt;Combining small language models, vector databases, and a well-integrated REST API creates a robust, efficient system that can handle current communication and data-processing demands. The hybrid approach strikes an optimal balance between performance, cost, and flexibility, which is crucial for successful chatbot deployment in real-world environments.&lt;/p&gt;

&lt;p&gt;If you are planning to implement your own chatbot, consider all aspects and decide based on the specific needs of your project.&lt;/p&gt;

&lt;p&gt;How to Try It Out&lt;/p&gt;

&lt;p&gt;If you want to experiment with creating a hybrid chatbot, you can follow this basic guide:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Choose a Language Model: Start with a pre-trained model like LLaMA 2 or Mistral, available through platforms like Hugging Face.&lt;/li&gt;
&lt;li&gt;Set Up a Vector Database: Choose a vector database like Pinecone, Weaviate, or Qdrant and integrate it into your system.&lt;/li&gt;
&lt;li&gt;Create a REST API: Decide whether to use an existing API or develop your own. To start, you can use the OpenAI API or the Hugging Face Inference API; if developing your own API, use a framework like Flask or FastAPI for quick deployment.&lt;/li&gt;
&lt;li&gt;Integrate with the Frontend: Create a simple frontend for the chatbot using technologies like React or Vue.js, and connect it to the REST API for communication with the language model and vector database.&lt;/li&gt;
&lt;li&gt;Testing and Optimization: Test your application with various user scenarios and optimize performance and response accuracy.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This guide provides basic directions for creating a hybrid chatbot. If you need more details or assistance, feel free to reach out!&lt;/p&gt;
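&lt;p&gt;For the REST API step, here is a minimal sketch of what such an endpoint could look like. To stay dependency-free it uses only Python's standard library rather than Flask or FastAPI; the &lt;code&gt;/chat&lt;/code&gt; route and the &lt;code&gt;echo_bot&lt;/code&gt; placeholder are illustrative assumptions, not a fixed design:&lt;/p&gt;

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def echo_bot(message):
    # Hypothetical placeholder: a real handler would call the language
    # model and vector database here, as described above.
    return f"You said: {message}"

class ChatHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/chat":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        reply = echo_bot(payload.get("message", ""))
        body = json.dumps({"reply": reply}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("127.0.0.1", 8000), ChatHandler).serve_forever()
```

&lt;p&gt;Replacing &lt;code&gt;echo_bot&lt;/code&gt; with calls to the language model and vector database turns this stub into the intermediary described above; porting the handler to Flask or FastAPI is mostly mechanical.&lt;/p&gt;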

</description>
    </item>
    <item>
      <title>Dynamic throttle</title>
      <dc:creator>Jan</dc:creator>
      <pubDate>Fri, 17 Jan 2025 08:39:01 +0000</pubDate>
      <link>https://dev.to/janwiesner/dynamic-throttle-2mhb</link>
      <guid>https://dev.to/janwiesner/dynamic-throttle-2mhb</guid>
      <description>&lt;p&gt;Imagine this scenario: You have a router, switch, or wireless gateway that goes into a sort of “sleep mode” after a period of inactivity. When you send the first batch of real data, there’s a small delay because the network is “waking up.” Or your VPN tunnel loses its “steam” over time, and the next connection suffers unnecessary lag.&lt;/p&gt;

&lt;p&gt;In this article, we’ll explore a punk-style solution called “dynamic throttle.” The idea is simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Monitor the network state (latency, and optionally other indicators).
&lt;/li&gt;
&lt;li&gt;When the network starts slowing down (latency rising, little real traffic), inject a bit of “artificial” traffic: dummy (empty) packets.
&lt;/li&gt;
&lt;li&gt;Once the network picks back up with real traffic, we reduce dummy traffic to minimize overhead.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The goal is to keep the network active and avoid “sleepy” states in routers, NAT tables, VPN tunnels, and other elements that tend to idle when there’s no data flowing.&lt;/p&gt;

&lt;p&gt;Where Can This Help?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Modern Wi-Fi (e.g., Wi-Fi 6, 6E): These devices can cleverly save power, but occasional traffic might incur extra latency while the radio module reactivates from dozing. Sending a few dummy packets could reduce that wake-up delay.&lt;/li&gt;
&lt;li&gt;Mobile and 5G Networks: The same phenomenon can happen — a device may transition to a lower power state (RRC_IDLE) and need reactivation time. A dynamic throttle might help keep the connection in a more active state if latency starts spiking.&lt;/li&gt;
&lt;li&gt;VPN Tunnels (OpenVPN, WireGuard, etc.): After prolonged inactivity, the router, NAT, or even the VPN daemon may consider the link dormant. Then the first real packet gets delayed while everything re-initializes.&lt;/li&gt;
&lt;li&gt;Edge Computing: Many gateways gather sensor data and send it to the cloud in bursts. A dynamic throttle can keep the upstream link in a ready state, preventing brief dropouts or delays when establishing a connection.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;How It Works: A Simple Outline&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Measure latency (and optionally throughput, packet loss, etc.)

&lt;ul&gt;
&lt;li&gt;For example, run a small ping every few seconds to a target (DNS server, cloud endpoint, VPN gateway).
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Decision Logic

&lt;ul&gt;
&lt;li&gt;If latency is low, keep dummy traffic to a minimum.
&lt;/li&gt;
&lt;li&gt;If latency/jitter increases and there’s little real traffic, crank up dummy packets to “revive” the link.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Send Dummy Packets

&lt;ul&gt;
&lt;li&gt;Usually small UDP packets (50–200 bytes). Send them at some rate (e.g., 2–10 per second), dynamically adjusted according to network conditions.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Check and Repeat

&lt;ul&gt;
&lt;li&gt;Once real traffic flows again, reduce the dummy packets so you’re not wasting bandwidth or power.
&lt;/li&gt;
&lt;li&gt;Periodically re-evaluate.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A Functional Python Example (Less Theoretical)&lt;/p&gt;

&lt;p&gt;Below is a working (though not production-ready) example of how you can implement a “dynamic throttle” on Linux (or any OS with &lt;code&gt;ping&lt;/code&gt; and socket support). This script:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Periodically pings (&lt;code&gt;PING_TARGET&lt;/code&gt;) to measure latency.
&lt;/li&gt;
&lt;li&gt;Optionally monitors real traffic via &lt;code&gt;tc&lt;/code&gt; command (Traffic Control) — this is a simplified demonstration of how you might read outgoing traffic on an interface.
&lt;/li&gt;
&lt;li&gt;Dynamically adjusts the amount of dummy packets (UDP) sent to a given target.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: The &lt;code&gt;tc&lt;/code&gt; part is optional and might require elevated privileges (sudo). If you don’t need it, you can remove that logic.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import subprocess
import re
import socket
import time

# -----------------------------
# CONFIG
# -----------------------------
PING_TARGET = "8.8.8.8"            # Where we ping to measure latency
DUMMY_TARGET = ("8.8.8.8", 9999)   # Where we send dummy packets
NETWORK_INTERFACE = "eth0"         # (Optional) Interface to measure real traffic
PACKET_SIZE = 64                   # Size of dummy packets in bytes
PING_INTERVAL = 2.0                # Interval between pings (s)
REGULATION_INTERVAL = 2.0          # Interval for adjusting dummy traffic (s)
LOW_LATENCY_THRESHOLD = 50.0       # (ms)
HIGH_LATENCY_THRESHOLD = 120.0     # (ms)
MAX_DUMMY_PACKETS_PER_SEC = 10     # Max number of dummy packets per second

# If ping fails, we assume a default (high) latency
DEFAULT_LATENCY = 999.0

# Global variable for the number of dummy packets per second
dummy_packets_per_sec = 0

# -----------------------------
# FUNCTIONS
# -----------------------------
def measure_latency(target: str) -&amp;gt; float:
    """
    Measure latency to the target using one ping.
    Returns the average RTT in milliseconds (float).
    If measurement fails, returns DEFAULT_LATENCY.
    """
    try:
        output = subprocess.check_output(
            ["ping", "-c", "1", "-W", "1", target],
            stderr=subprocess.STDOUT,
            text=True
        )
        match = re.search(r"rtt min/avg/max/mdev = [\d.]+/([\d.]+)/", output)
        if match:
            return float(match.group(1))
    except subprocess.CalledProcessError:
        pass
    return DEFAULT_LATENCY

def measure_real_traffic(iface: str) -&amp;gt; float:
    """
    (Optional) Measure outgoing real traffic (in Mb/s).
    In reality, you might parse /proc/net/dev, bmon, or ifstat, etc.
    Here, we do a simplified example with `tc -s qdisc show dev ...`.
    NOTE: This requires sudo and is very naive for demonstration.
    """
    try:
        output = subprocess.check_output(
            ["tc", "-s", "qdisc", "show", "dev", iface],
            stderr=subprocess.STDOUT,
            text=True
        )
        # Super simplified parsing - we look for 'Sent x bytes ...'
        match = re.search(r"Sent\s+(\d+)\s+bytes", output)
        if match:
            sent_bytes = float(match.group(1))
            # A real measurement would store the previous value, compare time deltas, etc.
            # We'll just return a "fake" value
            return sent_bytes / 1e6  # pseudo MB =&amp;gt; not exactly Mb/s
    except subprocess.CalledProcessError:
        pass
    return 0.0

def regulation_logic(latency: float, real_traffic_mbps: float) -&amp;gt; int:
    """
    Decide how many dummy packets to send:
    - If latency &amp;lt; LOW_LATENCY_THRESHOLD, we minimize dummy packets.
      If real traffic is flowing, we hardly need any dummy packets.
    - If latency &amp;gt; LOW_LATENCY_THRESHOLD and real traffic is low,
      we increase dummy packets to "wake the link up."
    - Above HIGH_LATENCY_THRESHOLD, we cut dummy packets so we don't make congestion worse.
    """
    # Rough heuristic:
    # latency &amp;lt; 50 ms =&amp;gt; 0 to 2 p/s
    # latency 50-120 ms =&amp;gt; 2 to 7 p/s, depending on real traffic
    # latency &amp;gt; 120 ms =&amp;gt; 0 p/s (network is overloaded)
    if latency &amp;gt;= HIGH_LATENCY_THRESHOLD:
        return 0  # Network is congested, don't add dummy traffic

    if latency &amp;lt;= LOW_LATENCY_THRESHOLD:
        # Network is OK
        if real_traffic_mbps &amp;gt; 0.5:
            # Enough real traffic is flowing
            return 0
        else:
            return 2
    else:
        # Medium latency
        if real_traffic_mbps &amp;gt; 0.5:
            # Some real load, don't overdo dummy
            return 2
        else:
            # Very little real traffic, but latency is rising =&amp;gt; add more dummy
            return 5

def send_dummy_packet(sock: socket.socket, size=64):
    """
    Send one dummy UDP packet of the given size.
    """
    data = b"x" * size
    sock.sendto(data, DUMMY_TARGET)

# -----------------------------
# MAIN LOOP
# -----------------------------
def main():
    global dummy_packets_per_sec

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setblocking(False)

    last_ping_time = time.time()
    last_regulation_time = time.time()
    last_dummy_send_time = time.time()

    interval_between_dummy = 1.0

    while True:
        now = time.time()

        # Ping every PING_INTERVAL seconds
        if now - last_ping_time &amp;gt;= PING_INTERVAL:
            last_ping_time = now
            latency = measure_latency(PING_TARGET)
            print(f"[PING] Latency = {latency:.1f} ms")

        # Adjust dummy traffic every REGULATION_INTERVAL seconds
        if now - last_regulation_time &amp;gt;= REGULATION_INTERVAL:
            last_regulation_time = now

            real_traffic_mbps = measure_real_traffic(NETWORK_INTERFACE)
            current_latency = measure_latency(PING_TARGET)

            dummy_packets_per_sec = regulation_logic(current_latency, real_traffic_mbps)
            dummy_packets_per_sec = min(dummy_packets_per_sec, MAX_DUMMY_PACKETS_PER_SEC)

            if dummy_packets_per_sec &amp;gt; 0:
                interval_between_dummy = 1.0 / dummy_packets_per_sec
            else:
                interval_between_dummy = 9999.0  # effectively off

            print(f"[REGULATION] Setting dummy p/s to: {dummy_packets_per_sec}, "
                  f"real traffic ~ {real_traffic_mbps:.2f} Mb/s")

        # Send dummy packets spaced out in time
        if dummy_packets_per_sec &amp;gt; 0:
            if now - last_dummy_send_time &amp;gt;= interval_between_dummy:
                last_dummy_send_time = now
                send_dummy_packet(sock, PACKET_SIZE)

        time.sleep(0.01)

if __name__ == "__main__":
    main()
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Implementation Notes&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;measure_latency: Classic ping. If it fails, we set the latency to &lt;code&gt;DEFAULT_LATENCY&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;measure_real_traffic: A demonstration of how you might detect whether real traffic is flowing. In a real implementation, you’d parse byte counters (e.g., from &lt;code&gt;/proc/net/dev&lt;/code&gt;), track deltas over time, and calculate actual Mbps.
&lt;/li&gt;
&lt;li&gt;regulation_logic: A simple heuristic. If latency is good, we minimize dummy. If latency goes up and real traffic is low, we add more dummy. Above a high latency threshold, we stop sending dummy to avoid making congestion worse.
&lt;/li&gt;
&lt;li&gt;Main Loop:

&lt;ul&gt;
&lt;li&gt;Periodically pings at &lt;code&gt;PING_INTERVAL&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;Periodically recalculates (every &lt;code&gt;REGULATION_INTERVAL&lt;/code&gt;) and adjusts dummy packets.
&lt;/li&gt;
&lt;li&gt;Sends dummy packets at the computed interval, so we don’t create big bursts.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
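&lt;p&gt;To make the traffic-measurement note concrete, here is one hedged way the &lt;code&gt;/proc/net/dev&lt;/code&gt; delta approach could look. The counter layout is the standard Linux one (TX bytes is the 9th field after the interface name); the &lt;code&gt;TrafficMeter&lt;/code&gt; name is just an illustration:&lt;/p&gt;

```python
import time

def read_tx_bytes(iface, path="/proc/net/dev"):
    """Return the total transmitted-bytes counter for iface from /proc/net/dev."""
    with open(path) as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":", 1)[1].split()
                # After the 8 receive fields, field 9 (index 8) is TX bytes.
                return int(fields[8])
    raise ValueError(f"interface {iface!r} not found")

class TrafficMeter:
    """Compute outgoing Mbit/s from successive TX byte counter readings."""
    def __init__(self, iface):
        self.iface = iface
        self.last_bytes = read_tx_bytes(iface)
        self.last_time = time.time()

    def mbps(self):
        now = time.time()
        cur = read_tx_bytes(self.iface)
        elapsed = max(now - self.last_time, 1e-9)
        rate = (cur - self.last_bytes) * 8 / elapsed / 1e6  # bytes -> Mbit/s
        self.last_bytes, self.last_time = cur, now
        return rate
```

&lt;p&gt;Dropping this in place of &lt;code&gt;measure_real_traffic&lt;/code&gt; gives an actual rate without needing &lt;code&gt;tc&lt;/code&gt; or sudo.&lt;/p&gt;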

&lt;blockquote&gt;
&lt;p&gt;Note: This is still a &lt;em&gt;proof-of-concept&lt;/em&gt;, but it’s &lt;strong&gt;already functional&lt;/strong&gt;. If you run this on Linux (with &lt;code&gt;tc&lt;/code&gt; installed), you’ll see it react to latency and (roughly) to real network traffic.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Use Cases for Modern Devices&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Edge AI / IoT Gateways

&lt;ul&gt;
&lt;li&gt;Running on a Raspberry Pi or NUC, sending data to the cloud sporadically. A dynamic throttle helps keep your ISP router or local switch from dropping into idle.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Mobile Hotspots

&lt;ul&gt;
&lt;li&gt;If your device is operating as a hotspot with LTE/5G, a dynamic throttle might reduce response time. But beware of extra battery drain.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;VPN (OpenVPN/WireGuard)

&lt;ul&gt;
&lt;li&gt;Point &lt;code&gt;DUMMY_TARGET&lt;/code&gt; to the other end of the tunnel, generating keep-alive traffic that keeps NAT/firewalls from dropping the VPN session.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Modern Wi-Fi Mesh

&lt;ul&gt;
&lt;li&gt;Small dummy keep-alive packets can help ensure mesh nodes don’t discard routing information during low-traffic periods.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Conclusion&lt;/p&gt;

&lt;p&gt;Dynamic throttle is an unconventional (some might say “punk”) technique that uses adaptive dummy packets to keep network traffic slightly active. In modern devices (with various power-saving and QoS features), it may help in scenarios where you dislike “startup delays,” and you’re okay with a slight overhead in bandwidth or power.&lt;/p&gt;

&lt;p&gt;The key is dynamism: you only increase dummy traffic when latency is rising and real traffic is idle; otherwise, you scale dummy traffic down to minimize waste.&lt;/p&gt;

&lt;p&gt;In practice, always measure carefully: sometimes it helps; sometimes the effect is negligible.&lt;/p&gt;

&lt;p&gt;The code above is a real example of how you might start on Linux. You could adapt it, add more refined metrics, or integrate a more sophisticated control algorithm (e.g., PID or eBPF-based solutions).&lt;/p&gt;

&lt;p&gt;If you try this, have fun experimenting! Let us know if it actually improves anything in your setup — or if you hit constraints that make it purely theoretical. Even negative results can be valuable, as they push our engineering understanding further.  &lt;/p&gt;

&lt;p&gt;Tip: Want to take it further? Check out PID controllers or eBPF. You can dynamically respond to latency in even more precise ways in real time.&lt;/p&gt;
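&lt;p&gt;As a hint of what the PID direction could look like, here is a minimal proportional-integral sketch that maps measured latency to a dummy-packet rate. The gains and setpoint are illustrative guesses, not tuned values, and would need adjusting on a real link:&lt;/p&gt;

```python
class PIRateController:
    """
    Proportional-integral controller: raise the dummy-packet rate as measured
    latency climbs above a setpoint, drop it as latency recovers.
    Gains (kp, ki) are illustrative and would need tuning per network.
    """
    def __init__(self, setpoint_ms=50.0, kp=0.1, ki=0.02, max_rate=10):
        self.setpoint = setpoint_ms
        self.kp, self.ki = kp, ki
        self.max_rate = max_rate
        self.integral = 0.0

    def update(self, latency_ms, dt=1.0):
        error = latency_ms - self.setpoint   # positive when the link is "sleepy"
        self.integral += error * dt
        # Anti-windup: keep the integral term bounded.
        self.integral = max(min(self.integral, 100.0), -100.0)
        rate = self.kp * error + self.ki * self.integral
        # Clamp to a valid packets-per-second range.
        return int(max(0, min(rate, self.max_rate)))

ctrl = PIRateController()
for latency in (40, 80, 120, 60, 45):
    print(latency, "ms:", ctrl.update(latency), "dummy packets/s")
```

&lt;p&gt;Compared with the step heuristic in &lt;code&gt;regulation_logic&lt;/code&gt;, this responds smoothly to latency trends instead of jumping between fixed levels.&lt;/p&gt;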

</description>
    </item>
  </channel>
</rss>
