<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hari Prakash</title>
    <description>The latest articles on DEV Community by Hari Prakash (@hari_prakash_b0a882ec9225).</description>
    <link>https://dev.to/hari_prakash_b0a882ec9225</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3732425%2F87f2a4be-b006-4e9c-81ae-dc9125f988a8.png</url>
      <title>DEV Community: Hari Prakash</title>
      <link>https://dev.to/hari_prakash_b0a882ec9225</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hari_prakash_b0a882ec9225"/>
    <language>en</language>
    <item>
      <title>Vibe Coding Security: 69 Vulnerabilities Found in AI-Generated Apps — Is Yours Safe?</title>
      <dc:creator>Hari Prakash</dc:creator>
      <pubDate>Thu, 19 Mar 2026 06:17:02 +0000</pubDate>
      <link>https://dev.to/hari_prakash_b0a882ec9225/vibe-coding-security-69-vulnerabilities-found-in-ai-generated-apps-is-yours-safe-5hcn</link>
      <guid>https://dev.to/hari_prakash_b0a882ec9225/vibe-coding-security-69-vulnerabilities-found-in-ai-generated-apps-is-yours-safe-5hcn</guid>
      <description>&lt;p&gt;Vibe coding security risks are no longer theoretical. A December 2025 study by Tenzai tested 15 applications built by the five most popular AI coding tools — Cursor, Claude Code, Replit, Devin, and OpenAI Codex — and found &lt;strong&gt;69 security vulnerabilities&lt;/strong&gt; across them. Every single tool introduced Server-Side Request Forgery. Zero of the 15 apps had CSRF protection. Zero set any security headers. If you shipped a vibe-coded app to production this year, there is a near-certain chance it has exploitable holes right now.&lt;/p&gt;

&lt;p&gt;I have been building developer tools at &lt;a href="https://tools.pinusx.com" rel="noopener noreferrer"&gt;PinusX&lt;/a&gt; for a while now, and the volume of insecure AI-generated code I see passing through our &lt;a href="https://tools.pinusx.com/vibescan" rel="noopener noreferrer"&gt;VibeScan security scanner&lt;/a&gt; has tripled in the last six months. This is not a niche problem anymore. This is the default state of how software gets built in 2026.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Tenzai Study: 69 Vulnerabilities Across 5 AI Coding Tools
&lt;/h2&gt;

&lt;p&gt;The research methodology was straightforward. Tenzai asked each of the five major AI coding tools to build three web applications — a task manager, an e-commerce app, and a social media clone. Standard web apps. Nothing exotic. Then they ran security audits on all 15 resulting codebases.&lt;/p&gt;

&lt;p&gt;The results were ugly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Claude Code:&lt;/strong&gt; 16 vulnerabilities, 4 critical&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Devin:&lt;/strong&gt; 14 vulnerabilities&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cursor:&lt;/strong&gt; 13 vulnerabilities&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;OpenAI Codex:&lt;/strong&gt; 13 vulnerabilities&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Replit:&lt;/strong&gt; 13 vulnerabilities&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The distribution tells you something important: this is not one bad tool. Every AI coding assistant produced code with serious security flaws. The problem is systemic — it lives in the training data, the optimization targets, and the fundamental way these models understand "working code."&lt;/p&gt;

&lt;h3&gt;
  
  
  100% SSRF Rate
&lt;/h3&gt;

&lt;p&gt;Every single AI coding tool — all five, across all three apps — introduced Server-Side Request Forgery vulnerabilities. That is a 100% failure rate on one of the most dangerous vulnerability classes in web applications.&lt;/p&gt;

&lt;p&gt;SSRF lets an attacker make the server send requests to internal services, cloud metadata endpoints, and other resources that should never be reachable from the outside. In a cloud environment, that often means access to &lt;code&gt;169.254.169.254&lt;/code&gt; — the instance metadata service — which hands over IAM credentials, API keys, and everything else needed to own the infrastructure.&lt;/p&gt;

&lt;p&gt;Here is what AI-generated SSRF-vulnerable code typically looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// AI-generated: fetches a URL provided by the user&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/api/fetch-url&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="c1"&gt;// No validation. No allowlist. Just fetches whatever you give it.&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The fix requires URL validation, protocol restrictions, and ideally an allowlist of permitted domains:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Secure: validates URL before fetching&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/api/fetch-url&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="c1"&gt;// Parse and validate&lt;/span&gt;
  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;parsed&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;parsed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Invalid URL&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;// Protocol allowlist&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;parsed&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;protocol&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Invalid protocol&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;// Block internal/metadata IPs&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;blocked&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;169.254.169.254&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;127.0.0.1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;0.0.0.0&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;localhost&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;metadata.google.internal&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="p"&gt;];&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;blocked&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;parsed&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;hostname&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;403&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Blocked host&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;parsed&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Zero Security Headers. Zero CSRF Protection.
&lt;/h3&gt;

&lt;p&gt;Not a single one of the 15 applications set &lt;code&gt;Content-Security-Policy&lt;/code&gt;, &lt;code&gt;Strict-Transport-Security&lt;/code&gt;, &lt;code&gt;X-Frame-Options&lt;/code&gt;, or any other security header. None. This is the HTTP equivalent of leaving every door and window unlocked because you forgot buildings need locks.&lt;/p&gt;

&lt;p&gt;Similarly, zero apps implemented CSRF protection. Forms could be submitted from any origin. That means any website could make authenticated requests on behalf of your users — transferring money, changing passwords, deleting accounts — with a simple hidden form.&lt;/p&gt;

&lt;p&gt;Only 1 of the 15 apps even attempted rate limiting. And that one implementation was bypassable.&lt;/p&gt;
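
&lt;p&gt;Rate limiting is also cheap to add. Below is a minimal in-memory fixed-window sketch; in a real deployment you would back this with a shared store such as Redis so limits survive restarts and apply across instances:&lt;/p&gt;

```javascript
// Minimal fixed-window rate limiter. In-memory only: a sketch of the
// idea, not a multi-instance production implementation.
const hits = new Map();

function rateLimit(ip, limit = 100, windowMs = 60000) {
  const now = Date.now();
  const entry = hits.get(ip);
  // New client, or the previous window has expired: start a fresh window.
  if (!entry || now - entry.start >= windowMs) {
    hits.set(ip, { start: now, count: 1 });
    return true; // allowed
  }
  entry.count += 1;
  return limit >= entry.count; // false means respond with 429
}
```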

&lt;h2&gt;
  
  
  The Broader Vibe Coding Security Crisis in Numbers
&lt;/h2&gt;

&lt;p&gt;The Tenzai study is damning on its own, and it is consistent with a wave of research all pointing in the same direction:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;45% vulnerability rate&lt;/strong&gt; — Veracode's 2025 GenAI Code Security Report found that 45% of AI-generated code across 80 coding tasks contained security vulnerabilities&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Only 10.5% is actually secure&lt;/strong&gt; — Carnegie Mellon University found that while 61% of AI-generated code is functionally correct, only 10.5% passes a security review&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;2.74x more XSS&lt;/strong&gt; — CodeRabbit's analysis showed AI-generated code contains 2.74 times more cross-site scripting vulnerabilities than human-written code&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;400+ exposed secrets&lt;/strong&gt; — Escape.tech scanned 5,600 publicly deployed vibe-coded applications and found over 400 exposed API keys and secrets&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CVEs doubling annually&lt;/strong&gt; — AI-related CVEs jumped from 168 in 2024 to 330 in 2025, nearly doubling year-over-year as agentic development scales&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Read that Carnegie Mellon number again. Fewer than 11 out of every 100 AI-generated code snippets are secure. The other 89 either have exploitable vulnerabilities or fail to follow security best practices. And most developers never run a security scan before deploying.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why AI Coding Tools Keep Producing Insecure Code
&lt;/h2&gt;

&lt;p&gt;Understanding the vibe coding security risks requires understanding why the models fail in predictable ways. It is not random — the failures cluster around specific patterns.&lt;/p&gt;

&lt;h3&gt;
  
  
  Training Data Reflects Public Code (Which Is Mostly Insecure)
&lt;/h3&gt;

&lt;p&gt;AI models learn from public repositories. Most code on GitHub is tutorial code, prototype code, or code written without security review. A March 2026 analysis from Wits University highlighted this directly: AI models absorb both secure and insecure patterns from public repos, perpetuating legacy practices and deprecated standards.&lt;/p&gt;

&lt;p&gt;When you ask an AI to write a SQL query, it will default to string concatenation — because that is what most of the training examples look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;What&lt;/span&gt; &lt;span class="n"&gt;AI&lt;/span&gt; &lt;span class="n"&gt;generates&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt;
&lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;"SELECT * FROM users WHERE email = '"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nv"&gt;"'"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;What&lt;/span&gt; &lt;span class="n"&gt;you&lt;/span&gt; &lt;span class="n"&gt;actually&lt;/span&gt; &lt;span class="n"&gt;need&lt;/span&gt;
&lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;"SELECT * FROM users WHERE email = $1"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="k"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;await&lt;/span&gt; &lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Optimized for "Works" Not "Secure"
&lt;/h3&gt;

&lt;p&gt;AI coding tools are evaluated on functional correctness. Does the code run? Does it pass the tests? Does the app load? Security is almost never part of the evaluation loop. The result is code that works perfectly and is riddled with vulnerabilities — the software equivalent of a car with no brakes that accelerates beautifully.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hallucinated Dependencies
&lt;/h3&gt;

&lt;p&gt;The Wits University researchers highlighted another vibe coding security risk: hallucinated package names. AI models sometimes reference packages that do not exist. Attackers register those nonexistent package names on npm or PyPI and fill them with malicious code. When a developer installs dependencies from their AI-generated &lt;code&gt;package.json&lt;/code&gt;, they pull in the attacker's payload. It is supply chain compromise via hallucination.&lt;/p&gt;

&lt;h2&gt;
  
  
  CVEs in the Tools Themselves
&lt;/h2&gt;

&lt;p&gt;It gets worse. The vibe coding security risks extend beyond generated code to the AI coding tools themselves. Over 30 vulnerabilities across 24 CVEs have been identified in the tools developers use to write code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CVE-2025-54135 (Cursor):&lt;/strong&gt; An MCP-related vulnerability that could allow malicious Model Context Protocol servers to execute arbitrary actions through the Cursor IDE&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CVE-2025-55284 (Claude Code):&lt;/strong&gt; A DNS exfiltration vulnerability where Claude Code could be tricked into leaking sensitive data through DNS lookups embedded in generated code&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So the tools generating insecure code also have their own exploitable attack surfaces. If you are running an AI coding tool with MCP servers connected, you are extending your trust boundary to every MCP server in the chain.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Scan and Fix Vibe-Coded Applications
&lt;/h2&gt;

&lt;p&gt;Here is the practical part. You have a vibe-coded app in production, or you are about to ship one. What do you do?&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Run an Automated Security Scan
&lt;/h3&gt;

&lt;p&gt;Before anything else, run your codebase through a security scanner built specifically for AI-generated code patterns. You can scan your app for free at &lt;a href="https://tools.pinusx.com/vibescan" rel="noopener noreferrer"&gt;tools.pinusx.com/vibescan&lt;/a&gt; — it checks for the exact vulnerability classes that AI coding tools produce most frequently: SSRF, XSS, SQL injection, missing security headers, exposed secrets, and CSRF gaps.&lt;/p&gt;

&lt;p&gt;VibeScan Pro goes deeper with full OWASP Top 10 coverage, dependency vulnerability scanning, and continuous monitoring so new vulnerabilities get flagged before they reach production.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Add Security Headers
&lt;/h3&gt;

&lt;p&gt;Since zero AI coding tools set security headers automatically, add them yourself. At minimum:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Content-Security-Policy&lt;/code&gt; — prevents XSS by controlling which scripts can execute&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Strict-Transport-Security&lt;/code&gt; — forces HTTPS&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;X-Frame-Options: DENY&lt;/code&gt; — prevents clickjacking&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;X-Content-Type-Options: nosniff&lt;/code&gt; — prevents MIME-type sniffing&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
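
&lt;p&gt;A minimal sketch of setting these by hand on a Node &lt;code&gt;http&lt;/code&gt; response. The header values shown are common defaults, not a universal policy; tighten the CSP to your own asset origins:&lt;/p&gt;

```javascript
// Baseline security headers for a Node HTTP/Express response object.
// Values are reasonable defaults, assumed for illustration; adjust the
// CSP to the script/style origins your app actually uses.
const securityHeaders = {
  "Content-Security-Policy": "default-src 'self'",
  "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
  "X-Frame-Options": "DENY",
  "X-Content-Type-Options": "nosniff",
};

function applySecurityHeaders(res) {
  for (const [name, value] of Object.entries(securityHeaders)) {
    res.setHeader(name, value);
  }
}
```

&lt;p&gt;In Express, the &lt;code&gt;helmet&lt;/code&gt; middleware sets most of these (plus several more) with sensible defaults in a single &lt;code&gt;app.use(helmet())&lt;/code&gt; call.&lt;/p&gt;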

&lt;h3&gt;
  
  
  Step 3: Audit Authentication and API Endpoints
&lt;/h3&gt;

&lt;p&gt;AI-generated auth code is particularly dangerous. Check for hardcoded secrets, missing token validation, and overly permissive CORS. If your app uses JWTs, decode them at &lt;a href="https://tools.pinusx.com/jwt" rel="noopener noreferrer"&gt;tools.pinusx.com/jwt&lt;/a&gt; to verify the algorithm, expiration, and claims are configured correctly. I wrote about &lt;a href="https://tools.pinusx.com/blog/jwt-security-best-practices-2026" rel="noopener noreferrer"&gt;JWT security best practices&lt;/a&gt; separately — the short version is: never trust the algorithm header from the token itself.&lt;/p&gt;
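
&lt;p&gt;Here is a decode-and-inspect sketch for the two most common JWT misconfigurations: a weak or &lt;code&gt;none&lt;/code&gt; algorithm, and a missing or expired &lt;code&gt;exp&lt;/code&gt; claim. It only inspects the token; full signature verification still belongs to a vetted library with a pinned algorithm. The function name and allowlist are my own choices:&lt;/p&gt;

```javascript
// Inspect a JWT without trusting it: flag disallowed algorithms and
// missing/expired exp claims. This does NOT verify the signature.
function inspectJwt(token, allowedAlgs = ["RS256", "ES256"]) {
  const parts = token.split(".");
  if (parts.length !== 3) return { ok: false, reason: "malformed token" };
  const header = JSON.parse(Buffer.from(parts[0], "base64url").toString());
  const payload = JSON.parse(Buffer.from(parts[1], "base64url").toString());
  // Never trust the alg the token declares: check it against YOUR allowlist.
  if (!allowedAlgs.includes(header.alg)) {
    return { ok: false, reason: "algorithm not in allowlist: " + header.alg };
  }
  // exp is in seconds since the epoch; reject missing or past expirations.
  if (!payload.exp || Date.now() >= payload.exp * 1000) {
    return { ok: false, reason: "missing or expired exp claim" };
  }
  return { ok: true, header, payload };
}
```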

&lt;h3&gt;
  
  
  Step 4: Test Your Endpoints
&lt;/h3&gt;

&lt;p&gt;Use an &lt;a href="https://tools.pinusx.com/api-tester" rel="noopener noreferrer"&gt;API testing tool&lt;/a&gt; to manually probe your endpoints. Try sending requests without authentication tokens. Try sending requests with modified payloads. Try SSRF payloads against any endpoint that accepts URLs. If your app accepts &lt;a href="https://tools.pinusx.com/webhooks" rel="noopener noreferrer"&gt;webhook callbacks&lt;/a&gt;, verify those endpoints validate signatures and reject replayed requests.&lt;/p&gt;
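
&lt;p&gt;One way to keep those probes systematic is to describe them as data and loop a fetch over them, asserting each gets rejected. The paths, payloads, and expected statuses below are placeholders for your app's actual routes:&lt;/p&gt;

```javascript
// Negative-test plan: each probe is a request that a secure app should
// reject. Paths and the SSRF target are illustrative placeholders.
function buildNegativeProbes(baseUrl) {
  return [
    {
      name: "missing auth token",
      method: "GET",
      url: baseUrl + "/api/me",
      expectStatus: [401],
    },
    {
      name: "tampered payload",
      method: "POST",
      url: baseUrl + "/api/orders",
      body: '{"price":0}',
      expectStatus: [400, 401, 403],
    },
    {
      name: "ssrf payload against a URL-accepting endpoint",
      method: "GET",
      url: baseUrl + "/api/fetch-url?url=http://169.254.169.254/",
      expectStatus: [400, 403],
    },
  ];
}
```

&lt;p&gt;Run each probe with &lt;code&gt;fetch&lt;/code&gt; and fail the check if the response status is not in &lt;code&gt;expectStatus&lt;/code&gt;; anything that returns 200 is a finding.&lt;/p&gt;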

&lt;h3&gt;
  
  
  Step 5: Lock Down Dependencies
&lt;/h3&gt;

&lt;p&gt;Run &lt;code&gt;npm audit&lt;/code&gt; or your language equivalent. Cross-reference your dependency list against known packages. If any dependency name looks unusual or has zero downloads, investigate — it may be a hallucinated package name that was squatted by an attacker.&lt;/p&gt;
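
&lt;p&gt;A small sketch of that cross-referencing step: list every declared dependency alongside its public npm registry URL so each name can be checked for existence and real download numbers. This is a manual-review aid, not a complete supply chain scanner:&lt;/p&gt;

```javascript
// Build a review checklist from package.json text: one entry per
// dependency, with the npm registry metadata URL to verify it against.
function dependencyChecklist(packageJsonText) {
  const pkg = JSON.parse(packageJsonText);
  // Merge runtime and dev dependencies; missing sections are ignored.
  const deps = Object.assign({}, pkg.dependencies, pkg.devDependencies);
  return Object.keys(deps).map((name) => ({
    name,
    registryUrl: "https://registry.npmjs.org/" + encodeURIComponent(name),
  }));
}
```

&lt;p&gt;Feed it the contents of your &lt;code&gt;package.json&lt;/code&gt; and visit (or fetch) each URL: a 404 or a near-zero download count on a name your AI tool suggested is a red flag.&lt;/p&gt;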

&lt;h2&gt;
  
  
  The Vibe Coding Security Checklist
&lt;/h2&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Check&lt;/th&gt;
&lt;th&gt;What to Look For&lt;/th&gt;
&lt;th&gt;AI Failure Rate&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;SSRF Protection&lt;/td&gt;&lt;td&gt;URL validation, IP blocking on fetch/request endpoints&lt;/td&gt;&lt;td&gt;100% fail&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;CSRF Tokens&lt;/td&gt;&lt;td&gt;Anti-CSRF tokens on all state-changing forms&lt;/td&gt;&lt;td&gt;100% fail&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Security Headers&lt;/td&gt;&lt;td&gt;CSP, HSTS, X-Frame-Options, X-Content-Type-Options&lt;/td&gt;&lt;td&gt;100% fail&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Rate Limiting&lt;/td&gt;&lt;td&gt;Request throttling on auth and API endpoints&lt;/td&gt;&lt;td&gt;93% fail&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;SQL Injection&lt;/td&gt;&lt;td&gt;Parameterized queries, no string concatenation&lt;/td&gt;&lt;td&gt;High&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;XSS Prevention&lt;/td&gt;&lt;td&gt;Output encoding, CSP, sanitized user input&lt;/td&gt;&lt;td&gt;2.74x vs human&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Secret Management&lt;/td&gt;&lt;td&gt;No hardcoded keys, env vars properly loaded&lt;/td&gt;&lt;td&gt;400+ exposed in 5,600 apps&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Auth Implementation&lt;/td&gt;&lt;td&gt;Proper token validation, secure session handling&lt;/td&gt;&lt;td&gt;High&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;h2&gt;
  
  
  Will AI Coding Tools Get More Secure?
&lt;/h2&gt;

&lt;p&gt;Probably. Eventually. The tool vendors are aware of these reports and are investing in security guardrails. But the timeline for "AI generates secure code by default" is measured in years, not months. The models need security-focused fine-tuning, the evaluation benchmarks need to include security metrics, and the training data problem has no quick fix.&lt;/p&gt;

&lt;p&gt;In the meantime, every developer using AI coding tools needs to treat the generated code exactly like code from a junior developer who has never heard of OWASP. Review it. Scan it. Test it. Do not assume that code which runs correctly is code that runs safely.&lt;/p&gt;

&lt;p&gt;The 69 vulnerabilities across 15 apps were not edge cases. They were the norm. The question is not whether your vibe-coded app has security vulnerabilities — it is how many, and whether you find them before someone else does.&lt;/p&gt;

&lt;p&gt;Start by scanning your codebase at &lt;a href="https://tools.pinusx.com/vibescan" rel="noopener noreferrer"&gt;tools.pinusx.com/vibescan&lt;/a&gt;. It takes less time than reading this article, and it might save you from a very bad day.&lt;/p&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What are the biggest vibe coding security risks?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The most critical vibe coding security risks are Server-Side Request Forgery (SSRF), missing CSRF protection, absent security headers, SQL injection via string concatenation, and cross-site scripting (XSS). The Tenzai study found that 100% of AI coding tools introduced SSRF and 0% of AI-generated apps included CSRF protection or security headers like Content-Security-Policy and Strict-Transport-Security.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How many vulnerabilities do AI coding tools produce?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Research shows consistently high vulnerability rates. The Tenzai study found 69 vulnerabilities across 15 apps built by five major AI tools. Veracode reported a 45% vulnerability rate across 80 AI coding tasks. Carnegie Mellon found that only 10.5% of AI-generated code is actually secure, even when 61% is functionally correct.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is Cursor safe to use for coding?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cursor is a capable coding tool but its generated code requires security review. The Tenzai study found 13 vulnerabilities in Cursor-generated apps. Additionally, CVE-2025-54135 identified a vulnerability in Cursor itself related to MCP server interactions. Use Cursor for productivity, but always run security scans on the output before deploying to production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I scan my vibe-coded app for security vulnerabilities?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use an automated security scanner designed for AI-generated code patterns. Tools like VibeScan check for the specific vulnerability classes AI tools produce most often — SSRF, XSS, SQL injection, missing security headers, exposed secrets, and CSRF gaps. You should also run &lt;code&gt;npm audit&lt;/code&gt; for dependency vulnerabilities and manually test authentication endpoints.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why does AI-generated code have more security vulnerabilities than human-written code?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI models learn from public repositories where most code is written without security review — tutorials, prototypes, and hobby projects. They optimize for functional correctness rather than security. CodeRabbit's analysis found AI-generated code contains 2.74 times more XSS vulnerabilities than human-written code. The models also hallucinate package names, creating supply chain attack vectors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can AI coding tools introduce supply chain vulnerabilities?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yes. AI models sometimes reference packages that do not exist. Attackers monitor for these hallucinated package names and register them on npm or PyPI with malicious code inside. When a developer installs dependencies from AI-generated configuration files, they unknowingly pull in the attacker's payload. Always verify that every dependency in your package.json or requirements.txt is a legitimate, well-known package.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What security headers should I add to my AI-generated web app?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At minimum, add Content-Security-Policy (prevents XSS), Strict-Transport-Security (forces HTTPS), X-Frame-Options set to DENY (prevents clickjacking), and X-Content-Type-Options set to nosniff (prevents MIME-type sniffing). The Tenzai study found that zero out of 15 AI-generated apps set any of these headers. Most web frameworks have middleware packages that add all of them in a few lines of code.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>security</category>
    </item>
    <item>
      <title>Webhook Security Best Practices for 2026: HMAC Verification, Replay Prevention &amp; Safe Debugging</title>
      <dc:creator>Hari Prakash</dc:creator>
      <pubDate>Fri, 06 Mar 2026 06:11:02 +0000</pubDate>
      <link>https://dev.to/hari_prakash_b0a882ec9225/webhook-security-best-practices-for-2026-hmac-verification-replay-prevention-safe-debugging-5gn</link>
      <guid>https://dev.to/hari_prakash_b0a882ec9225/webhook-security-best-practices-for-2026-hmac-verification-replay-prevention-safe-debugging-5gn</guid>
      <description>&lt;h2&gt;
  
  
  Webhook Security Best Practices Start Before Production
&lt;/h2&gt;

&lt;p&gt;Most webhook security guides jump straight to HMAC verification and TLS. That matters. But if you pasted a live webhook payload into an online JSON formatter last week, you already leaked your secrets before any of that kicked in.&lt;/p&gt;

&lt;p&gt;This guide covers &lt;strong&gt;webhook security best practices&lt;/strong&gt; across the full lifecycle — from signature verification to replay prevention to the debugging phase that most teams ignore. If you build or consume webhooks in 2026, this is the checklist.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Webhook Security Matters More in 2026
&lt;/h2&gt;

&lt;p&gt;API traffic now accounts for over 60% of all HTTP requests, according to Cloudflare's API security research. Event-driven architectures and microservices have made webhooks the default integration pattern. Every payment processor, CI/CD pipeline, and SaaS platform fires them.&lt;/p&gt;

&lt;p&gt;But webhook endpoints are inbound HTTP calls from external systems. They're attack surface you didn't build — you just agreed to accept it. And the tooling developers use to inspect those payloads often introduces risk that never shows up in a threat model.&lt;/p&gt;

&lt;p&gt;Case in point: jsonformatter.org was found to have leaked 80,000+ user credentials by sending pasted content server-side. Developers debugging webhook payloads were unknowingly handing secrets to a third party.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Verify HMAC Signatures on Every Request
&lt;/h2&gt;

&lt;p&gt;Every major webhook provider — Stripe, GitHub, Twilio — signs payloads with HMAC-SHA256. Your endpoint must verify that signature before processing anything.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use a &lt;strong&gt;constant-time comparison&lt;/strong&gt; function (e.g., &lt;code&gt;crypto.timingSafeEqual&lt;/code&gt; in Node.js) to prevent timing attacks.&lt;/li&gt;
&lt;li&gt;Verify against the &lt;strong&gt;raw request body&lt;/strong&gt;, not a parsed/re-serialized version. JSON key ordering matters.&lt;/li&gt;
&lt;li&gt;Reject any request where the signature header is missing or invalid. No fallback. No "log and continue."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Estimates suggest fewer than 30% of webhook integrations actually verify signatures in production. If yours doesn't, fix it today. Need to verify a hash locally? The &lt;a href="https://dev.to/hash-generator"&gt;PinusX Hash Generator&lt;/a&gt; computes HMAC-SHA256 digests entirely in your browser — no data sent to any server.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Prevent Replay Attacks with Timestamp Validation
&lt;/h2&gt;

&lt;p&gt;A valid signature doesn't mean a fresh request. Attackers can capture a signed webhook and replay it. Defend against this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Parse the timestamp from the webhook header (most providers include one).&lt;/li&gt;
&lt;li&gt;Reject any payload older than &lt;strong&gt;5 minutes&lt;/strong&gt;. OWASP recommends this as the maximum tolerance window.&lt;/li&gt;
&lt;li&gt;Combine timestamp checks with &lt;strong&gt;idempotency keys&lt;/strong&gt; — store processed event IDs and skip duplicates.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This blocks both malicious replays and accidental retries from providers with aggressive retry policies.&lt;/p&gt;
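&lt;p&gt;Both checks fit in one small gate. In this sketch, time is injected for testability and an in-memory &lt;code&gt;Set&lt;/code&gt; stands in for the real seen-ID store (Redis or a database in production):&lt;br&gt;
&lt;/p&gt;

```javascript
// Reject stale deliveries and de-duplicate retries in one gate.
const TOLERANCE_MS = 5 * 60 * 1000; // the 5-minute tolerance window

const seenEventIds = new Set(); // swap for Redis/DB in production

function acceptWebhook(eventId, timestampMs, nowMs = Date.now()) {
  if (Math.abs(nowMs - timestampMs) > TOLERANCE_MS) return false; // too old, or clock-skewed
  if (seenEventIds.has(eventId)) return false; // replay or provider retry
  seenEventIds.add(eventId);
  return true;
}
```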

&lt;h2&gt;
  
  
  3. Harden Your Webhook Endpoint
&lt;/h2&gt;

&lt;p&gt;Your endpoint is a public URL accepting POST requests from the internet. Lock it down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;IP allowlisting&lt;/strong&gt;: If the provider publishes its source IP ranges (Stripe and GitHub both do), restrict inbound traffic to those ranges at the network level.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rate limiting&lt;/strong&gt;: Cap requests per second to prevent abuse. A legitimate provider won't burst 1,000 events per second to a single endpoint.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSRF protection&lt;/strong&gt;: If your webhook handler fetches URLs from the payload (e.g., downloading an attachment), validate and sanitize those URLs. Block private IP ranges and internal hostnames.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;mTLS&lt;/strong&gt;: For high-security integrations, require mutual TLS authentication. The provider presents a client certificate your server validates.&lt;/li&gt;
&lt;/ul&gt;
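&lt;p&gt;For the SSRF point specifically, here's a minimal URL guard. It only covers the obvious cases with string and prefix checks; a production guard must also resolve DNS and re-check the resulting IP to defeat rebinding:&lt;br&gt;
&lt;/p&gt;

```javascript
// Block URLs from webhook payloads that point at internal infrastructure.
function isSafeWebhookUrl(input) {
  let url;
  try { url = new URL(input); } catch { return false; } // unparseable: reject
  if (url.protocol !== 'https:') return false;          // require TLS
  const host = url.hostname.toLowerCase();
  if (host.startsWith('[')) return false;               // raw IPv6 literals: reject for simplicity
  if (host === 'localhost' || host.endsWith('.internal')) return false;
  if (/^(10\.|127\.|192\.168\.|169\.254\.|0\.)/.test(host)) return false; // private, loopback, link-local
  if (/^172\.(1[6-9]|2\d|3[01])\./.test(host)) return false;              // 172.16.0.0/12
  return true;
}
```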

&lt;h2&gt;
  
  
  4. Rotate Webhook Secrets Regularly
&lt;/h2&gt;

&lt;p&gt;Webhook signing secrets are credentials. Treat them like passwords:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rotate every 90 days at minimum.&lt;/li&gt;
&lt;li&gt;Support &lt;strong&gt;dual-secret validation&lt;/strong&gt; during rotation — accept signatures from both the old and new secret for a short overlap window.&lt;/li&gt;
&lt;li&gt;Store secrets in a vault (AWS Secrets Manager, HashiCorp Vault), not hardcoded into environment files or deployment scripts.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Webhook Security Best Practices for the Debugging Phase
&lt;/h2&gt;

&lt;p&gt;Here's the gap in most guides. Before your endpoint is production-ready, you're inspecting payloads manually. You're copying JSON from provider dashboards, pasting it into online tools, tweaking fields, and testing locally.&lt;/p&gt;

&lt;p&gt;Every time you paste a webhook payload into a server-side tool, you risk exposing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API keys and bearer tokens in headers&lt;/li&gt;
&lt;li&gt;Customer PII in the payload body&lt;/li&gt;
&lt;li&gt;Signing secrets in metadata fields&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The fix is simple: &lt;strong&gt;use tools that process data client-side only&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/webhook-tester"&gt;PinusX Webhook Tester&lt;/a&gt; runs entirely in your browser. No data leaves your machine. You can inspect, format, and validate webhook payloads without sending a single byte to a remote server.&lt;/p&gt;

&lt;p&gt;Need to format the JSON payload before analysis? The &lt;a href="https://dev.to/json-formatter"&gt;JSON Formatter&lt;/a&gt; works the same way — fully client-side, zero server transmission.&lt;/p&gt;

&lt;h2&gt;
  
  
  Webhook Endpoint Security Checklist
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;✅ HMAC signature verification with constant-time comparison&lt;/li&gt;
&lt;li&gt;✅ Timestamp validation (5-minute max tolerance)&lt;/li&gt;
&lt;li&gt;✅ Idempotency keys to prevent duplicate processing&lt;/li&gt;
&lt;li&gt;✅ IP allowlisting where provider supports it&lt;/li&gt;
&lt;li&gt;✅ Rate limiting on webhook endpoints&lt;/li&gt;
&lt;li&gt;✅ SSRF protection on any URL fetching&lt;/li&gt;
&lt;li&gt;✅ Secret rotation every 90 days with dual-secret overlap&lt;/li&gt;
&lt;li&gt;✅ Client-side-only tools for payload debugging&lt;/li&gt;
&lt;li&gt;✅ mTLS for high-security integrations&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is the most common webhook security mistake?
&lt;/h3&gt;

&lt;p&gt;Not verifying HMAC signatures. Many teams skip signature verification during development and never enable it in production. This means any attacker who discovers your endpoint URL can send forged payloads. Always verify the signature against the raw request body using a constant-time comparison function.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is it safe to paste webhook payloads into online JSON tools?
&lt;/h3&gt;

&lt;p&gt;Not if the tool sends data to a server. Most popular online formatters transmit your input for server-side processing, which means webhook secrets, API keys, and customer data in the payload are exposed to a third party. Use a client-side tool like &lt;a href="https://dev.to/webhook-tester"&gt;PinusX Webhook Tester&lt;/a&gt; that processes everything in your browser.&lt;/p&gt;

&lt;h3&gt;
  
  
  How often should I rotate webhook signing secrets?
&lt;/h3&gt;

&lt;p&gt;Every 90 days at minimum. Use dual-secret validation during the rotation window so you can update the secret on both sides without downtime. Store secrets in a dedicated secrets manager, not in config files or code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stop Leaking Webhook Secrets
&lt;/h2&gt;

&lt;p&gt;You can nail every server-side security control and still leak credentials during debugging. The tools you use to inspect payloads matter as much as the code that processes them. Try the &lt;a href="https://dev.to/webhook-tester"&gt;PinusX Webhook Tester&lt;/a&gt; — inspect, format, and validate webhook payloads without your data ever leaving the browser.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>security</category>
    </item>
    <item>
      <title>API Rate Limiting Best Practices: Algorithms, Headers &amp; Implementation Guide for 2026</title>
      <dc:creator>Hari Prakash</dc:creator>
      <pubDate>Wed, 04 Mar 2026 16:03:13 +0000</pubDate>
      <link>https://dev.to/hari_prakash_b0a882ec9225/api-rate-limiting-best-practices-algorithms-headers-implementation-guide-for-2026-2mjn</link>
      <guid>https://dev.to/hari_prakash_b0a882ec9225/api-rate-limiting-best-practices-algorithms-headers-implementation-guide-for-2026-2mjn</guid>
      <description>&lt;h2&gt;
  
  
  API Rate Limiting Best Practices: Start with the Right Algorithm
&lt;/h2&gt;

&lt;p&gt;Every API you expose without rate limiting is an open invitation. Unrestricted resource consumption sits at &lt;strong&gt;#4 on the OWASP API Security Top 10 (2023)&lt;/strong&gt;, and with AI-driven API traffic surging — think LLM orchestration, agent-to-agent calls, retrieval-augmented generation pipelines — the attack surface has only grown. This guide covers API rate limiting best practices you can implement today: algorithms, headers, code patterns, and a hands-on testing workflow using webhook endpoints.&lt;/p&gt;

&lt;p&gt;89% of developers consider APIs critical to business strategy, according to Postman's 2023 State of APIs report surveying ~40,000 respondents. If APIs are critical, protecting them is non-negotiable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three Rate Limiter Algorithms You Should Know
&lt;/h2&gt;

&lt;p&gt;Most production rate limiters use one of three algorithms. Each makes different tradeoffs between simplicity, fairness, and burst tolerance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Token Bucket
&lt;/h3&gt;

&lt;p&gt;A bucket holds N tokens. Each request consumes one token. Tokens refill at a fixed rate. If the bucket is empty, the request is rejected with HTTP 429.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt; Allows short bursts up to bucket capacity. Simple to implement.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt; Can permit brief traffic spikes that overwhelm downstream services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use when:&lt;/strong&gt; You want to allow occasional bursts (e.g., a user loading a dashboard that fires 10 API calls at once).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Pseudocode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tokens = min(maxTokens, tokens + (elapsed * refillRate))
if tokens &amp;gt;= 1:
    tokens -= 1
    return allow_request()
else:
    return respond(429, {"Retry-After": reset_time})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
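&lt;p&gt;The same logic as a runnable JavaScript class, with time passed in explicitly so the behavior is deterministic:&lt;br&gt;
&lt;/p&gt;

```javascript
// Runnable version of the token-bucket pseudocode above.
class TokenBucket {
  constructor(maxTokens, refillPerSec) {
    this.maxTokens = maxTokens;
    this.refillPerSec = refillPerSec;
    this.tokens = maxTokens; // start full: allows an initial burst
    this.last = 0;
  }
  allow(nowSec) {
    const elapsed = nowSec - this.last;
    this.last = nowSec;
    this.tokens = Math.min(this.maxTokens, this.tokens + elapsed * this.refillPerSec);
    if (this.tokens >= 1) { this.tokens -= 1; return true; }
    return false; // caller responds 429 with Retry-After
  }
}
```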



&lt;h3&gt;
  
  
  Sliding Window Log
&lt;/h3&gt;

&lt;p&gt;Store a timestamp for every request. Count requests within the last N seconds. If the count exceeds the limit, reject.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt; Precise. No boundary-crossing exploits like fixed windows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt; Memory-intensive — you store every timestamp per client.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use when:&lt;/strong&gt; Accuracy matters more than memory (low-volume, high-value APIs).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
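&lt;p&gt;A sliding window log is equally short to sketch. Note the per-client timestamp array: that array is exactly the memory cost described above:&lt;br&gt;
&lt;/p&gt;

```javascript
// Sliding-window-log limiter: one stored timestamp per accepted request.
class SlidingWindowLog {
  constructor(limit, windowSec) {
    this.limit = limit;
    this.windowSec = windowSec;
    this.log = []; // timestamps of accepted requests, oldest first
  }
  allow(nowSec) {
    const cutoff = nowSec - this.windowSec;
    while (this.log.length && this.log[0] <= cutoff) this.log.shift(); // prune expired
    if (this.log.length < this.limit) { this.log.push(nowSec); return true; }
    return false;
  }
}
```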

&lt;h3&gt;
  
  
  Leaky Bucket
&lt;/h3&gt;

&lt;p&gt;Requests enter a queue (the bucket). The queue drains at a constant rate. If the queue is full, new requests are dropped.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt; Smooths output to a perfectly consistent rate. Great for downstream protection.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt; Adds latency — requests wait in queue. Bursts are penalized.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use when:&lt;/strong&gt; You need to guarantee a steady request flow to a fragile backend.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
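&lt;p&gt;A leaky bucket can be sketched as a bounded level that drains at a fixed rate; a real implementation would also queue the admitted request rather than just count it:&lt;br&gt;
&lt;/p&gt;

```javascript
// Leaky bucket: the level drains at leakPerSec; a full bucket drops requests.
class LeakyBucket {
  constructor(capacity, leakPerSec) {
    this.capacity = capacity;
    this.leakPerSec = leakPerSec;
    this.level = 0; // requests currently queued
    this.last = 0;
  }
  offer(nowSec) {
    this.level = Math.max(0, this.level - (nowSec - this.last) * this.leakPerSec);
    this.last = nowSec;
    if (this.level < this.capacity) { this.level += 1; return true; }
    return false; // bucket full: drop the request
  }
}
```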

&lt;h2&gt;
  
  
  API Rate Limiting Best Practices for Headers and Response Codes
&lt;/h2&gt;

&lt;p&gt;RFC 6585 standardized &lt;strong&gt;HTTP 429 Too Many Requests&lt;/strong&gt; as the universal signal for rate-limited responses. But returning 429 alone is not enough. Good rate limiting communicates state to the client.&lt;/p&gt;

&lt;p&gt;Include these headers in every response — not just 429s:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;X-RateLimit-Limit:&lt;/strong&gt; Maximum requests allowed in the current window.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;X-RateLimit-Remaining:&lt;/strong&gt; Requests left before throttling kicks in.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;X-RateLimit-Reset:&lt;/strong&gt; Unix timestamp when the window resets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Retry-After:&lt;/strong&gt; Seconds until the client should retry (required on 429 responses per RFC 6585).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Clients that respect these headers build resilient integrations. Clients that don't get cut off. Either way, your API stays healthy. Use the &lt;a href="https://dev.to/http-status"&gt;HTTP Status Codes&lt;/a&gt; reference to verify you're returning the correct status codes across your API, and when you inspect the &lt;a href="https://dev.to/json-formatter"&gt;JSON response bodies&lt;/a&gt; your rate limiter returns, make sure the error payload includes a human-readable message alongside the machine-readable headers.&lt;/p&gt;
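&lt;p&gt;As a sketch, here's a small helper that attaches those headers to every response; the header names follow the convention above, and the response object shape is an Express-style assumption:&lt;br&gt;
&lt;/p&gt;

```javascript
// Attach rate-limit state to EVERY response, not just 429s.
function withRateLimitHeaders(res, { limit, remaining, resetUnix }) {
  res.setHeader('X-RateLimit-Limit', String(limit));
  res.setHeader('X-RateLimit-Remaining', String(remaining));
  res.setHeader('X-RateLimit-Reset', String(resetUnix));
  if (remaining === 0) {
    // Retry-After is required on 429 responses per RFC 6585
    const waitSec = Math.max(0, resetUnix - Math.floor(Date.now() / 1000));
    res.setHeader('Retry-After', String(waitSec));
    res.statusCode = 429;
  }
}
```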

&lt;h2&gt;
  
  
  Choosing a Rate Limiting Strategy by Use Case
&lt;/h2&gt;

&lt;p&gt;There is no universal "best" algorithm. Match the strategy to the problem:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Public API with free tier:&lt;/strong&gt; Token bucket. Allow bursts, enforce daily quotas.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Payment processing endpoint:&lt;/strong&gt; Sliding window. Precision prevents abuse at the boundary.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Webhook delivery pipeline:&lt;/strong&gt; Leaky bucket. Smooth output protects the receiver.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API gateway (multi-tenant):&lt;/strong&gt; Sliding window + per-tenant quotas. Isolate noisy neighbors.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For distributed systems, you'll need a shared store — Redis is the standard choice. Use &lt;strong&gt;MULTI/EXEC&lt;/strong&gt; or Lua scripts to make rate limit checks atomic. A race condition in your rate limiter is worse than no rate limiter at all. Generate unique client identifiers with a &lt;a href="https://dev.to/uuid-generator"&gt;UUID Generator&lt;/a&gt; to use as rate limit keys when API keys aren't available.&lt;/p&gt;
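&lt;p&gt;The atomicity requirement is easiest to see in code. This in-memory sketch does the read, window check, and increment as one synchronous step, which is the guarantee a Redis Lua script (or MULTI/EXEC) gives you across instances:&lt;br&gt;
&lt;/p&gt;

```javascript
// Fixed-window counter where check-and-increment cannot interleave.
const windows = new Map(); // key -> { count, windowStart }

function atomicHit(key, limit, windowSec, nowSec) {
  const w = windows.get(key);
  if (!w || nowSec - w.windowStart >= windowSec) {
    windows.set(key, { count: 1, windowStart: nowSec }); // fresh window
    return true;
  }
  if (w.count < limit) { w.count += 1; return true; }
  return false; // over limit for this window
}
```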

&lt;h2&gt;
  
  
  Testing Your Rate Limiter with Webhooks
&lt;/h2&gt;

&lt;p&gt;Writing a rate limiter is half the job. Proving it works under load is the other half.&lt;/p&gt;

&lt;p&gt;Here's a practical workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Set up a temporary endpoint using the &lt;a href="https://dev.to/webhooks"&gt;Webhook Tester&lt;/a&gt;. This gives you a URL that captures every incoming request with full headers and timestamps.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Point your rate-limited API's outbound calls (or a test client) at the webhook URL.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Fire requests in bursts — 10 in 1 second, 50 in 5 seconds, 200 in 60 seconds. Vary the pattern.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Inspect the webhook logs. Verify that requests beyond your limit return 429. Check that &lt;strong&gt;Retry-After&lt;/strong&gt; and &lt;strong&gt;X-RateLimit-Remaining&lt;/strong&gt; headers decrement correctly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; Confirm that after the reset window, requests succeed again.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach catches off-by-one errors, boundary-crossing bugs, and clock drift issues that unit tests miss. All data stays client-side — no third-party service sees your API traffic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Mistakes to Avoid
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Rate limiting by IP only:&lt;/strong&gt; NAT and shared proxies mean thousands of users can share one IP. Use API keys or authenticated user IDs as the primary identifier.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ignoring distributed deployments:&lt;/strong&gt; If your API runs on 4 instances and each has an in-memory rate limiter set to 100 req/min, your actual limit is 400. Centralize the counter.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No backpressure signals:&lt;/strong&gt; Dropping requests silently (returning 200 but doing nothing) breaks client trust. Always return 429 with clear headers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fixed windows without overlap:&lt;/strong&gt; A client can send 100 requests at 11:59:59 and 100 more at 12:00:01 — hitting 200 in 2 seconds under a "100 per minute" fixed window. Use sliding windows to prevent this.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is the best rate limiting algorithm for a REST API?
&lt;/h3&gt;

&lt;p&gt;There is no single best algorithm. Token bucket works well for most REST APIs because it allows short bursts while enforcing average throughput. For strict compliance requirements — payment APIs, healthcare data — sliding window log provides the most accurate counting with no boundary exploits.&lt;/p&gt;

&lt;h3&gt;
  
  
  How do I test if my rate limiter is working correctly?
&lt;/h3&gt;

&lt;p&gt;Send controlled bursts of requests and inspect the responses. Use a &lt;a href="https://dev.to/webhooks"&gt;Webhook Tester&lt;/a&gt; to capture outbound requests with full headers. Verify that requests beyond your limit return HTTP 429 with a valid Retry-After header, and that the X-RateLimit-Remaining counter decrements correctly with each request.&lt;/p&gt;

&lt;h3&gt;
  
  
  Should I rate limit by IP address or API key?
&lt;/h3&gt;

&lt;p&gt;Prefer API key or authenticated user ID. IP-based limiting breaks for users behind shared NATs, corporate proxies, or VPNs. Use IP as a secondary layer for unauthenticated endpoints, but never as the sole identifier for authenticated APIs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ship a Rate Limiter That Actually Works
&lt;/h2&gt;

&lt;p&gt;Rate limiting is not a "set and forget" feature. It requires testing under realistic conditions, clear communication via headers, and the right algorithm for your specific traffic pattern. Start by testing your current implementation with the &lt;a href="https://dev.to/webhooks"&gt;Webhook Tester&lt;/a&gt; — fire some bursts, inspect the headers, and find the gaps before your users (or attackers) do.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>api</category>
    </item>
    <item>
      <title>n8n Webhook Vulnerability CVE-2026-21858: Content-Type Trick to Full RCE</title>
      <dc:creator>Hari Prakash</dc:creator>
      <pubDate>Fri, 27 Feb 2026 22:57:09 +0000</pubDate>
      <link>https://dev.to/hari_prakash_b0a882ec9225/n8n-webhook-vulnerability-cve-2026-21858-content-type-trick-to-full-rce-2nn1</link>
      <guid>https://dev.to/hari_prakash_b0a882ec9225/n8n-webhook-vulnerability-cve-2026-21858-content-type-trick-to-full-rce-2nn1</guid>
      <description>&lt;p&gt;A single malformed Content-Type header. That's all it takes to go from zero access to full remote code execution on roughly 100,000 self-hosted n8n servers. &lt;strong&gt;CVE-2026-21858&lt;/strong&gt; — the &lt;strong&gt;n8n webhook vulnerability&lt;/strong&gt; disclosed on January 7, 2026 — carries a CVSS score of 10.0, the maximum possible severity rating. No authentication required. No user interaction needed. If your n8n instance has a Form Webhook node exposed to the internet, an attacker can read arbitrary files from your server, forge an admin session cookie, and execute any operating system command they want.&lt;/p&gt;

&lt;p&gt;Cyera Research Labs discovered the vulnerability and named it "Ni8mare" — a fitting name for what is arguably the worst security flaw in n8n's history. The exploit chain is elegant in the worst possible way: it turns a content parsing oversight into complete server takeover in three HTTP requests.&lt;/p&gt;

&lt;h2&gt;
  
  
  How the n8n Webhook Vulnerability Exploit Chain Works
&lt;/h2&gt;

&lt;p&gt;The attack exploits how n8n's Form Webhook node processes incoming HTTP requests. Normally, this endpoint expects &lt;code&gt;multipart/form-data&lt;/code&gt; submissions from web forms. But the request parser doesn't properly validate the Content-Type header, and that single oversight creates a path traversal vulnerability that reads arbitrary files from the filesystem.&lt;/p&gt;

&lt;p&gt;Here's the three-step chain, broken down:&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Arbitrary File Read via Content-Type Confusion
&lt;/h3&gt;

&lt;p&gt;The attacker sends a crafted HTTP request to any active Form Webhook endpoint. By manipulating the Content-Type header's boundary parameter and injecting path traversal sequences, the multipart parser follows a filename reference to an arbitrary file on disk. The parser treats the file contents as form data and reflects them back in the response or makes them accessible through the workflow.&lt;/p&gt;

&lt;p&gt;The first target is always the same:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# The attacker's first read target
/home/node/.n8n/database.sqlite
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is n8n's SQLite database. It contains everything: user accounts, hashed passwords, workflow definitions, credentials, and — critically — the admin user's session data. The attacker doesn't need to brute-force anything. The database hands them the admin password hash directly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Admin Cookie Forgery
&lt;/h3&gt;

&lt;p&gt;Next, the attacker reads the n8n configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Second file read
/home/node/.n8n/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This file contains the encryption key and session secret that n8n uses to sign cookies and encrypt stored credentials. With the admin's password hash from the database and the signing secret from the config, the attacker forges a valid admin session cookie. No password cracking required — they have everything they need to create a cookie the server will trust.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Remote Code Execution via Execute Command Node
&lt;/h3&gt;

&lt;p&gt;With admin access to the n8n dashboard, the attacker creates a new workflow containing an Execute Command node — n8n's built-in node that runs arbitrary shell commands on the host operating system. They trigger the workflow, and they now have a fully interactive shell on your server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// What the attacker's workflow node looks like
{
  "nodes": [
    {
      "type": "n8n-nodes-base.executeCommand",
      "parameters": {
        "command": "cat /etc/passwd &amp;amp;&amp;amp; whoami &amp;amp;&amp;amp; id"
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From here, it's standard post-exploitation: data exfiltration, lateral movement, persistence mechanisms, cryptocurrency miners — whatever the attacker wants. In typical Docker deployments the n8n process already holds all the privileges the attacker needs inside the container, so little or no privilege escalation is required.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Content-Type Parsing Is Security-Critical
&lt;/h2&gt;

&lt;p&gt;This vulnerability exists because of a fundamental truth that many developers overlook: &lt;strong&gt;request parsing is a security boundary&lt;/strong&gt;. Every HTTP header your server interprets is an attack surface. Content-Type determines how your application reads the request body — and if that parsing logic has flaws, an attacker controls how your server interprets incoming data.&lt;/p&gt;

&lt;p&gt;The n8n case isn't unique. Content-Type confusion vulnerabilities have appeared in Express.js body parsers, Apache Struts (the Equifax breach started this way), and dozens of other frameworks. The pattern is always the same: the parser trusts the Content-Type header without sufficient validation, and an attacker exploits that trust to make the parser do something unintended.&lt;/p&gt;

&lt;p&gt;Common Content-Type parsing mistakes that lead to vulnerabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Accepting any Content-Type on endpoints that expect a specific format.&lt;/strong&gt; If your endpoint only handles JSON, reject anything that isn't &lt;code&gt;application/json&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Parsing multipart boundaries without sanitizing path characters.&lt;/strong&gt; The CVE-2026-21858 vector — boundary parameters containing &lt;code&gt;../&lt;/code&gt; sequences should never reach the filesystem.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Using the raw Content-Type header in file operations.&lt;/strong&gt; Parameters extracted from headers must be treated as untrusted input, period.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Falling back to a permissive parser when the Content-Type doesn't match.&lt;/strong&gt; Strict rejection is safer than best-effort parsing.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Bigger n8n Attack Surface Problem
&lt;/h2&gt;

&lt;p&gt;CVE-2026-21858 didn't arrive alone. Within 30 days of its disclosure, three additional n8n CVEs dropped: CVE-2026-27577, CVE-2026-27578, and CVE-2026-21877. All expand the webhook-related attack surface. And this follows two prior n8n CVEs from 2025 — CVE-2025-68668 and CVE-2025-68613.&lt;/p&gt;

&lt;p&gt;n8n has over 100 million Docker pulls. It's one of the most widely deployed workflow automation platforms in existence. The self-hosted model means patches don't auto-deploy — each operator has to manually upgrade. The fix has been available since n8n v1.121.0 (released November 18, 2025), but security researchers estimate that tens of thousands of instances remain unpatched months later.&lt;/p&gt;

&lt;p&gt;If you run n8n, check your version right now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Check your n8n version
n8n --version

# Or via Docker
docker exec your-n8n-container n8n --version

# If below 1.121.0, upgrade immediately
npm install -g n8n@latest
# Or pull the latest Docker image
docker pull n8nio/n8n:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Test Your Webhook Endpoints for Content-Type Validation
&lt;/h2&gt;

&lt;p&gt;Whether you use n8n or any other webhook handler, you should verify that your endpoints properly validate Content-Type headers. The easiest way to test this is to send crafted requests to your own endpoints and confirm they reject unexpected content types.&lt;/p&gt;

&lt;p&gt;I built &lt;a href="https://dev.to/webhooks"&gt;PinusX Webhook Inspector&lt;/a&gt; specifically for this kind of testing. You get a unique URL, send requests to it, and inspect every detail of what arrives — headers, body, content type, everything in real time. Here's how to use it to test for Content-Type confusion vulnerabilities:&lt;/p&gt;

&lt;p&gt;Point a test client at your own endpoint, send a request with a deliberately malformed Content-Type boundary, and confirm the server rejects it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Test: Send a request with a suspicious Content-Type boundary
curl -X POST https://your-webhook-endpoint.com/hook \
  -H "Content-Type: multipart/form-data; boundary=----WebKitFormBoundary../../etc/passwd" \
  -d 'test payload'

# Expected: 400 Bad Request or 415 Unsupported Media Type
# If your server processes this normally, you have a problem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key insight: your webhook handler should validate Content-Type &lt;em&gt;before&lt;/em&gt; parsing the body. If the Content-Type doesn't exactly match what you expect, return an error and don't attempt to parse. This is basic input validation, but it's the exact check that was missing in n8n.&lt;/p&gt;
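&lt;p&gt;A sketch of that pre-parse check in Node.js. The exact status codes are a choice (415 for a wrong media type and 400 for a malformed header are common conventions):&lt;br&gt;
&lt;/p&gt;

```javascript
// Validate Content-Type BEFORE parsing: exact media-type match, and no
// parameter may carry path characters toward the filesystem.
function checkContentType(contentType, expected) {
  if (!contentType) return { ok: false, status: 415 };
  const [mediaType, ...params] = contentType.split(';').map((s) => s.trim());
  if (mediaType.toLowerCase() !== expected) return { ok: false, status: 415 };
  // Reject path traversal characters in any parameter (the CVE-2026-21858 vector).
  if (params.some((p) => /[\/\\]|\.\./.test(p))) return { ok: false, status: 400 };
  return { ok: true };
}
```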

&lt;h2&gt;
  
  
  Hardening Your Webhook Handlers
&lt;/h2&gt;

&lt;p&gt;Beyond Content-Type validation, every webhook endpoint should implement the security controls from our &lt;a href="https://dev.to/blog/webhook-security-checklist"&gt;webhook security checklist&lt;/a&gt;: HMAC signature validation, timestamp verification, replay protection, and IP whitelisting where possible. You can also use PinusX's &lt;a href="https://dev.to/hash"&gt;Hash Generator&lt;/a&gt; to verify HMAC calculations when debugging signature validation logic.&lt;/p&gt;

&lt;p&gt;For n8n specifically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Upgrade to v1.121.0 or later.&lt;/strong&gt; This is non-negotiable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Don't expose n8n directly to the internet.&lt;/strong&gt; Put it behind a reverse proxy with authentication.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Restrict network access to webhook endpoints.&lt;/strong&gt; If only specific services send webhooks, whitelist their IPs at the firewall level.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Disable the Execute Command node&lt;/strong&gt; if you don't need it. n8n allows node type restrictions in the configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitor for unauthorized workflows.&lt;/strong&gt; If an attacker gained access before you patched, check for workflows you didn't create.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is CVE-2026-21858 and how severe is it?
&lt;/h3&gt;

&lt;p&gt;CVE-2026-21858 is an unauthenticated remote code execution vulnerability in n8n's Form Webhook endpoint. It carries a CVSS score of 10.0 — the maximum possible severity. An attacker can exploit it without any credentials to read arbitrary files from the server, forge admin session cookies, and execute operating system commands. All n8n versions before 1.121.0 are affected.&lt;/p&gt;

&lt;h3&gt;
  
  
  How do I know if my n8n instance is vulnerable?
&lt;/h3&gt;

&lt;p&gt;Run &lt;code&gt;n8n --version&lt;/code&gt; or check your Docker image tag. Any version below 1.121.0 is vulnerable. If your n8n instance has any Form Webhook or Webhook node that is publicly reachable, it can be exploited without authentication. Upgrade immediately to the latest version and audit your workflows for any unauthorized Execute Command nodes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is n8n safe to self-host after this vulnerability?
&lt;/h3&gt;

&lt;p&gt;n8n is safe to self-host if you keep it updated and follow security best practices. The patch in v1.121.0 fixes CVE-2026-21858. However, self-hosting any workflow automation tool means you're responsible for timely patching, network segmentation, and access control. Never expose n8n directly to the public internet without a reverse proxy and authentication layer in front of it.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>JWT Algorithm Confusion Attack: Two Active CVEs in 2026</title>
      <dc:creator>Hari Prakash</dc:creator>
      <pubDate>Thu, 26 Feb 2026 01:10:30 +0000</pubDate>
      <link>https://dev.to/hari_prakash_b0a882ec9225/jwt-algorithm-confusion-attack-two-active-cves-in-2026-7bc</link>
      <guid>https://dev.to/hari_prakash_b0a882ec9225/jwt-algorithm-confusion-attack-two-active-cves-in-2026-7bc</guid>
      <description>&lt;p&gt;Two &lt;strong&gt;JWT algorithm confusion attack&lt;/strong&gt; CVEs dropped in January 2026, both with public proof-of-concept exploits, both exploiting the exact same root cause: JWT libraries that let the token's own &lt;code&gt;alg&lt;/code&gt; header dictate how signature verification works. CVE-2026-22817 hit Hono — one of the fastest-growing edge-runtime frameworks — with a CVSS score of 8.2. CVE-2026-23993 hit HarbourJwt, a Go library, with a bypass so simple it requires zero cryptographic knowledge. If you run anything that validates JWTs, this is your wake-up call to check whether your library actually pins the algorithm.&lt;/p&gt;

&lt;p&gt;I spent a morning decoding forged tokens from both POC exploits using the &lt;a href="https://dev.to/jwt"&gt;PinusX JWT Decoder&lt;/a&gt;, and the signatures of a weaponized JWT are obvious once you know what to look for. Here's the breakdown.&lt;/p&gt;

&lt;h2&gt;
  
  
  CVE-2026-22817: Hono's RS256-to-HS256 Swap (CVSS 8.2)
&lt;/h2&gt;

&lt;p&gt;Hono is a lightweight web framework that runs on Cloudflare Workers, Deno, Bun, and Node.js. Its built-in JWT middleware, in all versions before 4.11.4, was vulnerable to a classic &lt;strong&gt;JWT algorithm confusion attack&lt;/strong&gt;: it trusted the &lt;code&gt;alg&lt;/code&gt; field in the token header to decide which verification method to use.&lt;/p&gt;

&lt;p&gt;The attack works like this. Your server signs tokens with RS256 — RSA private key signs, RSA public key verifies. The public key is, well, public. An attacker grabs it, then crafts a new JWT with two changes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The header's &lt;code&gt;alg&lt;/code&gt; field is switched from &lt;code&gt;RS256&lt;/code&gt; to &lt;code&gt;HS256&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The token is signed with HMAC-SHA256, using the server's RSA public key as the HMAC secret.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When the vulnerable Hono middleware receives this token, it reads the &lt;code&gt;alg&lt;/code&gt; header, sees HS256, and switches to HMAC verification — using the public key as the HMAC secret. The signature matches. The attacker's forged claims are now trusted.&lt;/p&gt;

&lt;p&gt;This is classified as CWE-347 (Improper Verification of Cryptographic Signature) and affects every Hono deployment on every runtime prior to version 4.11.4. The POC exploit is already public, so this isn't theoretical.&lt;/p&gt;

&lt;h3&gt;
  
  
  What a Weaponized Hono Token Looks Like
&lt;/h3&gt;

&lt;p&gt;Paste the forged token into a &lt;a href="https://tools.pinusx.com/jwt" rel="noopener noreferrer"&gt;JWT decoder&lt;/a&gt; and the header immediately gives it away:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Normal token header from your server
{
  "alg": "RS256",
  "typ": "JWT"
}

// Forged token header — the red flag
{
  "alg": "HS256",
  "typ": "JWT"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If your server is configured for RS256 but you see HS256 in a decoded token, something is wrong. Either an attacker is probing your system, or your library is silently accepting algorithm swaps.&lt;/p&gt;

&lt;h2&gt;
  
  
  CVE-2026-23993: HarbourJwt's Unknown Algorithm Bypass
&lt;/h2&gt;

&lt;p&gt;This one is even simpler. HarbourJwt, a Go JWT library, didn't reject unknown algorithm values. An attacker could set &lt;code&gt;alg&lt;/code&gt; to literally anything — &lt;code&gt;"zzz"&lt;/code&gt;, &lt;code&gt;"foo"&lt;/code&gt;, &lt;code&gt;"banana"&lt;/code&gt; — and the library's &lt;code&gt;GetSignature()&lt;/code&gt; function would return an empty byte slice instead of an error. The forged token format is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eyJ0eXAiOiJKV1QiLCJhbGciOiJ6enoifQ.eyJzdWIiOiIxMjM0NTY3ODkwIiwiYWRtaW4iOnRydWV9.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice the trailing dot with nothing after it — that's an empty signature segment. The library compared its computed empty signature against the token's empty signature, they matched, and the token was accepted as valid. No key needed. No cryptography involved at all.&lt;/p&gt;

&lt;p&gt;The same security review uncovered a comparable bypass in jose-swift (tracked as GHSA-88q6-jcjg-hvmw), suggesting this pattern isn't limited to obscure libraries.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Spot a JWT Algorithm Confusion Attack in Seconds
&lt;/h2&gt;

&lt;p&gt;You don't need security tooling to catch these. Any client-side JWT decoder will show you the header. Here's what to look for:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An &lt;code&gt;alg&lt;/code&gt; value that doesn't match what your server signs with (for example, HS256 on a system configured for RS256).&lt;/li&gt;
&lt;li&gt;An unrecognized or nonsense &lt;code&gt;alg&lt;/code&gt; value like &lt;code&gt;"zzz"&lt;/code&gt;, a sign someone is probing for unknown-algorithm bypasses.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;"alg": "none"&lt;/code&gt;, which marks an unsigned token that should never be accepted.&lt;/li&gt;
&lt;li&gt;An empty signature segment: the token ends in a trailing dot with nothing after it.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Drop a suspicious token into the &lt;a href="https://tools.pinusx.com/jwt" rel="noopener noreferrer"&gt;PinusX JWT Decoder&lt;/a&gt; and check the header. It runs 100% client-side — the token never leaves your browser — so you can safely inspect production tokens without leaking them to a third-party server.&lt;/p&gt;
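&lt;p&gt;Those manual checks are also easy to automate as a preflight guard that runs before your library's verify call. A hedged sketch (the function name and return shape are invented for this example):&lt;/p&gt;

```javascript
// Preflight triage before handing a token to your JWT library.
// expectedAlg is whatever your server actually signs with (e.g. 'RS256').
function triageToken(token, expectedAlg) {
  const parts = token.split('.');
  if (parts.length !== 3) return { ok: false, reason: 'malformed token' };
  if (parts[2] === '') return { ok: false, reason: 'empty signature segment' };
  let header;
  try {
    header = JSON.parse(Buffer.from(parts[0], 'base64url').toString('utf8'));
  } catch {
    return { ok: false, reason: 'unparseable header' };
  }
  if (header.alg === 'none') return { ok: false, reason: 'unsigned ("none") token' };
  if (header.alg !== expectedAlg) {
    return { ok: false, reason: `unexpected alg: ${header.alg}` };
  }
  return { ok: true };
}
```

&lt;p&gt;This is triage, not verification: the real signature check still belongs to your JWT library, called with a pinned algorithm.&lt;/p&gt;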

&lt;h2&gt;
  
  
  The Bigger Pattern: Algorithm Pinning Failures Across Languages
&lt;/h2&gt;

&lt;p&gt;These two CVEs aren't isolated incidents. Q1 2026 has seen a cluster of JWT algorithm-related vulnerabilities across multiple languages and frameworks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Go:&lt;/strong&gt; HarbourJwt (CVE-2026-23993) — unknown algorithm bypass&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;TypeScript/JavaScript:&lt;/strong&gt; Hono (CVE-2026-22817) — RS256-to-HS256 confusion, CVSS 8.2&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Java:&lt;/strong&gt; Keycloak (CVE-2026-23552) — accepted cross-realm tokens due to missing &lt;code&gt;iss&lt;/code&gt; claim validation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Swift:&lt;/strong&gt; jose-swift (GHSA-88q6-jcjg-hvmw) — similar unknown algorithm bypass&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The pattern is clear: library authors across the ecosystem are still implementing JWT verification in a way that trusts token-supplied metadata. Meanwhile, over 8,000 ChatGPT API keys were found exposed in GitHub repos and production JavaScript bundles in February 2026, reinforcing that credential and token hygiene is at a crisis point industry-wide.&lt;/p&gt;

&lt;h2&gt;
  
  
  Algorithm Pinning Checklist for Production
&lt;/h2&gt;

&lt;p&gt;Pin the algorithm. Do it today. Here's how in the most common libraries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Node.js (jsonwebtoken)
jwt.verify(token, key, { algorithms: ['RS256'] });

// Python (PyJWT)
jwt.decode(token, key, algorithms=['RS256'])

// Go (golang-jwt)
token, err := jwt.Parse(tokenString, keyFunc,
  jwt.WithValidMethods([]string{"RS256"}))

// Java (jjwt 0.12+): verifyWith() pins the key, and jjwt rejects
// tokens whose alg doesn't match the key type (including "none")
Jwts.parser()
  .verifyWith(key)
  .build()
  .parseSignedClaims(token);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the full checklist:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pass an explicit algorithm allowlist to every verify call; never let the token's &lt;code&gt;alg&lt;/code&gt; header decide.&lt;/li&gt;
&lt;li&gt;Reject &lt;code&gt;"none"&lt;/code&gt; and any algorithm you don't recognize. Fail closed; never fall through to an empty signature.&lt;/li&gt;
&lt;li&gt;Upgrade Hono to 4.11.4 or later if you use its built-in JWT middleware.&lt;/li&gt;
&lt;li&gt;Patch or replace HarbourJwt and jose-swift wherever they appear in your dependency tree.&lt;/li&gt;
&lt;li&gt;Validate the &lt;code&gt;iss&lt;/code&gt; and &lt;code&gt;aud&lt;/code&gt; claims so tokens from one issuer or realm can't be replayed against another.&lt;/li&gt;
&lt;li&gt;Use cryptographically random HMAC secrets of at least 256 bits; a cracked secret makes algorithm pinning irrelevant.&lt;/li&gt;
&lt;li&gt;Test your endpoints with deliberately forged tokens (swapped &lt;code&gt;alg&lt;/code&gt;, unknown &lt;code&gt;alg&lt;/code&gt;, empty signature) and confirm every one is rejected.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What is a JWT algorithm confusion attack?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A JWT algorithm confusion attack exploits JWT libraries that trust the &lt;code&gt;alg&lt;/code&gt; field in the token header to choose the signature verification method. An attacker changes the algorithm — for example, from RS256 (asymmetric) to HS256 (symmetric) — and signs the forged token with the server's public key used as the HMAC secret. If the library doesn't enforce which algorithm is allowed, it verifies the forged signature as valid. The fix is to hardcode the expected algorithm in your verification logic, never letting the token itself dictate how it should be verified.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I know if my JWT library is vulnerable to algorithm confusion?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Check whether your &lt;code&gt;verify()&lt;/code&gt; call explicitly specifies the allowed algorithm. If you're calling something like &lt;code&gt;jwt.verify(token, key)&lt;/code&gt; without an &lt;code&gt;algorithms&lt;/code&gt; parameter, your library may be using the token's own &lt;code&gt;alg&lt;/code&gt; header — and you're likely vulnerable. Test by crafting a token with a swapped &lt;code&gt;alg&lt;/code&gt; value (e.g., HS256 instead of RS256) and submitting it. If your server accepts it, you have a problem. Update to the latest version of your JWT library and always pass an explicit algorithm allowlist.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can I detect a forged JWT by decoding it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yes. The JWT header is unencrypted Base64url-encoded JSON, so decoding it instantly reveals the &lt;code&gt;alg&lt;/code&gt; value. If you see HS256 on a system configured for RS256, or an unrecognized value like "zzz" or "none", the token is either forged or indicates a misconfiguration. A client-side JWT decoder like the &lt;a href="https://tools.pinusx.com/jwt" rel="noopener noreferrer"&gt;PinusX JWT Decoder&lt;/a&gt; lets you inspect tokens safely without sending them to a third-party server.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>security</category>
    </item>
    <item>
      <title>JWT Security Best Practices 2026: Stop Making These Mistakes</title>
      <dc:creator>Hari Prakash</dc:creator>
      <pubDate>Thu, 19 Feb 2026 15:22:15 +0000</pubDate>
      <link>https://dev.to/hari_prakash_b0a882ec9225/jwt-security-best-practices-2026-stop-making-these-mistakes-e58</link>
      <guid>https://dev.to/hari_prakash_b0a882ec9225/jwt-security-best-practices-2026-stop-making-these-mistakes-e58</guid>
      <description>&lt;p&gt;JSON Web Tokens have become the default authentication mechanism for modern web applications and APIs. They're elegant, stateless, and well-supported across every major programming language. But that ubiquity has made JWTs one of the most commonly misconfigured security components in production systems. In 2026, JWT security best practices still catch experienced developers off guard — not because the specification is flawed, but because the defaults are dangerous and the mistakes are subtle.&lt;/p&gt;

&lt;p&gt;If you're building anything that issues or validates JWTs, this guide covers the real-world mistakes that lead to compromised applications — and how to fix them before they end up in a breach disclosure.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Algorithm Confusion Attack: JWT's Most Dangerous Flaw
&lt;/h2&gt;

&lt;p&gt;The single most critical JWT vulnerability is the &lt;strong&gt;algorithm confusion attack&lt;/strong&gt; (also called key confusion or algorithm substitution). It's been known since 2015, yet continues to appear in production systems because it exploits a design decision in the JWT specification itself.&lt;/p&gt;

&lt;p&gt;Here's how it works. When your server creates a JWT, it signs it using an algorithm — typically RS256 (RSA with SHA-256). The server holds a private key for signing and a public key for verification. An attacker who obtains the public key (which is often, well, public) can forge a valid token by:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Taking the public RSA key&lt;/li&gt;
&lt;li&gt;Changing the JWT header's &lt;code&gt;alg&lt;/code&gt; field from &lt;code&gt;RS256&lt;/code&gt; to &lt;code&gt;HS256&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Signing the token using the RSA public key as the HMAC secret&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If the server's JWT library blindly trusts the &lt;code&gt;alg&lt;/code&gt; header, it switches from RSA verification to HMAC verification — using the public key as the HMAC secret. Since the attacker signed with that same key, the signature validates. The attacker now has a forged token with whatever claims they want.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Never let the token dictate which algorithm to use. Hardcode the expected algorithm in your verification configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ❌ Dangerous — trusts the token's alg header&lt;/span&gt;
&lt;span class="nx"&gt;jwt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;verify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;token&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;publicKey&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// ✅ Safe — explicitly requires RS256&lt;/span&gt;
&lt;span class="nx"&gt;jwt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;verify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;token&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;publicKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;algorithms&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;RS256&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every major JWT library supports this. If yours doesn't, switch libraries immediately.&lt;/p&gt;

&lt;h2&gt;
  
  
  The "none" Algorithm: Still a Threat in 2026
&lt;/h2&gt;

&lt;p&gt;The JWT specification includes an &lt;code&gt;"alg": "none"&lt;/code&gt; option, which means "this token is unsigned." It was intended for situations where the token's integrity is guaranteed by other means, such as transport-layer security.&lt;/p&gt;

&lt;p&gt;In practice, it's an open door. If your JWT library accepts &lt;code&gt;"none"&lt;/code&gt; as a valid algorithm, an attacker can strip the signature from any token, modify the payload, and submit it as valid. No keys required.&lt;/p&gt;

&lt;p&gt;Most modern JWT libraries reject &lt;code&gt;"none"&lt;/code&gt; by default in 2026 — but "most" isn't "all." Libraries in less common languages, older versions, or custom implementations may still accept unsigned tokens. This is especially dangerous in microservice architectures where different services may use different JWT libraries with different defaults.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Explicitly reject &lt;code&gt;"none"&lt;/code&gt; in your verification logic. Better yet, maintain an allowlist of exactly the algorithms you expect:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Only accept the specific algorithm you use&lt;/span&gt;
&lt;span class="nx"&gt;jwt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;verify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;token&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;algorithms&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;RS256&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="c1"&gt;// This automatically rejects "none", HS256, and anything else&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Weak Signing Keys: The Silent Vulnerability
&lt;/h2&gt;

&lt;p&gt;HMAC-based JWT algorithms (HS256, HS384, HS512) use a shared secret for both signing and verification. The security of the entire system depends on the strength of this secret.&lt;/p&gt;

&lt;p&gt;The problem: developers routinely use weak secrets. Dictionary words, company names, short strings, or predictable values make brute-force attacks trivial. Tools like &lt;code&gt;jwt-cracker&lt;/code&gt; can test millions of candidate secrets per second against a captured token. Once the secret is cracked, the attacker can forge unlimited tokens.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Generate a cryptographically random secret with at least 256 bits of entropy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Generate a 256-bit random secret&lt;/span&gt;
openssl rand &lt;span class="nt"&gt;-base64&lt;/span&gt; 32
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For asymmetric algorithms (RS256, ES256), use minimum 2048-bit RSA keys or P-256 ECDSA curves. Treat signing keys with the same care as database credentials — they belong in secret management systems, not in source code or configuration files.&lt;/p&gt;

&lt;h2&gt;
  
  
  Token Lifetime: The 15-Minute Rule
&lt;/h2&gt;

&lt;p&gt;JWTs are stateless. Once issued, they're valid until they expire. There's no built-in revocation mechanism — if a token is compromised, the attacker has access until the &lt;code&gt;exp&lt;/code&gt; claim is reached.&lt;/p&gt;

&lt;p&gt;This makes token lifetime a critical security parameter. Setting access tokens to expire in 24 hours — or worse, never — gives attackers a generous window to exploit stolen credentials. Yet developers frequently set long lifetimes to avoid the complexity of refresh token implementation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Follow the industry-standard pattern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Access tokens:&lt;/strong&gt; 15 minutes maximum. Short enough that a stolen token has limited utility.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Refresh tokens:&lt;/strong&gt; Hours to days, stored securely (HTTP-only cookies), with rotation on each use.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Refresh token rotation:&lt;/strong&gt; Each time a refresh token is used, issue a new refresh token and invalidate the old one. If a refresh token is used twice, it's been stolen — revoke the entire family.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Access token — short-lived&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;accessToken&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;jwt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sign&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;secret&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;expiresIn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;15m&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Refresh token — longer, with rotation tracking&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;refreshToken&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;jwt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sign&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;tokenFamily&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;familyId&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="nx"&gt;refreshSecret&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;expiresIn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;7d&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
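&lt;p&gt;The reuse-detection half of rotation is the part most implementations skip. Here is a minimal in-memory sketch of the family-revocation logic; a production system would persist this state in a database, and all names here are illustrative:&lt;/p&gt;

```javascript
// In-memory sketch of refresh-token rotation with reuse detection.
const tokens = new Map();          // tokenId -> { familyId, used }
const revokedFamilies = new Set();

function issueRefresh(tokenId, familyId) {
  tokens.set(tokenId, { familyId, used: false });
}

function redeemRefresh(tokenId) {
  const rec = tokens.get(tokenId);
  if (!rec) return { ok: false, reason: 'unknown token' };
  if (revokedFamilies.has(rec.familyId)) return { ok: false, reason: 'family revoked' };
  if (rec.used) {
    // Same refresh token presented twice: assume theft, kill the family.
    revokedFamilies.add(rec.familyId);
    return { ok: false, reason: 'reuse detected; family revoked' };
  }
  rec.used = true;                   // single use only
  const newId = `rt_${tokens.size}`; // illustrative ID; use crypto randomness in practice
  issueRefresh(newId, rec.familyId); // rotate within the same family
  return { ok: true, newTokenId: newId };
}
```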



&lt;h2&gt;
  
  
  Token Storage: Why localStorage Is Still Wrong
&lt;/h2&gt;

&lt;p&gt;Where you store JWTs on the client determines your exposure to the two major web attack vectors: XSS (Cross-Site Scripting) and CSRF (Cross-Site Request Forgery).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;localStorage / sessionStorage:&lt;/strong&gt; Accessible to any JavaScript running on the page. A single XSS vulnerability — one unsanitized input, one compromised dependency, one injected script — gives the attacker full access to the token. They can exfiltrate it, use it from any device, and you'll never know.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HTTP-only cookies:&lt;/strong&gt; Not accessible to JavaScript. Period. Even if an attacker achieves XSS, they cannot read or exfiltrate the token. Cookies are vulnerable to CSRF instead, but CSRF is a solved problem — SameSite cookie attributes, CSRF tokens, and origin checking provide robust protection.&lt;/p&gt;

&lt;p&gt;The tradeoff isn't close. XSS attacks are common, difficult to fully prevent (especially with third-party scripts), and result in complete token compromise. CSRF attacks have multiple reliable defenses at the HTTP layer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Store JWTs in HTTP-only, Secure, SameSite cookies. Accept the minor additional complexity of CSRF protection over the catastrophic risk of XSS-based token theft.&lt;/p&gt;
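&lt;p&gt;The flag combination is easy to get wrong, so here is the full cookie spelled out as a raw &lt;code&gt;Set-Cookie&lt;/code&gt; value. The cookie name and lifetime are illustrative; in Express the equivalent is &lt;code&gt;res.cookie('access_token', token, { httpOnly: true, secure: true, sameSite: 'strict' })&lt;/code&gt;:&lt;/p&gt;

```javascript
// Build a hardened Set-Cookie header value for a JWT session cookie.
// Shown as raw header construction so every flag is explicit.
function buildJwtCookie(token, maxAgeSeconds) {
  return [
    `access_token=${encodeURIComponent(token)}`,
    `Max-Age=${maxAgeSeconds}`,
    'Path=/',
    'HttpOnly',        // invisible to document.cookie, so XSS cannot read it
    'Secure',          // sent over HTTPS only
    'SameSite=Strict', // not sent on cross-site requests, blunting CSRF
  ].join('; ');
}
```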

&lt;h2&gt;
  
  
  Don't Decode JWTs on Untrusted Servers
&lt;/h2&gt;

&lt;p&gt;Developers frequently paste JWTs into online debugging tools to inspect the payload. This is risky for the same reasons that pasting JSON into online formatters is risky — the server-side tool now has your token.&lt;/p&gt;

&lt;p&gt;A JWT payload typically contains user identifiers, roles, permissions, email addresses, and organizational data. The token itself might still be valid. Pasting it into a server-side decoder gives that server everything it needs to impersonate your user for the remaining lifetime of the token.&lt;/p&gt;

&lt;p&gt;Use a &lt;strong&gt;client-side JWT decoder&lt;/strong&gt; instead. Decoding a JWT doesn't require cryptographic verification — the header and payload are just Base64url-encoded JSON. Any tool that runs entirely in your browser can decode them safely without transmitting the token anywhere. &lt;a href="https://tools.pinusx.com/jwt" rel="noopener noreferrer"&gt;PinusX's JWT Decoder&lt;/a&gt; processes everything client-side — your tokens never leave your browser.&lt;/p&gt;
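&lt;p&gt;Decoding really is that simple; the whole trick fits in a few lines. This sketch runs in Node (in a browser, swap &lt;code&gt;Buffer&lt;/code&gt; for &lt;code&gt;atob&lt;/code&gt; plus Base64url padding handling):&lt;/p&gt;

```javascript
// Decode a JWT's header and payload locally: pure Base64url + JSON.
// No network request and no signature verification.
function decodeJwt(token) {
  const [h, p] = token.split('.');
  const parse = (seg) => JSON.parse(Buffer.from(seg, 'base64url').toString('utf8'));
  return { header: parse(h), payload: parse(p) };
}
```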

&lt;h2&gt;
  
  
  JWT Security Checklist for 2026
&lt;/h2&gt;

&lt;p&gt;Before your next deployment, verify every item:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Algorithm is hardcoded in verification.&lt;/strong&gt; Never trust the token's &lt;code&gt;alg&lt;/code&gt; header. Reject &lt;code&gt;"none"&lt;/code&gt; and unexpected algorithms.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Signing key is cryptographically strong.&lt;/strong&gt; At least 256 bits of random data for HMAC, or 2048-bit RSA / P-256 ECDSA key pairs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Access tokens expire in ≤15 minutes.&lt;/strong&gt; Use refresh token rotation for longer sessions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tokens are stored in HTTP-only cookies.&lt;/strong&gt; Not localStorage, not sessionStorage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The &lt;code&gt;aud&lt;/code&gt; (audience) claim is validated.&lt;/strong&gt; A token issued for your API shouldn't be accepted by your admin dashboard.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The &lt;code&gt;iss&lt;/code&gt; (issuer) claim is validated.&lt;/strong&gt; Only accept tokens from your own authorization server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Key rotation is implemented.&lt;/strong&gt; Use &lt;code&gt;kid&lt;/code&gt; (key ID) headers and JWKS endpoints to rotate signing keys without invalidating all existing tokens.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sensitive tokens are never logged.&lt;/strong&gt; Audit your server logs, error reporting tools, and analytics to ensure JWTs aren't captured.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;JWT security best practices haven't changed dramatically in the past few years — but adoption of those practices remains alarmingly low. Algorithm confusion, weak keys, and infinite token lifetimes still dominate the vulnerability reports. The specification gives you enough rope to hang yourself, and the defaults in many libraries make it easy to do exactly that.&lt;/p&gt;

&lt;p&gt;The good news is that every vulnerability in this article has a straightforward fix. Hardcode your algorithms, generate strong keys, keep token lifetimes short, and store tokens in HTTP-only cookies. These aren't exotic security measures — they're the bare minimum for production JWT implementations.&lt;/p&gt;

&lt;p&gt;Need to debug a JWT safely? &lt;a href="https://tools.pinusx.com/jwt" rel="noopener noreferrer"&gt;Decode it client-side with PinusX&lt;/a&gt; — your tokens never leave your browser, so you can inspect headers, payloads, and expiration timestamps without risking exposure to third-party servers.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you're working with AI-generated code, I built a free client-side security scanner that catches JWT misconfigurations, hardcoded secrets, and other vulnerabilities: &lt;a href="https://tools.pinusx.com/vibescan?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=jwt_security" rel="noopener noreferrer"&gt;VibeScan&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>jwt</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>I Built a Privacy-First JSON/YAML Toolkit After 80K Credentials Were Leaked</title>
      <dc:creator>Hari Prakash</dc:creator>
      <pubDate>Mon, 26 Jan 2026 07:32:00 +0000</pubDate>
      <link>https://dev.to/hari_prakash_b0a882ec9225/i-built-a-privacy-first-jsonyaml-toolkit-after-80k-credentials-were-leaked-34e2</link>
      <guid>https://dev.to/hari_prakash_b0a882ec9225/i-built-a-privacy-first-jsonyaml-toolkit-after-80k-credentials-were-leaked-34e2</guid>
      <description>&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Last year, WatchTowr Labs discovered something terrifying: &lt;strong&gt;jsonformatter.org and codebeautify.org exposed 80,000+ credentials&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;AWS keys. Database passwords. SSH recordings. All leaked because these "free tools" stored user data on their servers.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution
&lt;/h2&gt;

&lt;p&gt;I built &lt;strong&gt;PinusX DevTools&lt;/strong&gt; — a JSON/YAML toolkit that runs &lt;strong&gt;100% client-side&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Your data never leaves your browser. No server. No storage. No risk.&lt;/p&gt;

&lt;p&gt;🔗 &lt;strong&gt;&lt;a href="https://tools.pinusx.com" rel="noopener noreferrer"&gt;https://tools.pinusx.com&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What It Does
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;✅ JSON Formatter&lt;/li&gt;
&lt;li&gt;✅ JSON Validator&lt;/li&gt;
&lt;li&gt;✅ YAML Formatter&lt;/li&gt;
&lt;li&gt;✅ JSON ↔ YAML Converter&lt;/li&gt;
&lt;li&gt;✅ JSON Diff&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Tech Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;React 18 + Vite&lt;/li&gt;
&lt;li&gt;Monaco Editor (same as VS Code)&lt;/li&gt;
&lt;li&gt;Tailwind CSS&lt;/li&gt;
&lt;li&gt;Hosted on Vercel&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why Client-Side Matters
&lt;/h2&gt;

&lt;p&gt;Every character you paste stays in your browser. Open DevTools, check the Network tab — zero requests to any server.&lt;/p&gt;

&lt;p&gt;This is how developer tools should work.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;I'm using this as a foundation to test demand for other developer tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Webhook testing&lt;/li&gt;
&lt;li&gt;API monitoring&lt;/li&gt;
&lt;li&gt;Schema validation&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Try It Out
&lt;/h2&gt;

&lt;p&gt;🔗 &lt;strong&gt;&lt;a href="https://tools.pinusx.com" rel="noopener noreferrer"&gt;https://tools.pinusx.com&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What features would make this more useful for your workflow? Let me know in the comments!&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built in 2 days. Launched today. Feedback welcome.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>privacy</category>
      <category>security</category>
      <category>showdev</category>
      <category>tooling</category>
    </item>
  </channel>
</rss>
