<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: kartikay dubey</title>
    <description>The latest articles on DEV Community by kartikay dubey (@dubeykartikay).</description>
    <link>https://dev.to/dubeykartikay</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3818892%2Fe8587f68-3f37-4472-b7b0-89fae4ac0c9c.jpg</url>
      <title>DEV Community: kartikay dubey</title>
      <link>https://dev.to/dubeykartikay</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dubeykartikay"/>
    <language>en</language>
    <item>
      <title>Optimize Hugo Blog Performance: Zero JS and 100% Lighthouse Score</title>
      <dc:creator>kartikay dubey</dc:creator>
      <pubDate>Sat, 04 Apr 2026 14:30:32 +0000</pubDate>
      <link>https://dev.to/dubeykartikay/optimize-hugo-blog-performance-zero-js-and-100-lighthouse-score-4128</link>
      <guid>https://dev.to/dubeykartikay/optimize-hugo-blog-performance-zero-js-and-100-lighthouse-score-4128</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;I reduced my Hugo blog's page weight by eliminating 3.6 MB of JavaScript and 40 KB of external CSS, achieving a 100% JS-free frontend. Key optimizations included HTML minification, inlining CSS, switching to native MathML, and pre-rendering Mermaid diagrams server-side.&lt;/p&gt;

&lt;p&gt;I recently looked into my blog's performance and was surprised to find my pages were downloading over 3.6 MB of JavaScript and render-blocking CSS on every load. For a simple static site, this was too much, so I decided to optimize it. &lt;/p&gt;

&lt;p&gt;Here is the step-by-step breakdown of how I reduced my payload and removed JavaScript.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Baseline
&lt;/h2&gt;

&lt;p&gt;Before starting, my site had some major issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;HTML Size:&lt;/strong&gt; 86,348 bytes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JS Size:&lt;/strong&gt; 3,617,515 bytes (3.6 MB)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CSS Size:&lt;/strong&gt; 40,560 bytes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Issue:&lt;/strong&gt; Massive render-blocking JS and CSS payloads were loaded on every single page for Mermaid diagrams and math rendering, and the HTML was not minified.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxueyze3w5x0fyw4xvuuh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxueyze3w5x0fyw4xvuuh.png" alt="Hugo Blog Performance Baseline Metrics" width="800" height="172"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Optimization 1: HTML Minification
&lt;/h2&gt;

&lt;p&gt;The first step was simple: adding &lt;code&gt;minifyOutput = true&lt;/code&gt; to &lt;code&gt;hugo.toml&lt;/code&gt;.&lt;/p&gt;
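&lt;p&gt;For reference, the relevant part of &lt;code&gt;hugo.toml&lt;/code&gt; looks roughly like this (the &lt;code&gt;[minify]&lt;/code&gt; section is optional fine-tuning, shown as a sketch):&lt;/p&gt;

```toml
# Minify all output formats (HTML, RSS, JSON, ...) at build time
minifyOutput = true

# Optional: tune the underlying tdewolff/minify settings
[minify.tdewolff.html]
  keepComments = false
  keepWhitespace = false
```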

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;HTML Size:&lt;/strong&gt; 72,370 bytes (16% smaller)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Impact:&lt;/strong&gt; Reduced parsing time for HTML, leading to a faster First Paint.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8j7zizubjnffs9lg3loo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8j7zizubjnffs9lg3loo.png" alt="Hugo HTML Minification Results and Faster First Paint" width="800" height="178"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Optimization 2: Inlining CSS
&lt;/h2&gt;

&lt;p&gt;Next, I removed the &lt;code&gt;&amp;lt;link&amp;gt;&lt;/code&gt; tag pointing to my &lt;code&gt;main.css&lt;/code&gt; file and replaced it with an inline &lt;code&gt;&amp;lt;style&amp;gt;{{.Content|safeCSS}}&amp;lt;/style&amp;gt;&lt;/code&gt; block.&lt;/p&gt;
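&lt;p&gt;The pattern, sketched as a Hugo head partial (the &lt;code&gt;css/main.css&lt;/code&gt; asset path is an assumption):&lt;/p&gt;

```html
&lt;!-- layouts/partials/head.html: fetch, minify, and inline the stylesheet --&gt;
{{ with resources.Get "css/main.css" }}
  {{ with . | minify }}
    &lt;style&gt;{{ .Content | safeCSS }}&lt;/style&gt;
  {{ end }}
{{ end }}
```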

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;HTML Size:&lt;/strong&gt; Increased to 127,350 bytes (because CSS is now inside the HTML document).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Impact:&lt;/strong&gt; This eliminated one critical render-blocking HTTP request: the browser no longer waits for an external CSS fetch, which improves &lt;strong&gt;First Contentful Paint (FCP)&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpdsfhulc7wstyg965xd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpdsfhulc7wstyg965xd.png" alt="Hugo Inline CSS Performance Impact and FCP Improvement" width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Optimization 3: Native MathML
&lt;/h2&gt;

&lt;p&gt;My blog used the KaTeX library (JS, CSS, and fonts) to render equations. I removed it and enabled Hugo's Goldmark passthrough extensions to render Native MathML instead.&lt;/p&gt;
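&lt;p&gt;The passthrough setup in &lt;code&gt;hugo.toml&lt;/code&gt; looks roughly like this (the delimiters shown are the common defaults; use whatever your posts actually contain):&lt;/p&gt;

```toml
[markup.goldmark.extensions.passthrough]
  enable = true
  [markup.goldmark.extensions.passthrough.delimiters]
    block = [['\[', '\]'], ['$$', '$$']]
    inline = [['\(', '\)']]
```

&lt;p&gt;In recent Hugo versions, the captured LaTeX can then be converted to MathML at build time inside a passthrough render hook via &lt;code&gt;transform.ToMath&lt;/code&gt;.&lt;/p&gt;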

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;HTML Size:&lt;/strong&gt; 123,341 bytes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JS Size:&lt;/strong&gt; 3,338,725 bytes (278 KB smaller)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CSS Size:&lt;/strong&gt; 0 bytes (Removed KaTeX CSS, meaning zero external stylesheets are loaded).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Impact:&lt;/strong&gt; A significant reduction in payload size: math no longer needs JavaScript or font files, because the browser renders MathML natively.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmjsdg57d0e3suhzal9y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmjsdg57d0e3suhzal9y.png" alt="Hugo Native MathML vs KaTeX JavaScript Performance" width="800" height="144"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Optimization 4: Conditional Asset Loading
&lt;/h2&gt;

&lt;p&gt;My Mermaid script was loading on every page. I used Hugo's &lt;code&gt;.Store&lt;/code&gt; to set a flag &lt;code&gt;hasMermaid&lt;/code&gt; when processing Markdown, and only injected the Mermaid &lt;code&gt;&amp;lt;script&amp;gt;&lt;/code&gt; tag if that flag is true.&lt;/p&gt;
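&lt;p&gt;A sketch of the two halves (file names follow Hugo's render-hook conventions; the script path is an assumption):&lt;/p&gt;

```html
&lt;!-- layouts/_default/_markup/render-codeblock-mermaid.html:
     flag the page and emit the diagram source --&gt;
{{ .Page.Store.Set "hasMermaid" true }}
&lt;pre class="mermaid"&gt;{{ .Inner | htmlEscape | safeHTML }}&lt;/pre&gt;

&lt;!-- layouts/_default/baseof.html: load Mermaid only when flagged --&gt;
{{ if .Store.Get "hasMermaid" }}
  &lt;script src="/js/mermaid.min.js" defer&gt;&lt;/script&gt;
{{ end }}
```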

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;HTML Size:&lt;/strong&gt; 117,632 bytes (roughly 6 KB saved, and the saving repeats on every generated page).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Impact:&lt;/strong&gt; Text-only blog posts no longer force the browser to download &lt;code&gt;mermaid.min.js&lt;/code&gt;. The JavaScript is only loaded when necessary.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsojhfxtl4uqly41rsy3u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsojhfxtl4uqly41rsy3u.png" alt="Hugo Conditional Asset Loading for Mermaid JS on Text Pages" width="800" height="161"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;(Text-only pages don't load Mermaid)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhf1c3k5tsyc3fws5uqe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhf1c3k5tsyc3fws5uqe.png" alt="Hugo Mermaid Diagram Rendering Output with Conditional Logic" width="800" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(Pages with diagrams load Mermaid conditionally)&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Optimization 5: Server-side Rendering for Mermaid Diagrams
&lt;/h2&gt;

&lt;p&gt;Even conditionally, loading a 3.3 MB Mermaid script on some pages was heavy. I introduced a Node.js build step to pre-render Mermaid blocks into static SVG files. Now, the frontend outputs an &lt;code&gt;&amp;lt;img src="diagram.svg"&amp;gt;&lt;/code&gt;.&lt;/p&gt;
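&lt;p&gt;Conceptually, once the build step (for example, &lt;code&gt;mmdc&lt;/code&gt; from &lt;code&gt;@mermaid-js/mermaid-cli&lt;/code&gt;) has written the SVGs, the render hook shrinks to a plain image reference. A sketch; the hash-based naming scheme here is just one way to do it:&lt;/p&gt;

```html
&lt;!-- layouts/_default/_markup/render-codeblock-mermaid.html (sketch):
     the build step already wrote the SVG; just reference it --&gt;
{{ $name := printf "%s.svg" (md5 .Inner) }}
&lt;img src="/diagrams/{{ $name }}" alt="Diagram" loading="lazy"&gt;
```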

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;JS Size:&lt;/strong&gt; 0 bytes (Removed the remaining 3.3 MB of Mermaid JavaScript).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Impact:&lt;/strong&gt; The site is now &lt;strong&gt;100% JavaScript-free&lt;/strong&gt; on the frontend. The &lt;code&gt;Total Blocking Time (TBT)&lt;/code&gt; metrics improved because the browser no longer executes JS to calculate layouts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faidqgyhivmjmimdfzx7e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faidqgyhivmjmimdfzx7e.png" alt="Hugo Server-side Rendering (SSR) for Mermaid Diagrams with Zero JS" width="800" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Optimization 6: Early Hints &amp;amp; Caching
&lt;/h2&gt;

&lt;p&gt;Finally, I optimized the network layer. I generated a &lt;code&gt;_headers&lt;/code&gt; file to define strict &lt;code&gt;Cache-Control&lt;/code&gt; rules for immutable assets. I also added &lt;code&gt;Link: &amp;lt;image&amp;gt;; rel=preload; as=image&lt;/code&gt; directives automatically via the build script.&lt;/p&gt;
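&lt;p&gt;A sketch of the generated &lt;code&gt;_headers&lt;/code&gt; file (the paths, page URL, and max-age are illustrative):&lt;/p&gt;

```text
# Long-lived caching for immutable build artifacts
/diagrams/*
  Cache-Control: public, max-age=31536000, immutable

# Per-page preload hint, surfaced to the browser as a 103 Early Hint
/posts/my-post/
  Link: &lt;/diagrams/flow.svg&gt;; rel=preload; as=image
```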

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Impact:&lt;/strong&gt; Cloudflare now returns &lt;code&gt;103 Early Hints&lt;/code&gt; responses, telling the browser to fetch SVGs and images immediately, even before the HTML document finishes downloading. On repeat visits, assets are served from cache indefinitely, eliminating follow-up network fetches.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final Summary
&lt;/h2&gt;

&lt;p&gt;Across these six optimizations, the frontend payload went from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;JS Payload:&lt;/strong&gt; 3.6 MB  -&amp;gt;  &lt;strong&gt;0 bytes&lt;/strong&gt; (100% reduction)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;External CSS:&lt;/strong&gt; 40 KB -&amp;gt; &lt;strong&gt;0 bytes&lt;/strong&gt; (Eliminated all external style sheets, saving a round-trip on every page).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HTML Payload:&lt;/strong&gt; Minified by 16% initially; inlining the CSS added some weight back, but the page now reaches &lt;code&gt;First Contentful Paint&lt;/code&gt; without waiting on any external request.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Performance matters, and sometimes you don't need a heavy JS framework to deliver a fast experience!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>performance</category>
    </item>
    <item>
      <title>Why Your Kafka Consumers Are Suddenly 10x Slower in v3.9.0</title>
      <dc:creator>kartikay dubey</dc:creator>
      <pubDate>Wed, 11 Mar 2026 19:06:29 +0000</pubDate>
      <link>https://dev.to/dubeykartikay/why-your-kafka-consumers-are-suddenly-10x-slower-in-v390-3f4d</link>
      <guid>https://dev.to/dubeykartikay/why-your-kafka-consumers-are-suddenly-10x-slower-in-v390-3f4d</guid>
      <description>&lt;h3&gt;
  
  
  TL;DR
&lt;/h3&gt;

&lt;p&gt;In a minor release of Apache Kafka, consumer throughput dropped by 10x.&lt;br&gt;&lt;br&gt;
This change was done to prioritize durability over throughput.&lt;br&gt;&lt;br&gt;
The main star of the story was the new &lt;code&gt;min.insync.replicas&lt;/code&gt; in the topic configuration.&lt;br&gt;&lt;br&gt;
In versions before v3.9.0, it controlled when the broker accepted writes from producers using &lt;code&gt;acks=all&lt;/code&gt;.&lt;br&gt;
Now, it also dictates when a message becomes visible to consumers.&lt;br&gt;
This slight change caused a 10x consumer throughput drop for some Kafka users.  &lt;/p&gt;

&lt;p&gt;Keep reading to find out why this change was made, and how to fix it, so that your production systems don't slow to a crawl.  &lt;/p&gt;
&lt;h2&gt;
  
  
  It All Begins
&lt;/h2&gt;

&lt;p&gt;In August 2025, a user named Sharad Garg raised an &lt;a href="https://issues.apache.org/jira/browse/KAFKA-19652" rel="noopener noreferrer"&gt;issue&lt;/a&gt; on the Kafka issue tracker. &lt;br&gt;
It was titled "Consumer throughput drops by 10 times with Kafka v3.9.0 in ZK mode". &lt;br&gt;
Other people validated his claim and shared more detail on the reproduction steps.&lt;br&gt;&lt;br&gt;
Notably, Ritvik Gupta ran tests that pointed the blame at the &lt;code&gt;min.insync.replicas&lt;/code&gt; configuration.&lt;br&gt;
His tests showed that consumer throughput dropped significantly when &lt;code&gt;min.insync.replicas&lt;/code&gt; was changed from &lt;code&gt;1&lt;/code&gt; to &lt;code&gt;2&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdx6suwbze3n19guu1fie.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdx6suwbze3n19guu1fie.png" alt="Table showing throughput from when min.insync.replicas is changed" width="800" height="154"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Other users came to this issue, reporting the same problem.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"We were able to reproduce this issue in our own environment too.&lt;br&gt;
Throughput drops from: 147.5842 MB/sec (Kafka 3.4) to 58.6748 MB/sec (Kafka 3.9) with &lt;code&gt;min.insync.replicas=2&lt;/code&gt;"&lt;br&gt;&lt;br&gt;
-Bertalan Kondrat&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  It Gets Escalated
&lt;/h2&gt;

&lt;p&gt;Eventually the issue went stale, getting no attention from the maintainers, until Marcus Page escalated it to the Kafka dev mailing list.&lt;br&gt;&lt;br&gt;
This is how I learned about the issue.&lt;br&gt;&lt;br&gt;
The escalation caught the attention of Chia-Ping Tsai, a long-time contributor to the project.&lt;br&gt;
A day later, he replied to that email stating that he had identified the root cause and posted it on the ticket. I obviously rushed to check it, and it left me even more confused.&lt;/p&gt;
&lt;h3&gt;
  
  
  The Root Cause
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;The root cause is related to KAFKA-15583 - "High watermark can only advance if ISR size is larger than min ISR". The title says it all. The consumer can't read more data due to the HW, which can't be advanced due to the slow followers dropping the partition below the min ISR&lt;br&gt;&lt;br&gt;
-Chia-Ping Tsai&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h4&gt;
  
  
  What Is High Watermark?
&lt;/h4&gt;

&lt;p&gt;The High Watermark is the offset of the latest message successfully copied to all brokers &lt;strong&gt;currently&lt;/strong&gt; in the In-Sync Replicas (ISR) list. It acts as a strict safety boundary so consumers only read fully committed data that won't disappear if a broker suddenly crashes.  &lt;/p&gt;

&lt;p&gt;Since I didn't know what the High Watermark was, I was confused; after learning about it, I was even more confused. Why would &lt;code&gt;min.insync.replicas&lt;/code&gt; dictate the HW?&lt;br&gt;&lt;br&gt;
If you don't know what &lt;code&gt;min.insync.replicas&lt;/code&gt; does: it lets you enforce stronger durability guarantees at the producer level. With &lt;code&gt;acks=all&lt;/code&gt;, a write fails (and the producer sees an exception) if fewer than &lt;code&gt;min.insync.replicas&lt;/code&gt; replicas are in sync.&lt;br&gt;&lt;br&gt;
Notice how this description does not mention consumers.&lt;br&gt;&lt;br&gt;
Next, I looked into the related KAFKA-15583 issue.&lt;br&gt;&lt;br&gt;
It has no description. 🥲&lt;br&gt;&lt;br&gt;
It links to 2 PRs, and it is here that I finally understood the whole picture.&lt;/p&gt;
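&lt;p&gt;For context, &lt;code&gt;min.insync.replicas&lt;/code&gt; is a topic-level (or broker-default) config, adjustable with the stock CLI. A configuration sketch, with the broker address and topic name as placeholders:&lt;/p&gt;

```shell
# Raise min.insync.replicas on an existing topic
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --alter --entity-type topics --entity-name my-topic \
  --add-config min.insync.replicas=2
```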
&lt;h2&gt;
  
  
  It's Not a Bug, It's a Feature
&lt;/h2&gt;

&lt;p&gt;After looking at one of the mentioned &lt;a href="https://github.com/apache/kafka/pull/14594" rel="noopener noreferrer"&gt;PR&lt;/a&gt;, I found the 3 lines of code that caused consumer throughput to drop by a factor of 10.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;  private def maybeIncrementLeaderHW(leaderLog: UnifiedLog, currentTimeMs: Long = time.milliseconds): Boolean = {
&lt;span class="gi"&gt;+    if (isUnderMinIsr) {
+      trace(s"Not increasing HWM because partition is under min ISR(ISR=${partitionState.isr})")
+      return false
+    }
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These 3 lines of code effectively &lt;strong&gt;block&lt;/strong&gt; consumer reads of a produced message until it is replicated to &lt;strong&gt;at least&lt;/strong&gt; &lt;code&gt;min.insync.replicas&lt;/code&gt; replicas.&lt;br&gt;
So if a follower is slow and gets kicked out of the in-sync replica set, the leader stops advancing the high watermark, and consumers cannot read newly produced messages until the follower catches up.&lt;br&gt;
This is a trade-off between reliability and performance.&lt;br&gt;
Even though it hurts consumer throughput, Kafka accepts the loss in favor of stronger durability and ensuring no data loss.  &lt;/p&gt;
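&lt;p&gt;The effect of that gate can be modeled in a few lines. This is a simplified sketch of the behavior, not Kafka's actual code; the offsets below stand in for the log-end offsets of each in-sync replica:&lt;/p&gt;

```python
def hw_before(isr_offsets):
    # Pre-KAFKA-15583: the HW advances to the lowest log-end offset
    # in the current ISR, however small the ISR has become.
    return min(isr_offsets)

def hw_after(isr_offsets, min_isr, current_hw):
    # Post-KAFKA-15583: if the ISR has shrunk below min.insync.replicas,
    # the HW is frozen, so consumers see no newly produced messages.
    if len(isr_offsets) >= min_isr:
        return min(isr_offsets)
    return current_hw

# A slow follower drops out, leaving only the leader (offset 100) in the ISR:
print(hw_before([100]))        # 100: consumers may read up to offset 100
print(hw_after([100], 2, 42))  # 42: the HW stays put until the ISR recovers
```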

&lt;p&gt;In my opinion, this change perhaps deserved a major version bump rather than a minor one, since the throughput drop could impact a lot of production systems.&lt;br&gt;
But version bumping is already a controversial topic that I'll save for another day.&lt;/p&gt;

&lt;p&gt;So that's it: a 3-line change that causes a massive throughput drop.&lt;br&gt;&lt;br&gt;
I half expected it to be an obscure JVM bug or a CPU architecture issue, but as with 99.99999999% of bugs, it came down to new code.&lt;/p&gt;

&lt;p&gt;Thanks for reading to the end. I have joined the kafka-dev mailing list and am actively trying to become a contributor.&lt;br&gt;&lt;br&gt;
Follow this blog for more quirks and insider information about Kafka.&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>java</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
