<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: sujeet kumar</title>
    <description>The latest articles on DEV Community by sujeet kumar (@snhacker9).</description>
    <link>https://dev.to/snhacker9</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1739968%2Fde2db1d9-bda4-45ab-8932-54ea1334a69c.gif</url>
      <title>DEV Community: sujeet kumar</title>
      <link>https://dev.to/snhacker9</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/snhacker9"/>
    <language>en</language>
    <item>
      <title>Debugging a Goroutine Leak Caused by Missing resp.Body.Close() in Go</title>
      <dc:creator>sujeet kumar</dc:creator>
      <pubDate>Tue, 29 Jul 2025 04:39:12 +0000</pubDate>
      <link>https://dev.to/snhacker9/debugging-a-goroutine-leak-caused-by-missing-respbodyclose-in-go-4n6g</link>
      <guid>https://dev.to/snhacker9/debugging-a-goroutine-leak-caused-by-missing-respbodyclose-in-go-4n6g</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
While working on a Golang microservice yesterday, I stumbled upon a subtle yet critical bug that caused memory usage to grow uncontrollably over time. After some digging, I realized the root cause was a missing resp.Body.Close() in an infinite loop where HTTP requests were being made. This oversight led to a steady increase in the number of goroutines, ultimately resulting in a memory leak.&lt;/p&gt;

&lt;p&gt;Here's what I learned and how I fixed it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem: Infinite Loop + Missing Cleanup&lt;/strong&gt;&lt;br&gt;
In our codebase, we had a long-running background goroutine that continuously made HTTP requests to a remote service. Here's a simplified version of the problematic code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for {
    resp, err := http.Get("https://example.com/api/data")
    if err != nil {
        log.Println("request failed:", err)
        continue
    }

    // Process the response (dummy logic)
    process(resp.Body)

    // Missing resp.Body.Close()
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At first glance, the code appears harmless. But there's one critical mistake: we never close the response body. In Go, whenever an HTTP request succeeds (err == nil), you must call resp.Body.Close() to release the underlying connection back to the pool.&lt;/p&gt;

&lt;p&gt;Because this was inside an infinite loop, each iteration created a new HTTP response object. Without closing the body, the underlying TCP connections were never released, and goroutines handling those connections began to accumulate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Symptom: Memory Bloat and Goroutine Explosion&lt;/strong&gt;&lt;br&gt;
Over time, we noticed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gradual increase in memory usage.&lt;/li&gt;
&lt;li&gt;runtime.NumGoroutine() kept increasing without bound.&lt;/li&gt;
&lt;li&gt;The HTTP client became slower and eventually stalled due to exhausted resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using pprof, we identified a large number of goroutines blocked in net/http.(*persistConn).readLoop, which was a clear indicator that response bodies weren’t being closed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Fix: Close the Response Body!&lt;/strong&gt;&lt;br&gt;
The fix was straightforward once the problem was identified:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for {
    resp, err := http.Get("https://example.com/api/data")
    if err != nil {
        log.Println("request failed:", err)
        continue
    }

    // Always close the response body
    func() {
        defer resp.Body.Close()
        process(resp.Body)
    }()
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We wrapped the processing in an anonymous function and used defer resp.Body.Close() to ensure the body is closed even if the processing fails midway.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaways&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Always close resp.Body after you're done with it, especially in loops or background goroutines.&lt;/li&gt;
&lt;li&gt;Leaked HTTP connections can lead to goroutine leaks and excessive memory usage.&lt;/li&gt;
&lt;li&gt;Tools like pprof and runtime.NumGoroutine() are invaluable for diagnosing such issues.&lt;/li&gt;
&lt;li&gt;Be cautious with infinite loops: resource leaks in them scale quickly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
This bug was a subtle reminder of how a small oversight in Go's resource management model can lead to large-scale operational issues. If you're working with net/http, always remember: what you open, you must close.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Limiting The Number Of Go Routines</title>
      <dc:creator>sujeet kumar</dc:creator>
      <pubDate>Mon, 09 Dec 2024 05:58:09 +0000</pubDate>
      <link>https://dev.to/snhacker9/limiting-the-number-of-go-routines-1gh2</link>
      <guid>https://dev.to/snhacker9/limiting-the-number-of-go-routines-1gh2</guid>
      <description>&lt;p&gt;Hi Everyone , You are all welcomed to my first blog:) ___&lt;/p&gt;

&lt;p&gt;In this blog we will explore best practices for limiting the number of goroutines, and why limiting them matters.&lt;/p&gt;

&lt;p&gt;Go's concurrency model, powered by goroutines and channels, is one of its standout features. Goroutines are lightweight threads managed by the Go runtime, which let you perform concurrent tasks efficiently.&lt;/p&gt;

&lt;p&gt;However, spawning too many goroutines can cause problems like memory exhaustion and degraded performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why do we need to limit them&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Goroutines are lightweight, but they are not free. Each spawned goroutine consumes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A small amount of memory for its stack.&lt;/li&gt;
&lt;li&gt;CPU time for its execution.&lt;/li&gt;
&lt;li&gt;Potentially other system resources, like file descriptors or network sockets.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;What happens if you don't limit them:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;High memory usage.&lt;/li&gt;
&lt;li&gt;CPU contention.&lt;/li&gt;
&lt;li&gt;Overwhelmed external systems or APIs.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To avoid these problems, we need to limit the number of goroutines in critical scenarios such as bulk processing tasks, network or file operations, and workload distribution in microservices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Techniques to limit goroutines&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Using a semaphore channel&lt;/strong&gt;&lt;br&gt;
The semaphore pattern is a simple way to limit goroutines: a buffered channel acts as a semaphore, capping the number of concurrent tasks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
"fmt"
"time"
)

func worker(id int, sem chan struct{}) {
   defer func() { &amp;lt;-sem }() // Release semaphore on function exit

   fmt.Printf("Worker %d started\n", id)
   time.Sleep(2 * time.Second)
   fmt.Printf("Worker %d finished\n", id)
}

func main() {
  const maxGoroutines = 3
  sem := make(chan struct{}, maxGoroutines)
  for i := 1; i &amp;lt;= 10; i++ {
    sem &amp;lt;- struct{}{} // Acquire semaphore
    go worker(i, sem)
  }

  // Wait for all workers to finish
  for i := 0; i &amp;lt; cap(sem); i++ {
    sem &amp;lt;- struct{}{}
   }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see in the example above, we use a buffered channel as a semaphore, with its capacity set to the maximum number of goroutines.&lt;/p&gt;

&lt;p&gt;Whenever we spawn a goroutine we first acquire the semaphore, which guarantees that no more than maxGoroutines goroutines run at a time; once a goroutine finishes its work, it releases the semaphore.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Using a Worker Pool&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A worker pool processes tasks with a fixed number of workers. It uses a channel to distribute work among them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
"fmt"
"sync"
"time"
)

func worker(id int, tasks &amp;lt;-chan int, wg *sync.WaitGroup) {
    defer wg.Done()
    for task := range tasks {
    fmt.Printf("Worker %d processing task %d\n", id, task)
    time.Sleep(1 * time.Second)
    }
}

func main() {
  const numWorkers = 3
  tasks := make(chan int, 10)
  var wg sync.WaitGroup

  // Start worker goroutines
  for i := 1; i &amp;lt;= numWorkers; i++ {
  wg.Add(1)
  go worker(i, tasks, &amp;amp;wg)
  }

  // Send tasks to workers
  for i := 1; i &amp;lt;= 10; i++ {
   tasks &amp;lt;- i
  }
  close(tasks) // Close the task channel

  // Wait for workers to finish
  wg.Wait()
  fmt.Println("All tasks completed.")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the example above we define a fixed number of workers (const numWorkers = 3), which is the number of goroutines we want to spawn for this task.&lt;/p&gt;

&lt;p&gt;Next we have a task channel, a buffered channel holding the tasks we need to complete.&lt;/p&gt;

&lt;p&gt;The process works like this: we have a pipeline (the task channel) and three workers (numWorkers). The workers pick tasks from the channel and execute them, and they keep doing so until no tasks are left, i.e. all tasks have been completed. After that the workers terminate, execution returns to the main goroutine, and it prints "All tasks completed."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Rate limiting with time.Ticker&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This technique comes in handy when we need to limit the rate of goroutine creation for time-sensitive work such as real-time data processing, payment gateways, or gaming applications.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
"fmt"
"time"
)

func worker(id int) {
   fmt.Printf("Worker %d started\n", id)
   time.Sleep(1 * time.Second)
   fmt.Printf("Worker %d finished\n", id)
}


func main() {
  const maxRate = 2 // Goroutines per second
  ticker := time.NewTicker(time.Second / maxRate)
  defer ticker.Stop()

   for i := 1; i &amp;lt;= 10; i++ {
     &amp;lt;-ticker.C // Wait for the next tick
     go worker(i)
   }

   // Allow time for all goroutines to finish
   time.Sleep(10 * time.Second)

   fmt.Println("All workers completed.")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The example above limits how frequently goroutines (workers) are started: the ticker caps the launch rate at 2 per second.&lt;/p&gt;

&lt;p&gt;Here's how it plays out:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;At 0.5 s, worker 1 starts.&lt;/li&gt;
&lt;li&gt;At 1 s, worker 2 starts while worker 1 is still running.&lt;/li&gt;
&lt;li&gt;By 5 s, all workers have started, while some are still finishing their tasks.&lt;/li&gt;
&lt;li&gt;Finally, the program waits long enough (10 s) for all workers to complete before exiting.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That’s all for this blog :)&lt;/p&gt;

&lt;p&gt;Hope you gained something from it.&lt;/p&gt;

&lt;p&gt;See you in next one …:)&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
