<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Silver_dev</title>
    <description>The latest articles on DEV Community by Silver_dev (@silver_dev).</description>
    <link>https://dev.to/silver_dev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3781206%2F5c3ab957-423b-4747-ab51-c5e23c2659d9.png</url>
      <title>DEV Community: Silver_dev</title>
      <link>https://dev.to/silver_dev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/silver_dev"/>
    <language>en</language>
    <item>
      <title>Golang. M:P:G Model</title>
      <dc:creator>Silver_dev</dc:creator>
      <pubDate>Sun, 15 Mar 2026 23:32:47 +0000</pubDate>
      <link>https://dev.to/silver_dev/golang-mpg-model-kg6</link>
      <guid>https://dev.to/silver_dev/golang-mpg-model-kg6</guid>
      <description>&lt;p&gt;The M:P:G model is the core of Go's concurrency model, and it is one of the most efficient among modern programming languages.&lt;/p&gt;

&lt;p&gt;It provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Efficient CPU core utilization through a fixed number of P's (Processors).&lt;/li&gt;
&lt;li&gt;Scalability to hundreds of thousands of goroutines via lightweight G's (Goroutines).&lt;/li&gt;
&lt;li&gt;Transparent handling of blocking operations through dynamic management of M's (OS Threads).&lt;/li&gt;
&lt;li&gt;Intelligent load balancing using work stealing and spinning threads.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Go abstracts you away from low-level OS thread management, but not from concurrent programming itself. Your task is to design the system by defining the correct concurrency boundaries and synchronization points, while the M:P:G model takes care of efficient execution.&lt;/p&gt;

&lt;h2&gt;G (Goroutine) — Lightweight Execution Unit&lt;/h2&gt;

&lt;p&gt;A goroutine is not an operating system thread; it's a runtime-level abstraction representing a lightweight thread of control with minimal overhead.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initial Stack Size: 2 KB (roughly 1000 times smaller than an OS thread).&lt;/li&gt;
&lt;li&gt;Context Switch Overhead: ~200 ns (5-10 times faster than switching OS threads).&lt;/li&gt;
&lt;li&gt;Placement: A goroutine is placed into the local run queue of the logical processor (P) designated to execute it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What is G from the Runtime's Perspective?&lt;/strong&gt;&lt;br&gt;
It's a data structure that describes the execution state of a single function. It is not rigidly tied to any specific OS thread (M).&lt;/p&gt;

&lt;p&gt;Key Fields of the &lt;code&gt;g&lt;/code&gt; Struct:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;stack&lt;/code&gt;: Describes the goroutine's stack memory.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;lo&lt;/code&gt; and &lt;code&gt;hi&lt;/code&gt;: The lower and upper bounds of the stack in memory.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;guard&lt;/code&gt;: Used for stack overflow detection.&lt;/li&gt;
&lt;li&gt;Important: A goroutine's stack starts small (e.g., 2 KB) and grows dynamically (by being copied to a new memory location) as needed.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;code&gt;sched&lt;/code&gt;: This field is far more critical than it might initially seem. It's a &lt;code&gt;gobuf&lt;/code&gt; structure that saves the execution context (CPU register state) when the goroutine is not running. When a goroutine is paused (parked), it saves its &lt;code&gt;sp&lt;/code&gt; (stack pointer), &lt;code&gt;pc&lt;/code&gt; (program counter), &lt;code&gt;bp&lt;/code&gt; (base pointer), and other registers here. When execution resumes, it restores them from here. This is the essence of "lightweight" switching — switching between G's involves saving and restoring a few dozen bytes in memory, not making an expensive kernel system call.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;code&gt;atomicstatus&lt;/code&gt;: The current state of the goroutine.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;_Gidle&lt;/code&gt;: Just allocated, but hasn't run yet.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;_Grunnable&lt;/code&gt;: Ready to run, currently in a run queue.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;_Grunning&lt;/code&gt;: Currently executing on an M and P.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;_Gsyscall&lt;/code&gt;: Executing, but inside a system call.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;_Gwaiting&lt;/code&gt;: Blocked (waiting on a channel, mutex, or network I/O).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;_Gdead&lt;/code&gt;: Not in use (held in a pool for reuse to avoid allocations).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;_Gcopystack&lt;/code&gt;: Temporary state used while resizing (copying) the stack.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;code&gt;m&lt;/code&gt;: A pointer to the M struct (OS thread) if the goroutine is currently running (&lt;code&gt;_Grunning&lt;/code&gt;) or inside a system call (&lt;code&gt;_Gsyscall&lt;/code&gt;). Otherwise, it's &lt;code&gt;nil&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;code&gt;waitreason&lt;/code&gt;: If the status is &lt;code&gt;_Gwaiting&lt;/code&gt;, this field holds a string explaining the reason for blocking (e.g., "chan receive").&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;
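&lt;p&gt;The dynamic stack growth mentioned above is easy to observe. A sketch: the recursion below needs well over 12 MB of stack, which could never fit in the initial 2 KB, yet it works because the runtime copies the stack to a larger block each time the &lt;code&gt;guard&lt;/code&gt; check trips:&lt;/p&gt;

```go
package main

import "fmt"

// Each frame carries a 128-byte pad, so 100,000 frames need well over
// 12 MB of stack. The goroutine still starts with about 2 KB; the
// runtime grows the stack by copying it whenever it runs out.
func depth(n int) int {
	if n == 0 {
		return 0
	}
	var pad [128]byte
	pad[n%128] = 1 // touch the pad so it stays on the frame
	return int(pad[n%128]) + depth(n - 1)
}

func main() {
	fmt.Println(depth(100000)) // prints 100000
}
```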

&lt;p&gt;A goroutine knows nothing about CPU cores or OS threads. Its world consists of its stack, its saved registers, and the queue it resides in. It essentially says: "I have code that needs to be executed, and a context (registers/stack) from which to start or continue."&lt;/p&gt;

&lt;h2&gt;P (Processor) — The Conductor&lt;/h2&gt;

&lt;p&gt;P is an abstraction of a logical processor, the connecting link between M (OS thread) and G (goroutine), and the one concept here that is unique to Go's design. P is not a physical CPU core, but a logical resource required to execute Go code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is P from the Runtime's Perspective?&lt;/strong&gt;&lt;br&gt;
It is a scheduling and resource allocation context. Without a P, an M has an OS thread but no "permission" to execute goroutine code (except in special cases, like the &lt;code&gt;g0&lt;/code&gt; system stack).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Fields of the p Struct:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;runq&lt;/code&gt;: The Local Run Queue, a fixed-size circular buffer of 256 elements. Goroutines with the status &lt;code&gt;_Grunnable&lt;/code&gt; are placed here.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;runnext&lt;/code&gt;: A pointer to the next goroutine to be executed. This is an optimization, particularly for channel operations. When a goroutine becomes unblocked (e.g., after a channel send), it is often placed here to be executed immediately as the next step on the same P. This gives priority to recently unblocked goroutines.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;mcache&lt;/code&gt;: The local memory allocation cache. Memory allocation in Go is very fast because each P has its own cache (&lt;code&gt;mcache&lt;/code&gt;) for small objects. This allows goroutines on different Ps to allocate memory without global locks.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;status&lt;/code&gt;: The state of the P.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;_Pidle&lt;/code&gt;: Not currently in use.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;_Prunning&lt;/code&gt;: Bound to an M and executing code.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;_Psyscall&lt;/code&gt;: Bound to an M, but that M is currently inside a system call.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;_Pgcstop&lt;/code&gt;: Stopped for garbage collection.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;_Pdead&lt;/code&gt;: No longer in use.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;code&gt;schedtick&lt;/code&gt;, &lt;code&gt;syscalltick&lt;/code&gt;: Counters for statistics and for detecting "stuck" goroutines.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;P is the main arbiter of resources. The number of P's determines the degree of parallelism.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you have 4 P's, then a maximum of 4 goroutines can be executing simultaneously (in parallel) at any given moment.&lt;/li&gt;
&lt;li&gt;P decouples M and G. Thanks to P, an M doesn't need to know which goroutines are running on other Ms. Each P only concerns itself with its own queue.&lt;/li&gt;
&lt;li&gt;When garbage collection (GC) starts, all P's are stopped (&lt;code&gt;_Pgcstop&lt;/code&gt;), guaranteeing that no goroutine is mutating memory during the "stop the world" (STW) phase.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Characteristics&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Number of P's:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Determined by &lt;code&gt;GOMAXPROCS&lt;/code&gt; (defaults to the number of CPU cores).&lt;/li&gt;
&lt;li&gt;Constant for the lifetime of the program unless explicitly changed via &lt;code&gt;runtime.GOMAXPROCS&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Important: This is NOT the maximum number of concurrent operations, but the number of execution contexts.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Local Run Queue:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Size: 256 elements (circular buffer).&lt;/li&gt;
&lt;li&gt;Lock-free access (no mutexes) for the currently bound M.&lt;/li&gt;
&lt;li&gt;Uses the "work stealing" algorithm for load balancing.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;At Program Startup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;GOMAXPROCS&lt;/code&gt; instances of P are created.&lt;/li&gt;
&lt;li&gt;Each P receives its own local run queue for goroutines.&lt;/li&gt;
&lt;li&gt;When a goroutine is created, it is placed into the queue of the current P.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Distribution example
P0: [G1, G2, G3, G4]
P1: [G5, G6]
P2: []
P3: [G7, G8, G9]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Work Stealing in Action&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When P0 finishes its goroutines and its local queue is empty:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It first checks the global run queue.&lt;/li&gt;
&lt;li&gt;If the global queue is empty, it "steals" approximately half of the goroutines from another P's queue (e.g., P3).&lt;/li&gt;
&lt;li&gt;Taking half of the victim's queue, rather than a single G, keeps both P's busy and reduces how often stealing has to happen.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Data Locality: The local queue reduces lock contention and improves cache efficiency.&lt;br&gt;
Load Balancing: Work stealing ensures an even distribution of work across the available P's.&lt;br&gt;
Potential Pitfall: Setting &lt;code&gt;GOMAXPROCS&lt;/code&gt; significantly higher than the number of physical CPU cores can increase overhead through more frequent stealing attempts and context switching.&lt;/p&gt;

&lt;h2&gt;M (Machine) — The Executor&lt;/h2&gt;

&lt;p&gt;If G represents the "work," then M represents the "worker." It is a direct representation of an operating system thread. M is the OS thread that actually executes machine code. It serves as the crucial link between the OS and the Go runtime.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is M from the Runtime's Perspective?&lt;/strong&gt;&lt;br&gt;
It's a structure that manages a single OS thread. M is the only entity that actually executes anything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Fields of the m Struct:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;g0&lt;/code&gt;: A pointer to a special goroutine called g0. This is the most important concept for understanding M's operation.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Every M has its own dedicated &lt;code&gt;g0&lt;/code&gt; goroutine.&lt;/li&gt;
&lt;li&gt;Why is it needed? The code running in a regular goroutine (G) is "user" code. However, the code for the scheduler, the garbage collector, and signal handling is "system" (runtime) code. When M needs to execute scheduler code (e.g., to choose the next G to run), it switches to its &lt;code&gt;g0&lt;/code&gt; stack. This is a separate stack allocated specifically for runtime tasks.&lt;/li&gt;
&lt;li&gt;This separation is critical for safety: user code cannot accidentally or maliciously corrupt the scheduler's stack, and vice versa. Switching between a user G and &lt;code&gt;g0&lt;/code&gt; is done via the &lt;code&gt;runtime·mcall&lt;/code&gt; or &lt;code&gt;runtime·systemstack&lt;/code&gt; mechanisms.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;code&gt;curg&lt;/code&gt;: A pointer to the current user goroutine being executed. If M is executing on its &lt;code&gt;g0&lt;/code&gt; stack, &lt;code&gt;curg&lt;/code&gt; might be &lt;code&gt;nil&lt;/code&gt; or point to the goroutine that was preempted.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;code&gt;p&lt;/code&gt;: A pointer to the Processor (P) to which this M is currently attached. If M is idle or inside a system call without a P, this field can be &lt;code&gt;nil&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;code&gt;nextp&lt;/code&gt;, &lt;code&gt;oldp&lt;/code&gt;: Used for temporarily holding a P during transitions (e.g., while entering or exiting a system call).&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;code&gt;spinning&lt;/code&gt;: A flag indicating that this M is actively seeking work even though its associated P's queue is empty. This is an important optimization for load balancing.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;code&gt;park&lt;/code&gt;: A structure used to park (put to sleep) the M itself when there is absolutely no work to do.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;M lives in two worlds simultaneously. It executes both user code (on the &lt;code&gt;curg&lt;/code&gt; stack) and runtime code (on the &lt;code&gt;g0&lt;/code&gt; stack). The switching between these modes happens very frequently and must be extremely fast. This is where the magic of "lightweight" goroutines occurs: M switches between different G's, saving the context of one and loading the context of another, all while remaining the same underlying OS thread.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Characteristics&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;M-to-P Relationship:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;At any given moment, an M can be attached to at most one P.&lt;/li&gt;
&lt;li&gt;The number of M's can exceed the number of P's.&lt;/li&gt;
&lt;li&gt;Maximum number of M's: ~10,000 (limited by maxmcount).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;M States:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Spinning: Actively searching for work (high priority).&lt;/li&gt;
&lt;li&gt;Idle: Parked (sleeping), but can be woken when work appears.&lt;/li&gt;
&lt;li&gt;Blocked: Blocked in the OS (e.g., during a system call).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Special M's:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;M0: The first OS thread created and run when the program starts.&lt;/li&gt;
&lt;li&gt;g0: A special goroutine for running system (runtime) code; each M has its own.&lt;/li&gt;
&lt;li&gt;Sysmon: The system monitor (a dedicated M for background tasks like network polling, preemption, and GC triggering).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
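&lt;p&gt;Normally the G-to-M mapping is invisible to you, but the runtime exposes one explicit control: &lt;code&gt;runtime.LockOSThread&lt;/code&gt; pins the calling goroutine to its current M. A sketch (this is what Cgo or GUI libraries with thread-affinity requirements rely on):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		runtime.LockOSThread()         // this G now owns its M exclusively
		defer runtime.UnlockOSThread() // hand the M back to the scheduler

		// Thread-affine work (Cgo, OpenGL, signal masks) goes here:
		// the scheduler will not move this G to another M, and will
		// not schedule other G's onto this M.
		fmt.Println("running pinned to a dedicated OS thread")
	}()
	wg.Wait()
}
```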

&lt;p&gt;&lt;strong&gt;How It Looks in Action&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Scenario 1: Normal Execution&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;M1&lt;/code&gt; is attached to and executing on &lt;code&gt;P1&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;P1&lt;/code&gt; processes goroutines from its local run queue.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;M1&lt;/code&gt; executes the goroutine's code on its &lt;code&gt;curg&lt;/code&gt; stack.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Scenario 2: Blocking System Call&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A running goroutine &lt;code&gt;G2&lt;/code&gt; makes a blocking system call and transitions to the &lt;code&gt;_Gsyscall&lt;/code&gt; state.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;M1&lt;/code&gt; (which was running &lt;code&gt;G2&lt;/code&gt;) detaches itself from its P (&lt;code&gt;P1&lt;/code&gt;). &lt;code&gt;P1&lt;/code&gt; is now free (&lt;code&gt;_Pidle&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Another M (e.g., &lt;code&gt;M2&lt;/code&gt;) can now acquire the now-idle &lt;code&gt;P1&lt;/code&gt; and continue executing other goroutines from &lt;code&gt;P1&lt;/code&gt;'s local queue, preventing the CPU core from sitting idle.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Scenario 3: Recovery After a System Call&lt;/p&gt;

&lt;p&gt;When the blocking system call (e.g., a network request) completes for &lt;code&gt;G2&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;M1&lt;/code&gt; (which was blocked with &lt;code&gt;G2&lt;/code&gt;) attempts to re-acquire its original P, &lt;code&gt;P1&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;If &lt;code&gt;P1&lt;/code&gt; is now busy (being used by &lt;code&gt;M2&lt;/code&gt;), &lt;code&gt;M1&lt;/code&gt; cannot get it back immediately. It places &lt;code&gt;G2&lt;/code&gt; into the global run queue.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;sysmon&lt;/code&gt; thread or another spinning M will eventually notice &lt;code&gt;G2&lt;/code&gt; in the global queue and wake up or direct an idle M to handle &lt;code&gt;G2&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The "Spinning Threads" Mechanism&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Go runtime uses heuristics to maintain "hot" (spinning) threads to minimize latency.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Spinning M: OS threads that are actively searching for work (checking local/global queues, attempting work stealing) even though they aren't currently executing a goroutine.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Goal: To minimize the delay (latency) when new tasks become available. A spinning M can pick up a new task immediately without waiting for an idle thread to wake up.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Algorithm:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If there are idle P's (available CPU capacity) and no spinning M's currently looking for work, the runtime may create a new spinning M.&lt;/li&gt;
&lt;li&gt;A spinning M searches for work for a short period (approximately 10 microseconds).&lt;/li&gt;
&lt;li&gt;If it doesn't find any work within that time, it eventually transitions to an idle state and is parked to save power/CPU.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Efficiency: Spinning M's significantly reduce latency when new tasks appear.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Overhead: Too many spinning M's would waste CPU cycles.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Limitation: The number of spinning M's is automatically and dynamically regulated by the runtime based on the current load and available work.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Interactions&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Goroutine Creation (go func()):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The new goroutine (G) is placed into the local run queue of the current P.&lt;/li&gt;
&lt;li&gt;If the local queue is full, part of its contents (along with the new G) is moved to the global run queue to make space.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Goroutine Execution (Scheduling Cycle):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An M must acquire (or already be attached to) a P.&lt;/li&gt;
&lt;li&gt;It retrieves a runnable goroutine (G) from the P's local run queue (checking &lt;code&gt;runnext&lt;/code&gt; first, then the circular buffer &lt;code&gt;runq&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;If the local queue is empty, it attempts work stealing from other P's or checks the global queue.&lt;/li&gt;
&lt;li&gt;The M executes the goroutine's code on its &lt;code&gt;curg&lt;/code&gt; stack. Execution continues until the goroutine blocks, makes a system call, is preempted (after running too long), or voluntarily yields.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Overhead Metrics&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;Time (Go 1.20)&lt;/th&gt;
&lt;th&gt;Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Goroutine Creation&lt;/td&gt;
&lt;td&gt;~200 ns&lt;/td&gt;
&lt;td&gt;Including stack allocation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Goroutine Context Switch&lt;/td&gt;
&lt;td&gt;~200 ns&lt;/td&gt;
&lt;td&gt;Without a system call&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Blocking System Call&lt;/td&gt;
&lt;td&gt;~500 ns&lt;/td&gt;
&lt;td&gt;Including parking the goroutine&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Network Operation (Non-blocking)&lt;/td&gt;
&lt;td&gt;~100 ns&lt;/td&gt;
&lt;td&gt;Handled via the netpoller&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;M:P:G Interaction — A Simplified Analogy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you write &lt;code&gt;go func() { ... }()&lt;/code&gt;, you are creating a G.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The runtime places this G into the queue of some P (or the global queue, if necessary).&lt;/li&gt;
&lt;li&gt;The M (OS thread) attached to that P is awakened (if idle) and executes the code of G.&lt;/li&gt;
&lt;li&gt;If G makes a blocking system call, the M detaches from its P. The P, now free, can find (or be picked up by) another M to continue executing other G's from its queue.&lt;/li&gt;
&lt;li&gt;If G simply runs a CPU-bound loop for ~10ms, the scheduler preempts it and hands control over to another G waiting in the P's local queue.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To solidify this concept, imagine a factory representing the Go process:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;G (Goroutine)&lt;/strong&gt; — is a task or job (like a blueprint and materials on a small, dedicated work table). There can be thousands of these tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;M (Machine)&lt;/strong&gt; — is a worker. Each worker has their own personal toolbox (the g0 stack) for performing internal, system-level tasks needed to manage the jobs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;P (Processor)&lt;/strong&gt; — is a workbench. On the workbench, there is a list (local queue) of jobs waiting to be done, and a special spot (runnext) for the most urgent next job. The worker (M) comes to the workbench (P) to do the actual work (G).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Workflow Process:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For a worker (M) to start working on a task (G), they must approach a workbench (P) and take a task from it.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The worker places the task's blueprint onto the workbench (curg) and begins working.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the task needs to wait (for example, for materials to arrive — analogous to blocking on a channel), the worker sets the blueprint aside in the waiting area for that delivery (the channel's wait queue) and takes the next task from the workbench.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the worker needs to step away for a very important reason (a system call), they leave the workbench (detach from the P) so that another worker can use it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the workbench runs out of tasks, the worker can approach a neighboring workbench and steal half of its tasks (work stealing).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Concurrency ≠ Parallelism&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is perhaps the most important statement by Rob Pike to understand.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Concurrency: This is about the composition of independently executing tasks. It's about structuring a program. Imagine you are cooking dinner: you put water on for pasta, you chop tomatoes for a salad, and you keep an eye on the patties in the pan. You are one person, but you are managing multiple tasks by switching between them. You are concurrent.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Parallelism: This is about the simultaneous execution of tasks. For this, you need multiple cooks (CPU cores). One cook boils the pasta, a second chops the salad, and a third fries the patties. They are working in parallel.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Go is built for concurrency first and foremost. It makes it easy to write programs structured as a set of interacting tasks. Whether this code actually executes in parallel depends on the hardware (the number of CPU cores) and the runtime settings (like &lt;code&gt;GOMAXPROCS&lt;/code&gt;).&lt;/p&gt;
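&lt;p&gt;The cooking analogy translates directly into code. A sketch: with &lt;code&gt;GOMAXPROCS(1)&lt;/code&gt; there is a single P (one cook), so the two goroutines are concurrent but never parallel; &lt;code&gt;runtime.Gosched&lt;/code&gt; makes the switching visible by yielding the P voluntarily:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	runtime.GOMAXPROCS(1) // one P: concurrency without parallelism
	var wg sync.WaitGroup
	for _, dish := range []string{"pasta", "salad"} {
		wg.Add(1)
		go func(dish string) {
			defer wg.Done()
			for i := 3; i > 0; i-- {
				fmt.Println(dish, i)
				runtime.Gosched() // yield the P so the other goroutine runs
			}
		}(dish)
	}
	wg.Wait()
}
```

&lt;p&gt;The output typically interleaves the two dishes even though only one of them can ever be running at a given instant.&lt;/p&gt;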

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Characteristic&lt;/th&gt;
&lt;th&gt;OS Threads&lt;/th&gt;
&lt;th&gt;Go Goroutines&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Stack Size&lt;/td&gt;
&lt;td&gt;1-8 MB&lt;/td&gt;
&lt;td&gt;2 KB (initial)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Creation&lt;/td&gt;
&lt;td&gt;1000+ ns&lt;/td&gt;
&lt;td&gt;~200 ns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Context Switch&lt;/td&gt;
&lt;td&gt;1000+ ns&lt;/td&gt;
&lt;td&gt;~200 ns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Management&lt;/td&gt;
&lt;td&gt;OS Kernel&lt;/td&gt;
&lt;td&gt;Go Runtime&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scalability&lt;/td&gt;
&lt;td&gt;100-1000 threads&lt;/td&gt;
&lt;td&gt;100,000+ goroutines&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Goroutine Parking&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The term "parking" in the context of the scheduler means that a running goroutine (G) is transitioned into a waiting state, and the OS thread (M) on which it was running is freed up to execute other goroutines. This is a key mechanism that prevents OS threads from idling when a goroutine blocks (e.g., waiting for data from a channel or for a system call to complete).&lt;/p&gt;

&lt;p&gt;The process of "parking" and "unparking" is inextricably linked to the concepts of handoff and work stealing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenarios for Goroutine Parking and M Behavior&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's examine the main scenarios that clearly illustrate how M and G interact.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Blocking System Call (e.g., reading from a file): the classic case, where a goroutine enters a long blocking call into the OS kernel.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Situation: Goroutine G1 is running on thread M1, which is attached to processor P1. G1 initiates a blocking system call (e.g., read(fileFd)).&lt;/li&gt;
&lt;li&gt;Scheduler Action: The Go scheduler understands that M1 will be blocked by the kernel for an indeterminate amount of time. It detaches (handoff) P1 from M1. M1 goes into waiting for the system call to complete, along with G1.&lt;/li&gt;
&lt;li&gt;Result: The now-freed P1 no longer has a thread. The scheduler takes another thread, for example, M2 (from a sleeping pool or creates a new one), and attaches it to P1. M2 begins executing the next goroutine G2 from P1's local run queue.&lt;/li&gt;
&lt;li&gt;Return: When the system call in G1 completes, M1 wakes up. What should G1 do? It needs to run again. The scheduler looks for a free P. If there is a free P (e.g., P2), M1 with G1 attaches to it and continues working. If not, G1 is placed into the global run queue, and M1 becomes a spare (sleeping) thread.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Blocking on a Channel or Mutex: here the blocking occurs not at the kernel level, but at the runtime level.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Situation: Goroutine G1 on M1 (with P1) attempts to read from a channel that has no data.&lt;/li&gt;
&lt;li&gt;Action: The scheduler transitions G1 into a waiting state and places it into the wait queue of that channel. Important: M1 is no longer needed for this goroutine.&lt;/li&gt;
&lt;li&gt;Result: M1 remains attached to P1 and immediately (in the same scheduler cycle) takes the next runnable goroutine G2 from P1's local queue and starts executing it. The OS thread does not idle for a nanosecond.&lt;/li&gt;
&lt;li&gt;Return: When another goroutine sends data into the channel, G1 becomes "runnable". It will be added to the local queue of the P on which the sender goroutine is running (to improve data locality) or to the global run queue.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How the Scheduler Balances Load: Work Stealing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What happens when a thread has nothing to do? Imagine: on M1, attached to P1, all goroutines in the local queue are finished, and the global queue is empty. The scheduler will not let M1 simply go to sleep while others have work.&lt;/p&gt;

&lt;p&gt;The work stealing mechanism is activated:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;M1, in a "spinning" state (actively seeking work), selects another processor, for example, P2, and attempts to steal approximately half of the goroutines from its local run queue.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If successful, M1 begins executing the stolen goroutines. This way, the load is evenly distributed across all available OS threads.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The "Spinning Threads" Mechanism&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When a thread (M) looks for work, as in the example above, it enters a spinning state. Spinning threads are not parked immediately; instead, they actively search for work for a short time (stealing, checking the global queue, the netpoller), consuming CPU resources in the process.&lt;/p&gt;

&lt;p&gt;Why is this necessary? To avoid the frequent and expensive creation/destruction or parking/unparking of OS threads. The algorithm works like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;When new work appears (a runnable goroutine), the scheduler checks if there are any idle P's and if there are already any spinning threads.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If there is an idle P and no spinning threads exist, the scheduler wakes up (unparks) a new OS thread (M) or creates one. This new thread enters spinning mode.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If spinning threads already exist, a new thread is not woken up — the existing ones will handle the work themselves.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This mechanism keeps exactly as many active threads as needed for maximum core utilization, without unnecessary context-switching overhead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Locality and Affinity&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Go runtime tries, as much as possible, to keep a goroutine attached to the same thread. Why? This improves CPU cache locality. If a goroutine constantly jumps from thread to thread, its data must be constantly invalidated and reloaded into the cache of the new core, which is slow.&lt;/p&gt;

&lt;p&gt;Thanks to the local queues on P and the work stealing mechanism, a goroutine that was blocked on a channel is highly likely to be resumed on the same P (and consequently, the same M) where it was previously running. The scheduler even gives priority to goroutines just unparked from channels, placing them at the front of the queue (&lt;code&gt;runnext&lt;/code&gt;) so they execute as soon as possible and resume work with a "warm" cache.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Schematic Lifecycle&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;G is Running: G executes on M (which is attached to P).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;G Blocks: G blocks (syscall, channel, mutex). The scheduler transitions G to Waiting.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If syscall: M detaches from P, P looks for a new M. G remains on M.&lt;/li&gt;
&lt;li&gt;If channel/mutex: G enters the resource's wait queue. M stays with P and takes a new G from the local queue.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;G becomes Runnable: The resource G was waiting for becomes available. The scheduler adds G to a queue (most often the local queue of the P that unblocked it).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;G is Selected for Execution: When its turn comes, some M attached to this P will remove G from the queue and begin executing it.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
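&lt;p&gt;A minimal, stdlib-only sketch of steps 2-4 using a mutex: the worker goroutine parks on &lt;code&gt;mu.Lock&lt;/code&gt; (Waiting) and becomes Runnable again the moment the lock is released. The &lt;code&gt;lockHandoff&lt;/code&gt; helper is purely illustrative, not part of any runtime API:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// lockHandoff returns the order of events: main releasing the lock,
// then the worker resuming once the scheduler marks it Runnable again.
func lockHandoff() []string {
	var mu, rec sync.Mutex
	var wg sync.WaitGroup
	events := []string{}
	record := func(e string) {
		rec.Lock()
		events = append(events, e)
		rec.Unlock()
	}

	mu.Lock() // main holds the lock: the worker below will block on it
	wg.Add(1)
	go func() {
		defer wg.Done()
		mu.Lock() // worker: Running -> Waiting (parked on the mutex)
		record("worker resumed")
		mu.Unlock()
	}()

	time.Sleep(10 * time.Millisecond) // let the worker reach mu.Lock
	record("main released")
	mu.Unlock() // worker: Waiting -> Runnable -> Running
	wg.Wait()
	return events
}

func main() {
	fmt.Println(lockHandoff()) // prints [main released worker resumed]
}
```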

</description>
      <category>go</category>
      <category>tutorial</category>
      <category>performance</category>
    </item>
    <item>
      <title>Concurrency patterns on Golang: Singleflight</title>
      <dc:creator>Silver_dev</dc:creator>
      <pubDate>Tue, 24 Feb 2026 17:42:33 +0000</pubDate>
      <link>https://dev.to/silver_dev/concurrency-patterns-on-golang-singleflight-ola</link>
      <guid>https://dev.to/silver_dev/concurrency-patterns-on-golang-singleflight-ola</guid>
      <description>&lt;h2&gt;
  
  
  Problems this pattern can solve:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;The "Thundering Herd" problem. Imagine: a cache with user data has expired. 100 concurrent requests arrive, all see the cache miss and simultaneously stampede to the database. The database collapses under load, the service dies.&lt;/li&gt;
&lt;li&gt;A microservice calls an external API with strict rate limits. Under high load, 10 parallel requests with the same key could exhaust the limit or simply create redundant traffic. (All requests with the same key are coalesced into a single external call, saving limits and resources.)&lt;/li&gt;
&lt;li&gt;Heavy operations: web page parsing via a headless browser, generating PDF reports, ML inference. If 50 users request the same report, launching 50 browsers is memory suicide. (One browser will be launched, the result will be shared by all.)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Essence&lt;/strong&gt;: Singleflight guarantees that for a given key, only one operation is executed at a time. All other calls with the same key do not start a new operation but "attach" themselves to the already running one and wait for its result. After the original call completes, all waiters receive the same result (or error).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Idea&lt;/strong&gt;: Coalesce multiple concurrent requests with the same key into a single real call and distribute the result to all waiters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use the official package &lt;code&gt;golang.org/x/sync/singleflight&lt;/code&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsnfyu447e1hs7jajxipo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsnfyu447e1hs7jajxipo.png" alt=" " width="800" height="711"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Singleflight vs Other Patterns
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Difference from Caching.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Cache: Stores the result after execution. On concurrent access to an empty cache, all requests will still go to the DB (cache miss).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Singleflight: Doesn't store the result, but coalesces requests that are in flight at the same moment. After completion, the result is not saved (unless you put it in the cache yourself).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Difference from Semaphore (worker pool).&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Semaphore: Limits the number of concurrently executing operations, but doesn't coalesce identical ones. 10 requests for the same key can execute in parallel (if the pool allows).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Singleflight: Regardless of pool size, only one operation executes at a time for a given key.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Difference from WaitGroup.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;WaitGroup: Simply waits for a group of goroutines to complete, but doesn't manage duplicates or share results.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Singleflight: Uses WaitGroup internally to coordinate waiting goroutines and distribute the result.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Difference from Mutex.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Mutex: Locks access to a critical section, but each time the mutex is released, the next goroutine enters and repeats the operation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Singleflight: The operation is executed once, the result is given to all.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Example
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"fmt"&lt;/span&gt;
    &lt;span class="s"&gt;"sync"&lt;/span&gt;
    &lt;span class="s"&gt;"time"&lt;/span&gt;

    &lt;span class="s"&gt;"golang.org/x/sync/singleflight"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;group&lt;/span&gt; &lt;span class="n"&gt;singleflight&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Group&lt;/span&gt;
    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;wg&lt;/span&gt; &lt;span class="n"&gt;sync&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WaitGroup&lt;/span&gt;

    &lt;span class="n"&gt;key&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="s"&gt;"user:123"&lt;/span&gt;
    &lt;span class="n"&gt;requests&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;

    &lt;span class="c"&gt;// Simulating a heavy operation (database call or external API)&lt;/span&gt;
    &lt;span class="n"&gt;expensiveOp&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;interface&lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Heavy operation started (actually once)"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Second&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;// simulating long work&lt;/span&gt;
        &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Heavy operation completed"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s"&gt;"data_for_"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c"&gt;// Launching 5 concurrent requests&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;wg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;reqID&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;wg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Done&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

            &lt;span class="c"&gt;// Execute via singleflight&lt;/span&gt;
            &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;shared&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;group&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Do&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;expensiveOp&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Request %d: error %v&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;reqID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Request %d: result = %v (shared = %v)&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;reqID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;shared&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;wg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Wait&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>go</category>
      <category>performance</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Concurrency patterns on Golang: ErrGroup</title>
      <dc:creator>Silver_dev</dc:creator>
      <pubDate>Mon, 23 Feb 2026 22:48:18 +0000</pubDate>
      <link>https://dev.to/silver_dev/concurrency-patterns-on-golang-errgroup-2f8c</link>
      <guid>https://dev.to/silver_dev/concurrency-patterns-on-golang-errgroup-2f8c</guid>
      <description>&lt;h2&gt;
  
  
  Problems this pattern can solve:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;You've launched 5 goroutines. You need to wait for all of them to complete, but if at least one fails with an error — stop the rest and return the error. Implementing this with sync.WaitGroup + context + error channels leads to boilerplate code that's easy to break.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the classic approach, if one goroutine fails with an error, the rest continue working uselessly or, worse, block forever writing to a channel that no one is reading from. This is a resource leak.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The problem of complex error collection from multiple goroutines. You need to collect all errors or at least the first significant one. Manual implementation requires creating mutexes to protect a shared error variable or separate channels.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Essence&lt;/strong&gt;: ErrGroup is a goroutine group manager that provides two main mechanisms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Synchronization&lt;/strong&gt;: Waiting for all launched goroutines to complete (like sync.WaitGroup).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error propagation and cancellation&lt;/strong&gt;: When an error occurs in any of the goroutines, the shared context is automatically canceled, signaling other goroutines to terminate, and the &lt;code&gt;Wait()&lt;/code&gt; method returns this error.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Idea&lt;/strong&gt;: Combine launching concurrent tasks, waiting for their completion, and automatically stopping all of them at the first error through a shared context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use the official package &lt;code&gt;golang.org/x/sync/errgroup&lt;/code&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1h7187yai1yp09i1a7t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1h7187yai1yp09i1a7t.png" alt=" " width="800" height="860"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  ErrGroup vs Other Patterns
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Difference from sync.WaitGroup&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;sync.WaitGroup&lt;/code&gt;: A simple goroutine counter. Waits for all to complete, but doesn't handle errors and context. If a goroutine fails, the rest continue working.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ErrGroup: A wrapper over WaitGroup that adds context binding and "stop everything on first error" semantics.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Difference from Pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Pipeline: A pattern for organizing sequential data processing stages, where data flows through channels.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ErrGroup: A pattern for managing concurrent tasks that may be independent or loosely coupled. Used for coordination, not for data stream transformation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Difference from Worker Pool (Fan-out/Fan-in)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Worker Pool: About distributing tasks among a fixed number of workers for parallel processing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ErrGroup: About coordinating the execution of a set of tasks (often heterogeneous) and reacting to the first error. ErrGroup can be used inside a worker or to manage a group of workers, but the focus is on cancellation, not task distribution.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Example
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"context"&lt;/span&gt;
    &lt;span class="s"&gt;"fmt"&lt;/span&gt;
    &lt;span class="s"&gt;"time"&lt;/span&gt;

    &lt;span class="s"&gt;"golang.org/x/sync/errgroup"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;// Create an errgroup with context&lt;/span&gt;
    &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;errgroup&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithContext&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Background&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

    &lt;span class="c"&gt;// Launch a goroutine that will complete successfully&lt;/span&gt;
    &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Go&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;select&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;After&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Second&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Goroutine 1: success"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;
        &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Done&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Goroutine 1: canceled"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Err&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;

    &lt;span class="c"&gt;// Launch a goroutine that will fail with an error&lt;/span&gt;
    &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Go&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;select&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;After&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;500&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Millisecond&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Goroutine 2: error!"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"something went wrong in goroutine 2"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Done&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Goroutine 2: canceled"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Err&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;

    &lt;span class="c"&gt;// Launch a goroutine that should be canceled&lt;/span&gt;
    &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Go&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;select&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;After&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;3&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Second&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Goroutine 3: success (but this won't happen)"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;
        &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Done&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Goroutine 3: canceled"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Err&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c"&gt;// context.Canceled&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;

    &lt;span class="c"&gt;// Wait for all goroutines to complete and get the first error&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Wait&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Error: %v&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"All successful"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c"&gt;// Give time to see the output&lt;/span&gt;
    &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Second&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>go</category>
      <category>performance</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Concurrency patterns on Golang: Fan-out / Fan-in</title>
      <dc:creator>Silver_dev</dc:creator>
      <pubDate>Sun, 22 Feb 2026 20:40:08 +0000</pubDate>
      <link>https://dev.to/silver_dev/concurrency-patterns-on-golang-fan-out-fan-in-goj</link>
      <guid>https://dev.to/silver_dev/concurrency-patterns-on-golang-fan-out-fan-in-goj</guid>
      <description>&lt;h2&gt;
  
  
  Problems this pattern can solve:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;You have 8 cores, but your data processing stage runs sequentially in a single goroutine. The CPU sits idle while the task queue grows. We create N workers (one per core) that pull tasks from a shared channel in parallel, utilizing all available cores.&lt;/li&gt;
&lt;li&gt;Suppose your Pipeline has a stage that calls an external API or resizes images. This is slow. If left in a single goroutine, the entire pipeline will bottleneck at this stage.&lt;/li&gt;
&lt;li&gt;You've launched 100 goroutines for web scraping. How do you collect all the results in one place for a final database write, without using global variables and mutexes?&lt;/li&gt;
&lt;li&gt;Creating a new goroutine for each incoming request is dangerous. Under peak load, this can crash the service (panic due to memory exhaustion).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Essence&lt;/strong&gt;: A pattern consisting of two phases that work in tandem to parallelize tasks.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Fan-out&lt;/strong&gt;: The process of launching multiple goroutines (workers) that read tasks from a single input channel. This distributes the load.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Fan-in&lt;/strong&gt;: The process of merging results from multiple goroutines into a single output channel. This consolidates data for final processing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Idea&lt;/strong&gt;: Parallelize the execution of similar tasks by distributing them among a fixed pool of workers and subsequently aggregating results through multiplexing of output channels.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqmcw5tvt90drqcrmk08.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqmcw5tvt90drqcrmk08.png" alt=" " width="800" height="932"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Fan-out / Fan-in vs Other Patterns
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Difference from Pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Fan-out/Fan-in: This is about horizontal scaling of a single stage. We take one step and multiply its executors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pipeline: This is about vertical decomposition of a process into different steps (read -&amp;gt; process -&amp;gt; write). Fan-out/Fan-in often lives inside one specific stage of a Pipeline.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Difference from Worker Pool&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Actually, they are practically the same thing. Fan-out/Fan-in is a conceptual description of how a worker pool is structured.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fan-out = task dispatching to the pool. Fan-in = result collection from the pool.&lt;br&gt;
You could say that Fan-out/Fan-in is an architectural pattern, and Worker Pool is its concrete implementation for executing tasks.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Difference from Pub/Sub (Publish-Subscribe)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Fan-out/Fan-in: Each task from the input channel is received by only one worker. This is work distribution (competition for tasks).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pub/Sub: Each message is received by all subscribers. This is broadcast distribution.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Example
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"context"&lt;/span&gt;
    &lt;span class="s"&gt;"fmt"&lt;/span&gt;
    &lt;span class="s"&gt;"sync"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;// Task generation&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;generateJobs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;ch&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;ch&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="nb"&gt;close&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ch&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}()&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;ch&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;// Fan-out: Distributing tasks among workers&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;fanOut&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;jobs&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;numWorkers&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;workerChannels&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;([]&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;numWorkers&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;numWorkers&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;resultCh&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="nb"&gt;close&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;resultCh&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;select&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="n"&gt;job&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ok&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="n"&gt;jobs&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
                    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;ok&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="c"&gt;// Task channel closed&lt;/span&gt;
                    &lt;span class="p"&gt;}&lt;/span&gt;
                    &lt;span class="c"&gt;// Task processing (example: squaring)&lt;/span&gt;
                    &lt;span class="n"&gt;resultCh&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt; &lt;span class="n"&gt;job&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;job&lt;/span&gt;
                &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Done&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
                    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="c"&gt;// Cancellation via context&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}()&lt;/span&gt;

        &lt;span class="n"&gt;workerChannels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;workerChannels&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;resultCh&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;workerChannels&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;// Fan-in: Merging results&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;fanIn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;channels&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;wg&lt;/span&gt; &lt;span class="n"&gt;sync&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WaitGroup&lt;/span&gt;
    &lt;span class="n"&gt;merged&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;wg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;channels&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ch&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;channels&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;wg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Done&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;res&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;merged&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt; &lt;span class="n"&gt;res&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}(&lt;/span&gt;&lt;span class="n"&gt;ch&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c"&gt;// Goroutine to close the final channel&lt;/span&gt;
    &lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;wg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Wait&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nb"&gt;close&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;merged&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}()&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;merged&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;// 1. Generate tasks&lt;/span&gt;
    &lt;span class="n"&gt;jobs&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;generateJobs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c"&gt;// 2. Fan-out: distribute tasks among 3 workers&lt;/span&gt;
    &lt;span class="n"&gt;resultChannels&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;fanOut&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Background&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;jobs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c"&gt;// 3. Fan-in: merge results from all channels&lt;/span&gt;
    &lt;span class="n"&gt;mergedResults&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;fanIn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;resultChannels&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c"&gt;// 4. Results&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;res&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;mergedResults&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Получен результат: %d&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>architecture</category>
      <category>go</category>
      <category>performance</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Concurrency patterns on Golang: Pipeline</title>
      <dc:creator>Silver_dev</dc:creator>
      <pubDate>Sun, 22 Feb 2026 18:26:11 +0000</pubDate>
      <link>https://dev.to/silver_dev/concurrency-patterns-on-golang-pipeline-16mb</link>
      <guid>https://dev.to/silver_dev/concurrency-patterns-on-golang-pipeline-16mb</guid>
      <description>&lt;h2&gt;
  
  
  Problems this pattern can solve:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;When you need to process millions of records (e.g., log lines, images, or database entries), executing all steps sequentially for one item after another leads to resource underutilization. The CPU sits idle during I/O operations (disk reads, network requests), while subsequent steps wait for previous ones to complete.&lt;/li&gt;
&lt;li&gt;Imagine a 500-line function that does: &lt;code&gt;read -&amp;gt; transform -&amp;gt; validate -&amp;gt; enrich -&amp;gt; save&lt;/code&gt;. It's impossible to cover with adequate tests, difficult to extend (adding a new step requires modifying the entire function), and easy to break by affecting the logic of an adjacent step.&lt;/li&gt;
&lt;li&gt;In concurrent environments, it's challenging to properly handle situations where a fatal error occurs at one stage. You need to stop all stages, prevent goroutine leaks, and shut down the program gracefully.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Essence&lt;/strong&gt;: Breaking down a complex data processing operation into a sequence of independent, reusable stages, where each stage performs one specific function, and data is passed between them via channels.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Idea&lt;/strong&gt;: Split data processing into sequential, concurrently executing stages, where the output of one stage serves as input for the next, using channels as a conveyor belt for data transfer and flow control.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqk98nygd5w0425vydjl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqk98nygd5w0425vydjl.png" alt=" " width="412" height="983"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Pipeline vs Other Patterns
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Difference from Chain of Responsibility&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Pipeline: Each element necessarily goes through all stages of the pipeline and is transformed at each stage. Data moves in one direction. It's about data processing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Chain of Responsibility: A request is passed along a chain of handlers until one of them handles it. The request does not necessarily go through the entire chain. It's about passing responsibility. (Example: middleware in web frameworks, where one handler can return a response and break the chain).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
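&lt;p&gt;The middleware example can be sketched as a minimal handler chain (hypothetical names, not tied to any web framework), showing how a handler may answer early and break the chain — something a pipeline stage never does:&lt;/p&gt;

```go
package main

import "fmt"

// Handler processes a request or delegates to the next handler.
type Handler func(req string, next func(string) string) string

// chain composes handlers right-to-left so any one of them may
// return early, breaking the chain — unlike a pipeline, where
// every element passes through every stage.
func chain(handlers []Handler, final func(string) string) func(string) string {
	next := final
	for i := len(handlers) - 1; i >= 0; i-- {
		h, n := handlers[i], next // capture per-iteration values
		next = func(req string) string { return h(req, n) }
	}
	return next
}

func main() {
	auth := func(req string, next func(string) string) string {
		if req == "anonymous" {
			return "401 Unauthorized" // break the chain early
		}
		return next(req)
	}
	handle := chain([]Handler{auth}, func(req string) string {
		return "200 OK for " + req
	})
	fmt.Println(handle("anonymous")) // → 401 Unauthorized
	fmt.Println(handle("alice"))     // → 200 OK for alice
}
```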

&lt;p&gt;&lt;strong&gt;Difference from Fan-Out / Fan-In&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Pipeline: This is a structural code organization pattern.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Fan-Out / Fan-In: These are parallelization patterns often used inside Pipeline stages.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fan-Out: Launching multiple goroutines (workers) at one stage to parallelize heavy work.&lt;/li&gt;
&lt;li&gt;Fan-In: Collecting results from multiple workers back into a single channel.&lt;/li&gt;
&lt;li&gt;In effect, Fan-Out/Fan-In is a way to implement a single Pipeline stage so that it performs better.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Difference from Observer&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Pipeline: Push-model. The "generator" actively sends data to the next stage. It's a unidirectional stream.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Observer: One subject notifies many subscribers about changes in its state (one-to-many). Subscribers are passive and wait for notifications. It's about event broadcasting, not data transformation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Example
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="s"&gt;"fmt"&lt;/span&gt;

&lt;span class="c"&gt;// stage 1: Generator&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;gen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;nums&lt;/span&gt; &lt;span class="o"&gt;...&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;out&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;nums&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;out&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="nb"&gt;close&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;out&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}()&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;out&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;// stage 2: Multiplying by 2&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;sq&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;in&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;out&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;in&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;out&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="nb"&gt;close&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;out&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}()&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;out&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;// Assembling the pipeline&lt;/span&gt;
    &lt;span class="n"&gt;in&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;gen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;out&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;sq&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;in&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c"&gt;// Result&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;out&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;// 4, 9, 16&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>go</category>
      <category>performance</category>
      <category>coding</category>
      <category>backend</category>
    </item>
    <item>
      <title>Concurrency patterns on Golang: Semaphore</title>
      <dc:creator>Silver_dev</dc:creator>
      <pubDate>Sat, 21 Feb 2026 21:50:00 +0000</pubDate>
      <link>https://dev.to/silver_dev/concurrency-patterns-on-golang-semaphore-4e63</link>
      <guid>https://dev.to/silver_dev/concurrency-patterns-on-golang-semaphore-4e63</guid>
      <description>&lt;h2&gt;
  
  
  Problems this pattern can solve:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;An external API allows only 5 concurrent requests; any more and it bans you by IP.&lt;/li&gt;
&lt;li&gt;You have a database connection pool of 10. If you spawn 100 goroutines, each will try to acquire a connection, and 90 of them will sit waiting and consuming memory.&lt;/li&gt;
&lt;li&gt;A microservice collapses under load. You need to limit the number of concurrent requests to it and, when the limit is exceeded, quickly return an error instead of overloading the service.&lt;/li&gt;
&lt;/ol&gt;
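&lt;p&gt;Point 3 (fail fast instead of queueing) can be sketched with a buffered channel and a non-blocking &lt;code&gt;select&lt;/code&gt; (a minimal sketch; &lt;code&gt;tryAcquire&lt;/code&gt; is an illustrative name):&lt;/p&gt;

```go
package main

import (
	"errors"
	"fmt"
)

var errBusy = errors.New("too many concurrent requests")

// sem is a counting semaphore: capacity = max concurrent operations.
var sem = make(chan struct{}, 2)

// tryAcquire takes a permit without blocking: if the semaphore is
// full, it returns an error immediately instead of queueing.
func tryAcquire() error {
	select {
	case sem <- struct{}{}:
		return nil
	default:
		return errBusy // fail fast, don't overload the service
	}
}

// release returns a permit to the semaphore.
func release() { <-sem }

func main() {
	for i := 1; i <= 3; i++ {
		if err := tryAcquire(); err != nil {
			fmt.Println("request", i, "rejected:", err) // third request
			continue
		}
		fmt.Println("request", i, "accepted") // first two requests
		// in a real handler, release() would be deferred here
	}
}
```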

&lt;p&gt;&lt;strong&gt;Essence:&lt;/strong&gt; A synchronization mechanism that uses a counter to limit the number of concurrently executing operations or access to a resource. Goroutines "acquire" the semaphore before starting work and "release" it upon completion, while the semaphore blocks new acquisitions when the counter reaches zero.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key idea&lt;/strong&gt;: A permission counter that blocks execution when the limit is exhausted and allows it when there are free slots.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe75zqhry3bdqz6iw4tpz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe75zqhry3bdqz6iw4tpz.png" alt=" " width="800" height="252"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prefer the official &lt;code&gt;golang.org/x/sync/semaphore&lt;/code&gt; package, except when you need to combine the semaphore with other channel-based patterns or require custom behavior.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Example (Simplified)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// Semaphore = buffered channel with empty structs&lt;/span&gt;
&lt;span class="n"&gt;sem&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt; &lt;span class="n"&gt;N&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;sem&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt;&lt;span class="p"&gt;{}{}&lt;/span&gt; &lt;span class="c"&gt;// Acquire: take a permit (blocks if full)&lt;/span&gt;
&lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="n"&gt;sem&lt;/span&gt;             &lt;span class="c"&gt;// Release: return a permit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Semaphore Disadvantages:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;No priorities. A semaphore does not guarantee access order (no FIFO fairness).&lt;/li&gt;
&lt;li&gt;Deadlock risk. A permit must always be returned; release it with &lt;code&gt;defer&lt;/code&gt; so it survives panics and early returns.&lt;/li&gt;
&lt;li&gt;No ownership. Unlike a mutex, a semaphore can be released by any goroutine, not only the one that acquired it.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Semaphore vs Other Patterns:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Worker Pool&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Worker Pool: Manages goroutines that execute tasks. Workers live permanently.&lt;/li&gt;
&lt;li&gt;Semaphore: Manages access to a resource. Goroutines are created per task but are blocked by the semaphore before the "heavy" operation.&lt;/li&gt;
&lt;li&gt;Key difference: Semaphore does not create goroutines, it only limits their concurrent execution.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Mutex (sync.Mutex)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mutex: Binary (0 or 1), protects a critical section from simultaneous access.&lt;/li&gt;
&lt;li&gt;Semaphore: Can be counting (N &amp;gt; 1), manages the number of concurrent accesses.&lt;/li&gt;
&lt;li&gt;Key difference: Mutex is for mutual exclusion, semaphore is for limiting concurrency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Rate Limiter&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rate Limiter: Limits the number of operations per time unit (e.g., 100/sec).&lt;/li&gt;
&lt;li&gt;Semaphore: Limits the number of concurrent operations (e.g., 10 concurrent requests).&lt;/li&gt;
&lt;li&gt;Key difference: Rate limiter works with a time window, semaphore works with concurrency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Channels&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Channels: Pass data between goroutines; a buffered channel can double as a semaphore.&lt;/li&gt;
&lt;li&gt;Semaphore: A specialized synchronization primitive; its permits carry no data.&lt;/li&gt;
&lt;li&gt;Key difference: A semaphore expresses pure access limiting directly, without the data-passing machinery of channels.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Example
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"fmt"&lt;/span&gt;
    &lt;span class="s"&gt;"sync"&lt;/span&gt;
    &lt;span class="s"&gt;"time"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;// Semaphore - concurrency limiter&lt;/span&gt;
&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;Semaphore&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;ch&lt;/span&gt; &lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;NewSemaphore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;maxConcurrent&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Semaphore&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;Semaphore&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;ch&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt; &lt;span class="n"&gt;maxConcurrent&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;// Acquire - get a permit (blocks if limit is exceeded)&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Semaphore&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;Acquire&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ch&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt;&lt;span class="p"&gt;{}{}&lt;/span&gt; &lt;span class="c"&gt;// Send blocks when channel is full&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;// Release - return permit to the pool&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Semaphore&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;Release&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ch&lt;/span&gt; &lt;span class="c"&gt;// Release slot&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;worker&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sem&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Semaphore&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;wg&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;sync&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WaitGroup&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;wg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Done&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;sem&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Acquire&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;       &lt;span class="c"&gt;// Wait for free slot&lt;/span&gt;
    &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;sem&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Release&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c"&gt;// Release after work&lt;/span&gt;

    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"[%s] Worker %d: started work&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Now&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"15:04:05"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Second&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;// Simulate work&lt;/span&gt;
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"[%s] Worker %d: finished&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Now&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"15:04:05"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;sem&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;NewSemaphore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;// Maximum 3 concurrent tasks&lt;/span&gt;
    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;wg&lt;/span&gt; &lt;span class="n"&gt;sync&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WaitGroup&lt;/span&gt;

    &lt;span class="c"&gt;// Start 10 goroutines, but only 3 will be active&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;wg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="n"&gt;worker&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sem&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;wg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;200&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Millisecond&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;// Small delay between launches&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;wg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Wait&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"All tasks completed"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>go</category>
      <category>performance</category>
      <category>backend</category>
      <category>coding</category>
    </item>
    <item>
      <title>Concurrency patterns on Golang: Worker Pool</title>
      <dc:creator>Silver_dev</dc:creator>
      <pubDate>Thu, 19 Feb 2026 20:52:44 +0000</pubDate>
      <link>https://dev.to/silver_dev/concurrency-patterns-on-golang-worker-pool-4b9m</link>
      <guid>https://dev.to/silver_dev/concurrency-patterns-on-golang-worker-pool-4b9m</guid>
      <description>&lt;h2&gt;
  
  
  Problems this pattern can solve:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;If your service makes 1,000 queries per second to a database, but it can only handle 100, a worker pool will protect the database from crashing.&lt;/li&gt;
&lt;li&gt;In the event of a sudden traffic spike, using &lt;code&gt;go func()&lt;/code&gt; for each request could spawn an unbounded number of goroutines and consume all your memory. The pool caps concurrency.&lt;/li&gt;
&lt;li&gt;If you have a pool of socket connections or file descriptors, a worker pool ensures you don't exceed the OS limit.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The Essence:&lt;/strong&gt;&lt;br&gt;
We create a fixed number of goroutines (workers) that are started in advance and wait for tasks. The main goroutine (the dispatcher) puts tasks into a channel (the task queue). The workers concurrently take these tasks from the channel and execute them. They can send the results back through another channel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Idea:&lt;/strong&gt; Limiting the number of concurrently executed operations and reusing goroutines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2z9p5igrvnms6x47pf1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2z9p5igrvnms6x47pf1.png" alt=" " width="800" height="671"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Difference between Worker Pool and other approaches:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;go func()&lt;/code&gt; (Spawning raw goroutines):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cons compared to Worker Pool: Uncontrolled growth in the number of goroutines can lead to resource exhaustion (memory, file descriptors) and a panic. There is no control over concurrency.&lt;/li&gt;
&lt;li&gt;Pros compared to Worker Pool: Simpler to write, lower startup latency (no need to wait for a free worker).&lt;/li&gt;
&lt;li&gt;When to use it: For handling signals, very lightweight tasks, or when the load is guaranteed to be low.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Pipeline:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How it's different: This is about sequential processing. Data flows through a chain of stages, where each stage is executed by its own goroutine (or pool), connected by channels.&lt;/li&gt;
&lt;li&gt;Example: &lt;code&gt;stage1 (generate) -&amp;gt; stage2 (multiply) -&amp;gt; stage3 (save)&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Cons compared to Worker Pool: More difficult to cancel and handle errors; the throughput is limited by the slowest stage.&lt;/li&gt;
&lt;li&gt;Pros compared to Worker Pool: Ideal for tasks that can be broken down into distinct, independent processing steps.&lt;/li&gt;
&lt;/ul&gt;
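&lt;p&gt;The &lt;code&gt;stage1 (generate) -&amp;gt; stage2 (multiply) -&amp;gt; stage3 (save)&lt;/code&gt; chain above can be sketched with channels. This is a minimal sketch, not a production pipeline: the stage names follow the example, and the "save" stage simply prints.&lt;/p&gt;

```go
package main

import "fmt"

// generate is stage 1: it emits the numbers 1..n into a channel.
func generate(n int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for i := 1; i <= n; i++ {
			out <- i
		}
	}()
	return out
}

// multiply is stage 2: it doubles every value it receives.
func multiply(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for v := range in {
			out <- v * 2
		}
	}()
	return out
}

func main() {
	// Stage 3 ("save") consumes the chain; here it just prints 2, 4, 6.
	for v := range multiply(generate(3)) {
		fmt.Println(v)
	}
}
```

&lt;p&gt;Each stage owns and closes its output channel, so closing &lt;code&gt;generate&lt;/code&gt;'s channel shuts the whole chain down cleanly; this is also why cancellation and error handling get harder as the chain grows, as noted above.&lt;/p&gt;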

&lt;p&gt;&lt;strong&gt;Semaphore:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How it's different: A semaphore is a synchronization primitive used to limit access to a resource. You still spawn a goroutine for each task, but each one acquires a slot from the semaphore before starting the "heavy" part.&lt;/li&gt;
&lt;li&gt;How this relates: A Worker Pool is often implemented using a semaphore, but a semaphore is a lower-level tool.&lt;/li&gt;
&lt;li&gt;If the task is heavy (e.g., an HTTP request, complex calculation, disk I/O) and there aren't millions of them, use a Semaphore. The overhead of creating a goroutine is negligible compared to the task's execution time (e.g., parallel scraping of 50 websites).&lt;/li&gt;
&lt;li&gt;If the task is very light (e.g., parsing a string, a simple transformation) and the stream is endless, use a Worker Pool. Otherwise, constantly creating and destroying thousands of goroutines puts needless pressure on the scheduler and the GC (e.g., real-time log processing, handling events from Kafka).&lt;/li&gt;
&lt;/ul&gt;
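&lt;p&gt;The semaphore approach can be sketched with Go's idiomatic buffered-channel semaphore: one goroutine per task, but only &lt;code&gt;limit&lt;/code&gt; slots active at once. The function &lt;code&gt;runLimited&lt;/code&gt; is illustrative; it tracks peak concurrency only to demonstrate that the limit holds.&lt;/p&gt;

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// runLimited starts one goroutine per task, but a buffered channel acts
// as a semaphore so at most `limit` run concurrently. It returns the
// peak number of tasks observed running at the same time.
func runLimited(tasks, limit int) int32 {
	sem := make(chan struct{}, limit) // capacity = max concurrency
	var wg sync.WaitGroup
	var running, peak int32

	for i := 0; i < tasks; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot (blocks when full)
			defer func() { <-sem }() // release the slot

			n := atomic.AddInt32(&running, 1)
			for { // record the high-water mark
				p := atomic.LoadInt32(&peak)
				if n <= p || atomic.CompareAndSwapInt32(&peak, p, n) {
					break
				}
			}
			time.Sleep(50 * time.Millisecond) // simulate a heavy task
			atomic.AddInt32(&running, -1)
		}()
	}
	wg.Wait()
	return atomic.LoadInt32(&peak)
}

func main() {
	fmt.Println("peak concurrency:", runLimited(10, 3)) // never exceeds 3
}
```

&lt;p&gt;Note the structural difference from a worker pool: here every task still gets its own goroutine, and only the "heavy" section is gated.&lt;/p&gt;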

&lt;p&gt;&lt;strong&gt;MapReduce:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How it's different: A higher-level pattern for distributed computations. It involves a "Map" phase (parallelization) and a "Reduce" phase (aggregation). A Worker Pool is often used as an implementation for the "Map" phase.&lt;/li&gt;
&lt;li&gt;Cons compared to Worker Pool: Overkill for simple concurrent processing.&lt;/li&gt;
&lt;/ul&gt;
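&lt;p&gt;A minimal sketch of the MapReduce shape described above, with a worker pool serving as the "Map" phase and a single goroutine doing the "Reduce" aggregation. The function name &lt;code&gt;mapReduce&lt;/code&gt; and the square-then-sum computation are illustrative.&lt;/p&gt;

```go
package main

import (
	"fmt"
	"sync"
)

// mapReduce squares every input in parallel (the "Map" phase, executed
// by a small worker pool) and sums the results (the "Reduce" phase).
func mapReduce(nums []int, workers int) int {
	jobs := make(chan int)
	results := make(chan int)
	var wg sync.WaitGroup

	// Map phase: a worker pool applies the transformation.
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := range jobs {
				results <- n * n
			}
		}()
	}

	// Feed the jobs, then close results once all workers are done.
	go func() {
		for _, n := range nums {
			jobs <- n
		}
		close(jobs)
		wg.Wait()
		close(results)
	}()

	// Reduce phase: aggregate in a single goroutine.
	sum := 0
	for r := range results {
		sum += r
	}
	return sum
}

func main() {
	fmt.Println(mapReduce([]int{1, 2, 3, 4}, 3)) // 1+4+9+16 = 30
}
```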

&lt;h2&gt;
  
  
  Example:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"fmt"&lt;/span&gt;
    &lt;span class="s"&gt;"sync"&lt;/span&gt;
    &lt;span class="s"&gt;"time"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;worker&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;wg&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;sync&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WaitGroup&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;jobs&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;wg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Done&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;recover&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Worker %d: panic: %v&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}()&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;job&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;jobs&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Worker %d started the task %d&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;job&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Second&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;// Simulating a task&lt;/span&gt;
        &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Worker %d completed the task %d&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;job&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;numJobs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
    &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;numWorkers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
    &lt;span class="n"&gt;jobs&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;numJobs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;wg&lt;/span&gt; &lt;span class="n"&gt;sync&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WaitGroup&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="n"&gt;numWorkers&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;w&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;wg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="n"&gt;worker&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;w&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;wg&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;jobs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="n"&gt;numJobs&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;jobs&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nb"&gt;close&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jobs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;wg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Wait&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"All tasks completed"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>go</category>
      <category>performance</category>
      <category>coding</category>
      <category>backend</category>
    </item>
  </channel>
</rss>
