<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Codebaker</title>
    <description>The latest articles on DEV Community by Codebaker (@debianbaker).</description>
    <link>https://dev.to/debianbaker</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3833613%2Fd27e3f65-73e2-4880-ab1d-65a9e7c35267.png</url>
      <title>DEV Community: Codebaker</title>
      <link>https://dev.to/debianbaker</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/debianbaker"/>
    <language>en</language>
    <item>
      <title>Inside the Go Scheduler: How GMP Model Powers Millions of Goroutines</title>
      <dc:creator>Codebaker</dc:creator>
      <pubDate>Mon, 30 Mar 2026 10:48:00 +0000</pubDate>
      <link>https://dev.to/debianbaker/inside-the-go-scheduler-how-gmp-model-powers-millions-of-goroutines-940</link>
      <guid>https://dev.to/debianbaker/inside-the-go-scheduler-how-gmp-model-powers-millions-of-goroutines-940</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;A common question developers ask when learning Go is: &lt;strong&gt;"Why goroutines when threads already work?"&lt;/strong&gt; Take Java, for example—each client request is executed on an OS thread. Simple, straightforward, and battle-tested. So why did Go introduce this additional abstraction?&lt;br&gt;
The answer lies in &lt;em&gt;scalability&lt;/em&gt; and &lt;em&gt;efficiency&lt;/em&gt;. While OS threads are powerful, they're also heavyweight—creating thousands of them can overwhelm a system. Goroutines, on the other hand, are lightweight and managed by Go's runtime, allowing you to spawn millions without breaking a sweat. But this raises another question: &lt;strong&gt;how does Go efficiently map thousands of goroutines onto a limited number of OS threads?&lt;/strong&gt;&lt;br&gt;
This is where Go's ingenious &lt;strong&gt;GMP scheduling model&lt;/strong&gt; comes into play.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Challenge: Mapping Goroutines to Threads
&lt;/h2&gt;

&lt;p&gt;OS threads are maintained by the operating system, which means the OS only knows about threads, not goroutines. Therefore, &lt;strong&gt;a goroutine must be mapped onto a thread to execute&lt;/strong&gt;. This implies an M:N mapping: many goroutines multiplexed onto a smaller number of threads, with each thread running exactly one goroutine at any given time.&lt;br&gt;
But how should this mapping occur? Let's explore two approaches and their problems:&lt;/p&gt;
&lt;h3&gt;
  
  
  Approach 1: A Single Global Queue
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Idea&lt;/strong&gt;: A single global queue from which all threads push and pull goroutines concurrently.&lt;br&gt;
&lt;strong&gt;Problem&lt;/strong&gt;: Lock contention on the global queue. Under high goroutine throughput, every thread is constantly fighting over the same structure, because each thread has to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Acquire a lock on the global queue&lt;/li&gt;
&lt;li&gt;Pull/Push a goroutine in the queue&lt;/li&gt;
&lt;li&gt;Release the lock&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Approach 2: A Local Queue Per Thread
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Idea&lt;/strong&gt;: Give each thread its own local queue, eliminating contention on a shared structure.&lt;br&gt;
&lt;strong&gt;Problem&lt;/strong&gt;: Two issues arise. First, if a goroutine makes a blocking system call, the OS blocks that thread — all goroutines waiting behind it are now stuck, even though the CPU is free. Second, load becomes unbalanced: one thread's queue may hold 100 goroutines while another's is empty, and there is no rebalancing mechanism.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Solution: The GMP Scheduling Model
&lt;/h2&gt;

&lt;p&gt;The Go developers devised an elegant solution called the &lt;strong&gt;GMP scheduling model&lt;/strong&gt;, which cleverly avoids both bottlenecks. The model consists of three key components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;G (Goroutine)&lt;/strong&gt; - The lightweight thread of execution&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;M (Machine)&lt;/strong&gt; - An OS thread (the term "Machine" is used in Go's runtime)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;P (Processor)&lt;/strong&gt; - Not a CPU, but a logical processor that acts as a middleman.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Important: There is still a global run queue in the GMP model, but it is no longer the primary queue; it serves as a secondary (overflow) queue.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;What is a Processor (P)?&lt;/strong&gt;&lt;br&gt;
Instead of assigning queues directly to threads, Go uses &lt;strong&gt;distributed run queues owned by Ps&lt;/strong&gt;. Each P maintains its own local run queue that holds multiple goroutines. Think of P as a scheduling context that bridges goroutines and threads. &lt;br&gt;
&lt;strong&gt;Key relationships:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each P maintains a local run queue of goroutines&lt;/li&gt;
&lt;li&gt;Each P is attached to an M (OS thread)&lt;/li&gt;
&lt;li&gt;P controls the parallelism in your program&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Understanding GOMAXPROCS: Tuning the Engine's Parallelism
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When are Goroutines and Threads Created?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Goroutines (G)&lt;/strong&gt; are created as per your code instructions (e.g., go functionName())&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Threads (M)&lt;/strong&gt; are created by the scheduler when needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's the crucial insight: &lt;strong&gt;P controls parallelism&lt;/strong&gt;. The number of Ps determines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The number of local run queues&lt;/li&gt;
&lt;li&gt;The maximum number of goroutines that can run in parallel&lt;/li&gt;
&lt;li&gt;The number of threads (Ms) required&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The GOMAXPROCS Setting&lt;/strong&gt;&lt;br&gt;
GOMAXPROCS determines the number of Ps in your program, and it can be manually configured.&lt;br&gt;
&lt;strong&gt;Example scenario:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;System: 2 CPU cores&lt;/li&gt;
&lt;li&gt;Goroutines: 16 created&lt;/li&gt;
&lt;li&gt;Setting: GOMAXPROCS = 4&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What happens:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;4 Ps are created → 4 local run queues&lt;/li&gt;
&lt;li&gt;Goroutines are distributed across queues (e.g., 4 goroutines per queue)&lt;/li&gt;
&lt;li&gt;Up to 4 goroutines can run concurrently&lt;/li&gt;
&lt;li&gt;The Go runtime requests 4 Ms (threads) from the OS&lt;/li&gt;
&lt;li&gt;Each P attaches to an M&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The problem:&lt;/strong&gt; With only 2 CPU cores but 4 threads, the OS must perform context switching between threads at the kernel level, which is relatively expensive.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Best practice: Set GOMAXPROCS = number of CPU cores (this is also the default in modern Go).&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  Thread Management: Creation, Parking, and Reuse
&lt;/h2&gt;

&lt;p&gt;The scheduler doesn't always create new threads. Here's how Go optimizes thread management:&lt;br&gt;
&lt;strong&gt;Scenario 1: Blocking System Call&lt;/strong&gt;&lt;br&gt;
When a goroutine makes a blocking system call:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The goroutine (G) is blocked&lt;/li&gt;
&lt;li&gt;The OS marks the thread (M) executing it as blocked&lt;/li&gt;
&lt;li&gt;The P, whose local run queue still holds runnable goroutines, needs to be attached to another thread&lt;/li&gt;
&lt;li&gt;The runtime detaches the P from the blocked M — this is called a P Handoff&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;new M&lt;/strong&gt; is created and attached to P to continue running other goroutines.&lt;/li&gt;
&lt;li&gt;When the blocking call completes:

&lt;ul&gt;
&lt;li&gt;The unblocked G is placed back on a run queue (its old P's local queue if available, otherwise the global queue)&lt;/li&gt;
&lt;li&gt;The M is &lt;strong&gt;parked&lt;/strong&gt; (not destroyed) to save CPU overhead&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Scenario 2: Subsequent Blocking Call&lt;/strong&gt;&lt;br&gt;
When another goroutine makes a blocking system call:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Again, the local run queue needs attachment to M&lt;/li&gt;
&lt;li&gt;This time, &lt;strong&gt;no new M&lt;/strong&gt; is created&lt;/li&gt;
&lt;li&gt;Instead, the &lt;strong&gt;parked M&lt;/strong&gt; is reused, saving creation overhead&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This parking and reusing strategy significantly reduces the overhead of thread management.&lt;/p&gt;
&lt;h2&gt;
  
  
  Scheduling Goroutines:
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Scheduling loop:
&lt;/h3&gt;

&lt;p&gt;When a P needs to pick the next G for its attached M, the runtime runs this loop:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiabmikpq5fode4im5471.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiabmikpq5fode4im5471.png" alt="Goroutine scheduling" width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;a. Every 61st scheduling tick — check the &lt;strong&gt;global queue&lt;/strong&gt; first — If a goroutine is found in the global queue, run it. If empty, proceed to b.&lt;br&gt;
b. Check &lt;strong&gt;local queue&lt;/strong&gt; — The P checks its own local queue. If a goroutine is found, run it. If empty, proceed to c.&lt;br&gt;
c. Check &lt;strong&gt;global queue&lt;/strong&gt; — Checked when local queue is empty (skipped if already checked in step a). If a goroutine is found, run it. If empty, proceed to d.&lt;br&gt;
d. &lt;strong&gt;Work Stealing&lt;/strong&gt; — Steals up to half the goroutines from another P. The runtime visits all Ps in a random order and stops when it finds a victim with stealable goroutines.&lt;br&gt;
e. Check the &lt;strong&gt;network poller&lt;/strong&gt; - The runtime checks whether any I/O-bound goroutine is ready to resume.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is the &lt;strong&gt;network poller&lt;/strong&gt;? When a goroutine performs a non-blocking (network) syscall, the runtime parks the goroutine and registers the file descriptor with the netpoller. The M that was executing the goroutine is not blocked (unlike with blocking syscalls), so it picks up another runnable goroutine. When the file descriptor becomes ready, the netpoller unparks the goroutine onto a run queue. Network I/O takes this path: for example, reads and writes on connections created with net.Dial() or accepted via net.Listen().&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Cooperative Scheduling:
&lt;/h3&gt;

&lt;p&gt;Go was primarily designed for backend systems that rely heavily on channels, function calls, and I/O. These naturally act as yield points where the scheduler can switch goroutines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Channel send/receive&lt;/li&gt;
&lt;li&gt;System calls&lt;/li&gt;
&lt;li&gt;Function calls (the compiler inserts scheduling checks at function entry)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is Cooperative Scheduling, happening entirely within the Go runtime.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem: CPU-Bound Goroutines&lt;/strong&gt;&lt;br&gt;
A goroutine with no yield points — such as a tight infinite loop — will never cooperate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;monopoly&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt; &lt;span class="c"&gt;// no function calls, no channel operation — never yields&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before Go 1.14, this goroutine would monopolize its P indefinitely, starving every other goroutine in the same local queue.&lt;/p&gt;

&lt;h3&gt;
  
  
  Preemptive Scheduling (Go 1.14+)
&lt;/h3&gt;

&lt;p&gt;Go 1.14 introduced &lt;strong&gt;signal-based preemption&lt;/strong&gt; as a fallback for CPU-bound goroutines.&lt;br&gt;
The mechanism is driven by &lt;strong&gt;sysmon&lt;/strong&gt; — a background thread that runs without a P, continuously monitoring the scheduler. When sysmon detects a goroutine has been running for approximately 10ms without yielding:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sysmon sends &lt;strong&gt;SIGURG&lt;/strong&gt; to the M running that goroutine&lt;/li&gt;
&lt;li&gt;The Go runtime's signal handler fires and hijacks execution&lt;/li&gt;
&lt;li&gt;The goroutine is paused, marked runnable, and placed on the global run queue&lt;/li&gt;
&lt;li&gt;The M proceeds to the next goroutine&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Signal-based preemption only kicks in for goroutines that never reach a natural yield point.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Distributed Scheduling&lt;/strong&gt;: Per-P local queues eliminate global lock contention, allowing threads to pick work independently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thread Efficiency&lt;/strong&gt;: Threads (Ms) are parked and reused rather than destroyed, significantly reducing creation overhead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;P Handoff&lt;/strong&gt;: During blocking syscalls, the P detaches from the blocked M and attaches to a new or parked M to keep other goroutines moving.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Work Stealing&lt;/strong&gt;: Idle Ps automatically balance the load by stealing half the tasks from a randomly selected P.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Starvation Prevention&lt;/strong&gt;: The 61-tick rule ensures the global queue is periodically prioritized so no goroutine is left behind.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hybrid Scheduling&lt;/strong&gt;: Combines Cooperative yielding at natural code points (I/O, channels) with Signal-based preemption (via sysmon and SIGURG) for long-running CPU tasks.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This design allows Go programs to efficiently manage millions of goroutines with only a handful of OS threads, giving you the simplicity of synchronous code with the performance of asynchronous systems.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>go</category>
      <category>beginners</category>
      <category>architecture</category>
    </item>
    <item>
      <title>From Deadlock Hell to Worker Pool Heaven: A Go Journey</title>
      <dc:creator>Codebaker</dc:creator>
      <pubDate>Thu, 19 Mar 2026 18:07:46 +0000</pubDate>
      <link>https://dev.to/debianbaker/why-and-how-to-get-control-of-concurrency-22h0</link>
      <guid>https://dev.to/debianbaker/why-and-how-to-get-control-of-concurrency-22h0</guid>
      <description>&lt;p&gt;&lt;em&gt;This script is going to make you control your code's concurrency&lt;/em&gt;. &lt;br&gt;
Hi! This is going to be my meticulous attempt to incite the interest and curiosity that piqued in me too, while I was exploring Concurrency patterns in Golang. It delineates the key idea behind controlled concurrency, provides deadlock insights, discusses optimisations &amp;amp; presents a correct concurrency control pattern.&lt;/p&gt;
&lt;h2&gt;
  
  
  Prerequisite:
&lt;/h2&gt;

&lt;p&gt;A working knowledge of Golang syntax, how its concurrency works &amp;amp; a little theory on how channels in Go behave will help you get the most out of this guide.&lt;/p&gt;
&lt;h2&gt;
  
  
  Would you like to give some thoughts over this question before we start?
&lt;/h2&gt;

&lt;p&gt;Ask yourself: suppose you have 100,000 goroutines working concurrently, and they all do some CPU work, some DB calls, and some network I/O. What happens?&lt;br&gt;
... What happens to memory usage and GC pressure? What happens to the external systems (e.g. the DB)?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's mentally simulate this:&lt;/strong&gt;&lt;br&gt;
Each goroutine starts small (roughly a 2KB stack), but 100K of them quickly adds up to 200MB+ of stack space;&lt;br&gt;
Many variables are allocated on the heap, or escape to it (Go's escape analysis);&lt;br&gt;
The DB connection pool can be exhausted, queries to the DB queue up, and timeouts and retries pile on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What issues arise from this?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A large number of goroutines =&amp;gt; more scheduler overhead. While goroutines are "cheaper" than threads, the scheduler still has to find the runnable goroutines and schedule them onto the CPU.&lt;/li&gt;
&lt;li&gt;200MB+ on a modern server sounds manageable, but the danger lies at the heap level. The GC has to mark all the simultaneously live objects in the heap, and "CPU stealing" can occur: the GC takes CPU time away from the application logic.&lt;/li&gt;
&lt;li&gt;If the DB has, say, a connection pool of 100, but 100K goroutines try to query it, 99,900 goroutines will block waiting for a connection. This creates a long queue, again consuming memory.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You see, this can cause the system to collapse. Therefore, we need to &lt;strong&gt;limit the no. of concurrent goroutines&lt;/strong&gt;. And, so let's introduce &lt;strong&gt;Worker Pools&lt;/strong&gt;. &lt;/p&gt;
&lt;h2&gt;
  
  
  Let us take a working example to gradually ramp up our understanding of worker pools:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Take &lt;strong&gt;2 Unbuffered channels&lt;/strong&gt; - &lt;strong&gt;Jobs&lt;/strong&gt; &amp;amp; &lt;strong&gt;Results&lt;/strong&gt; =&amp;gt; In a nutshell, an unbuffered channel works like this: the sender is blocked until a receive happens, and the receiver is blocked until a send happens.&lt;/li&gt;
&lt;li&gt;Consider the &lt;strong&gt;Main Goroutine&lt;/strong&gt; is both the producer to the Jobs channel and the consumer to the Results channel. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;3 worker goroutines&lt;/strong&gt; (let's say G1, G2, G3) consume on the Jobs channel and produce on the Results channel.&lt;/li&gt;
&lt;li&gt;Apart from these goroutines, we have a &lt;strong&gt;Listener Goroutine&lt;/strong&gt; that closes the Results channel as soon as all the worker goroutines execute their exit code (&lt;em&gt;implemented using WaitGroup wg&lt;/em&gt;)
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"fmt"&lt;/span&gt;
    &lt;span class="s"&gt;"sync"&lt;/span&gt;
    &lt;span class="s"&gt;"strings"&lt;/span&gt;
    &lt;span class="s"&gt;"strconv"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;Job&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;Id&lt;/span&gt;  &lt;span class="kt"&gt;int&lt;/span&gt;
    &lt;span class="n"&gt;Url&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;Result&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;JobId&lt;/span&gt;   &lt;span class="kt"&gt;int&lt;/span&gt;
    &lt;span class="n"&gt;Url&lt;/span&gt;     &lt;span class="kt"&gt;string&lt;/span&gt;
    &lt;span class="n"&gt;StatusCode&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;
    &lt;span class="n"&gt;Err&lt;/span&gt;     &lt;span class="kt"&gt;error&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(){&lt;/span&gt;
    &lt;span class="c"&gt;/*
        Unbuffered channels - jobs and results
        jobs := make(chan Job)
        results := make(chan Result)
    */&lt;/span&gt;

    &lt;span class="n"&gt;jobs&lt;/span&gt;    &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="n"&gt;Job&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="n"&gt;Result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;wg&lt;/span&gt; &lt;span class="n"&gt;sync&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WaitGroup&lt;/span&gt; 

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;wg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="n"&gt;worker&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jobs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;wg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;// 3 worker goroutines spawned&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;urls&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s"&gt;"https://httpbin.org/status/200"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"https://httpbin.org/status/404"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"https://httpbin.org/status/500"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"https://httpbin.org/delay/1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"https://httpbin.org/status/200"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;urls&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;  &lt;span class="c"&gt;// Main produces jobs&lt;/span&gt;
        &lt;span class="n"&gt;jobs&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt; &lt;span class="n"&gt;Job&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;Id&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Url&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nb"&gt;close&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jobs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(){&lt;/span&gt;   &lt;span class="c"&gt;// Listener Goroutine&lt;/span&gt;
        &lt;span class="n"&gt;wg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Wait&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nb"&gt;close&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}()&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;   &lt;span class="c"&gt;// Main consumes results&lt;/span&gt;
        &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"job-%d %s → %d&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;JobId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;StatusCode&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Main ended"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;worker&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jobs&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="n"&gt;Job&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="k"&gt;chan&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;-&lt;/span&gt; &lt;span class="n"&gt;Result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;wg&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;sync&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WaitGroup&lt;/span&gt;&lt;span class="p"&gt;){&lt;/span&gt;
    &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;wg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Done&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;job&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;jobs&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;tp&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;strings&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;job&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"/"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;strconv&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Atoi&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tp&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tp&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
        &lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt; &lt;span class="n"&gt;Result&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;JobId&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;      &lt;span class="n"&gt;job&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;Url&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;        &lt;span class="n"&gt;job&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;StatusCode&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"G ended"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Let's dry run:&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When Main goroutine puts a job in the Jobs channel, it gets blocked on the Jobs channel, until any worker goroutine receives the job. &lt;/li&gt;
&lt;li&gt;Suppose G1 takes the job from the Jobs channel and consequently Main unblocks, then G1 extracts status code from the job URL, and produces a result in the Results Channel. Since no receiver is there on the Results channel currently, G1 gets blocked on the Results channel.&lt;/li&gt;
&lt;li&gt;Main again produces the job in the Jobs channel, and gets blocked again; Let's say now G2 picks up that job, similar to G1, G2 gets blocked on the Results channel. &lt;/li&gt;
&lt;li&gt;Main again produces the job, and now G3 picks the job and similarly, gets blocked on the Results channel.&lt;/li&gt;
&lt;li&gt;That means, after Main produces 3 jobs, G1, G2, G3 are blocked on the &lt;strong&gt;Results&lt;/strong&gt; channel, and Main now gets blocked on the &lt;strong&gt;Jobs&lt;/strong&gt; channel when it tries to produce 4th job. =&amp;gt; DEADLOCK!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;That implies, when Main is both the producer on the Jobs channel, and the consumer on the Results channel, and the Jobs and the Results channels both are unbuffered, Deadlock occurs if no. of jobs &amp;gt; no. of worker goroutines.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;&lt;em&gt;To avoid Deadlock in this case&lt;/em&gt;: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The no. of jobs must equal the no. of worker goroutines (3 in our example). After producing the 3rd job, Main moves out of the job-producer loop and &lt;em&gt;closes the Jobs channel&lt;/em&gt;, signalling that no more jobs will be produced. At this state, G1, G2 &amp;amp; G3 are all blocked on the Results channel. &lt;/li&gt;
&lt;li&gt;Now, Main starts to consume on the Results channel, pairing with the 3 blocked worker goroutines one by one and consequently unblocking all of them. &lt;/li&gt;
&lt;li&gt;G1, G2 &amp;amp; G3 all exit the job consumer loop since Jobs channel was closed. &lt;/li&gt;
&lt;li&gt;The "Listener Goroutine" now closes the Results channel as soon as all the 3 workers exit.&lt;/li&gt;
&lt;li&gt;Subsequently, the Main exits the Results consumer loop, as Results channel was closed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But this is not realistic: the number of jobs can be 10, 20, 30... any number. So, what can we do? &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's try using buffered channels:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;jobs&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="n"&gt;Job&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;// You can simulate with any size&lt;/span&gt;
&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="n"&gt;Result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's assume for a moment (this thought experiment helps a lot) - &lt;br&gt;
there are &lt;em&gt;infinitely many jobs&lt;/em&gt; that the Main goroutine wants to produce. &lt;br&gt;
So, tell me -- &lt;em&gt;how many jobs can Main produce before a deadlock happens?&lt;/em&gt; Think carefully! Run the concurrent goroutines in your head, simulating how buffered channels work.&lt;/p&gt;

&lt;p&gt;Here is the answer (I hope you worked it out!) - &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Main produces a job on the Jobs channel -&amp;gt; some worker picks it up -&amp;gt; the worker produces a result on the Results channel.&lt;/li&gt;
&lt;li&gt;Until the Results channel fills up (size = 6), Main can produce 6 jobs whose results all land in that buffer. &lt;/li&gt;
&lt;li&gt;The 3 worker goroutines can take up 3 more jobs, and all 3 block on the full Results channel. &lt;/li&gt;
&lt;li&gt;Main can then produce 4 more jobs until the Jobs channel fills up (size = 4).&lt;/li&gt;
&lt;li&gt;Now, if Main tries to produce one more job, it blocks; and a deadlock occurs.
This gives us the formula =&amp;gt; &lt;strong&gt;max. number of jobs that Main can produce without deadlock&lt;/strong&gt; = Results channel size (6) + no. of worker goroutines (3) + Jobs channel size (4) = 13. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Similarly, you can simulate the following cases for yourself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Unbuffered Jobs channel &amp;amp; Buffered Results channel&lt;/em&gt;: Max. Number of jobs that Main can produce without Deadlock = Results channel size + no. of worker goroutines.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Buffered Jobs channel &amp;amp; Unbuffered Results channel&lt;/em&gt;: Max. Number of jobs that Main can produce without Deadlock = Jobs channel size + no. of worker goroutines.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;That implies our controlled concurrency mechanism failed to scale with the number of jobs; once the job count exceeds a threshold, it causes a deadlock.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  This reveals the key insight!
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A buffer is just a &lt;strong&gt;performance optimisation&lt;/strong&gt;, not a &lt;strong&gt;correctness mechanism&lt;/strong&gt;. It only &lt;strong&gt;delayed the deadlock&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Correctness comes from the &lt;em&gt;correct goroutine pattern&lt;/em&gt;. What is that pattern?&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Correct Goroutine pattern: Decoupled Producer and Consumer
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Earlier, the pattern we were studying was - &lt;em&gt;Main being both the producer on the Jobs channel and the consumer on the Results channel.&lt;/em&gt; The deadlock occurred when the Main goroutine blocked on the Jobs channel while the worker goroutines blocked on the Results channel.&lt;/li&gt;
&lt;li&gt;What if &lt;strong&gt;we concurrently start a consumer service on the Results channel, thereby unblocking the worker goroutines and making them ready to pick up more jobs?&lt;/strong&gt; And hence, we have found the solution!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;A service produces jobs to the Jobs channel -&amp;gt; Worker routines pick up jobs and produce to the Results channel -&amp;gt; A service consumes from the Results channel. This is the Correct Goroutine pattern.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Take a look at the code:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;/*
    Decoupled Producer and Consumer
    Main is the results Consumer, new Goroutine spawned as jobs Producer.
    ** Why not main as producer? Because after producing jobs, main will end without waiting for the consumer to finish.
    Unbuffered Channels - Jobs and Results
*/&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(){&lt;/span&gt;
    &lt;span class="n"&gt;jobs&lt;/span&gt;    &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="n"&gt;Job&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="n"&gt;Result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;wg&lt;/span&gt; &lt;span class="n"&gt;sync&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WaitGroup&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;wg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="n"&gt;worker&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jobs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;wg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(){&lt;/span&gt;            &lt;span class="c"&gt;// Producer goroutine&lt;/span&gt;
        &lt;span class="n"&gt;urls&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="m"&gt;15&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="s"&gt;"https://httpbin.org/status/200"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"https://httpbin.org/status/404"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"https://httpbin.org/status/500"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"https://httpbin.org/status/500"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"https://httpbin.org/delay/1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"https://httpbin.org/delay/1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"https://httpbin.org/status/200"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"https://httpbin.org/status/200"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"https://httpbin.org/status/404"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"https://httpbin.org/status/500"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"https://httpbin.org/delay/1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"https://httpbin.org/status/200"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"https://httpbin.org/status/200"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"https://httpbin.org/status/404"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"https://httpbin.org/status/404"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;urls&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;jobs&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt; &lt;span class="n"&gt;Job&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;Id&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Url&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="nb"&gt;close&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jobs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}()&lt;/span&gt;

    &lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(){&lt;/span&gt;
        &lt;span class="n"&gt;wg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Wait&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nb"&gt;close&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}()&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;     &lt;span class="c"&gt;// Main consumes results&lt;/span&gt;
        &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"job-%d %s → %d&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;JobId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;StatusCode&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Main ended"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;worker&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jobs&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="n"&gt;Job&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="k"&gt;chan&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;-&lt;/span&gt; &lt;span class="n"&gt;Result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;wg&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;sync&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WaitGroup&lt;/span&gt;&lt;span class="p"&gt;){&lt;/span&gt;
    &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;wg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Done&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;job&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;jobs&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;tp&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;strings&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;job&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"/"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;strconv&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Atoi&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tp&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tp&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
        &lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt; &lt;span class="n"&gt;Result&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;JobId&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;      &lt;span class="n"&gt;job&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;Url&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;        &lt;span class="n"&gt;job&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;StatusCode&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"G ended"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Therefore, we have arrived at a correct implementation of a worker pool and established control over concurrency. You can now experiment further:
&lt;/h2&gt;

&lt;p&gt;a. Use buffered channels to improve performance in this pattern.&lt;br&gt;
b. Replace the URL parsing with actual HTTP calls, and incorporate context cancellation.&lt;/p&gt;

&lt;p&gt;From here, you can compose more advanced primitives within this model.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Thanks a lot!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>backend</category>
      <category>go</category>
      <category>softwareengineering</category>
      <category>discuss</category>
    </item>
  </channel>
</rss>
