<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kyle Grah</title>
    <description>The latest articles on DEV Community by Kyle Grah (@kgrah).</description>
    <link>https://dev.to/kgrah</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2583648%2Fd40baca2-86d3-45b4-a48d-a82f12289c6f.jpg</url>
      <title>DEV Community: Kyle Grah</title>
      <link>https://dev.to/kgrah</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kgrah"/>
    <language>en</language>
    <item>
      <title>🐹 Series: Leetcoding in Go — slices.Backward()</title>
      <dc:creator>Kyle Grah</dc:creator>
      <pubDate>Wed, 14 May 2025 00:59:23 +0000</pubDate>
      <link>https://dev.to/kgrah/series-leetcoding-in-go-slicesbackward-1aaj</link>
      <guid>https://dev.to/kgrah/series-leetcoding-in-go-slicesbackward-1aaj</guid>
      <description>&lt;p&gt;As Go engineers, we take pride in the language and our experience with it. Yet, many Go developers default to Python in technical interviews, often believing it's more interview-friendly.&lt;/p&gt;

&lt;p&gt;But if you're applying to a Go-heavy team, there's no better way to win over your future coworkers than by showing you're fluent in Go — even under pressure.&lt;/p&gt;

&lt;p&gt;There's a common perception that interviewing in Go puts you at a disadvantage compared to interviewing in Python. Python has built-ins such as reversed() that save time and lines of code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;slices.Backward(): &lt;a href="https://pkg.go.dev/slices#Backward" rel="noopener noreferrer"&gt;https://pkg.go.dev/slices#Backward&lt;/a&gt; (introduced in 1.23)&lt;/strong&gt;&lt;/p&gt;
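
&lt;p&gt;As a quick, hypothetical illustration (separate from the leetcode solution that follows), slices.Backward() yields index/value pairs from the last element to the first:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
    "fmt"
    "slices"
)

func main() {
    s := []string{"a", "b", "c"}
    // Backward returns an iterator over index/value pairs,
    // walking from the last element down to the first.
    for i, v := range slices.Backward(s) {
        fmt.Println(i, v) // 2 c, then 1 b, then 0 a
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;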

&lt;p&gt;The following is a valid solution to leetcode 347: &lt;a href="https://leetcode.com/problems/top-k-frequent-elements/description/" rel="noopener noreferrer"&gt;https://leetcode.com/problems/top-k-frequent-elements/description/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This solution:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tracks frequencies in a single map pass&lt;/li&gt;
&lt;li&gt;Groups numbers into frequency “buckets”&lt;/li&gt;
&lt;li&gt;Uses slices.Backward() to iterate from the highest frequency down&lt;/li&gt;
&lt;li&gt;Stops early once k results are collected&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import "slices"

func topKFrequent(nums []int, k int) []int {
    frequencies := make(map[int]int)

    for _, n := range nums {
        frequencies[n]++
    }

    buckets := make([][]int, len(nums)+1)
    for n, freq := range frequencies {
        buckets[freq] = append(buckets[freq], n)
    }

    results := make([]int, 0, k)

    // Reverse-iterate the buckets slice. Comparable to Python's reversed().
    for _, b := range slices.Backward(buckets) {
        if len(results) == k {
            return results
        }

        if len(b) &amp;lt;= k-len(results) {
            results = append(results, b...)
        } else {
            results = append(results, b[:k-len(results)]...)
        }
    }

    return results
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Compared to &lt;code&gt;for i := len(nums) - 1; i &amp;gt;= 0; i-- {}&lt;/code&gt;, &lt;code&gt;slices.Backward()&lt;/code&gt; is easier to read, more expressive of intent, and shows fluency with Go’s modern standard library.&lt;/p&gt;

</description>
      <category>go</category>
      <category>leetcode</category>
      <category>interview</category>
    </item>
    <item>
      <title>Stop OOMs with Semaphores</title>
      <dc:creator>Kyle Grah</dc:creator>
      <pubDate>Mon, 12 May 2025 23:32:08 +0000</pubDate>
      <link>https://dev.to/kgrah/stop-ooms-with-semaphores-4lh1</link>
      <guid>https://dev.to/kgrah/stop-ooms-with-semaphores-4lh1</guid>
      <description>&lt;p&gt;Go makes it easy to write concurrent code — just add go doSomething() and you're off. But if you're not careful, you can overwhelm your own service with too many goroutines. Here's how to avoid accidentally DDoSing yourself using a simple, effective semaphore.&lt;/p&gt;

&lt;p&gt;The semaphore is a concurrency pattern that existed long before Go did, but it is exceptionally easy to implement with Go’s channels.&lt;/p&gt;

&lt;p&gt;There are many use cases for semaphores in computer science, but one of the most practical in Go is limiting the number of goroutines your program spawns.&lt;/p&gt;

&lt;p&gt;Goroutines are cheap but not free. Unbounded goroutines can lead to degraded performance due to CPU contention, runaway memory usage (heap and stack), and even goroutine leaks — where goroutines silently keep running forever.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;br&gt;
The following code snippet was adapted from go-chi: &lt;a href="https://github.com/go-chi/chi" rel="noopener noreferrer"&gt;https://github.com/go-chi/chi&lt;/a&gt;. It creates a simple web server with one HTTP endpoint that processes large files uploaded as multipart requests, and uses a semaphore to limit global concurrency, ensuring the service never processes more than a fixed number of files at once — regardless of how many users hit the endpoint.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func main() {
    maxGoroutinesEnv := os.Getenv("MAX_GOROUTINES")
    maxGoroutines, err := strconv.Atoi(maxGoroutinesEnv)
    if err != nil {
        log.Fatalf("failed to load MAX_GOROUTINES env var: %v", err)
    }

    // create semaphore
    sem := NewSemaphore(maxGoroutines)

    r := chi.NewRouter()
    r.Use(middleware.Logger)
    r.Post("/", func(w http.ResponseWriter, r *http.Request) {
        processListOfLargeCustomerProvidedConfigs(w, r, sem)
    })
    http.ListenAndServe(":3000", r)
}

func processListOfLargeCustomerProvidedConfigs(
    w http.ResponseWriter,
    r *http.Request,
    sem *Semaphore,
) {
    err := r.ParseMultipartForm(50 &amp;lt;&amp;lt; 20)
    if err != nil {
        errMsg := fmt.Sprintf("could not parse multipart form: %v", err)
        http.Error(w, errMsg, http.StatusBadRequest)
        return
    }

    files := r.MultipartForm.File["configs"]
    if len(files) == 0 {
        http.Error(w, "no files in request", http.StatusBadRequest)
        return
    }

    for _, f := range files {
        // blocks if the semaphore is "full"
        sem.Acquire(1)

        go func(f *multipart.FileHeader) {
            defer sem.Release(1)
            // memory and cpu intensive task
            processFile(f)
        }(f)
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see above, every time we attempt to create a goroutine to process a file, we block if the semaphore is "full" and automatically continue once a piece of work has been released.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Minimal Semaphore Implementation:&lt;/strong&gt;&lt;br&gt;
Here’s the full implementation used in the example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Semaphore struct {
    c chan struct{}
}

func NewSemaphore(w int) *Semaphore {
    return &amp;amp;Semaphore{
        // create a buffered channel with capacity
        // equal to the weight of the semaphore
        c: make(chan struct{}, w),
    }
}

func (s *Semaphore) Acquire(w int) {
    for range w {
        // Send an empty struct to the channel. Blocks if the channel
        // is full, meaning we've reached our concurrency limit.
        // We use `struct{}` to avoid extra allocations.
        s.c &amp;lt;- struct{}{}
    }
}

func (s *Semaphore) Release(w int) {
    // pull the desired amount of work
    // out of the semaphore channel
    for range w {
        &amp;lt;-s.c
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;📦 Prefer a Library?&lt;/strong&gt;&lt;br&gt;
If you want to get a semaphore up and running even faster, golang.org/x/sync provides a weighted implementation: &lt;a href="https://pkg.go.dev/golang.org/x/sync/semaphore" rel="noopener noreferrer"&gt;https://pkg.go.dev/golang.org/x/sync/semaphore&lt;/a&gt;&lt;/p&gt;
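
&lt;p&gt;As a rough sketch (assuming the golang.org/x/sync/semaphore module is available), a bounded worker loop with that package might look like this. Note that Acquire takes a context, so a blocked wait can be cancelled:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
    "context"
    "fmt"
    "sync"

    "golang.org/x/sync/semaphore"
)

func main() {
    // Weight 3: at most three workers run at once.
    sem := semaphore.NewWeighted(3)
    ctx := context.Background()

    var wg sync.WaitGroup
    for i := range 10 {
        // Blocks until a slot frees up, or returns an
        // error if ctx is cancelled while waiting.
        if err := sem.Acquire(ctx, 1); err != nil {
            break
        }
        wg.Add(1)
        go func() {
            defer wg.Done()
            defer sem.Release(1)
            fmt.Println("processing item", i)
        }()
    }
    wg.Wait()
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;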

</description>
      <category>go</category>
      <category>backend</category>
      <category>performance</category>
    </item>
  </channel>
</rss>
