iter.Seq in Go 1.23+: The Iterator Type Behind range-over-func

Gabriel Anhaia


Range-over-func is the language feature everyone wrote about in Go 1.23. iter.Seq[V] is the type your code is supposed to pass around. The standard library quietly grew an ecosystem to feed it, drain it, sort it, and chunk it.

The whole iter package fits on one screen: two function types and two helpers:

package iter

type Seq[V any]      func(yield func(V) bool)
type Seq2[K, V any]  func(yield func(K, V) bool)

func Pull[V any](seq Seq[V]) (next func() (V, bool),
                              stop func())
func Pull2[K, V any](seq Seq2[K, V]) (next func() (K, V, bool),
                                      stop func())

Two named function types with callback shapes and two helpers that flip from push to pull.

What makes the package matter is the rest of the standard library that grew up around it. Once a function returns iter.Seq[T], the slices, maps, bytes, and strings packages have helpers ready on both ends: producers to feed it and consumers to drain, sort, and chunk it.

What iter.Seq[V] actually is

iter.Seq[int] is func(yield func(int) bool). Nothing more.

A producer:

package main

import (
    "fmt"
    "iter"
)

func Count(start, n int) iter.Seq[int] {
    return func(yield func(int) bool) {
        for i := 0; i < n; i++ {
            if !yield(start + i) {
                return
            }
        }
    }
}

func main() {
    for v := range Count(10, 5) {
        fmt.Println(v) // 10 11 12 13 14
    }
}

The for-range form is sugar. The same iterator works as a value you pass around:

seq := Count(10, 5)         // iter.Seq[int]
total := 0
seq(func(v int) bool {      // call it directly
    total += v
    return true
})
fmt.Println(total)          // 60

for v := range seq is the readable form. Calling seq(yield) directly is what standard-library helpers do internally.
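To make that direct call concrete, here is a hypothetical collectInts helper (not the stdlib's slices.Collect, though it works the same way) that drains a sequence by invoking it with its own yield:

```go
package main

import (
	"fmt"
	"iter"
)

// collectInts shows what consumers like slices.Collect do internally:
// call the sequence with a yield that appends and always asks for more.
func collectInts(seq iter.Seq[int]) []int {
	var out []int
	seq(func(v int) bool {
		out = append(out, v)
		return true // never stop early
	})
	return out
}

// Count is the producer from above: yields start, start+1, ... (n values).
func Count(start, n int) iter.Seq[int] {
	return func(yield func(int) bool) {
		for i := 0; i < n; i++ {
			if !yield(start + i) {
				return
			}
		}
	}
}

func main() {
	fmt.Println(collectInts(Count(10, 5))) // [10 11 12 13 14]
}
```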

The 1.23 ecosystem you compose against

slices and maps shipped fourteen iterator-aware functions in 1.23 (Go 1.23 release notes).

Producers (slice/map → iter.Seq):

// Signatures from the stdlib — not a runnable block:
slices.All[Slice ~[]E, E any](s Slice) iter.Seq2[int, E]
slices.Values[Slice ~[]E, E any](s Slice) iter.Seq[E]
slices.Backward[Slice ~[]E, E any](s Slice) iter.Seq2[int, E]
slices.Chunk[Slice ~[]E, E any](s Slice, n int) iter.Seq[Slice]

maps.All[Map ~map[K]V, K comparable, V any](m Map) iter.Seq2[K, V]
maps.Keys(m) iter.Seq[K]
maps.Values(m) iter.Seq[V]

Consumers (iter.Seq → slice/map):

// Signatures from the stdlib — not a runnable block:
slices.Collect[E any](seq iter.Seq[E]) []E
slices.AppendSeq(s, seq) Slice
slices.Sorted[E cmp.Ordered](seq iter.Seq[E]) []E
slices.SortedFunc(seq, cmp) []E
slices.SortedStableFunc(seq, cmp) []E

maps.Collect(seq iter.Seq2[K, V]) map[K]V
maps.Insert(m, seq iter.Seq2[K, V])

Go 1.24 added the bytes and strings siblings: Lines, SplitSeq, SplitAfterSeq, FieldsSeq, and FieldsFuncSeq. All return iter.Seq[[]byte] or iter.Seq[string], so string parsing chains into the same pipeline (Go 1.24 release notes).

A short example wiring four of them together:

// requires Go 1.24+ for strings.SplitSeq
import (
    "fmt"
    "maps"
    "slices"
    "strings"
)

words := strings.SplitSeq("go go go iter seq go", " ")
counts := map[string]int{}
for w := range words {
    counts[w]++
}

// SortedStableFunc consumes an iter.Seq directly; no Collect needed.
top := slices.SortedStableFunc(
    maps.Keys(counts),
    func(a, b string) int {
        return counts[b] - counts[a]
    },
)
fmt.Println(top) // "go" first; iter and seq tie at 1, and their relative
// order depends on map iteration order, which is randomized

SplitSeq is the producer. maps.Keys turns the count map into another iter.Seq. SortedStableFunc consumes that sequence directly and returns a sorted slice. No intermediate slice before the final result, just iterator nodes wired by type.

A paginated API client that yields iter.Seq[Item]

This is the shape iterators were quietly built for. Cursor-paginated APIs return one page at a time. The caller wants one stream of items. Before 1.23 you wrote a closure that returned (Item, bool, error) or you allocated everything into a slice. Both leak the pagination into the caller.
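For contrast, a minimal sketch of that pre-1.23 shape (makeNext and the string items are made up for illustration); note that the iteration state and the ok/err plumbing both land at the call site:

```go
package main

import "fmt"

// makeNext returns a pre-1.23-style iterator: a stateful closure the
// caller must drive by hand. A real client would fetch pages inside it.
func makeNext(items []string) func() (string, bool, error) {
	i := 0
	return func() (string, bool, error) {
		if i >= len(items) {
			return "", false, nil // exhausted
		}
		v := items[i]
		i++
		return v, true, nil
	}
}

func main() {
	next := makeNext([]string{"a", "b", "c"})
	// Every iteration threads ok and err through the caller's loop.
	for {
		it, ok, err := next()
		if err != nil {
			fmt.Println("fetch failed:", err)
			return
		}
		if !ok {
			break
		}
		fmt.Println(it) // a, b, c on separate lines
	}
}
```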

The iterator version reads top-down. The shape below — a (iter.Seq[Item], func() error) pair — is the same one bufio.Scanner uses (Scan() plus Err()):

package pages

import (
    "context"
    "encoding/json"
    "fmt"
    "iter"
    "net/http"
    "net/url"
)

type Item struct {
    ID   string `json:"id"`
    Name string `json:"name"`
}

type page struct {
    Items      []Item `json:"items"`
    NextCursor string `json:"next_cursor"`
}

type Client struct {
    HTTP    *http.Client
    BaseURL string
}

func (c *Client) Items(
    ctx context.Context,
) (iter.Seq[Item], func() error) {
    var fetchErr error
    seq := func(yield func(Item) bool) {
        cursor := ""
        for {
            p, err := c.fetch(ctx, cursor)
            if err != nil {
                fetchErr = err
                return
            }
            for _, it := range p.Items {
                if !yield(it) {
                    return
                }
            }
            if p.NextCursor == "" {
                return
            }
            cursor = p.NextCursor
        }
    }
    errFn := func() error { return fetchErr }
    return seq, errFn
}

func (c *Client) fetch(
    ctx context.Context,
    cursor string,
) (*page, error) {
    u := c.BaseURL + "/items"
    if cursor != "" {
        u += "?cursor=" + url.QueryEscape(cursor)
    }
    req, err := http.NewRequestWithContext(
        ctx, http.MethodGet, u, nil)
    if err != nil {
        return nil, err
    }
    resp, err := c.HTTP.Do(req)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("status %d",
            resp.StatusCode)
    }
    var p page
    if err := json.NewDecoder(resp.Body).Decode(&p); err != nil {
        return nil, err
    }
    return &p, nil
}

Two choices worth flagging.

The return type is a (iter.Seq[Item], func() error) pair rather than iter.Seq2[Item, error]. Both shapes work, and both have the same forget-the-check footgun: if the caller ignores the error path (the errFn closure here, the second range variable for Seq2), errors are dropped silently. The Seq2 form is more compact at the call site; the pair form follows bufio.Scanner and keeps the error out of every loop iteration. Pick whichever your team will remember to check, and document it.

The yield-return-on-false check inside the inner loop is structural. Once the consumer breaks, yield returns false and the iterator function returns. No bonus page after the consumer asked it to stop.

The call site:

items, errFn := client.Items(ctx)
for it := range items {
    if it.Name == "" {
        continue
    }
    process(it)
}
if err := errFn(); err != nil {
    return err
}

The HTTP work is hidden. The pagination is hidden. The consumer reads as if it had a slice.

A filter + map + take pipeline

The other half of iter.Seq's value is composition. The standard library does not ship Filter, Map, or Take helpers; that is on you. Each is a short wrapper.

package xiter

import "iter"

func Filter[V any](
    seq iter.Seq[V],
    keep func(V) bool,
) iter.Seq[V] {
    return func(yield func(V) bool) {
        for v := range seq {
            if !keep(v) {
                continue
            }
            if !yield(v) {
                return
            }
        }
    }
}

func Map[V, R any](
    seq iter.Seq[V],
    fn func(V) R,
) iter.Seq[R] {
    return func(yield func(R) bool) {
        for v := range seq {
            if !yield(fn(v)) {
                return
            }
        }
    }
}

func Take[V any](
    seq iter.Seq[V],
    n int,
) iter.Seq[V] {
    return func(yield func(V) bool) {
        if n <= 0 {
            return
        }
        i := 0
        for v := range seq {
            if !yield(v) {
                return
            }
            i++
            if i >= n {
                return
            }
        }
    }
}

Filter and Take preserve the element type; Map transforms it. All three are thin wrappers that return iter.Seq, so they compose.

Take deserves a closer look. The cap check sits after the yield and the increment, so Take(seq, 10) pulls exactly 10 items from upstream — not 11. If you put the check at the top of the loop body, you re-enter the loop after the tenth yield, pull one more item from upstream, then return. For a paginated client where item 11 forces a second page request, that is a wasted round-trip.

Plug them into the paginated client:

import (
    "slices"
    "strings"
)

items, errFn := client.Items(ctx)

names := xiter.Map(
    xiter.Take(
        xiter.Filter(
            items,
            func(it Item) bool { return it.Name != "" },
        ),
        10,
    ),
    func(it Item) string {
        return strings.ToUpper(it.Name)
    },
)

result := slices.Collect(names)
if err := errFn(); err != nil {
    return err
}

Read it inside-out. The pages stream in. Filter drops items with empty names. Take stops the chain at ten. Map upper-cases each. slices.Collect materialises one slice of at most ten strings.

Once Take has counted ten yields, it returns. Filter sees yield return false and returns itself. The iterator inside client.Items returns before the next page request goes out. One goroutine, one page in memory at a time, no extra fetch beyond the cap.

When to reach for iter.Pull

Push iterators ranged with for v := range seq cover most cases. iter.Pull is for code that needs to drive consumption from a place a for-loop body cannot reach: a state machine, an io.Reader adapter, a merger that interleaves two sequences.

items, errFn := client.Items(ctx)
next, stop := iter.Pull(items)
defer stop()

for {
    item, ok := next()
    if !ok {
        break
    }
    if !decideAndStash(item) {
        return
    }
}
if err := errFn(); err != nil {
    return err
}

iter.Pull runs the push iterator on a coroutine and gives you back synchronous next and stop. The cost is the coroutine switch on every next call. That's tolerable in outer loops; it adds up in inner loops over millions of elements. Reach for it when the for-range form bends the surrounding code awkwardly. See the iter package docs for the coroutine semantics and the Go blog on range-over-func for the design discussion.


If this was useful

The iterator vocabulary is one of the bigger reorientations Go has shipped since generics. The Complete Guide to Go Programming covers iter.Seq, the standard-library helpers in slices and maps, and the patterns above — paginated clients, transducer pipelines, when to switch to iter.Pull — alongside the rest of the language top to bottom.

Thinking in Go — the 2-book series on Go programming and hexagonal architecture
