
Emmanuel Ayinde

Originally published at gr8soln.vercel.app

Understanding Pointers in Go: The Two Runes (& and *) of Go

A thorough guide to addresses, pointers, and the elegant dance between them - grounded in how your computer's RAM actually works, with real examples, and an honest look at the trade-offs.

Table of Contents

  1. Let's Start with a Story
  2. How RAM Actually Works
  3. The Stack and The Heap
  4. From RAM to Pointers - The Bridge
  5. The & Operator
  6. The * Operator
  7. Pointer Types
  8. nil Pointers
  9. Pointers in Functions
  10. Pointers & Structs
  11. Pointer Receivers
  12. The new() Function
  13. Advantages of Pointers in Go
  14. Disadvantages & Risks of Pointers in Go
  15. When to Use Pointers
  16. Common Gotchas
  17. Quick Reference Cheat Sheet

Let's Start with a Story

Before we dive into the technical details, let's set the stage with a simple story. This isn't just a metaphor - it's a direct analogy to how pointers work in Go and how they relate to RAM.

🏠 The Address Story

Imagine your town has thousands of houses. Every house has a unique address - say, "42 Elm Street" - and each house contains a value: a family, furniture, secrets.

Now suppose your friend Alice wants to give you her house key. She has two choices: she could photocopy everything inside her house and hand you the copy - expensive, bulky, and changes to your copy don't affect hers. Or she could just write her address on a piece of paper and hand it to you. You now hold a pointer - a small slip of paper that tells you where the real house is. Walk to that address, and you can reach in and change the actual furniture.

In Go, & gives you the address. * lets you walk through the door.

Every Go developer eventually faces the & and * operators. They look cryptic at first - sometimes right next to each other on the same line - but once you understand what they represent at the hardware level, they become second nature. This post is a complete guide: from how RAM actually works, through every nuanced use of these operators, to an honest accounting of their trade-offs.


How RAM Actually Works

Before pointers make complete sense, you need a clear mental model of what RAM is and how your program interacts with it. This isn't hand-wavy background - it's the actual foundation that pointers are built on.

RAM is a flat, indexed array of bytes

Your computer's RAM (Random Access Memory) is, at the hardware level, an enormous contiguous array of bytes. Each byte has a unique numeric index - that index is its memory address. On a modern 64-bit system, addresses are 64-bit integers, giving you a theoretical address space of 2^64 bytes (~18 exabytes). In practice, your OS and hardware limit how much of that space maps to physical chips.

Physical RAM (conceptually)

Address    Byte value
──────────────────────
0x0000     0x00
0x0001     0x4A
0x0002     0x1F
0x0003     0x00
...        ...
0xFFFF...  0x00

When you declare var age int = 42 in Go, the runtime doesn't invent some abstract "variable" - it picks a location in this byte array, writes the binary representation of 42 into it (8 bytes for an int on 64-bit), and associates the name age with that address. The name exists only in your source code and debug symbols. At runtime, it's all addresses and bytes.

How the CPU reads and writes memory

The CPU communicates with RAM through a memory bus. A read: CPU puts an address on the bus, RAM returns the bytes at that location. A write: CPU puts an address and a value on the bus, RAM stores them. This takes nanoseconds - fast, but significantly slower than reading from CPU registers or cache.

This is why CPUs have L1, L2, and L3 caches - small, extremely fast memory banks between the CPU cores and main RAM. When you access an address, the CPU checks its caches first. A cache hit costs ~1–4 cycles. A cache miss - reaching all the way to RAM - costs ~100–300 cycles. That gap is enormous at scale, and it has real implications for how you structure data in Go.

CPU Access Latency (approximate)

Register          ~1 cycle
L1 Cache          ~4 cycles        (32–64 KB per core)
L2 Cache          ~12 cycles       (256 KB – 1 MB per core)
L3 Cache          ~40 cycles       (shared, 4–32 MB)
RAM               ~100–300 cycles  (GBs)
SSD               ~100,000 cycles
HDD               ~40,000,000 cycles

What a "variable" actually is at runtime

When the Go compiler processes your source code, every named variable gets an address assignment - either a stack offset or a heap address. By the time your code runs, the name age is just a shorthand. The emitted machine code uses addresses directly:

// Go source
age := 42
age = age + 1

// Rough equivalent in machine terms
MOV [0xc000014080], 42     // write 42 to address 0xc000014080
MOV RAX, [0xc000014080]    // load from that address into register RAX
ADD RAX, 1                 // add 1
MOV [0xc000014080], RAX    // write result back

This is the key insight: a pointer is simply a variable whose stored value is one of these numeric addresses. There is no magic. It's an integer that the runtime interprets as a location in RAM.

Memory alignment

The CPU doesn't read arbitrary single bytes from RAM in isolation - it reads words (8 bytes on 64-bit, aligned to natural boundaries). A float64 at address 0xc000014000 is one bus transaction. The same value at 0xc000014001 (misaligned) requires two. Go's compiler handles alignment automatically, inserting invisible padding bytes between struct fields where necessary.

type Bad struct {
    A bool    // 1 byte
              // 7 bytes padding (Go inserts this)
    B float64 // 8 bytes - must be 8-byte aligned
    C bool    // 1 byte
              // 7 bytes padding
}
// Total size: 24 bytes - holds only 10 bytes of real data

type Good struct {
    B float64 // 8 bytes
    A bool    // 1 byte
    C bool    // 1 byte
              // 6 bytes padding
}
// Total size: 16 bytes - same data, better layout

Field ordering matters. This is a real micro-optimisation in memory-bound Go programs.


The Stack and The Heap

Your Go program doesn't use RAM as one undifferentiated pool. It carves it into two primary regions with very different characteristics. Knowing which one your data lives in is essential for reasoning about pointer behaviour and performance.

The Stack

The stack is a contiguous block of memory managed in LIFO (last-in, first-out) order. Every goroutine gets its own stack - starting at 2KB in Go, growing dynamically as needed. When a function is called, Go pushes a stack frame onto it: a region holding the function's local variables, arguments, and return values. When the function returns, that entire frame is popped in one operation - the stack pointer register simply moves back.

Stack (grows downward)

High address
┌─────────────────────┐
│  main() frame       │
│    x = 5            │  ← stack pointer before double() call
├─────────────────────┤
│  double() frame     │
│    n = 5 (copy)     │  ← stack pointer during double()
└─────────────────────┘
Low address

When double() returns, its frame is gone instantly.
Stack pointer moves back. No GC. No bookkeeping.

Stack allocation is extremely fast - it's just arithmetic on the stack pointer register. Stack variables are also cache-friendly because they're packed tightly in a small region.

The constraint: the stack frame is temporary. Once a function returns, its frame is gone. If you return the address of a variable that lived on the stack... that address is now dangling. In C, this is undefined behaviour. Go handles it with escape analysis.

The Heap

The heap is a large region of memory managed dynamically. Allocations on the heap persist beyond the function that created them. Go's runtime manages a heap allocator and a garbage collector (GC) that periodically scans for unreferenced objects and reclaims them.

Heap allocation is slower than stack allocation: the allocator must find a free block, update metadata, and potentially trigger GC. Heap objects are also scattered across a large address range - more likely to cause cache misses compared to tightly-packed stack variables.

Memory Layout of a Running Go Program

┌─────────────────────┐  High addresses
│      Stack(s)       │  - one per goroutine, grows downward
│  goroutine 1 ↓      │  - fast, LIFO, automatically freed
│  goroutine 2 ↓      │
├─────────────────────┤
│   (unmapped gap)    │
├─────────────────────┤
│       Heap          │  - grows upward
│  [obj1][obj2]...    │  - GC-managed, persistent
├─────────────────────┤
│   BSS Segment       │  - zero-initialized globals
├─────────────────────┤
│   Data Segment      │  - initialized globals
├─────────────────────┤
│   Text Segment      │  - compiled machine code (read-only)
└─────────────────────┘  Low addresses

Escape analysis: Go decides for you

In Go, you don't call malloc or free. The compiler runs escape analysis - a static pass that determines whether a variable's lifetime can be confined to a stack frame, or whether it needs to "escape" to the heap.

The rules are intuitive:

  • If a variable's address is returned from a function, it escapes (the stack frame will be gone).
  • If a variable is stored in a data structure that outlives the current function, it escapes.
  • If a variable is too large for the stack, it escapes.

func stackAlloc() int {
    x := 42       // stays on stack - doesn't escape
    return x      // value is copied out, address is never exposed
}

func heapAlloc() *int {
    x := 42       // escapes to heap - address is returned
    return &x     // safe in Go - compiler promotes x to heap
}

You can inspect escape decisions yourself:

go build -gcflags="-m" ./...
# Output: ./main.go:6:2: x escapes to heap
#         ./main.go:2:2: x does not escape

Senior engineers use this to find and reduce unnecessary heap allocations in hot paths.


From RAM to Pointers - The Bridge

Now the picture snaps together. You have:

  • RAM: a flat byte array, every location has a numeric address.
  • Stack: fast, temporary, cleaned up automatically when a function returns.
  • Heap: persistent, GC-managed, slower to allocate.
  • Variables: names the compiler associates with specific RAM addresses.

A pointer is exactly what it sounds like: a variable whose stored value is an address in RAM. It points at another location in memory. That's the entire concept.

RAM (partial view of a running Go program)

Address        Value                What it is
──────────────────────────────────────────────────────────────
0xc000014070   [42, 0, 0, 0,        age int = 42
                0,  0, 0, 0]         (8 bytes, little-endian)

0xc000014078   [0x70, 0x40, 0x01,   ptr *int = &age
                0x00, 0xc0, 0x00,    (8 bytes storing address
                0x00, 0x00]           0xc000014070)
  • Reading age: go to 0xc000014070, read 8 bytes, interpret as int → 42
  • Reading ptr: go to 0xc000014078, read 8 bytes, interpret as address → 0xc000014070
  • Reading *ptr: go to 0xc000014078, get 0xc000014070, then go there, read 8 bytes → 42

Two memory reads instead of one. That's dereferencing - and that cost is real, though usually trivial in isolation. It compounds in tight loops.


The & Operator - "Give me the address"

The & symbol placed before a variable is the address-of operator. It evaluates to the memory address of its operand - not the value stored there, but the location in RAM where the value lives.

&x asks: "Where in RAM does x live?"

package main

import "fmt"

func main() {
    age := 42

    fmt.Println(age)   // 42           - the value
    fmt.Println(&age)  // 0xc000014080 - the RAM address

    // & produces a *int (pointer to int)
    var ptr *int = &age
    fmt.Println(ptr)   // 0xc000014080 - same address
}

The type of &age is *int - a pointer to an int. & can be applied to any addressable value: variables, struct fields, array/slice elements, and more.

What can & be applied to?

type Person struct {
    Name string
    Age  int
}

func main() {
    // ✅ Variable
    x := 10
    _ = &x

    // ✅ Struct field
    p := Person{Name: "Alice", Age: 30}
    _ = &p.Age   // *int pointing into the struct's RAM location

    // ✅ Slice element
    nums := []int{1, 2, 3}
    _ = &nums[0]  // *int pointing at first element of backing array

    // ✅ Composite literal - Go heap-allocates it and gives you the pointer
    pp := &Person{Name: "Bob", Age: 25}  // *Person
    _ = pp

    // ❌ NOT addressable - compile error
    // _ = &42         (literal - no stable RAM location)
    // _ = &len(nums)  (function return value - temporary register value)
}

⚠️ Literals like 42 are typically inlined into machine instructions - they don't live at a stable, named RAM address. Taking their address is a compile-time error.
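When you do need a pointer to a literal-like value, the usual workaround is to copy it into an addressable variable first - often via a tiny generic helper. A sketch (ptrOf is an illustrative name, not a standard function; generics need Go 1.18+):

```go
package main

import "fmt"

// ptrOf copies v into a fresh, addressable variable and returns its address.
func ptrOf[T any](v T) *T {
	return &v
}

func main() {
	// _ = &42 would not compile, but this works:
	p := ptrOf(42)
	fmt.Println(*p) // 42

	s := ptrOf("hello")
	fmt.Println(*s) // hello
}
```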

The composite literal shortcut

// Verbose
p := Person{Name: "Alice", Age: 30}
ptr := &p      // *Person

// Idiomatic - allocates on heap, returns pointer immediately
ptr2 := &Person{Name: "Alice", Age: 30}  // *Person

// Seen everywhere in real Go codebases:
resp := &http.Response{StatusCode: 200}
node := &ListNode{Val: 42, Next: nil}

The * Operator - "Go to that address"

The * symbol has two distinct jobs in Go. Conflating them is the most common source of pointer confusion.

| Job | Context | Meaning |
| --- | --- | --- |
| Type declaration | Type position | *T means "a pointer to T" |
| Dereference | Expression position | *ptr means "follow this pointer into RAM and give me the value" |

*ptr says: "Follow the address. Read what's in RAM at that location."

func main() {
    score := 100

    // & → give me the RAM address   (* in TYPE position)
    var ptr *int = &score

    fmt.Println(ptr)   // 0xc000014080  (a RAM address)

    // * → go to that RAM address    (* in EXPRESSION position)
    fmt.Println(*ptr)  // 100           (the value at that address)

    // Modify through the pointer - writes directly to score's RAM location
    *ptr = 200
    fmt.Println(score) // 200 - score itself changed
}

The full picture together

Step 1: score := 100
  ┌──────────────────────────┐
  │ score  @0xc000014080     │
  │        value = 100       │
  └──────────────────────────┘

Step 2: ptr := &score
  ┌──────────────────────────┐      ┌────────────────────────────┐
  │ score  @0xc000014080     │◄─────│ ptr    @0xc000014088       │
  │        value = 100       │      │        value = 0xc000014080│
  └──────────────────────────┘      └────────────────────────────┘

Step 3: *ptr = 200
  ┌──────────────────────────┐      ┌────────────────────────────┐
  │ score  @0xc000014080     │◄─────│ ptr    @0xc000014088       │
  │        value = 200 ✏️    │      │        value = 0xc000014080│
  └──────────────────────────┘      └────────────────────────────┘

Pointer Types Are Strongly Typed

A *int is a completely different type from a *string or a *Person. The compiler enforces this strictly - no implicit casting between pointer types.

name   := "Alice"
age    := 30
active := true

var pName   *string  = &name    // ✅
var pAge    *int     = &age     // ✅
var pActive *bool    = &active  // ✅

// ❌ Type mismatch - compile error
// pAge = &name   // cannot use *string as *int

// Dereferencing gives back the original type
var n string = *pName   // "Alice"
var a int    = *pAge    // 30
| Variable Type | Pointer Type | Dereference Type |
| --- | --- | --- |
| int | *int | int |
| string | *string | string |
| bool | *bool | bool |
| float64 | *float64 | float64 |
| Person (struct) | *Person | Person |
| []int (slice) | *[]int | []int |
| *int (pointer) | **int | *int |

💡 **int is valid - a pointer to a pointer to an int. You rarely need more than one level of indirection in practice.
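A minimal sketch of double indirection - each extra * is one more hop through RAM (setThroughDouble is an illustrative helper, not from the original):

```go
package main

import "fmt"

// setThroughDouble writes v through two levels of indirection.
func setThroughDouble(pp **int, v int) {
	**pp = v
}

func main() {
	x := 42
	p := &x  // *int  - the address of x
	pp := &p // **int - the address of p

	fmt.Println(**pp) // 42 - two hops: pp → p → x

	setThroughDouble(pp, 100) // writes through both levels into x
	fmt.Println(x)            // 100
}
```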


nil - The Zero Value of Pointers

A pointer that hasn't been assigned holds the zero value nil - numerically, address 0x0. The OS deliberately leaves address 0 unmapped. Any dereference of a nil pointer causes a segmentation fault, which Go catches and converts into a runtime panic.

🚪 The Phantom Address

A nil pointer is a slip of paper with no address written on it. You get in the car, pull out of the driveway - and there's nowhere to go. Dereference it and Go terminates the program with: runtime: invalid memory address or nil pointer dereference.

func main() {
    var ptr *int        // nil - holds address 0x0
    fmt.Println(ptr)   // <nil>

    // ❌ Panics at runtime - dereferences address 0x0
    // fmt.Println(*ptr)

    // ✅ Always check before dereferencing
    if ptr != nil {
        fmt.Println(*ptr)
    } else {
        fmt.Println("pointer is nil - nothing to read")
    }
}

// Idiomatic Go: return nil to signal "not found"
func findUser(id int) *User {
    if id <= 0 {
        return nil
    }
    return &User{ID: id, Name: "Alice"}
}

⚠️ Returning *T with a possible nil is a contract. The caller is obligated to check. Forgetting to nil-check before dereferencing is one of the most common sources of production panics in Go.
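A caller-side sketch of honouring that contract, reusing the findUser shape from above (the User fields and messages here are illustrative):

```go
package main

import "fmt"

type User struct {
	ID   int
	Name string
}

// findUser returns nil to signal "not found" - the caller must check.
func findUser(id int) *User {
	if id <= 0 {
		return nil
	}
	return &User{ID: id, Name: "Alice"}
}

// describe checks the nil contract before dereferencing.
func describe(u *User) string {
	if u == nil {
		return "user not found"
	}
	return "found: " + u.Name
}

func main() {
	fmt.Println(describe(findUser(-1))) // user not found
	fmt.Println(describe(findUser(7)))  // found: Alice
}
```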


Pointers in Functions

In Go, function arguments are passed by value - the callee receives a copy, allocated in its own stack frame. Modifying it has no effect on the original.

The problem: pass by value

func double(n int) {
    n = n * 2    // modifies the stack copy, not the original
}

func main() {
    x := 5
    double(x)
    fmt.Println(x)  // 5 - unchanged
}

The solution: pass the address

func double(n *int) {
    *n = *n * 2   // follows the pointer into the caller's stack frame
}

func main() {
    x := 5
    double(&x)
    fmt.Println(x)  // 10
}

📮 The Photocopy vs. House Key

double(x) hands the function a photocopy of 5. It scribbles on the copy and throws it away. Your original is untouched. double(&x) hands it your house key - the function walks to the actual RAM location and changes what's there.

Performance: avoiding large copies

type BigReport struct {
    Title   string
    Data    [10000]float64  // ~80KB
    Summary string
}

// ❌ Copies ~80KB on every call
func processReport(r BigReport) { ... }

// ✅ Copies only 8 bytes (the pointer)
func processReport(r *BigReport) { ... }

In hot paths - data pipelines, request handlers, game loops - this difference is measurable. Benchmarks regularly show 2–5x throughput improvement for moderately sized structs.


Pointers & Structs

When you have a pointer to a struct, Go auto-dereferences on dot access - p.Name and (*p).Name are identical. This is pure syntactic sugar.

type Person struct {
    Name string
    Age  int
}

func main() {
    p := &Person{Name: "Alice", Age: 30}

    fmt.Println((*p).Name)  // "Alice" - explicit dereference
    fmt.Println(p.Name)     // "Alice" - identical, idiomatic

    p.Age = 31  // modifies the heap-allocated Person directly
}

Linked list - the canonical pointer use case

type Node struct {
    Value int
    Next  *Node  // pointer to next node - 8 bytes
}

func main() {
    head := &Node{Value: 1}
    head.Next = &Node{Value: 2}
    head.Next.Next = &Node{Value: 3}

    for curr := head; curr != nil; curr = curr.Next {
        fmt.Println(curr.Value)
    }
    // Output: 1, 2, 3
}

A Node value cannot contain another Node value - the type would have infinite size, and the compiler rejects it. A *Node is just 8 bytes. This is why recursive data structures require pointers.
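The same rule drives every recursive structure. A minimal binary search tree sketch, where each child link is a nil-able *TreeNode (the type and helpers are illustrative):

```go
package main

import "fmt"

type TreeNode struct {
	Value       int
	Left, Right *TreeNode // 8-byte pointers - a TreeNode value could not contain itself
}

// insert walks the pointer links down and hangs v off a nil child.
func insert(root *TreeNode, v int) *TreeNode {
	if root == nil {
		return &TreeNode{Value: v}
	}
	if v < root.Value {
		root.Left = insert(root.Left, v)
	} else {
		root.Right = insert(root.Right, v)
	}
	return root
}

// inorder appends the values in sorted order.
func inorder(root *TreeNode, out []int) []int {
	if root == nil {
		return out
	}
	out = inorder(root.Left, out)
	out = append(out, root.Value)
	return inorder(root.Right, out)
}

func main() {
	var root *TreeNode // nil - an empty tree is just a nil pointer
	for _, v := range []int{5, 2, 8, 1} {
		root = insert(root, v)
	}
	fmt.Println(inorder(root, nil)) // [1 2 5 8]
}
```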


Pointer Receivers vs Value Receivers

type Counter struct {
    count int
}

// Value receiver - operates on a copy
func (c Counter) Value() int {
    return c.count
}

// Pointer receiver - operates on the original in RAM
func (c *Counter) Increment() {
    c.count++
}

func (c *Counter) Reset() {
    c.count = 0
}

func main() {
    c := Counter{}
    c.Increment()  // Go auto-takes address: (&c).Increment()
    c.Increment()
    fmt.Println(c.Value())  // 2
    c.Reset()
    fmt.Println(c.Value())  // 0
}
| Situation | Use |
| --- | --- |
| Method needs to modify the receiver | Pointer receiver *T |
| Struct is large (avoid copying) | Pointer receiver *T |
| Method is read-only and the struct is small | Value receiver T |
| Any other method on the type already uses *T | Pointer receiver *T - be consistent |

💡 Mixed receiver sets cause subtle interface satisfaction bugs. If any method uses a pointer receiver, use pointer receivers throughout.
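A sketch of why this bites: with a pointer receiver, the method is in *Counter's method set but not Counter's, so only the pointer satisfies the interface (Incrementer is an illustrative interface, not from the original):

```go
package main

import "fmt"

type Incrementer interface {
	Increment()
}

type Counter struct{ count int }

// Pointer receiver - only *Counter satisfies Incrementer.
func (c *Counter) Increment() { c.count++ }

func main() {
	c := Counter{}

	// var i Incrementer = c  // ❌ compile error: Increment has a pointer receiver
	var i Incrementer = &c // ✅ *Counter is in the interface's method set

	i.Increment()
	i.Increment()
	fmt.Println(c.count) // 2 - mutations went through the pointer into c
}
```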


The new() Function

new(T) allocates zeroed storage for a value of type T and returns a *T. (As with any allocation, it lands on the heap only if the value escapes.) For structs, it's equivalent to &T{}.

func main() {
    p := new(int)      // *int pointing to 0 on the heap
    fmt.Println(*p)    // 0
    *p = 42
    fmt.Println(*p)    // 42

    s1 := new(Person)  // *Person, all fields zeroed
    s2 := &Person{}    // identical
    _ = s1; _ = s2

    flag := new(bool)  // most natural use - primitive zero-value pointer
    *flag = true
}

💡 Prefer &T{} for structs (allows field initialization). new() is cleaner for primitives.


Advantages of Pointers in Go

Pointers are not just a feature - in the right contexts, they're the correct tool. Here's a precise breakdown of what they buy you.

1. Mutation across function boundaries

The primary reason pointers exist. Go's pass-by-value semantics mean a function cannot modify its caller's data without a pointer. Pointer parameters are an explicit, visible contract: this function will modify the value at this address.

func normalise(v *Vector3) {
    mag := math.Sqrt(v.X*v.X + v.Y*v.Y + v.Z*v.Z)
    v.X /= mag
    v.Y /= mag
    v.Z /= mag
}

The caller sees the mutation. The function signature makes it visible and deliberate.

2. Avoiding expensive copies

For structs beyond a few dozen bytes, passing by pointer is meaningfully faster: the argument-copy cost drops from the full struct size to 8 bytes, and the callee's stack frame is smaller.

// Copies 256 bytes on every call
func render(m Matrix4x4) { ... }

// Copies 8 bytes
func render(m *Matrix4x4) { ... }

In tight loops and hot paths, this saving compounds into measurable, often multi-fold, throughput gains.

3. Expressing optional values

A *T can be nil, giving you a clean way to represent the absence of a value - without a separate boolean flag or a magic sentinel.

type Config struct {
    Timeout  *time.Duration  // nil means "use the default"
    MaxRetry *int            // nil means "unlimited"
}

Immediately readable: if the pointer is nil, the field was not set.

4. Shared mutable state

When multiple parts of your code operate on the same data - a cache, a connection pool, an in-memory store - pointers give all of them a reference to the same RAM location.

type Cache struct {
    mu    sync.RWMutex
    store map[string]string
}

func NewCache() *Cache {
    return &Cache{store: make(map[string]string)}
}

// Every caller holding *Cache operates on the same object in RAM

Without pointers, every assignment would copy the cache - updates in one copy would be invisible to others.
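Extending the Cache above with methods makes the sharing concrete - every holder of the same *Cache reads and writes one map in RAM (Get and Set are illustrative additions):

```go
package main

import (
	"fmt"
	"sync"
)

type Cache struct {
	mu    sync.RWMutex
	store map[string]string
}

func NewCache() *Cache {
	return &Cache{store: make(map[string]string)}
}

// Set writes through the pointer receiver into the shared map.
func (c *Cache) Set(k, v string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.store[k] = v
}

// Get reads from the same shared map.
func (c *Cache) Get(k string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.store[k]
	return v, ok
}

func main() {
	c := NewCache()
	other := c // copies the 8-byte pointer, not the cache

	other.Set("lang", "Go")
	v, ok := c.Get("lang") // visible through the original pointer too
	fmt.Println(v, ok)     // Go true
}
```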

5. Recursive data structures

Trees, linked lists, graphs, tries - any structure where a node references a same-type node requires a pointer. A Node value cannot contain a Node value. A *Node is 8 bytes.

6. Interface satisfaction and polymorphism

Pointer receivers expand the method set of a type. An interface satisfied by *T cannot be satisfied by T alone. Pointer types and interfaces together form Go's core abstraction mechanism for dependency injection and plugin architectures.


Disadvantages & Risks of Pointers in Go

Every advantage has a corresponding cost. Experienced engineers weigh these deliberately.

1. Nil pointer dereferences - runtime panics

The most immediate risk. A *T can be nil, and any dereference panics at runtime. Unlike type errors, there's no static guarantee that a pointer is non-nil. Go does not have non-nullable pointer types. Every *T is implicitly nullable.

func processUser(u *User) {
    fmt.Println(u.Name)  // panics if u is nil - no compiler warning
}

In large codebases, nil checks become tedious and are frequently omitted. Consider whether a *T parameter is truly necessary, or whether a T value would remove the problem entirely.

2. Heap allocations increase GC pressure

Every time a value escapes to the heap, Go's GC is responsible for eventually reclaiming it. In systems with millions of small, short-lived pointer allocations - a common pattern in naively written Go HTTP servers - GC overhead becomes significant. Even Go's low-latency concurrent GC adds latency jitter that's hard to eliminate without rethinking allocation patterns.

// Allocates a new *Response on the heap for every request
func handleRequest(r *http.Request) *Response {
    return &Response{...}
}

// In hot paths, sync.Pool amortises allocations
var pool = sync.Pool{New: func() any { return &Response{} }}

Profile with go tool pprof and check allocs/op in benchmarks. Stack allocations cost nothing to GC - they're freed when the function returns.
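A sketch of the Get/Put cycle with such a pool (the Response type and reset step are assumptions; pooled objects keep their old state, so always reset before reuse):

```go
package main

import (
	"fmt"
	"sync"
)

type Response struct {
	Status int
	Body   []byte
}

var pool = sync.Pool{New: func() any { return &Response{} }}

func handle() {
	r := pool.Get().(*Response) // reuses a prior allocation when one is available
	defer func() {
		*r = Response{} // reset - the next Get may hand this object to anyone
		pool.Put(r)
	}()

	r.Status = 200
	fmt.Println(r.Status)
}

func main() {
	handle()
	handle() // may reuse the first call's *Response instead of allocating
}
```

Reuse is best-effort - the pool may discard objects between GC cycles - so treat it purely as an allocation optimisation, never as storage.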

3. Pointer indirection degrades cache performance

Modern CPUs are optimised for sequential memory access. When data is laid out contiguously in RAM ([]struct{}), the CPU prefetcher pulls entire cache lines ahead of your loop. When data is a slice of pointers ([]*struct{}), each element is a random jump somewhere in the heap - a potential cache miss on every access.

// Cache-friendly - all Particle data is contiguous in RAM
particles := make([]Particle, 100_000)
for i := range particles {
    particles[i].X += particles[i].VX
}

// Cache-hostile - each pointer is a separate heap allocation
particles := make([]*Particle, 100_000)
for _, p := range particles {
    p.X += p.VX  // potential cache miss on every iteration
}

For large datasets, the throughput difference can be 10x or more. This is why Go's standard library and high-performance Go code strongly prefer value slices over pointer slices.

4. Pointer aliasing makes code harder to reason about

When two pointers point to the same address, a write through one silently changes what the other sees. The compiler cannot assume pointer parameters are distinct, which limits certain optimizations and makes code harder to audit.

func add(a, b, result *int) {
    *result = *a + *b
}

x := 5
add(&x, &x, &x)  // all three alias the same address
// result = 5 + 5, then written to x - order matters here

In concurrent code, aliasing combined with unsynchronized writes produces data races - some of the hardest bugs to reproduce and diagnose.

5. Ambiguous data ownership

With value semantics, ownership is clear: each copy is independent. With pointers, multiple parts of the code may hold a reference to the same object - and it's not always obvious who owns it, who can mutate it, or when it's safe to discard.

Go's GC removes the memory-safety aspect (no use-after-free), but logical ownership ambiguity remains. In complex systems, poorly managed pointer sharing leads to subtle state corruption.

// Who owns cfg? Can handleFoo mutate it? Can handleBar?
// If both do concurrently, do they race?
func setup(cfg *Config) {
    go handleFoo(cfg)
    go handleBar(cfg)
}

Rust's borrow checker enforces ownership at compile time. Go leaves it to convention, documentation, and sync primitives.

6. Increased cognitive overhead

Code passing and returning pointers requires the reader to track multiple levels of indirection. p.Name looks like a value access, but if p is a *Person, it's a dereference followed by a field read. In deeply nested pointer chains, this becomes genuinely difficult to follow, and mutation bugs are non-obvious.

Summary table

| | Value (T) | Pointer (*T) |
| --- | --- | --- |
| Allocation | Stack (usually) | Heap (usually) |
| GC pressure | None | Yes - GC must track and reclaim |
| Nil risk | None | Runtime panic if nil |
| Mutation semantics | Copy - caller unaffected | Shared - caller sees changes |
| Cache behaviour | Contiguous, prefetcher-friendly | Scattered, potential cache misses |
| Ownership clarity | Clear - independent copies | Requires explicit discipline |
| Copy cost on call | Size of T | 8 bytes, always |

When to Use Pointers

Go's philosophy is that you should reach for a pointer deliberately, not reflexively.

✅ Use a pointer when…

  • You need to mutate the original value inside a function or method.
  • The struct is large enough that copying is measurably wasteful (~64–128 bytes as a rough heuristic - benchmark to be sure).
  • You want to express optionality - a nil-able *T instead of a zero value.
  • You're building recursive data structures (trees, linked lists, graphs).
  • You're implementing interfaces where pointer receivers are required.
  • You need shared mutable state across goroutines (with appropriate synchronization).

❌ Avoid a pointer when…

  • The value is small (int, float64, bool, small struct) and doesn't need mutation.
  • You want to signal immutability - a value parameter tells the caller "this function won't touch your data."
  • The type is already a reference type: slices, maps, channels, and interfaces contain internal pointers. Wrapping them in an additional * is almost never necessary.
  • You're iterating over a large dataset - value slices are dramatically more cache-friendly than pointer slices.

⚠️ Slices and maps already have pointer semantics for element mutation. You only need *[]int if the function needs to affect the caller's slice header (e.g. an append that must be visible to the caller).

func modifyElement(s []int) {
    s[0] = 999  // ✅ modifies backing array - visible to caller
}

func appendToSlice(s *[]int) {
    *s = append(*s, 42)  // ✅ caller sees new length
}

func appendWrong(s []int) {
    s = append(s, 42)  // ❌ modifies local copy of slice header only
}

Common Gotchas

1. Loop variable capture

// ❌ Classic bug - all pointers point to the same loop variable
ptrs := make([]*int, 3)
for i := 0; i < 3; i++ {
    ptrs[i] = &i   // &i is the same address every iteration
}
// After loop, i == 3. All three ptrs point to it.
fmt.Println(*ptrs[0], *ptrs[1], *ptrs[2])  // 3 3 3

// ✅ Fix - new variable per iteration
for i := 0; i < 3; i++ {
    v := i
    ptrs[i] = &v
}
fmt.Println(*ptrs[0], *ptrs[1], *ptrs[2])  // 0 1 2

// Note: Go 1.22+ gives each iteration its own loop variable,
// so the first snippet prints 0 1 2 there.

2. Returning a pointer to a local variable - safe in Go

In C, returning &localVar is undefined behaviour - the stack frame is gone. In Go, escape analysis detects this and promotes x to the heap automatically.

func newInt(v int) *int {
    x := v    // compiler promotes x to heap
    return &x // ✅ perfectly safe
}

Run go build -gcflags="-m" to confirm which variables escape.

3. Pointer comparison

a := 42
b := 42
pa, pb := &a, &b

fmt.Println(pa == pb)    // false - different RAM addresses
fmt.Println(pa == &a)    // true  - same address
fmt.Println(*pa == *pb)  // true  - same value at different addresses

Pointer equality checks address identity, not value equality. A frequent source of bugs when engineers expect == to compare pointed-to values.

4. Don't over-pointer

Scattering * everywhere "for performance" backfires: unnecessary heap allocations increase GC pressure, pointer indirection causes cache misses, and nil checks add noise throughout the codebase. Use pointers when you have a concrete reason: mutation, large size, optionality, or shared state.


Quick Reference Cheat Sheet

| Expression | Reads as | Result Type | What it does |
| --- | --- | --- | --- |
| &x | "address of x" | *T | Returns the RAM address of variable x |
| *p | "value at p" | T | Reads the value at the RAM address in p |
| *p = v | "write v to p" | - | Writes v into RAM at the address in p |
| var p *T | "p is a pointer to T" | *T | Declares p as a pointer (zero value: nil / 0x0) |
| new(T) | "allocate a T" | *T | Allocates a zero-value T, returns a pointer |
| &T{...} | "new T literal" | *T | Allocates an initialised T, returns a pointer |
| p == nil | "is p nil?" | bool | True if p holds 0x0 - points nowhere |
| p.Field | "field via pointer" | field type | Auto-dereferences; identical to (*p).Field |


Stay Updated and Connected

To ensure you don't miss any part of this series and to connect with me for more in-depth
discussions on Software Development (Web, Server, Mobile or Scraping / Automation), data
structures and algorithms, and other exciting tech topics, follow me on:

Stay tuned and happy coding 👨‍💻🚀
