Pavel Sanikovich
The Hidden Cost of Reflection in Go — Why Your Code Is Slower Than You Think

Most Go developers learn about reflection early, because it hides inside the standard library — especially encoding/json.
But very few understand what reflection actually costs.

I didn’t either.

Not until I profiled a critical service and saw something shocking: almost 50% of CPU time was spent inside the reflection machinery of encoding/json.
This wasn’t some massive struct, or crazy nested schema. It was a normal API payload.

That was the moment I realized something important:

Reflection is one of the most expensive features in Go, and it hurts real production systems more than we think.

This article explains why reflection is slow, how it behaves inside Go’s runtime, where it silently drains performance, and what to do about it.


1. Why Reflection Exists — And Why It’s Expensive

Reflection is Go’s escape hatch for dynamic behavior.

It’s how Go can:

  • marshal arbitrary structs to JSON
  • decode JSON without schema
  • inspect struct tags
  • walk nested fields at runtime
  • dynamically call methods
  • generically explore types

The problem is simple:

Reflection turns static, compile-time knowledge into runtime computations.

What the compiler normally optimizes into fast, predictable machine code becomes:

  • dynamic lookups
  • indirect calls
  • pointer chasing
  • type switches
  • value wrapping (reflect.Value)
  • and lots of heap allocations

Reflection converts:

FAST → SLOW
STATIC → DYNAMIC
ZERO-ALLOC → ALLOC-HEAVY

And you feel it immediately in benchmarks.
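The gap is easy to reproduce. Here is a minimal sketch (the `Point` type and helper names are illustrative) comparing a plain field load with the same load done through `reflect.Value`:

```go
package main

import (
	"fmt"
	"reflect"
)

type Point struct{ X, Y int }

// Direct access: compiles down to a single memory load.
func directX(p Point) int { return p.X }

// Reflective access: interface boxing, a string-keyed field lookup,
// and kind/flag checks on every call.
func reflectX(p Point) int {
	v := reflect.ValueOf(p) // wraps p in a reflect.Value
	return int(v.FieldByName("X").Int())
}

func main() {
	p := Point{X: 42, Y: 7}
	fmt.Println(directX(p), reflectX(p)) // same result, very different cost
}
```

Wrapping each of these in a `go test -bench` benchmark makes the difference concrete on your own hardware.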


2. How Reflection Shows Up in Flamegraphs

In real profiling data, reflection-heavy code has a signature.

A simplified version looks like this:

  • reflect.Value.Field
  • reflect.Value.Interface
  • reflect.Value.Kind
  • reflect.Value.Index
  • reflect.MakeSlice
  • reflect.New
  • reflect.Value.Set

If encoding/json is involved, you also see:

  • encodeState.reflectValue
  • decodeState.value
  • runtime.mapassign_faststr
  • runtime.growslice

When a service runs hot, these functions dominate CPU flamegraphs.

In my case, a single endpoint had:

  • 38% CPU consumption inside reflection-related paths
  • 12% inside string encoding for JSON
  • 7% inside type assertion / conversion
  • 9% inside map iteration for struct tags

The business logic took only around 7%.

Reflection was eating the rest.


3. Why Reflection Hurts Performance (Deep Technical Breakdown)

Let’s break down the real reasons reflection is slow. Not generic “reflection is expensive” — real causes:


Reason 1 — Indirection and Pointer Chasing

Reflection wraps values in a reflect.Value struct.

Under the hood, operations look like:

  • load value pointer
  • check kind
  • check flag bits
  • convert representation
  • follow pointer again
  • perform value extraction

This chain of indirections:

  • kills CPU branch prediction
  • destroys cache locality
  • causes unpredictable latency

Reason 2 — Dynamic Type Inspection

Reflection needs to know:

  • type name
  • alignment
  • size
  • whether it's exported
  • struct tags
  • nested fields
  • pointer levels

All of this is discovered at runtime.

In practice, encoding/json caches per-type field metadata after the first use, but every field value still has to be extracted through reflection on every single encode.
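What that runtime discovery looks like can be sketched with the standard reflect package (the `User` type here is illustrative):

```go
package main

import (
	"fmt"
	"reflect"
)

type User struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

// fieldInfo walks a struct type the way an encoder must:
// names, types, and tags are all discovered at runtime.
func fieldInfo(v any) []string {
	t := reflect.TypeOf(v)
	out := make([]string, 0, t.NumField())
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		out = append(out, fmt.Sprintf("%s %s json=%q", f.Name, f.Type, f.Tag.Get("json")))
	}
	return out
}

func main() {
	for _, line := range fieldInfo(User{}) {
		fmt.Println(line)
	}
}
```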


Reason 3 — Interface Boxing and Unboxing

Reflection is always converting:

  • concrete → interface
  • interface → reflect.Value
  • reflect.Value → interface{}
  • interface → concrete

Each of these adds uncertainty and cost.
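The round-trip can be written out explicitly; every arrow in the list above is one of these steps:

```go
package main

import (
	"fmt"
	"reflect"
)

// roundTrip performs every conversion from the list above, in order:
// concrete → interface → reflect.Value → interface{} → concrete.
func roundTrip(x int) int {
	var i interface{} = x   // concrete → interface (may heap-allocate x)
	v := reflect.ValueOf(i) // interface → reflect.Value
	back := v.Interface()   // reflect.Value → interface{} (may allocate again)
	return back.(int)       // interface → concrete (dynamic type check)
}

func main() {
	fmt.Println(roundTrip(42))
}
```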


Reason 4 — Allocations

A huge portion of reflection overhead is:

allocating small, temporary reflect.Values.

These allocations:

  • blow up heap usage
  • trigger more GC cycles
  • cause latency spikes

The amplified GC pauses show up clearly in p95 latencies.


Reason 5 — Lack of Compiler Optimizations

Reflection prevents:

  • inlining
  • escape analysis from optimizing away pointers
  • constant folding
  • register allocation improvements

The Go compiler sees reflection and says:

“Okay, I’ll assume nothing and optimize nothing.”


4. The Perfect Example: encoding/json

Go’s standard JSON library is built on reflection.

This means it repeats these operations for every single JSON encode/decode:

  • walk struct fields
  • fetch field names
  • interpret struct tags
  • extract values dynamically
  • convert to strings
  • produce temporary buffers
  • write escaping for strings
  • recursively encode nested fields

It’s a beautiful piece of engineering — but fundamentally slow.

When I swapped encoding/json for faster libraries, benchmarks improved instantly:

  • jsoniter → 35–45% faster
  • easyjson → 60–80% faster
  • gojay → even faster on large nested payloads
  • protobuf → 5–10× faster

Reflection is invisible until you benchmark — then impossible to ignore.


5. Real-World Case: Reflection in a Hot Path

Inside one of our Go services, we had a struct representing a trade event.
Pretty standard:

type Trade struct {
    ID        string    `json:"id"`
    Price     float64   `json:"price"`
    Quantity  int       `json:"qty"`
    Timestamp time.Time `json:"ts"`
    Meta      map[string]string `json:"meta"`
}

The endpoint returning this was doing:

  • 40k trades/sec
  • each requiring a JSON encode
  • each payload around 500 bytes

Profiling revealed:

  • reflect.Value.Field: 17% CPU
  • reflect.Value.Interface: 8% CPU
  • encodeState: 10% CPU
  • allocation churn: 6% CPU

Switching to MessagePack + preallocated buffers reduced CPU cost by roughly 30% and p95 latency by 50%.
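The preallocated-buffer half of that change can be sketched with a sync.Pool; the MessagePack encoding itself is replaced here by a trivial typed append, since the exact library isn't shown:

```go
package main

import (
	"fmt"
	"strconv"
	"sync"
)

// bufPool recycles encode buffers so the hot path stops allocating a
// fresh slice for every trade.
var bufPool = sync.Pool{
	New: func() any { return make([]byte, 0, 512) }, // sized to the ~500 B payload
}

func encodeTrade(id string, price float64) []byte {
	buf := bufPool.Get().([]byte)[:0]
	buf = append(buf, id...)
	buf = append(buf, '|')
	buf = strconv.AppendFloat(buf, price, 'f', -1, 64)
	out := append([]byte(nil), buf...) // copy out: the caller owns the result
	bufPool.Put(buf)                   // return the buffer for reuse
	return out
}

func main() {
	fmt.Println(string(encodeTrade("t1", 99.5)))
}
```

In a real encoder you would write into the pooled buffer and hand it back after the response is flushed, avoiding even the copy.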


6. “But Reflection Is Flexible!” — Yes, And That’s the Problem

Reflection gives you dynamic behavior.

But at scale, dynamic = unpredictable.

In highload systems, unpredictability is your enemy.

Reflection causes:

  • unpredictable latency
  • unpredictable allocations
  • unpredictable GC
  • unpredictable code paths

You can’t guarantee p99 latency if reflection is on the hot path.


7. How to Avoid Reflection in Go (Practical Techniques)

Here are the best patterns we used to escape its cost.


Technique 1 — Switch to Binary Formats

This eliminates reflection entirely.

  • Protobuf → zero reflection
  • Msgpack → minimal reflection
  • Flatbuffers/Cap’n Proto → zero-copy, reflection-free

Binary formats often improve performance "for free".


Technique 2 — Use Code Generation

Reflection is replaced with generated, type-specific code.

Tools:

  • easyjson
  • ffjson
  • protoc for protobuf
  • msgp (MessagePack code generator)

Generated code:

  • uses typed access
  • avoids reflection
  • avoids allocations
  • predictable and fast
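What such generated code roughly looks like can be shown by hand-writing a typed json.Marshaler for a pared-down Trade (a sketch in the style easyjson emits, not its actual output):

```go
package main

import (
	"fmt"
	"strconv"
)

type Trade struct {
	ID    string
	Price float64
}

// MarshalJSON uses typed field access and append: no reflect,
// no tag lookups at runtime, no intermediate strings.
func (t *Trade) MarshalJSON() ([]byte, error) {
	buf := make([]byte, 0, 64)
	buf = append(buf, `{"id":`...)
	buf = strconv.AppendQuote(buf, t.ID)
	buf = append(buf, `,"price":`...)
	buf = strconv.AppendFloat(buf, t.Price, 'f', -1, 64)
	buf = append(buf, '}')
	return buf, nil
}

func main() {
	b, _ := (&Trade{ID: "t1", Price: 99.5}).MarshalJSON()
	fmt.Println(string(b))
}
```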

Technique 3 — Manual Marshaling for Critical Paths

Yes, it’s ugly.

Yes, it’s extremely fast.

Example:

func (t *Trade) MarshalBinary() ([]byte, error) {
    // Preallocated buffer, typed field access, zero reflection.
    // Only the fields this consumer needs are encoded.
    buf := make([]byte, 0, 64)
    buf = append(buf, t.ID...)
    buf = append(buf, ',')
    buf = strconv.AppendFloat(buf, t.Price, 'f', -1, 64) // no intermediate string
    buf = append(buf, ',')
    buf = strconv.AppendInt(buf, int64(t.Quantity), 10)
    return buf, nil
}

Hand-written encoders are the fastest possible technique in Go.


Technique 4 — Flatten Structs

Deep nesting = more reflection.
Flattening reduces:

  • recursion
  • allocations
  • pointer chasing

Technique 5 — Typed Maps Instead of interface{}

Avoid:

map[string]interface{}

Use:

map[string]string

or

map[string]float64

Every interface{} introduces slow dynamic dispatch.
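The cost difference is measurable with testing.AllocsPerRun (a minimal sketch; storing a non-constant float64 into an interface{} value boxes it on the heap):

```go
package main

import (
	"fmt"
	"testing"
)

// typedAllocs stores float64s into a typed map: the value lives
// directly in the map bucket.
func typedAllocs() float64 {
	m := map[string]float64{}
	p := 0.0
	return testing.AllocsPerRun(1000, func() {
		p += 0.5
		m["price"] = p
	})
}

// boxedAllocs stores the same values into map[string]interface{}:
// each store boxes the float64 into an interface, allocating.
func boxedAllocs() float64 {
	m := map[string]interface{}{}
	p := 0.0
	return testing.AllocsPerRun(1000, func() {
		p += 0.5
		m["price"] = p
	})
}

func main() {
	fmt.Printf("typed: %.0f allocs/op, interface{}: %.0f allocs/op\n",
		typedAllocs(), boxedAllocs())
}
```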


8. When Reflection Is Acceptable

Reflection is fine when:

  • performance doesn’t matter
  • data volume is small
  • correctness is more valuable than speed
  • you’re building prototypes or admin panels

Reflection becomes unacceptable when:

  • you are in a highload environment
  • p95/p99 latency matters
  • CPU cost matters
  • thousands of operations per second hit the same code
  • you're on the hot path

9. Key Takeaways (Senior-Level)

  • Reflection is elegant but expensive.
  • JSON encoding is slow because it's reflection-based.
  • Reflection leads to unpredictable latency and GC behavior.
  • Binary serialization avoids reflection completely.
  • Code generation is the biggest win short of binary protocols.
  • Flattening data structures helps more than expected.
  • Avoid reflection in hot paths at all costs.
