How My “Worst” Complexity Solution Beat 100% Runtime — Binary Gap in Go
When I solved the Binary Gap problem, I expected the cleaner, theoretically optimal solution to dominate.
Instead, something unexpected happened. Guess which one ranked higher?
My O(n log n) solution beat 100% of submissions in runtime, while my O(n)-time, O(1)-space solution used more memory.
That forced me to pause and rethink what I thought I understood about complexity.
Let’s break it down.
Problem Recap — What Is a Binary Gap?
A binary gap is the distance between two consecutive 1s in the binary representation of a number.
More precisely:
A binary gap is the number of consecutive 0s between two adjacent 1s, plus one (to include the closing 1).
Trailing zeros do not count, because a valid gap must be bounded by 1s on both sides.
Example
Given a binary string, with the index of each bit written underneath:

```
Binary: 0 0 0 1 0 0 0 1 0 0 1
Index:  0 1 2 3 4 5 6 7 8 9 10
```

Valid gaps:
- Between index 3 and 7 → three 0s → gap = 3 + 1 = 4
- Between index 7 and 10 → two 0s → gap = 2 + 1 = 3
Largest gap = 4
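To make this concrete (137 is my own choice of test value, not part of the original statement): dropping the leading zeros, the example string is 10001001, which is the integer 137, so a correct solution should return 4 for it. A quick check of the binary form:

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// 137 in binary is 10001001: the example above without its leading zeros.
	fmt.Println(strconv.FormatInt(137, 2)) // prints: 10001001
	// The 1s sit at positions 0, 4 and 7, so the largest gap is 3 zeros + 1 = 4.
}
```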
First Approach — Store All Gaps and Sort (O(n log n))
Instead of tracking the maximum directly, I stored all computed gaps in a slice, sorted it, and returned the largest. At this point, I was only thinking about solving the problem... not making a great solution.
And yes, that introduced sorting.
Implementation
```go
// Requires "sort" and "strconv" from the standard library.
func binaryGap(n int) int {
	binR := []rune(strconv.FormatInt(int64(n), 2))
	count := 0      // zeros seen since the last 1
	gaps := []int{} // every gap we find
	gotOne := false // have we passed the first 1 yet?
	for _, x := range binR {
		if x == '1' {
			if gotOne {
				gaps = append(gaps, count+1) // zeros + the closing 1
				count = 0
			}
			gotOne = true
		} else if x == '0' && gotOne {
			count++
		}
	}
	if len(gaps) == 0 {
		return 0 // fewer than two 1s: no valid gap
	}
	sort.Ints(gaps)
	return gaps[len(gaps)-1] // largest gap is last after sorting
}
```
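Before moving on, here is a small table-driven test I could drop next to the function. The expected values are worked out by hand from the definition above; this is just my own sanity check, not part of the judge's test suite, and it assumes the binaryGap function above lives in the same package:

```go
package main

import "testing"

func TestBinaryGap(t *testing.T) {
	cases := map[int]int{
		22:  2, // 10110
		8:   0, // 1000: no gap bounded by 1s on both sides
		137: 4, // 10001001: the example from the recap
	}
	for n, want := range cases {
		if got := binaryGap(n); got != want {
			t.Errorf("binaryGap(%d) = %d, want %d", n, got, want)
		}
	}
}
```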
Complexity
- Time: O(n log n), where n is the length of the binary string (the sort dominates)
- Space: O(n), for the slice of gaps
As you know, this is theoretically worse than necessary, so I decided to refactor.
Second Approach — Track the Maximum On the Fly (O(1) Space)
The idea:
- Convert the number to binary
- Count zeros between 1s
- Track the maximum gap as we iterate
- Ignore leading and trailing zeros
Implementation
```go
// Requires "strconv" from the standard library.
func binaryGap(n int) int {
	bin := strconv.FormatInt(int64(n), 2)
	ret := 0    // largest gap seen so far
	count := -1 // zeros since the last 1; -1 means no 1 seen yet
	for _, ch := range bin {
		if ch == '1' {
			if count+1 > ret {
				ret = count + 1
			}
			count = 0
		} else if count != -1 {
			count++
		}
	}
	return ret // trailing zeros are never closed by a 1, so they never count
}
```
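The count = -1 sentinel is what makes the boundary rule work: at the first 1, count+1 is 0, which never beats ret, and trailing zeros increment count but are never folded into ret because no closing 1 follows them. A tiny demonstration (the inputs are my own picks, and it assumes the function above sits in the same file):

```go
package main

import (
	"fmt"
	"strconv" // used by the binaryGap function above
)

func main() {
	// 8 = 1000: only trailing zeros, so no gap is ever closed by a 1.
	fmt.Println(binaryGap(8)) // 0
	// 5 = 101: one zero between two 1s, so the gap is 1 + 1 = 2.
	fmt.Println(binaryGap(5)) // 2
}
```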
Complexity
- Time: O(log n), where n is the input number (its binary string has O(log n) digits)
- Space: O(1) extra, beyond the O(log n) binary string itself
This should be the optimal solution.
Or so I thought. Here is where theoretical complexity and observed runtime start to diverge...
The Surprise
When I submitted both:
| Approach | Runtime | Memory | Beats (Runtime / Memory) |
|---|---|---|---|
| Second: O(1) extra space | 0 ms | ~4.0 MB | 100% / 40.63% |
| First: O(n log n) with sort | 0 ms | ~3.85 MB | 100% / 78.13% |
Both landed at 0 ms and beat 100% of submissions in runtime, and the supposedly worse O(n log n) version even reported the smaller memory footprint.
If you are shocked by this, don't fret; so was I.
Why Did the “Worse” Algorithm Win?
The Input Size Is Tiny
The key detail:
The binary length of an integer is at most 32 bits.
That means:
- Maximum binary length ≈ 32
- Maximum number of gaps ≈ 16
Sorting 10–15 integers is effectively constant time.
So:
O(32 log 32) ≈ constant
In bounded problems, asymptotic complexity barely matters.
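To convince yourself of this locally instead of trusting the judge, a micro-benchmark along these lines works. Here binaryGapSort and binaryGapTrack are my own renamings of the first and second versions above so they can coexist in one package; run it with go test -bench=. -benchmem:

```go
package main

import "testing"

var sink int // keeps the compiler from optimising the calls away

func BenchmarkBinaryGapSort(b *testing.B) {
	for i := 0; i < b.N; i++ {
		sink = binaryGapSort(137) // slice + sort version
	}
}

func BenchmarkBinaryGapTrack(b *testing.B) {
	for i := 0; i < b.N; i++ {
		sink = binaryGapTrack(137) // running-maximum version
	}
}
```

On a 32-bit-bounded input like this, I would expect both to finish in well under a microsecond per call, with the difference lost in noise.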
Online Judge Timing Granularity
Platforms measure runtime in milliseconds.
If both implementations run in less than 1 ms:
They both show 0 ms.
“Beats 100%” often just means it landed in the fastest measurable bucket.
Not that it is fundamentally superior.
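If you want to see just how far below that bucket a single call sits, Go's time package resolves to nanoseconds. This is a rough, noisy measurement (the benchmark above is the more reliable tool), but it makes the point:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	_ = binaryGap(137) // whichever implementation is defined in this package
	fmt.Println(time.Since(start)) // should print a duration far below 1ms
}
```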
The Memory (RSS) Difference Is Not Just About Your Variables
The platform measures total memory footprint, including:
- Go runtime overhead
- String allocation behaviour
- Slice capacity growth
- Garbage collector behaviour
A difference of ~4.0 MB vs ~3.85 MB is allocator-level noise.
It doesn’t necessarily reflect algorithmic superiority.
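You can also measure what the function itself allocates, rather than the whole process. Adding b.ReportAllocs() (or running with -benchmem) to the benchmark file sketched earlier should show only a handful of bytes and a single allocation per call, nowhere near the MB-scale figure the judge reports, which is dominated by the Go runtime:

```go
// Added to the same _test.go file as the benchmarks above.
func BenchmarkBinaryGapTrackAllocs(b *testing.B) {
	b.ReportAllocs() // adds B/op and allocs/op columns to the benchmark output
	for i := 0; i < b.N; i++ {
		sink = binaryGapTrack(137) // only the small binary string gets allocated
	}
}
```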
The Real Lesson
This tiny problem taught me more about real-world performance than many “hard” ones, and not just about binary gaps:
Big-O complexity only dominates when the input size grows.
When the input is capped and tiny:
- O(n)
- O(n log n)
- Even O(n²)
may behave identically in practice.
Always write for clarity first, then optimise only when needed. But understand why something is faster/slower.
The Final, Optimal Refactor
After spending some time refactoring the code, I realised that I could eliminate string conversion and use bitwise operations, which led me to this solution:
```go
// max is a built-in function since Go 1.21.
func binaryGap(n int) int {
	ret, prev := 0, -1 // prev = bit position of the last 1 seen; -1 means none yet
	for i := 0; n > 0; i, n = i+1, n>>1 {
		if n&1 == 1 {
			if prev != -1 {
				ret = max(ret, i-prev) // distance between adjacent 1s
			}
			prev = i
		}
	}
	return ret
}
```
With this solution, we now have:
- No string allocation
- No slice allocation
- True O(1) extra space
- Pure bitwise operations
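As a final sanity check, it's easy to confirm the bitwise version agrees with the string-based one across a large range of inputs. Here binaryGapString and binaryGapBits are my own renamings of the earlier and final versions so both can live in one file:

```go
package main

import "fmt"

func main() {
	for n := 1; n <= 1_000_000; n++ {
		if binaryGapString(n) != binaryGapBits(n) {
			fmt.Printf("mismatch at n = %d\n", n)
			return
		}
	}
	fmt.Println("both versions agree for every n in 1..1,000,000")
}
```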
I also learnt that structuring your code differently can, on its own, move these numbers. For example:
- With `for range` in Go, my submission that ranged over the string directly reported more memory than the one that converted the string to a `[]rune` slice first. That surprised me, since the `[]rune` conversion allocates an extra slice; it is one more sign that the judge's MB figure reflects allocator and runtime behaviour, not the loop itself.
- Writing the logic with bitwise operations, done carefully, can also influence how the Go compiler allocates registers (and whether values spill to the stack), the micro-variations in the generated machine code, and, overall, it can shave off some of the noise factors that make performance fluctuate. If you are curious, `go build -gcflags=-S` is one way to peek at the generated assembly.
Final Takeaway
My “worse” algorithm didn’t magically become better.
It benefited from:
- Bounded input size
- Measurement granularity
- Runtime implementation details
And that’s the real lesson:
Never interpret performance badges without understanding why performance looks the way it does.
Sometimes, the numbers lie — or at least, they don’t tell the full story.
