# Array Internals & Memory Layout
Before you memorize complexity tables, it helps to understand what's actually happening in memory. Once you see it, the complexities stop feeling arbitrary.
Here's what this covers:
| Concept | Key insight |
|---|---|
| Contiguous memory | Elements sit side-by-side with no gaps between them |
| O(1) random access | `arr[i]` is one arithmetic operation: `baseAddress + i * elementSize` |
| O(n) insert/delete | Inserting in the middle means shifting every element after it |
| Amortized O(1) push | Resizing happens exponentially rarely, so it's cheap on average |
| V8 optimization | Dense arrays get a true contiguous backing store; sparse ones fall back to a hash map |
## What "contiguous" actually means
Think of memory as a long strip of numbered boxes. A contiguous array reserves a block of consecutive boxes — no gaps, no pointers between elements.
```
Memory address:  1000   1008   1016   1024   1032
                  ↓      ↓      ↓      ↓      ↓
Array contents: [ 10 ][ 20 ][ 30 ][ 40 ][ 50 ]
                  ↑
              baseAddress
```
Each box is 8 bytes. So:
- `arr[0]` lives at `1000 + 0 * 8 = 1000`
- `arr[3]` lives at `1000 + 3 * 8 = 1024`
- `arr[i]` lives at `1000 + i * 8`
That's one multiply and one add, regardless of whether i is 0 or 500,000. That's why arrays have O(1) random access.
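That arithmetic can be sketched directly. The base address and element size below are illustrative values only; JavaScript doesn't expose real pointers:

```javascript
// Sketch of the address calculation the CPU performs for arr[i].
// baseAddress and elementSize are made-up numbers for illustration.
function elementAddress(baseAddress, i, elementSize) {
  return baseAddress + i * elementSize // one multiply, one add
}

elementAddress(1000, 0, 8) // 1000
elementAddress(1000, 3, 8) // 1024
elementAddress(1000, 500000, 8) // same single calculation, no traversal
```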
A linked list has to be traversed instead. Each node holds a pointer to the next:
```
[10 | *]──→[20 | *]──→[30 | *]──→[40 | *]──→[50 | null]
```
To reach node[3] you follow pointers: head → next → next → next. That's O(n) and there's no shortcut.
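A minimal sketch of that traversal, using plain objects as nodes (`makeList` and `nodeAt` are illustrative helpers, not built-ins):

```javascript
// Build a singly linked list from an array of values.
function makeList(values) {
  let head = null
  for (let i = values.length - 1; i >= 0; i--) {
    head = { value: values[i], next: head }
  }
  return head
}

// Reaching index i requires following i pointers, O(n) with no shortcut.
function nodeAt(head, index) {
  let node = head
  for (let i = 0; i < index; i++) {
    node = node.next // one pointer-follow per step
  }
  return node
}

const list = makeList([10, 20, 30, 40, 50])
nodeAt(list, 3).value // 40, after three pointer-follows
```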
## How V8 handles arrays internally
JavaScript arrays aren't always backed by contiguous memory. V8 picks a mode based on how you use them:
| Mode | When it applies | What it uses |
|---|---|---|
| `PACKED_SMI` | Dense array of small integers | True contiguous block — fastest |
| `PACKED_DOUBLE` | Dense array of floats | True contiguous block |
| `PACKED_ELEMENTS` | Dense mixed types | Contiguous block of pointers |
| Dictionary mode | Sparse array with gaps | Hash map — much slower |
```javascript
const dense = [1, 2, 3, 4, 5] // PACKED_SMI — contiguous memory

const sparse = []
sparse[1000] = 1 // Dictionary mode — indices 0–999 are holes
```
Keep your arrays dense and consistent in type. The moment you introduce holes, V8 falls back to a hash map and you lose the performance benefits.
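A hole is observably different from an explicit `undefined`, which is one way to check you've stayed dense. A quick sketch (the elements-kind names are V8 internals and aren't directly visible from JavaScript):

```javascript
// Dense: fill gives every index a real value, so there are no holes.
const dense = new Array(1001).fill(undefined)
dense[1000] = 1

// Sparse: indices 0–999 were never assigned, so they are holes.
const sparse = []
sparse[1000] = 1

// The 'in' operator distinguishes a hole from an explicit undefined.
console.log(0 in dense)  // true  — index exists, value is undefined
console.log(0 in sparse) // false — index is a hole
```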
## Operation complexities, with reasoning
Reading by index and appending are both fast:
```javascript
const arr = [10, 20, 30, 40, 50]

// O(1) — CPU jumps directly to baseAddress + 3 * 8
console.log(arr[3]) // 40

// O(1) amortized — backing store has spare capacity, just write to the next slot
arr.push(60)
// [10, 20, 30, 40, 50, 60]
```
Inserting or removing in the middle is expensive because elements have to shift:
```javascript
const arr = [10, 20, 30, 40, 50]

// O(n) — everything after index 1 moves right
arr.splice(1, 0, 15)
// Before: [10, 20, 30, 40, 50]
// After:  [10, 15, 20, 30, 40, 50]
//              ↑   ↑   ↑   ↑
//              all shifted right

// O(n) — every single element shifts right
arr.unshift(5)
```
Here's what `splice(1, 0, 15)` actually does to the backing store:
```
Before:               [10][20][30][40][50][  ]

Step 1 — shift right: [10][  ][20][30][40][50]

Step 2 — write 15:    [10][15][20][30][40][50]
```
The more elements in the array, the more shifting — hence O(n).
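The shifting can be written out as a plain loop (`insertAt` is a hypothetical helper for illustration, not a built-in):

```javascript
// Sketch of the shift-then-write work splice does under the hood.
function insertAt(arr, index, value) {
  // Shift right, starting from the end so nothing gets overwritten.
  for (let i = arr.length; i > index; i--) {
    arr[i] = arr[i - 1]
  }
  arr[index] = value // write into the gap
  return arr
}

insertAt([10, 20, 30, 40, 50], 1, 15)
// [10, 15, 20, 30, 40, 50]
```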
## Why arrays beat linked lists even when Big-O is the same
Sequential iteration is O(n) for both arrays and linked lists. In practice arrays can be 10–100× faster, and the reason is the CPU cache.
Modern CPUs don't fetch one byte at a time. They load a cache line — typically 64 bytes at once. When you read arr[0], the CPU automatically pulls in arr[0] through arr[7]. By the time the loop reaches arr[1], it's already in cache.
```
Cache line loaded when you read arr[0]:

┌────┬────┬────┬────┬────┬────┬────┬────┐
│[0] │[1] │[2] │[3] │[4] │[5] │[6] │[7] │
└────┴────┴────┴────┴────┴────┴────┴────┘
        ↑                   ↑
    cache hit           cache hit — already there, no wait
```
Linked list nodes are scattered across memory. Each pointer-follow likely lands in a cold cache line — the CPU stalls while waiting for RAM to respond.
```javascript
// Cache-friendly — next element is already loaded
function sum(arr) {
  let total = 0
  for (let i = 0; i < arr.length; i++) {
    total += arr[i]
  }
  return total
}

// Cache-unfriendly — each jump is unpredictable
function randomSum(arr, indices) {
  let total = 0
  for (const i of indices) {
    total += arr[i] // likely a cache miss each time
  }
  return total
}
```
## Dynamic resizing and amortized O(1) push
JavaScript arrays grow automatically. When the backing store is full, V8 allocates a new array roughly 2× the current size and copies everything over. That single push is O(n), but it happens so rarely that the average cost across many pushes stays O(1).
Here's a concrete example showing how the cost spreads out:
| Push # | Cost | Resize? |
|---|---|---|
| 1–4 | O(1) each | No |
| 5 | O(4) — copies 4 elements | Yes, capacity → 8 |
| 6–8 | O(1) each | No |
| 9 | O(8) — copies 8 elements | Yes, capacity → 16 |
| 10–16 | O(1) each | No |
Each expensive push is paid for by all the cheap ones that came before it. Total work across n pushes is roughly 2n, which averages to O(1) per push.
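That "roughly 2n" claim can be checked with a quick sketch that counts copy work across n pushes, assuming an initial capacity of 4 and doubling on overflow:

```javascript
// Count how many element copies resizing costs over n pushes.
// Assumes capacity starts at 4 and doubles each time it fills.
function totalCopyWork(n) {
  let capacity = 4
  let copies = 0
  for (let size = 0; size < n; size++) {
    if (size === capacity) {
      copies += size // a resize copies every current element
      capacity *= 2
    }
  }
  return copies
}

totalCopyWork(1000) // 1020 copies (at sizes 4, 8, 16, …, 512)
// Add the 1000 writes themselves: ~2020 total operations ≈ 2n, so O(1) per push.
```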
```javascript
class DynamicArray {
  constructor() {
    this.data = new Array(4)
    this.size = 0
    this.capacity = 4
  }

  push(val) {
    if (this.size === this.capacity) {
      this._resize()
    }
    this.data[this.size] = val
    this.size++
  }

  _resize() {
    this.capacity *= 2
    const newData = new Array(this.capacity)
    for (let i = 0; i < this.size; i++) {
      newData[i] = this.data[i] // O(n) — but only triggered every n pushes
    }
    this.data = newData
  }

  get(i) {
    return this.data[i]
  }
}
```
One thing worth keeping in mind: amortized O(1) hides occasional O(n) spikes. In game loops or audio threads where every frame has a tight deadline, a resize at the wrong moment causes a drop. If you know the size upfront, pre-allocate:
```javascript
const arr = new Array(10000).fill(0) // one allocation, no resizing
```
## Choosing the right structure
A few things to watch out for in practice:
- `splice(0, 1)` and `shift()` are both O(n) — they shift every element left. `pop()` is O(1) because nothing moves. If you need queue behavior, don't use `shift()` on large arrays; use a circular buffer instead.
- Creating holes in an array pushes V8 into dictionary mode. `a[1000] = 1` with nothing at indices 0–999 is the classic case. If you need an explicit empty value, use `undefined` rather than leaving a gap.
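A minimal circular-buffer sketch shows why dequeue stays O(1): the head pointer moves instead of every element shifting. The fixed capacity is an assumption for brevity; a production version would grow or reject on overflow:

```javascript
// Fixed-capacity FIFO queue backed by a circular buffer.
class RingQueue {
  constructor(capacity) {
    this.data = new Array(capacity)
    this.capacity = capacity
    this.head = 0 // index of the oldest element
    this.size = 0
  }

  enqueue(val) {
    if (this.size === this.capacity) throw new Error('queue full')
    this.data[(this.head + this.size) % this.capacity] = val // wraps around
    this.size++
  }

  dequeue() {
    if (this.size === 0) return undefined
    const val = this.data[this.head]
    this.head = (this.head + 1) % this.capacity // O(1): move the pointer, shift nothing
    this.size--
    return val
  }
}

const q = new RingQueue(4)
q.enqueue(1); q.enqueue(2); q.enqueue(3)
q.dequeue() // 1
q.dequeue() // 2
```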
For raw numeric data, TypedArray gives you the most direct control:
```javascript
const matrix = new Float64Array(1024 * 1024)
// True contiguous memory, fixed element type, zero boxing overhead
```
## Test your knowledge
Q1: Why is accessing `arr[500]` the same speed as `arr[0]`?

- [ ] The CPU computes `baseAddress + 500 * elementSize` — one arithmetic operation
- [ ] JavaScript caches all indices at creation time
- [ ] The array is sorted so binary search finds it in O(log n)
- [ ] The garbage collector optimizes frequent accesses
Q2: What is the time complexity of `arr.splice(0, 0, 'x')` on an array of n elements?
- [ ] O(1)
- [ ] O(log n)
- [ ] O(n)
- [ ] O(n²)
Q3: What does this log?
```javascript
const a = [1, 2, 3]
a.push(4)
a.unshift(0)
console.log(a.length, a[0], a[4])
```
Q4: Why are arrays faster than linked lists for sequential iteration, even though both are O(n)?
- [ ] Cache locality — contiguous memory means the CPU prefetches subsequent elements automatically
- [ ] Arrays have shorter pointers
- [ ] Linked lists require sorting before iteration
- [ ] Arrays use less memory per element
Q5: Why is `push()` O(1) amortized but not O(1) worst-case?

- [ ] Occasionally the backing store must double in size, copying all elements — O(n) for that single push
- [ ] `push()` sorts the array after insertion
- [ ] The garbage collector pauses on every push
- [ ] V8 recalculates hash codes for the array
## Answers
A1: The CPU computes `baseAddress + 500 * elementSize` — one arithmetic operation. Contiguous layout means any index is reachable via a single address calculation with no traversal.

A2: O(n). Inserting at index 0 requires shifting all n existing elements one position right to make room.

A3: `5 0 4`. After `push(4)`: `[1, 2, 3, 4]`. After `unshift(0)`: `[0, 1, 2, 3, 4]`. Length is 5, `a[0]` is 0, `a[4]` is 4.

A4: Cache locality. When you read `arr[i]`, the CPU loads the surrounding elements into cache too. Linked list nodes are scattered in memory — each pointer-follow is likely a cache miss and the CPU has to wait for RAM.

A5: When the backing store fills up, the array doubles in size and copies all n elements over — that one push is O(n). But this only happens after n pushes, so the cost averages to O(1) across all of them.
## Key takeaways
An array is a numbered row of lockers in a corridor. Walking to locker 500 is instant — you compute the number and go directly there. Inserting a new locker in the middle means sliding every subsequent locker down by one. And when the corridor runs out of space, it doubles in length — expensive once, but it happens less and less often as the corridor grows.
| Rule | Why it matters |
|---|---|
| `arr[i] = base + i * size` | One arithmetic op — always O(1) |
| Insertion at index k shifts n−k elements | O(n) — unavoidable for mid-array changes |
| `pop()` is O(1), `shift()` is O(n) | Remove from the end when you can |
| Dense arrays stay in V8 fast mode | Holes trigger dictionary mode |
| Cache lines load 8+ elements at once | Sequential iteration is far faster than scatter |
| Resize doubles capacity | Keeps amortized push O(1) over n operations |
