DEV Community

Abdulhai Mohamed Samy
Ordered Indices

Difficulty: Advanced

Reading Time: 35 min

Last Updated: September 01, 2025


⚡ Ordered Indices vs. B+ Trees — Why Indexing Had to Evolve

In the last article, we explored B-trees and B+ trees—brilliant self-balancing data structures that quietly power most databases. But before we celebrate them as heroes, let’s rewind.

Because in the beginning, databases relied on something simpler: the ordered index.


📖 Ordered Indices — The First Step

Think of an ordered index like a sorted list of records.

  • Searching is fast: binary search takes you to the right record quickly.
  • Range queries are effortless: just scan sequentially from one key to the next.
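
Both properties can be sketched with Python's standard `bisect` module; the keys and records below are made up for illustration:

```python
import bisect

# A toy ordered index: (key, record) pairs kept sorted by key.
index = [(10, "alice"), (20, "bob"), (30, "carol"), (40, "dave"), (50, "erin")]
keys = [k for k, _ in index]

# Point lookup: binary search finds the entry in O(log n) comparisons.
i = bisect.bisect_left(keys, 30)
assert index[i] == (30, "carol")

# Range query [20, 40]: binary-search the start, then scan sequentially.
lo, hi = bisect.bisect_left(keys, 20), bisect.bisect_right(keys, 40)
print([rec for _, rec in index[lo:hi]])  # ['bob', 'carol', 'dave']
```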

But there’s a catch.

What happens when new records arrive out of order? Or when data gets deleted?

Suddenly, you’re faced with shifting entire rows or relying on overflow blocks (spare storage areas for “out of place” records). This is manageable on small datasets—but at billions of records, it’s a nightmare of slow writes and messy fragmentation.
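
The overflow-block workaround can be sketched as follows (the data is illustrative): inserts into a full block stay cheap, but every lookup that misses the main block now pays a linear scan of the spillover.

```python
import bisect

# Main ordered block, assumed full: new keys go to an overflow block
# instead of shifting every following record.
main = [10, 20, 30, 40, 50]
overflow = []  # unsorted spillover for "out of place" arrivals

def insert(key):
    overflow.append(key)  # O(1) write, but searches now pay for it

def lookup(key):
    i = bisect.bisect_left(main, key)
    if i < len(main) and main[i] == key:
        return "main"
    return "overflow" if key in overflow else None  # linear scan!

insert(35)
print(lookup(35))  # 'overflow' — and misses in main always scan the spillover
```

As the overflow area grows, the index drifts further from its sorted guarantee, which is exactly the fragmentation problem described above.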

That’s where ordered indices hit their limit.


🌲 Enter the B+ Tree

The B+ tree took everything good about ordered indices (sorted access, fast range queries) and solved their biggest flaws:

  • Efficient Updates → No more shifting entire data blocks. Instead, inserts and deletes trigger local splits or merges.
  • Guaranteed Balance → The tree stays shallow, so searches are always O(log n). Even with millions of rows, only a few I/Os are needed.
  • Leaf Links → Since leaves are connected, range queries are still blazing fast.
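
A toy sketch of the leaf level only (a real B+ tree also keeps internal nodes to route searches, and real leaves hold hundreds of keys): when a leaf overflows, it splits locally, and the leaf links keep range scans sequential.

```python
import bisect

CAP = 4  # max keys per leaf; deliberately tiny for the demo

class Leaf:
    """One leaf page: a small sorted run of keys plus a link to the next leaf."""
    def __init__(self, keys=None):
        self.keys = keys or []
        self.next = None

def insert(head, key):
    # Walk the leaf chain to the leaf that should hold the key.
    leaf = head
    while leaf.next and key >= leaf.next.keys[0]:
        leaf = leaf.next
    bisect.insort(leaf.keys, key)
    if len(leaf.keys) > CAP:  # overflow: split this one leaf locally
        mid = len(leaf.keys) // 2
        new = Leaf(leaf.keys[mid:])
        leaf.keys = leaf.keys[:mid]
        new.next, leaf.next = leaf.next, new

def range_scan(head, lo, hi):
    # Leaf links make range queries a simple sequential walk.
    out, leaf = [], head
    while leaf:
        out.extend(k for k in leaf.keys if lo <= k <= hi)
        leaf = leaf.next
    return out

head = Leaf([10, 20, 30, 40])
insert(head, 25)                 # splits [10,20,25,30,40] into two linked leaves
print(range_scan(head, 20, 30))  # [20, 25, 30]
```

Note that the split touched only the one overflowing leaf; no other records moved, which is the "local splits or merges" point above.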

Instead of a fragile, shifting structure, the B+ tree gave databases a self-organizing index that stays efficient under constant growth and churn.

This wasn’t just an improvement—it was the necessary evolution to make indexing viable at scale.


⚙️ Modern Optimizations

B+ trees didn’t stop at their original design. Databases have extended them with smart tricks, including:

  • Prefix compression → Shrinks string keys to save space.
  • Bulk loading → Builds an index much faster from sorted data.
  • Buffer management → Keeps hot nodes in memory, reducing disk I/O.
  • Cache-aware layouts → Aligns nodes with CPU cache lines for speed in in-memory DBs.
  • Storage adaptation → Wide nodes for magnetic disks, merge-friendly writes for SSDs, and hybrid layouts for RAM.
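
As one concrete illustration, here is a minimal sketch of prefix compression for string keys (the key format is invented for the example): keys in the same node often share a long common prefix, so the node can store the prefix once plus short suffixes.

```python
import os

# String keys that might share one B+ tree node.
node = ["customer:1001", "customer:1002", "customer:1017", "customer:1042"]

# Store the shared prefix once, then only the differing suffixes.
prefix = os.path.commonprefix(node)
compressed = (prefix, [k[len(prefix):] for k in node])
print(compressed)  # ('customer:10', ['01', '02', '17', '42'])

def decompress(prefix, suffixes):
    return [prefix + s for s in suffixes]

assert decompress(*compressed) == node  # lossless round trip
```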

That’s why, decades later, the B+ tree remains the backbone of indexing in MySQL, PostgreSQL, Oracle, and even modern in-memory systems.

Read the full article in my Notion blog here:

📌 Note:

The full article lives in my Notion blog, which serves as the single hub for all my articles and ensures consistent formatting across platforms. You can read it directly via the Notion link above. Feel free to share your thoughts or feedback in the site comments, or drop me a note on LinkedIn.


About the Author

Abdul-Hai Mohamed | Software Engineering Geek.

Writes in-depth articles about Software Engineering and architecture.

Follow on GitHub and LinkedIn.
