The Oracle at Delphi, or How I Learned to Stop Worrying and…accept…NoSQL
Dave Ross May 17 '17
My first exposure to enterprise-scale web development was in the early 2000s, when the company I worked for migrated our order processing & customer service system from green screens to J2EE. A rack at the corner of our datacenter housed its servers: 1U webheads named after Greek city-states like Athens and Sparta, and a giant 4U Sun V480 database server called Delphi, because it's where you would find Oracle.
Delphi contained two SPARC V9 CPUs and multiple gigabytes of RAM. For storage, gigabit Ethernet linked it to a SAN so densely populated we had to reinforce the floor beneath it. It was a hot rod of data processing power, dwarfing the Sun 220R web servers both physically and in capability.
The prevailing wisdom was to invest heavily in your database tier:
- Multiple fast CPUs to parse and execute queries
- Generous RAM to cache common queries
- Redundant, performant storage for all your valuable data
I was taking night classes toward a bachelor's degree at the time, and had taken two semesters of database theory. We learned things like relational algebra and how to partition data using formulas originally derived to track viral epidemics. I was heavily invested in the RDBMS world when NoSQL came along. NoSQL sounded like the messy Pick databases I had dealt with at the dawn of my career, and it seemed to throw away the accumulated wisdom from decades of RDBMS development. I was skeptical, but I'm starting to warm up to this SQL-less, often schema-free world.
My moment of clarity came when I spec'ed out a couple AWS instances for a web app. As I considered our needs for multiple cores and gigabytes of RAM, I realized all the power of Delphi was now in our web tier and Delphi's novel gigabit data pipeline was obsolete. Yesterday's hardware demanded that we crunch data in the database tier and send the smallest result set possible over the wire. Today's hosting options make it possible – and affordable – to pull a set of records out of storage and iterate over them in the web tier, using the same language the rest of the app is written in and taking advantage of the app's own caching infrastructure.
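To make that concrete, here's a minimal sketch of what "iterate over records in the web tier" looks like. Everything here is hypothetical: `fetch_orders` stands in for a bulk read from some document store, and it returns in-memory dicts so the example runs on its own. The point is that the grouping and filtering a `GROUP BY ... WHERE` clause used to do on the database server is now plain application code.

```python
def fetch_orders():
    """Stand-in for a bulk read of order documents from a document store.

    In a real app this might be a key-range scan against a NoSQL backend;
    here it just returns in-memory dicts so the sketch is runnable.
    """
    return [
        {"id": 1, "customer": "athens", "total": 120.00, "status": "shipped"},
        {"id": 2, "customer": "sparta", "total": 75.50, "status": "open"},
        {"id": 3, "customer": "athens", "total": 19.99, "status": "open"},
    ]


def open_order_totals(orders):
    """Sum open-order totals per customer -- the work a
    SELECT ... WHERE ... GROUP BY once did on the database tier."""
    totals = {}
    for order in orders:
        if order["status"] == "open":
            totals[order["customer"]] = totals.get(order["customer"], 0) + order["total"]
    return totals


print(open_order_totals(fetch_orders()))
# {'sparta': 75.5, 'athens': 19.99}
```

The filtering logic lives in the same language as the rest of the app, can be unit-tested like any other function, and its results can go straight into the app's own cache.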
Remember DBAs? We used to need a full-time employee just to tune that database server for our workloads and tweak our queries for efficiency. These creatures were the bane of developers, and they lived to EXPLAIN how the DBMS executed each line of SQL. Moving the record selection logic into the codebase obviates the need for a dedicated DBA; optimizing selection becomes part of a code review instead of a dedicated role.
So, while I'll always miss the hum of that data-crunching behemoth, I recognize it's a relic. And so is the way I used to think about databases. NoSQL stuck around because it's not a fad; it reflects an industry adapting to the ridiculous amount of cheap processing power at our fingertips. If an old-timer like me can learn to – if not love, at least accept – this new world, I hope every other graying code monkey out there will take a second look and understand NoSQL's place in today's web stacks.