Maximizing Performance: Best Practices for Database Scaling

Scaling a database is crucial for maintaining performance, availability, and reliability as your system grows. Here are several strategies to scale databases effectively:

1. Indexing
2. Denormalization
3. Materialized views
4. Sharding
5. Partitioning
6. Caching
7. Replication
8. Scaling


1. Indexing:

Indexing in databases is like creating a shortcut to quickly find data without scanning the entire table. Imagine it like an index in a book that helps you jump to the right chapter instead of flipping through every page.

When you create an index on a column (or multiple columns), the database builds a data structure (usually a B-tree or hash table) that allows it to search, retrieve, and sort data much faster.

When to use Indexing:

  • Frequently queried columns: If you're always filtering or searching by certain columns, index those.
  • Primary/foreign keys: Primary keys are indexed automatically; add indexes on foreign keys to speed up joins.

Let’s break it down with an example to make it super clear.

Imagine you have a table of users called users, and it looks like this:

CREATE TABLE users (
    user_id INT PRIMARY KEY,
    name VARCHAR(100),
    email VARCHAR(100),
    age INT,
    city VARCHAR(100)
);


You’re running a lot of queries to find users by their email:

SELECT * FROM users WHERE email = 'john@example.com';

Without Index:
The database will go through every row in the table to find the matching email. If you’ve got millions of users, this can be slow — like flipping through every page of a book to find a single word.

Adding an Index:

You can create an index on the email column so the database doesn’t need to check each row manually:

CREATE INDEX idx_email ON users (email);

How It Works:
The database builds a special data structure like a B-tree that stores all the email addresses in sorted order. When you run the query, it can quickly look up the email in the index without searching the whole table.
Now, when you run this:

SELECT * FROM users WHERE email = 'john@example.com';

It goes directly to the email index, finds the exact match, and pulls up the row much faster.
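You can check whether the planner actually uses the index with EXPLAIN. A quick sanity check in PostgreSQL:

EXPLAIN ANALYZE
SELECT * FROM users WHERE email = 'john@example.com';

-- With the index in place, the plan should show an "Index Scan using
-- idx_email" instead of a sequential scan over the whole table.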

Another Example: Composite Index

Let’s say you also run a lot of queries filtering by city and age together:


SELECT * FROM users WHERE city = 'New York' AND age = 25;

You can create a composite index on both columns for faster lookups:

CREATE INDEX idx_city_age ON users (city, age);

Now, the database can use this index to speed up queries that filter by both city and age. It’s like having a more detailed roadmap that takes you straight to the neighborhood you’re looking for. One caveat: column order matters. Because the index is sorted by city first, it can also serve queries that filter on city alone, but not queries that filter only on age.


2. Denormalization

Denormalization is the process of restructuring a database to reduce the number of joins between tables by adding redundant data (duplicate data) or combining tables. It’s essentially doing the opposite of normalization, which aims to break down data into smaller, related tables to eliminate redundancy.

Denormalization improves read performance because the database doesn’t have to join multiple tables to get the needed data, but it can also increase the write complexity and storage requirements since you're storing the same data in multiple places.

Why Denormalize?

  • Performance boost for read-heavy applications: If your app needs to read data quickly, denormalizing can avoid slow joins and reduce query complexity.
  • Fewer joins = faster reads: Since joins can slow down queries, especially with large datasets, denormalization helps minimize them.

Example of Denormalization
Let’s start with two normalized tables.

Normalized Tables:

You have an e-commerce app with a users table and an orders table.

Users table (normalized)

CREATE TABLE users (
    user_id INT PRIMARY KEY,
    name VARCHAR(100),
    email VARCHAR(100)
);

Orders table (normalized)

CREATE TABLE orders (
    order_id INT PRIMARY KEY,
    user_id INT,
    order_date DATE,
    amount DECIMAL(10, 2),
    FOREIGN KEY (user_id) REFERENCES users(user_id)
);

Now, if you want to see a user’s order history with their name and email, you’ll need to join these two tables:

SELECT users.name, users.email, orders.order_date, orders.amount
FROM users
JOIN orders ON users.user_id = orders.user_id;

This join can be slow when you have millions of users and orders.

Denormalized Table:

To speed things up, you denormalize by adding user data directly to the orders table so you don’t need to join it with the users table.

-- Orders table (denormalized)

CREATE TABLE orders (
    order_id INT PRIMARY KEY,
    user_id INT,
    name VARCHAR(100),     -- Redundant data
    email VARCHAR(100),    -- Redundant data
    order_date DATE,
    amount DECIMAL(10, 2)
);

Now, you can query the orders table without a join:

SELECT name, email, order_date, amount
FROM orders
WHERE user_id = 123;

  • Faster read: Since all the data is in one table, the query is faster; no joins needed.
  • More storage: You're now storing the name and email for every order, leading to redundancy.
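The flip side is write cost. When a user's details change, every redundant copy has to be updated too; a minimal sketch:

-- Update the user's email everywhere it is stored, ideally in one
-- transaction so the copies never disagree:
BEGIN;
UPDATE users  SET email = 'john.new@example.com' WHERE user_id = 123;
UPDATE orders SET email = 'john.new@example.com' WHERE user_id = 123;
COMMIT;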

When to Denormalize:

  • Read-heavy applications: If your app has way more reads than writes and you care about fast read times.
  • Real-time data needs: If your app needs instant access to pre-computed or pre-joined data.
  • Reporting and analytics: For quick aggregations and queries across large data sets.

3. Materialized views

Materialized views are like precomputed snapshots of a query that are stored in the database, making it much faster to retrieve complex query results. Unlike a regular view (which is essentially a saved query that runs each time you request it), a materialized view actually stores the results of the query, so when you access it, the data is already there and doesn’t need to be recalculated.

  • Stored Results: Materialized views store the result of a query physically on disk, so you don't have to run the query every time.
  • Faster Reads: Since the data is precomputed, accessing a materialized view is much faster than running a complex query from scratch.
  • Periodic Refresh: The data in a materialized view can be refreshed at intervals, either manually or automatically, depending on how current you need the data to be.

When to Use Materialized Views:

  • Complex Aggregations: If your queries involve heavy computations, aggregations (like SUM, COUNT, AVG), or joins across large tables, materialized views are ideal for speeding up performance.
  • Data Warehousing: Frequently used in OLAP (Online Analytical Processing) systems where historical data needs to be analyzed and quick access to summary data is required.
  • Reporting: Perfect for dashboards or reports that query the same data repeatedly without needing it to be real-time.

Example of a Materialized View:
Let’s say you have a table of orders and you frequently run a query to calculate total sales per month:

SELECT SUM(amount) AS total_sales, DATE_TRUNC('month', order_date) AS month
FROM orders
GROUP BY DATE_TRUNC('month', order_date);

This query might be slow if the orders table is huge, especially if it’s run multiple times a day. You can create a materialized view to store this result:

CREATE MATERIALIZED VIEW monthly_sales AS
SELECT SUM(amount) AS total_sales, DATE_TRUNC('month', order_date) AS month
FROM orders
GROUP BY DATE_TRUNC('month', order_date);

Now, whenever you query the monthly_sales materialized view, it’s just reading the precomputed data instead of re-executing the entire query.
The query becomes super fast:

SELECT * FROM monthly_sales;

Refreshing Materialized Views:
Since a materialized view stores a snapshot of the data, it may become outdated. You can refresh it periodically:

Manual Refresh: You can refresh it manually when needed:

REFRESH MATERIALIZED VIEW monthly_sales;

Automatic Refresh: PostgreSQL has no built-in scheduler for materialized views, but you can automate the refresh with an external cron job or an extension such as pg_cron.
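For example, with pg_cron installed you can schedule the refresh in SQL (a sketch; the job name and schedule are illustrative):

-- CONCURRENTLY avoids blocking readers during the refresh, but it
-- requires a unique index on the materialized view:
CREATE UNIQUE INDEX idx_monthly_sales_month ON monthly_sales (month);

-- Refresh every night at 2 AM:
SELECT cron.schedule(
    'refresh-monthly-sales',
    '0 2 * * *',
    'REFRESH MATERIALIZED VIEW CONCURRENTLY monthly_sales'
);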

Benefits of Materialized Views:

  • Faster query performance: Since the data is precomputed, queries on materialized views are much faster, especially for complex aggregations and joins.
  • Reduces computation load: By storing the results of heavy queries, you reduce the load on your database for recurring queries.

4. Sharding

Sharding is a database scaling technique where large datasets are split into smaller, more manageable pieces called shards, which are stored across multiple database servers. Each shard contains a portion of the total data, allowing for improved performance and scalability, especially for massive applications with a lot of users and data.

Instead of storing all your data in one giant database, sharding divides the data horizontally. Each shard acts like an independent database that holds only a subset of the overall data. This spreads out the load and allows the system to handle more traffic by distributing data and queries across multiple servers.

How Sharding Works:

  • Divide Data: Data is split based on a “shard key” (like user ID or geographic region).
  • Assign Shards: Each shard holds a subset of data. For example, users with IDs 1–100,000 might be in one shard, and users 100,001–200,000 in another.
  • Distribute Load: Queries and operations are directed to the appropriate shard based on the shard key, which reduces the load on any single database server.

Sharding Example:
Imagine you have a table with millions of users and you decide to shard it based on user_id:

Without Sharding (Monolithic Database):
All users are stored in one large database. As more users sign up, this database gets larger, making queries slower and increasing the strain on a single server.

With Sharding:
You split the users table across 3 different shards based on user_id:

Shard 1: Users with user_id between 1 and 100,000.
Shard 2: Users with user_id between 100,001 and 200,000.
Shard 3: Users with user_id between 200,001 and 300,000.

Now, when you need to look up a user with user_id = 150,000, your system automatically knows to query Shard 2. This reduces the load on Shards 1 and 3, allowing for faster reads and writes.

Types of Sharding:

1. Range-Based Sharding:
Data is partitioned based on a range of values for the shard key (e.g., users 1–100,000 in one shard, 100,001–200,000 in another).

Pros: Simple and intuitive.
Cons: Can lead to uneven distribution if data is skewed (e.g., if new users always get higher IDs, one shard will be much busier than others).

2. Hash-Based Sharding:
A hash function is applied to the shard key to distribute data across shards evenly (see the sketch after this list).
Pros: Better load balancing, avoids "hot spots" by spreading data more evenly.
Cons: More complex to implement, and adding/removing shards can require rehashing the data.

3. Geographic Sharding:
Data is partitioned by geographic location. For example, all users in North America go to one shard, and users in Europe go to another.

Pros: Reduces latency by keeping data geographically closer to the user.
Cons: May cause imbalances if one region has significantly more traffic than others.
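For hash-based sharding, the shard is usually picked by hashing the shard key and taking it modulo the number of shards. A minimal sketch in PostgreSQL, using the built-in hashtext function (the key value is illustrative):

-- Map user_id 12345 onto one of 4 shards (numbered 0-3):
SELECT abs(hashtext('12345')) % 4 AS shard_number;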

Imagine you’re building a social media app where users can post updates and follow each other. You have a users table that’s rapidly growing, and your database is struggling to handle the load. You decide to shard the users table to improve performance.

Initial Setup (Without Sharding):
You have a users table that stores user information:

CREATE TABLE users (
    user_id INT PRIMARY KEY,
    username VARCHAR(50),
    email VARCHAR(100),
    bio TEXT,
    created_at TIMESTAMP
);

As your app grows, the number of users reaches millions, and queries like finding a user or inserting new users start to slow down.

Step 1: Shard Based on user_id
You decide to shard the users table into 3 shards based on user_id:

Shard 1: Users with user_id between 1 and 1,000,000.
Shard 2: Users with user_id between 1,000,001 and 2,000,000.
Shard 3: Users with user_id greater than 2,000,000.

So, instead of having one giant users table, you now have 3 separate tables:

Shard 1 (users_1):

CREATE TABLE users_1 (
    user_id INT PRIMARY KEY,
    username VARCHAR(50),
    email VARCHAR(100),
    bio TEXT,
    created_at TIMESTAMP
);

Shard 2 (users_2):

CREATE TABLE users_2 (
    user_id INT PRIMARY KEY,
    username VARCHAR(50),
    email VARCHAR(100),
    bio TEXT,
    created_at TIMESTAMP
);

Shard 3 (users_3):

CREATE TABLE users_3 (
    user_id INT PRIMARY KEY,
    username VARCHAR(50),
    email VARCHAR(100),
    bio TEXT,
    created_at TIMESTAMP
);

Step 2: Distribute Data Across Shards
Now, when new users sign up, the app checks the user_id and directs the data to the correct shard.

If a user has user_id = 999,999, the data goes into Shard 1.
If a user has user_id = 1,500,000, the data goes into Shard 2.
If a user has user_id = 2,500,000, the data goes into Shard 3.

Step 3: Querying Across Shards
Your app’s logic now knows how to find the right shard based on user_id. Let’s say you want to look up the user with user_id = 1,500,000. The app knows to look in Shard 2 because the user_id falls in that range:

SELECT * FROM users_2 WHERE user_id = 1500000;

This is much faster than querying a single table with millions of rows.
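In a real system this routing logic lives in the application or a proxy layer. As a minimal sketch, here it is as a PostgreSQL function (the function name is illustrative):

-- Pick the shard table for a given user_id based on the ranges above.
CREATE FUNCTION shard_for_user(p_user_id INT) RETURNS TEXT AS $$
    SELECT CASE
        WHEN p_user_id <= 1000000 THEN 'users_1'
        WHEN p_user_id <= 2000000 THEN 'users_2'
        ELSE 'users_3'
    END;
$$ LANGUAGE sql IMMUTABLE;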

Step 4: Scaling Further
As the app continues to grow, you can easily add more shards. For example, if you get more than 3 million users, you might add Shard 4 for users with user_id > 3,000,000.

CREATE TABLE users_4 (
    user_id INT PRIMARY KEY,
    username VARCHAR(50),
    email VARCHAR(100),
    bio TEXT,
    created_at TIMESTAMP
);

Step 5: Handling Cross-Shard Queries
While querying individual shards is fast, some operations (like analytics or reporting) might require data from multiple shards. In those cases, you’ll need to perform a cross-shard query. Here’s an example where you want to find all users who signed up in the last month:

SELECT * FROM users_1 WHERE created_at > NOW() - INTERVAL '1 month'
UNION ALL
SELECT * FROM users_2 WHERE created_at > NOW() - INTERVAL '1 month'
UNION ALL
SELECT * FROM users_3 WHERE created_at > NOW() - INTERVAL '1 month';

This query combines results from all shards. Cross-shard queries can be slower than querying a single shard, but this tradeoff is often worth it to get the benefit of sharding for performance and scalability.


5. Partitioning

Partitioning is a database strategy that splits large tables into smaller, more manageable pieces, called partitions, to improve performance, manageability, and scalability. Each partition contains a subset of the table's data based on specific criteria, like ranges of values or hashes of a column. Partitioning is typically implemented within a single database, unlike sharding, which distributes data across multiple databases.

Types of Partitioning:
Range Partitioning: Data is divided based on ranges of values in a column (like dates, numeric values, or IDs). This is useful when queries often focus on specific ranges of data.

Example: A table storing order data can be partitioned by the order date:

Partition 1: Orders from 2020.
Partition 2: Orders from 2021.
Partition 3: Orders from 2022.

-- MySQL syntax: date values need RANGE COLUMNS, and the partitioning
-- column must be part of the primary key
CREATE TABLE orders (
    order_id INT,
    order_date DATE,
    customer_id INT,
    amount DECIMAL(10, 2),
    PRIMARY KEY (order_id, order_date)
) PARTITION BY RANGE COLUMNS (order_date) (
    PARTITION p2020 VALUES LESS THAN ('2021-01-01'),
    PARTITION p2021 VALUES LESS THAN ('2022-01-01'),
    PARTITION p2022 VALUES LESS THAN ('2023-01-01')
);

List Partitioning: Data is divided based on a list of discrete values. This is useful when the data can be categorized into specific groups.

Example: A table storing sales data for different regions can be partitioned by region:

Partition 1: North America.
Partition 2: Europe.
Partition 3: Asia.

-- MySQL syntax: LIST COLUMNS allows string values, and the partitioning
-- column must be part of the primary key
CREATE TABLE sales (
    sale_id INT,
    region VARCHAR(50),
    amount DECIMAL(10, 2),
    PRIMARY KEY (sale_id, region)
) PARTITION BY LIST COLUMNS (region) (
    PARTITION na VALUES IN ('North America'),
    PARTITION eu VALUES IN ('Europe'),
    PARTITION asia VALUES IN ('Asia')
);

Hash Partitioning: Data is divided based on a hash function applied to a column (usually an ID). This ensures an even distribution of data across partitions and is ideal when the data does not follow any natural order.

Example: Partitioning a large users table based on user_id:

Partition 1: Users whose user_id hashes to an even number.
Partition 2: Users whose user_id hashes to an odd number.

CREATE TABLE users (
    user_id INT PRIMARY KEY,
    username VARCHAR(50),
    email VARCHAR(100)
) PARTITION BY HASH (user_id) PARTITIONS 2;


Composite Partitioning: Combines multiple partitioning strategies. For example, you could first partition by range, and then further partition by hash within each range.

Example: Partition a large order table first by year (range), then by user ID (hash):

-- MySQL syntax: RANGE requires an integer expression, so YEAR() is used,
-- and both partitioning columns must be part of the primary key
CREATE TABLE orders (
    order_id INT,
    order_date DATE,
    user_id INT,
    amount DECIMAL(10, 2),
    PRIMARY KEY (order_id, order_date, user_id)
) PARTITION BY RANGE (YEAR(order_date))
  SUBPARTITION BY HASH (user_id)
  SUBPARTITIONS 4 (
    PARTITION p2020 VALUES LESS THAN (2021),
    PARTITION p2021 VALUES LESS THAN (2022)
);

Example: Range Partitioning for a transactions table
Imagine you have a transactions table with millions of rows. Without partitioning, querying for transactions in a specific year would require scanning the entire table.

With range partitioning by transaction_date, the data is split into yearly partitions, making it faster to retrieve data from specific years:

-- MySQL syntax: the partitioning column must be part of the primary key
CREATE TABLE transactions (
    transaction_id INT,
    transaction_date DATE,
    user_id INT,
    amount DECIMAL(10, 2),
    PRIMARY KEY (transaction_id, transaction_date)
) PARTITION BY RANGE COLUMNS (transaction_date) (
    PARTITION p2019 VALUES LESS THAN ('2020-01-01'),
    PARTITION p2020 VALUES LESS THAN ('2021-01-01'),
    PARTITION p2021 VALUES LESS THAN ('2022-01-01'),
    PARTITION p2022 VALUES LESS THAN ('2023-01-01')
);

If a query asks for all transactions in 2020, only partition p2020 will be scanned:

SELECT * FROM transactions WHERE transaction_date >= '2020-01-01' AND transaction_date < '2021-01-01';

This makes the query much faster compared to scanning an entire, non-partitioned table.
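You can confirm the pruning with EXPLAIN; in MySQL, the partitions column of the plan should list only p2020:

EXPLAIN SELECT * FROM transactions
WHERE transaction_date >= '2020-01-01' AND transaction_date < '2021-01-01';
-- The "partitions" column in the output should show only p2020,
-- confirming the other partitions were skipped.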


6. Caching

Caching is the practice of storing frequently accessed data in memory or other high-speed storage systems to improve query performance and reduce the load on the database. By caching frequently used query results or parts of the data, the database can avoid executing expensive queries repeatedly, which speeds up data retrieval and reduces overall server resource consumption.

Types of Database Caching:

Result Set Caching:
Caches the result of a query. When the same query is executed again, the database returns the cached result instead of recalculating it.
Example: A query like SELECT * FROM users WHERE country = 'India' might be cached. When someone runs the same query, the result is fetched from the cache.

Row-Level Caching:
Caches individual rows or records from the database. When a specific row is requested frequently, the system can serve it directly from the cache.
Example: Caching user profile data in an application. If users frequently request the profile for user ID 123, this profile can be cached for quick retrieval.

Page-Level Caching:
This type of caching stores whole data pages (blocks of rows or records) in memory. Databases often cache entire pages of data in RAM to reduce disk I/O.

Example: Databases like MySQL and PostgreSQL store commonly accessed data pages in memory buffers to serve requests faster without needing to hit the disk.

Database Caching Techniques:

Database Buffer Cache:
Most modern databases have an internal memory area called the buffer cache (or buffer pool) where frequently accessed data pages are stored. This is an automatic caching mechanism within the database.
Example: MySQL’s InnoDB buffer pool stores frequently accessed data and indexes in memory, which reduces the need for disk access.

Query Caching:
Some databases have query caching built in, where the results of a query are cached and served for subsequent identical queries.
Example: MySQL’s query cache stores the results of SELECT queries in memory, so if the same query is run again with the same parameters, the cached result is returned. (Note: As of MySQL 8.0, query cache has been removed due to inefficiencies in modern workloads.)

Third-Party In-Memory Caching:
Databases can also leverage external in-memory caches like Redis or Memcached. In this case, application developers store frequently accessed data in memory and retrieve it from there instead of querying the database every time.
Example: A Redis cache might store user session data that is frequently requested by a web application to avoid repeated database queries.

Write-Through and Write-Back Cache:

  • Write-Through: Every time data is written to the database, it is also written to the cache simultaneously.
  • Write-Back: Data is written to the cache first, and only asynchronously written to the database later. This improves write performance but adds complexity in terms of data consistency.

Example: In an e-commerce platform, product details might be cached. When a product is updated, both the cache and the database are updated simultaneously using the write-through strategy to ensure consistency.

Example of Caching with PostgreSQL:
In PostgreSQL, the shared buffer cache is used to store frequently accessed data pages. When a query requests data, PostgreSQL first checks if the relevant page is in memory (buffer cache). If it is, the data is served from the buffer, which is much faster than reading from disk.

Let’s consider an example where PostgreSQL caches a query result.

Initial Query: When a user queries for all products in the "Electronics" category:

SELECT * FROM products WHERE category = 'Electronics';

PostgreSQL will check the buffer cache. If the requested data is not in the buffer (cache miss), it fetches the data from the disk, serves it to the user, and stores the page in the buffer cache.

Subsequent Query: The next time someone runs the same query, PostgreSQL checks the buffer cache. If the data is still there (cache hit), it will return the result without querying the disk, improving performance.
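You can see the difference directly in the query plan. In PostgreSQL, EXPLAIN (ANALYZE, BUFFERS) reports how many pages came from the buffer cache versus disk:

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM products WHERE category = 'Electronics';
-- "Buffers: shared hit=..." counts pages served from the buffer cache;
-- "read=..." counts pages that had to be fetched from disk.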

Example of Third-Party Caching with Redis:
Let’s say we are using Redis to cache query results for frequently accessed data in an e-commerce database. Here’s how it works:

Initial Query (Cache Miss):

A user requests details for product_id = 101.
The application checks Redis for the product data: GET product:101.
If Redis returns nothing (cache miss), the app queries the database.
After fetching the product data from the database, the app stores it in Redis:

client.setex('product:101', 3600, JSON.stringify(productData));

The product details are now cached in Redis with a TTL (Time-to-Live) of 1 hour (3600 seconds).

Subsequent Query (Cache Hit):

The next time the user requests product_id = 101, the app first checks Redis:

client.get('product:101', (err, product) => {
  if (product) {
    // Cache hit: Serve from Redis
    return JSON.parse(product);
  } else {
    // Cache miss: Fetch from database and update Redis
  }
});

Since the product details are cached, Redis serves the data without hitting the database.

Caching solutions: Aerospike, Redis, Hazelcast, Memcached.


7. Replication

Replication is the process of copying and maintaining database objects, such as data and schema, across multiple database servers or instances. The primary goal of replication is to enhance data availability, fault tolerance, and scalability. By duplicating data across different locations, organizations can ensure that their applications remain accessible and perform well even under high loads or in the event of server failures.

Types of Replication

Master-Slave Replication:

In this model, one database server (the master) handles all write operations, while one or more other servers (the slaves) replicate the data from the master. Read operations can be distributed among the master and slaves.

Master-Master Replication (Multi-Master Replication):
In this setup, two or more database servers can accept write operations. Each server replicates changes to the others, allowing for distributed writes.

Asynchronous Replication:
Changes made on the master database are not immediately propagated to the slaves. There may be a delay between when a change is made and when it appears on the slave.

Synchronous Replication:
Changes made on the master database are immediately sent to the slaves, which must confirm receipt before the transaction is considered complete.

Log Shipping:
A form of replication where transaction logs from the primary database are periodically copied to a secondary database. The secondary database can be used for reporting or disaster recovery.

How Replication Works

  • Data Change Detection:
    The master database tracks changes made to data, usually through a transaction log or a change data capture mechanism.

  • Data Transfer:
    The changes detected are sent to the slave(s) or replicas over a network. Depending on the replication type, this may happen synchronously or asynchronously.

  • Data Application:
    The slave databases receive the changes and apply them to their copies of the data. In the case of log shipping, the logs are applied to the secondary database to update it.
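As a concrete sketch, PostgreSQL's logical replication can be set up in plain SQL (assuming wal_level = logical on the primary; the table, publication, and connection details here are illustrative):

-- On the primary (publisher): publish changes to the orders table
CREATE PUBLICATION orders_pub FOR TABLE orders;

-- On the replica (subscriber): pull and apply the publisher's changes
CREATE SUBSCRIPTION orders_sub
    CONNECTION 'host=primary.example.com dbname=shop user=replicator'
    PUBLICATION orders_pub;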

MongoDB example: https://www.mongodb.com/docs/manual/replication/
PostgreSQL example: https://www.postgresql.org/docs/current/runtime-config-replication.html
Oracle example: https://docs.oracle.com/cd/A84055_01/EE806DOC/doc/database.804/a58227/ch_repli.htm


8. Scaling

There are two primary approaches to scaling databases: vertical scaling (scale-up) and horizontal scaling (scale-out). Let’s explore both strategies along with common methods and best practices for scaling databases effectively.

1. Vertical Scaling
Vertical scaling involves adding more resources (CPU, RAM, storage) to a single database server to improve its performance. This approach is straightforward but can become limited by the hardware capabilities and often leads to a single point of failure.

2. Horizontal Scaling
Horizontal scaling involves adding more database servers to distribute the load across multiple instances. This approach can significantly improve performance and resilience but requires more complex architecture and management.
