Nithin Bharadwaj

**8 Python Database Optimization Techniques to 10x Your Application Performance**

As a best-selling author, I invite you to explore my books on Amazon. Don't forget to follow me on Medium and show your support. Thank you! Your support means the world!

Working with databases in Python, I’ve learned that speed isn’t just a nice-to-have—it’s essential. When an application slows down, it’s often the database calls that are the culprit. Over time, I’ve gathered a set of practical methods that make a real difference. These aren’t just theories; they are techniques I use regularly to keep applications responsive, even as they grow. Let me walk you through eight of the most effective ones.

First, you need to see what the database is actually doing. It’s easy to assume a query is fine, but the database might be working much harder than necessary. In PostgreSQL, you can prefix a query with EXPLAIN ANALYZE. Unlike plain EXPLAIN, which only shows the estimated plan, EXPLAIN ANALYZE actually executes the query and reports the plan the database chose along with the real time spent in each step.

import psycopg2

# Connect to your database
conn = psycopg2.connect(database="myapp", user="app_user", password="secret")
cur = conn.cursor()

# Ask the database to explain its plan for a query
query = "SELECT * FROM user_orders WHERE user_id = 456;"
cur.execute(f"EXPLAIN ANALYZE {query}")
execution_plan = cur.fetchall()

for line in execution_plan:
    print(line[0])

# Look for lines about "Seq Scan" (slow) vs "Index Scan" (fast)
# Also compare the estimated cost with the "actual time" reported for each step.

Running this shows me the internal steps. If I see “Seq Scan on user_orders,” it means the database is reading every single row in the table. As the table grows into millions of rows, that gets painfully slow. The goal is to see “Index Scan” instead. This simple check is my starting point for any performance issue.

The most common fix for a slow query is adding an index. Think of an index like a book’s index. Instead of reading every page to find a topic, you look it up in the back and go directly to the right page. If you often search for users by email, an index on the email column is vital.

from sqlalchemy import create_engine, text

engine = create_engine('postgresql://user:pass@localhost/myapp')
with engine.begin() as conn:  # begin() commits the DDL when the block exits
    # Create a basic index on a single column
    conn.execute(text("CREATE INDEX idx_user_email ON users(email);"))

    # For queries that filter on two columns, like city and status, use a composite
    # index; the WHERE clause below also makes it a partial index that only covers active accounts
    conn.execute(text("CREATE INDEX idx_city_active ON customers(city, account_status) WHERE account_status = 'active';"))

    print("Indexes created.")

But be careful. Indexes speed up reading, but they slow down writing because the database must update the index every time you insert or update a row. I only add indexes for columns that are frequently used in WHERE, ORDER BY, or JOIN clauses.
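
A quick way to sanity-check this is to ask PostgreSQL which indexes are actually being used. The sketch below queries the pg_stat_user_indexes statistics view; the connection string is the same placeholder as above, and "zero scans" as a removal candidate is just my rule of thumb.

from sqlalchemy import create_engine, text

engine = create_engine('postgresql://user:pass@localhost/myapp')
with engine.connect() as conn:
    # Indexes that are never scanned cost write time without speeding up any reads
    result = conn.execute(text("""
        SELECT relname AS table_name, indexrelname AS index_name, idx_scan
        FROM pg_stat_user_indexes
        ORDER BY idx_scan ASC
        LIMIT 10;
    """))
    for table_name, index_name, scans in result:
        print(f"{table_name}.{index_name}: scanned {scans} times")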

When your application gets many users, each one opening and closing database connections, you can run out of connections or slow everything down with overhead. A connection pool solves this. It keeps a set of open connections ready to use, so your app doesn't have to establish a new one for every request.

from sqlalchemy import create_engine, text
from sqlalchemy.pool import QueuePool

# Set up an engine with a connection pool
engine = create_engine(
    'postgresql://user:pass@localhost/myapp',
    poolclass=QueuePool,
    pool_size=10,       # 10 connections always ready
    max_overflow=20,    # Allow up to 20 extra if needed
    pool_timeout=30,    # Wait 30 seconds for a free connection
    pool_recycle=1800   # Recycle connections after 30 minutes
)

# Using it is the same as always
with engine.connect() as conn:
    result = conn.execute(text("SELECT name FROM products"))
    for row in result:
        print(row[0])

I set this up once when my application starts. The pool manages the connections in the background. This one change can prevent a lot of “too many connections” errors in a busy web app.
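
To confirm the pool is doing its job, SQLAlchemy's pool object exposes some simple introspection. A minimal sketch, reusing the engine configured above:

# QueuePool can report its current state at runtime
print(engine.pool.status())      # summary of pool size, overflow, and checked-out connections
print(engine.pool.checkedout())  # number of connections currently in use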

If you need to insert hundreds or thousands of rows, doing it one at a time is a disaster for performance. Each individual insert is a round trip to the database. Batching combines them into a single operation.

import psycopg2

conn = psycopg2.connect(database="myapp", user="app_user", password="secret")
cur = conn.cursor()

# Data to insert
new_logs = [
    ('error', '2023-10-26 10:00:00', 'Payment failed'),
    ('info', '2023-10-26 10:00:01', 'User logged in'),
    ('warning', '2023-10-26 10:00:02', 'Cache nearly full'),
]

# Use executemany to send all the rows in one call; for very large loads,
# psycopg2.extras.execute_values is even faster because it folds many rows
# into a single INSERT statement
cur.executemany(
    "INSERT INTO app_logs (level, timestamp, message) VALUES (%s, %s, %s)",
    new_logs
)
conn.commit()
print(f"Inserted {cur.rowcount} log entries efficiently.")

The difference is massive. Inserting 10,000 rows individually might take minutes. Doing it in batches can take just a few seconds. This also applies to updates. You can use a CASE statement to update many rows with different values in one go.
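
Here is a rough sketch of that batched-update pattern. The products table, its columns, and the price data are made up for illustration; the point is that one UPDATE with a CASE expression replaces a separate statement per row.

import psycopg2

conn = psycopg2.connect(database="myapp", user="app_user", password="secret")
cur = conn.cursor()

# Hypothetical new prices keyed by product id
price_changes = {101: 19.99, 102: 4.50, 103: 32.00}

# Build one UPDATE with a CASE expression instead of one statement per row
case_sql = " ".join("WHEN %s THEN %s" for _ in price_changes)
in_sql = ", ".join("%s" for _ in price_changes)
params = [value for pair in price_changes.items() for value in pair]
params.extend(price_changes.keys())

cur.execute(
    f"UPDATE products SET price = CASE product_id {case_sql} END "
    f"WHERE product_id IN ({in_sql})",
    params,
)
conn.commit()
print(f"Updated {cur.rowcount} products in a single statement.")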

Sometimes you have a very complex query that joins many tables and does heavy calculations. If this query is run often, but the data doesn't change every second, a materialized view is perfect. It’s like a snapshot of the query result that gets stored as a real table.

from sqlalchemy import create_engine, text

engine = create_engine('postgresql://user:pass@localhost/myapp')

with engine.begin() as conn:  # begin() commits the DDL and the refresh
    # Create a materialized view for a complex report
    conn.execute(text("""
        CREATE MATERIALIZED VIEW weekly_sales_report AS
        SELECT 
            product_id,
            SUM(quantity) as total_units,
            SUM(quantity * unit_price) as total_revenue
        FROM order_details
        WHERE order_date > CURRENT_DATE - 7
        GROUP BY product_id
        ORDER BY total_revenue DESC;
    """))

    # Refresh the data when needed (e.g., every hour via a scheduler)
    conn.execute(text("REFRESH MATERIALIZED VIEW weekly_sales_report;"))

    # Now querying is instant
    result = conn.execute(text("SELECT * FROM weekly_sales_report LIMIT 5;"))
    for row in result:
        print(f"Product {row[0]}: ${row[2]:.2f} revenue")

The first creation and each refresh run the slow query, but every select from the materialized view is as fast as reading from a regular table. I use this for dashboards and reports.
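
One refinement worth knowing: a plain REFRESH locks the view against reads while it rebuilds. If the view has a unique index, PostgreSQL can refresh it concurrently, so dashboards keep reading the old data during the rebuild. A sketch, assuming the weekly_sales_report view from above and using autocommit to keep the refresh outside an explicit transaction:

from sqlalchemy import create_engine, text

engine = create_engine('postgresql://user:pass@localhost/myapp')

with engine.connect() as conn:
    conn = conn.execution_options(isolation_level="AUTOCOMMIT")
    # CONCURRENTLY requires a unique index on the materialized view
    conn.execute(text(
        "CREATE UNIQUE INDEX IF NOT EXISTS idx_weekly_sales_product "
        "ON weekly_sales_report (product_id);"
    ))
    # Readers are not blocked while the refresh runs
    conn.execute(text("REFRESH MATERIALIZED VIEW CONCURRENTLY weekly_sales_report;"))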

For data that is read very, very often but changes rarely, why ask the database every time? A cache stores the result the first time, then serves it from fast memory. I often use Redis for this.

import redis
import json
import hashlib
import psycopg2

# Connect to Redis
cache = redis.Redis(host='localhost', port=6379, db=0)

# Connect to the database
db_conn = psycopg2.connect(database="myapp", user="user", password="pass")
db_cur = db_conn.cursor()

def get_top_products(limit=10, cache_seconds=300):
    # Create a unique key for this specific query
    query_signature = f"top_products_{limit}"
    cache_key = hashlib.md5(query_signature.encode()).hexdigest()

    # Step 1: Try the cache
    cached_result = cache.get(cache_key)
    if cached_result is not None:
        print("Result loaded from cache.")
        return json.loads(cached_result)

    # Step 2: If not in cache, query the database
    db_cur.execute("""
        SELECT product_id, product_name, COUNT(*) as order_count
        FROM order_items
        GROUP BY product_id, product_name
        ORDER BY order_count DESC
        LIMIT %s
    """, (limit,))

    result = db_cur.fetchall()

    # Step 3: Store the result in cache for next time
    cache.setex(cache_key, cache_seconds, json.dumps(result))
    print("Result queried from database and cached.")
    return result

# Usage
products = get_top_products(limit=5)
for prod_id, name, count in products:
    print(f"{name}: ordered {count} times")

I set a time-to-live on the cache so it doesn’t serve stale data forever. This is great for homepage listings, leaderboards, or any public data that doesn't change instantly.
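
The other half of the picture is invalidation: when the underlying data does change, I delete the key so the next request rebuilds it. A minimal sketch, reusing the cache client and key scheme from the function above:

def invalidate_top_products(limit=10):
    # Recompute the same key that get_top_products() used when caching
    query_signature = f"top_products_{limit}"
    cache_key = hashlib.md5(query_signature.encode()).hexdigest()
    cache.delete(cache_key)

# Call this right after writing a new order, so the next read repopulates the cache
invalidate_top_products(limit=5)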

The way you write a query can force the database to do extra, unnecessary work. Learning to spot and rewrite these patterns is a valuable skill. A common one is using SELECT DISTINCT when you don't truly need it, or using a subquery where a JOIN would be better.

# Instead of a slow IN subquery like this:
# SELECT * FROM employees WHERE department_id IN (SELECT id FROM departments WHERE location='NYC')

# Rewrite it as a JOIN, which databases often optimize better:
optimized_query = """
SELECT employees.* 
FROM employees 
INNER JOIN departments ON employees.department_id = departments.id 
WHERE departments.location = 'NYC';
"""

# Another example: Be specific in SELECT
# Instead of SELECT * (which fetches all columns), only ask for what you need.
specific_query = "SELECT id, first_name, email FROM users;"

I make it a habit to review queries, especially the slow ones from monitoring, and ask: “Can this be written more directly?” Often, the answer is yes.

All these optimizations mean nothing if you don't measure their impact. I implement simple monitoring to track how long queries take. This helps me find new slow queries and verify that my fixes worked.

import time
import logging
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@contextmanager
def monitor_query(query_tag):
    """A context manager to time a database operation."""
    start_time = time.perf_counter()
    try:
        yield
    finally:
        end_time = time.perf_counter()
        duration = end_time - start_time
        logger.info(f"Query '{query_tag}' took {duration:.4f} seconds")
        if duration > 0.5:  # Log a warning for slow queries
            logger.warning(f"Slow query alert: '{query_tag}'")

# Use it in your code ('cur' is the psycopg2 cursor opened earlier)
with monitor_query("fetch_recent_orders"):
    cur.execute("SELECT * FROM orders WHERE order_date > NOW() - INTERVAL '1 day'")
    orders = cur.fetchall()

print(f"Fetched {len(orders)} orders.")

I log these times to a file or a monitoring system. Over time, I can see trends and catch performance regressions before users complain. It turns performance from a mystery into something I can manage.
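
Application-side timing pairs well with server-side statistics. If the pg_stat_statements extension is enabled, PostgreSQL keeps aggregated timings per statement, which is how I confirm which queries deserve attention first. A sketch, assuming the extension is installed (on PostgreSQL 12 and older the column is named mean_time rather than mean_exec_time):

from sqlalchemy import create_engine, text

engine = create_engine('postgresql://user:pass@localhost/myapp')
with engine.connect() as conn:
    # The five statements with the worst average execution time
    result = conn.execute(text("""
        SELECT query, calls, mean_exec_time
        FROM pg_stat_statements
        ORDER BY mean_exec_time DESC
        LIMIT 5;
    """))
    for query, calls, mean_ms in result:
        print(f"{mean_ms:.1f} ms avg over {calls} calls: {query[:60]}")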

Putting it all together, database performance is about being intentional. You start by measuring to find the problem. You apply targeted fixes like indexes. You support scale with patterns like pooling and caching. Finally, you keep watching to make sure things stay fast. I don’t use all eight techniques on every project, but knowing they are there gives me a toolbox to handle almost any slowdown. The best part is that you can start small. Pick one slow query today, explain it, and see if an index helps. That first success will show you how powerful these methods can be.

📘 Check out my latest ebook for free on my channel!

Be sure to like, share, comment, and subscribe to the channel!


101 Books

101 Books is an AI-driven publishing company co-founded by author Aarav Joshi. By leveraging advanced AI technology, we keep our publishing costs incredibly low—some books are priced as low as $4—making quality knowledge accessible to everyone.

Check out our book Golang Clean Code available on Amazon.

Stay tuned for updates and exciting news. When shopping for books, search for Aarav Joshi to find more of our titles. Use the provided link to enjoy special discounts!

Our Creations

Be sure to check out our creations:

Investor Central | Investor Central Spanish | Investor Central German | Smart Living | Epochs & Echoes | Puzzling Mysteries | Hindutva | Elite Dev | Java Elite Dev | Golang Elite Dev | Python Elite Dev | JS Elite Dev | JS Schools


We are on Medium

Tech Koala Insights | Epochs & Echoes World | Investor Central Medium | Puzzling Mysteries Medium | Science & Epochs Medium | Modern Hindutva
