DEV Community

archi-jain

Day 2/100: From In-Memory to PostgreSQL - Database Integration with SQLAlchemy

Part of my 100 Days of Code journey. Yesterday I built a REST API. Today I make it real.

The Challenge

Take the Task Management API from Day 1 and upgrade it from in-memory storage to a production-ready PostgreSQL database using SQLAlchemy ORM.

The Problem: In-memory storage disappears when the server restarts. Not exactly production-ready.

The Solution: Integrate PostgreSQL with proper ORM patterns, session management, and environment configuration.

Why Databases Matter

Let me be blunt: your API is useless without persistent storage.

Day 1's in-memory dictionary was perfect for learning HTTP and REST patterns. But the moment you restart your server, poof: all data gone. That's not an API. That's an expensive notepad.

Today, we fix that.

What We're Building

We're keeping all the functionality from Day 1:

  • Create, read, update, delete tasks
  • Mark tasks complete/incomplete
  • Proper error handling

But now with:

  • ✅ Data that survives server restarts
  • ✅ Scalable database architecture
  • ✅ Production-ready patterns
  • ✅ Environment-based configuration

Technology Stack

Database: PostgreSQL 15

ORM: SQLAlchemy

Environment Config: python-dotenv

Database Driver: psycopg2

Step-by-Step Implementation

Step 1: Installing PostgreSQL

I used Docker because it's the easiest way to get PostgreSQL running without cluttering my system:

docker run --name postgres-tasks \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_DB=tasks_db \
  -p 5432:5432 \
  -d postgres:15

What this does:

  • Creates a PostgreSQL container named postgres-tasks
  • Sets up a database called tasks_db
  • Exposes it on port 5432 (PostgreSQL default)
  • Runs in detached mode (background)

Verify it's running:

docker ps

You should see your postgres-tasks container running.

Step 2: Installing Python Dependencies

pip install sqlalchemy psycopg2-binary python-dotenv

Package purposes:

  • sqlalchemy: The ORM that converts Python to SQL
  • psycopg2-binary: PostgreSQL adapter (connects Python to PostgreSQL)
  • python-dotenv: Loads environment variables from .env files

Step 3: Project Structure

days/002/
├── database.py      # Database connection setup
├── models.py        # SQLAlchemy ORM models
├── schemas.py       # Pydantic models (API validation)
├── main.py          # FastAPI application
├── .env             # Environment variables (DON'T COMMIT!)
└── requirements.txt

Step 4: Environment Configuration

Create .env file:

DATABASE_URL=postgresql://postgres:postgres@localhost:5432/tasks_db

Breaking down the URL:

  • postgresql:// - Database type
  • postgres:postgres - username:password
  • @localhost:5432 - host:port
  • /tasks_db - database name
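
If you ever want to sanity-check a connection string, the standard library can split it into exactly these pieces (a quick sketch, no extra dependencies):

```python
from urllib.parse import urlsplit

url = urlsplit("postgresql://postgres:postgres@localhost:5432/tasks_db")
print(url.scheme)    # postgresql
print(url.username)  # postgres
print(url.hostname)  # localhost
print(url.port)      # 5432
print(url.path)      # /tasks_db
```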

Security Note: Never commit .env files! Add to .gitignore:

echo ".env" >> .gitignore

Step 5: Database Connection Setup

Create database.py:

from sqlalchemy import create_engine
from sqlalchemy.orm import declarative_base, sessionmaker
import os
from dotenv import load_dotenv

load_dotenv()

DATABASE_URL = os.getenv("DATABASE_URL")

engine = create_engine(
    DATABASE_URL,
    pool_pre_ping=True,
    echo=True
)

SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

Base = declarative_base()

def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

Understanding each part:

create_engine:

  • Creates a connection pool to the database
  • pool_pre_ping=True: Checks connections before using (prevents stale connections)
  • echo=True: Logs all SQL queries (great for learning, remove in production)

SessionLocal:

  • Factory for creating database sessions
  • A session is like a workspace for database operations
  • autocommit=False: We control when to save changes
  • autoflush=False: We control when to send changes to database

Base:

  • Base class that all our models inherit from
  • SQLAlchemy uses this to track all models

get_db() function:

  • FastAPI dependency that provides database sessions
  • yield gives the session to the route
  • finally ensures session always closes (even on errors)

This pattern is crucial - it prevents database connection leaks.
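
You can see the yield/finally mechanics without any database at all. In this stand-in sketch, a plain dict plays the role of the session; closing the generator (which is what FastAPI does when the request ends) always triggers the cleanup:

```python
def get_resource():
    resource = {"open": True}   # stands in for SessionLocal()
    try:
        yield resource          # hand the resource to the caller
    finally:
        resource["open"] = False  # cleanup runs no matter what

gen = get_resource()
res = next(gen)     # what FastAPI does when the request starts
gen.close()         # what FastAPI does when the request ends
print(res["open"])  # False -- the finally block ran
```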

Step 6: Creating the Database Model

Create models.py:

from sqlalchemy import Column, String, Boolean, DateTime
from sqlalchemy.dialects.postgresql import UUID
from datetime import datetime
import uuid
from database import Base

class Task(Base):
    __tablename__ = "tasks"

    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    title = Column(String(100), nullable=False)
    description = Column(String(500), nullable=True)
    completed = Column(Boolean, default=False, nullable=False)
    created_at = Column(DateTime, default=datetime.utcnow, nullable=False)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow, nullable=False)

    def __repr__(self):
        return f"<Task(id={self.id}, title={self.title})>"

Deep dive into the model:

__tablename__ = "tasks":

  • Tells SQLAlchemy what to name the database table

id Column:

Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
  • UUID(as_uuid=True): Uses PostgreSQL's native UUID type
  • primary_key=True: This column uniquely identifies each row
  • default=uuid.uuid4: Auto-generates UUID when creating new tasks

Why UUIDs over integers?

  • Globally unique (can merge databases without ID conflicts)
  • Hard to guess (security)
  • Can generate client-side
  • Better for distributed systems
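
That "can generate client-side" point is worth seeing: a UUID needs no database round-trip, just the standard library:

```python
import uuid

a = uuid.uuid4()  # generated locally, no database involved
b = uuid.uuid4()
print(a != b)     # True -- collisions are astronomically unlikely
```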

title and description:

Column(String(100), nullable=False)
Column(String(500), nullable=True)
  • String(100): Maximum 100 characters
  • nullable=False: Required field
  • nullable=True: Optional field

completed:

Column(Boolean, default=False, nullable=False)
  • Boolean type (true/false)
  • Defaults to False (new tasks are incomplete)
  • Cannot be null

Timestamps:

created_at = Column(DateTime, default=datetime.utcnow, nullable=False)
updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow, nullable=False)
  • default=datetime.utcnow: Sets timestamp when task created
  • onupdate=datetime.utcnow: Auto-updates timestamp on any modification
  • Always use UTC in databases! Convert to local time in the frontend
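
One side note on UTC: `datetime.utcnow()` is deprecated as of Python 3.12 because it returns a naive datetime. If you want an explicitly timezone-aware timestamp instead, the modern spelling is:

```python
from datetime import datetime, timezone

# Timezone-aware UTC timestamp; preferred over the deprecated datetime.utcnow()
now = datetime.now(timezone.utc)
print(now.tzinfo)  # UTC
```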

__repr__ method:

  • Defines how the object appears when printed
  • Useful for debugging
  • Example: <Task(id=uuid, title="Learn FastAPI")>
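
A plain-Python stand-in (no SQLAlchemy needed) shows what __repr__ buys you when debugging:

```python
import uuid

class Task:
    """Minimal stand-in for the SQLAlchemy model, just to demo __repr__."""
    def __init__(self, title):
        self.id = uuid.uuid4()
        self.title = title

    def __repr__(self):
        return f"<Task(id={self.id}, title={self.title})>"

task = Task("Learn FastAPI")
print(repr(task))  # e.g. <Task(id=8f14...-..., title=Learn FastAPI)>
```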

Step 7: Pydantic Schemas

Create schemas.py (similar to Day 1, but with one important change):

from pydantic import BaseModel, Field
from typing import Optional
from datetime import datetime
from uuid import UUID

class TaskCreate(BaseModel):
    title: str = Field(..., max_length=100)
    description: Optional[str] = Field(None, max_length=500)

class TaskUpdate(BaseModel):
    title: Optional[str] = Field(None, max_length=100)
    description: Optional[str] = Field(None, max_length=500)

class TaskResponse(BaseModel):
    id: UUID
    title: str
    description: Optional[str]
    completed: bool
    created_at: datetime
    updated_at: datetime

    class Config:
        from_attributes = True

The critical addition:

class Config:
    from_attributes = True

This tells Pydantic to convert SQLAlchemy model objects to Pydantic models.

Why do we need both models.py and schemas.py?

  • models.py (SQLAlchemy): Defines database structure
  • schemas.py (Pydantic): Defines API structure

They're similar but serve different purposes:

  • SQLAlchemy talks to the database
  • Pydantic validates API requests/responses
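
Here's a small sketch of what from_attributes actually does, assuming Pydantic v2 (the version that uses the from_attributes name). A plain object with attributes stands in for the SQLAlchemy instance; I use the ConfigDict spelling, which is Pydantic v2's equivalent of the class Config style above:

```python
from pydantic import BaseModel, ConfigDict

class PlainTask:
    """Stand-in for a SQLAlchemy model instance: attributes, not a dict."""
    def __init__(self):
        self.title = "Learn SQLAlchemy"
        self.completed = False

class TaskResponse(BaseModel):
    # Pydantic v2 spelling of `class Config: from_attributes = True`
    model_config = ConfigDict(from_attributes=True)

    title: str
    completed: bool

# Without from_attributes, validating a non-dict object like this would fail
resp = TaskResponse.model_validate(PlainTask())
print(resp.model_dump())  # {'title': 'Learn SQLAlchemy', 'completed': False}
```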

Step 8: Creating the FastAPI Application

Create main.py:

from fastapi import FastAPI, HTTPException, status, Depends
from sqlalchemy.orm import Session
from typing import List
from uuid import UUID
import models
import schemas
from database import engine, get_db

# Create all tables
models.Base.metadata.create_all(bind=engine)

app = FastAPI(
    title="Task Management API with PostgreSQL",
    version="2.0.0"
)

@app.post("/tasks", response_model=schemas.TaskResponse, status_code=status.HTTP_201_CREATED)
def create_task(task: schemas.TaskCreate, db: Session = Depends(get_db)):
    new_task = models.Task(
        title=task.title,
        description=task.description
    )
    db.add(new_task)
    db.commit()
    db.refresh(new_task)
    return new_task

Let's break down the create endpoint:

1. Function signature:

def create_task(task: schemas.TaskCreate, db: Session = Depends(get_db)):
  • task: schemas.TaskCreate: FastAPI auto-validates request body
  • db: Session = Depends(get_db): FastAPI injects database session

2. Creating the model instance:

new_task = models.Task(
    title=task.title,
    description=task.description
)
  • Creates SQLAlchemy Task object
  • Note: We DON'T set id, created_at, updated_at, completed
  • Those are auto-generated by the database!

3. Saving to database:

db.add(new_task)
db.commit()
db.refresh(new_task)
  • db.add(): Stages the task for insertion
  • db.commit(): Actually saves to database
  • db.refresh(): Fetches auto-generated values (id, timestamps)

Why three separate calls?

  • Allows batching multiple operations
  • Can rollback before commit if needed
  • Refresh gets database-generated values
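
You can watch the add/commit/refresh dance in a self-contained sketch. In-memory SQLite stands in for PostgreSQL here (with an integer primary key, since native UUID columns are Postgres-specific), so it runs without Docker:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

engine = create_engine("sqlite://")  # in-memory SQLite stands in for PostgreSQL
Base = declarative_base()

class Task(Base):
    __tablename__ = "tasks"
    id = Column(Integer, primary_key=True)  # autoincrement stand-in for UUID
    title = Column(String(100), nullable=False)

Base.metadata.create_all(bind=engine)
SessionLocal = sessionmaker(bind=engine)

db = SessionLocal()
new_task = Task(title="Learn SQLAlchemy")
print(new_task.id)    # None -- nothing is generated until the database sees it
db.add(new_task)      # staged: still not in the database
db.commit()           # the INSERT actually happens here
db.refresh(new_task)  # pull back database-generated values
task_id = new_task.id
print(task_id)        # 1 -- populated by the database
db.close()
```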

4. Returning the task:

return new_task

FastAPI automatically:

  • Converts SQLAlchemy model to Pydantic schema
  • Serializes to JSON
  • Sets proper Content-Type header

The Other Endpoints

Get All Tasks:

@app.get("/tasks", response_model=List[schemas.TaskResponse])
def get_tasks(skip: int = 0, limit: int = 100, db: Session = Depends(get_db)):
    tasks = db.query(models.Task).offset(skip).limit(limit).all()
    return tasks

Understanding the query:

  • db.query(models.Task): Start building a query
  • .offset(skip): Skip first N records (pagination)
  • .limit(limit): Return max N records
  • .all(): Execute query, return all results

SQL equivalent:

SELECT * FROM tasks OFFSET 0 LIMIT 100;

Get Single Task:

@app.get("/tasks/{task_id}", response_model=schemas.TaskResponse)
def get_task(task_id: UUID, db: Session = Depends(get_db)):
    task = db.query(models.Task).filter(models.Task.id == task_id).first()

    if not task:
        raise HTTPException(status_code=404, detail="Task not found")

    return task

Query breakdown:

  • .filter(models.Task.id == task_id): WHERE clause
  • .first(): Returns first result or None

SQL equivalent:

SELECT * FROM tasks WHERE id = 'some-uuid' LIMIT 1;

Update Task:

@app.put("/tasks/{task_id}", response_model=schemas.TaskResponse)
def update_task(task_id: UUID, task_update: schemas.TaskUpdate, db: Session = Depends(get_db)):
    task = db.query(models.Task).filter(models.Task.id == task_id).first()

    if not task:
        raise HTTPException(status_code=404, detail="Task not found")

    if task_update.title is not None:
        task.title = task_update.title

    if task_update.description is not None:
        task.description = task_update.description

    db.commit()
    db.refresh(task)
    return task

Why check is not None?

Consider this request:

{
  "title": "New Title"
  // description not sent
}

We want to update title but keep description unchanged.

if task_update.title is not None:  # True
    task.title = task_update.title

if task_update.description is not None:  # False
    task.description = task_update.description  # Skipped!

This enables partial updates!
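
One caveat: the is not None check can't tell "the client omitted description" apart from "the client explicitly sent description: null". If you ever need that distinction, Pydantic v2 tracks which fields were actually set; a sketch (assuming Pydantic v2):

```python
from typing import Optional
from pydantic import BaseModel

class TaskUpdate(BaseModel):
    title: Optional[str] = None
    description: Optional[str] = None

# The client sent only "title"; "description" was omitted entirely
update = TaskUpdate.model_validate({"title": "New Title"})

# exclude_unset drops fields the client never sent
print(update.model_dump(exclude_unset=True))  # {'title': 'New Title'}
```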

Delete Task:

@app.delete("/tasks/{task_id}", status_code=status.HTTP_204_NO_CONTENT)
def delete_task(task_id: UUID, db: Session = Depends(get_db)):
    task = db.query(models.Task).filter(models.Task.id == task_id).first()

    if not task:
        raise HTTPException(status_code=404, detail="Task not found")

    db.delete(task)
    db.commit()
    return None

Why return None for 204?

  • HTTP 204 means "success, no content"
  • Returning None tells FastAPI not to send a response body
  • Still returns 204 status code

Testing the API

Start the server:

uvicorn main:app --reload

Create a task:

curl -X POST "http://localhost:8000/tasks" \
  -H "Content-Type: application/json" \
  -d '{"title": "Learn SQLAlchemy", "description": "Integrate PostgreSQL with FastAPI"}'

The magic moment:

  1. Stop the server (Ctrl+C)
  2. Start it again
  3. Get all tasks:
curl http://localhost:8000/tasks

The task is still there! 🎉

That's the power of persistent storage.

Checking the Database Directly

docker exec -it postgres-tasks psql -U postgres -d tasks_db

Then, at the psql prompt:

SELECT * FROM tasks;

You'll see your task with all columns:

  • id (UUID)
  • title
  • description
  • completed (false)
  • created_at (timestamp)
  • updated_at (timestamp)

Key Concepts Learned

ORM (Object-Relational Mapping)

Without ORM (raw SQL):

cursor.execute("INSERT INTO tasks (title, description) VALUES (%s, %s)", (title, desc))

With ORM:

new_task = Task(title=title, description=desc)
db.add(new_task)
db.commit()

Benefits:

  • Type-safe (Python catches errors before database does)
  • Prevents SQL injection automatically
  • Database-agnostic (can switch from PostgreSQL to MySQL)
  • Easier to maintain and test

Database Sessions

Think of a session as a transaction workspace:

db = SessionLocal()  # Open workspace
try:
    db.add(task1)
    db.add(task2)
    db.commit()  # Save all changes
except Exception:
    db.rollback()  # Undo all changes
    raise
finally:
    db.close()  # Clean up

FastAPI's Depends(get_db) handles this automatically!

Connection Pooling

SQLAlchemy maintains a pool of database connections:

  • Reuses connections (faster than creating new ones)
  • Limits max connections (prevents overwhelming database)
  • Auto-reconnects if connection dies

All handled automatically by create_engine().
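
If you ever need to tune the pool, create_engine takes pool settings directly. A hedged sketch; pool_size, max_overflow, and pool_pre_ping are real create_engine arguments, and I force QueuePool on SQLite here only so the example runs without PostgreSQL (Postgres engines use QueuePool by default):

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

engine = create_engine(
    "sqlite://",
    poolclass=QueuePool,  # explicit here; the Postgres default anyway
    pool_size=5,          # connections kept open and reused
    max_overflow=10,      # extra connections allowed under load
    pool_pre_ping=True,   # test each connection before handing it out
)
print(engine.pool.size())  # 5
```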

Common Mistakes and How I Avoided Them

Mistake 1: Not Closing Sessions

Wrong:

db = SessionLocal()
db.add(task)
db.commit()
# Forgot to close!

Right (with Depends):

def create_task(db: Session = Depends(get_db)):
    ...  # Session auto-closes when the request ends

Mistake 2: Forgetting to Commit

Wrong:

db.add(task)
return task  # Task not saved!

Right:

db.add(task)
db.commit()  # Actually save
db.refresh(task)  # Get auto-generated fields
return task

Mistake 3: Hardcoding Database URL

Wrong:

DATABASE_URL = "postgresql://user:pass@localhost/db"  # In code!

Right:

DATABASE_URL = os.getenv("DATABASE_URL")  # From .env

Never commit credentials!

What I Learned Today

Technical Skills:

✅ PostgreSQL setup and configuration

✅ SQLAlchemy ORM models and relationships

✅ Database session lifecycle management

✅ Environment-based configuration

✅ Query building with SQLAlchemy

✅ UUID vs integer primary keys

✅ Timestamp auto-management

Conceptual Understanding:

✅ Why ORMs exist and when to use them

✅ Database connection pooling

✅ Transaction management

✅ The difference between staging and committing

✅ Why sessions must be closed

Production Patterns:

✅ Environment variable configuration

✅ Dependency injection for resources

✅ Proper error handling

✅ Security best practices (no hardcoded credentials)

The "Aha!" Moments

1. Sessions are NOT connections

  • A session is a workspace for database operations
  • Connections are managed by the engine's pool
  • One session can use multiple connections

2. ORM saves so much boilerplate

  • No manual SQL string building
  • No manual parameter binding
  • Type checking at Python level

3. Dependency injection is elegant

def my_route(db: Session = Depends(get_db)):

That one line:

  • Gets a database session
  • Provides it to the function
  • Ensures it's closed afterward
  • Handles errors automatically

Beautiful.

Next Steps

Tomorrow (Day 3), I'm planning to add:

  • Advanced query features (filtering, searching)
  • User model with relationships
  • Authentication basics

But today? Today I celebrate having a real, production-ready database powering my API. 🎉

Resources That Helped

Full Code

Complete code available on GitHub:
100-days-of-python/days/002


Time Investment: 2.5 hours

Knowledge Gained: Production database integration skills

Feeling: Like a real backend developer 💪

Day 2/100 complete!

Tomorrow: Advanced queries and relationships


Following my 100 Days journey?

📝 Blog | 🐦 Twitter | 💼 LinkedIn

Tags: #100DaysOfCode #Python #FastAPI #PostgreSQL #SQLAlchemy #BackendDevelopment #DatabaseDesign #LearningInPublic

Top comments (2)

SwoopSavvy

Great course, love it!

archi-jain

Thank you! Glad you found it helpful. I’m documenting my daily learning, so more detailed posts are coming soon.