Hi there! I'm Maneshwar. Right now, I’m building LiveAPI, a first-of-its-kind tool that helps you automatically index API endpoints across all your repositories. LiveAPI makes it easier to discover, understand, and interact with APIs in large infrastructures.
When you're working with multiple PostgreSQL environments — dev, staging, prod — you often need to sync schema without touching the data.
This post shows how to use a pure Python script to dump an entire schema, including tables, views, triggers, sequences, constraints, and stored procedures.
No ORM. No `pg_dump` black box. The only external dependency is the `psycopg2` driver.
What This Script Does
- Connects to any PostgreSQL database.
- Extracts complete schema info (see the sketch after this list):
  - Tables, columns, indexes
  - Views, sequences, triggers
  - Functions & procedures
  - Constraints (PK, FK, etc.)
- Outputs the schema as:
  - A pretty terminal dump
  - A JSON file (`schema_dump.json`)
  - A raw SQL DDL file (`schema_dump.sql`)
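Under the hood, the extraction boils down to queries against the system catalogs. Here's a minimal sketch of the table/column step, assuming an open psycopg2 connection named `conn` (the full script runs similar queries for views, triggers, functions, and constraints):

```python
# Minimal extraction sketch: list public tables and their columns via
# information_schema. `conn` is an open psycopg2 connection (see Setup below).
with conn.cursor() as cur:
    cur.execute("""
        SELECT table_name
        FROM information_schema.tables
        WHERE table_schema = 'public' AND table_type = 'BASE TABLE'
        ORDER BY table_name
    """)
    for (table,) in cur.fetchall():
        cur.execute("""
            SELECT column_name, data_type, is_nullable, column_default
            FROM information_schema.columns
            WHERE table_schema = 'public' AND table_name = %s
            ORDER BY ordinal_position
        """, (table,))
        print(table, cur.fetchall())
```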
Setup
Install the dependency:

```bash
pip install psycopg2
```
Set your connection config at the top:

```python
engine = "postgresql"
name = "your_db"
user = "your_user"
password = "your_pass"
host = "your_host"
port = "5432"
```
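A sketch of how these values might feed `psycopg2.connect` (the keyword names match psycopg2's API; `engine` is only used for labeling the dump):

```python
import psycopg2

# Open the connection the rest of the script reads from.
conn = psycopg2.connect(
    dbname=name, user=user, password=password, host=host, port=port
)
conn.autocommit = True  # catalog reads only; no transaction management needed
```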
How to Use
Just run the script:

```bash
python dump_schema.py
```
This will:

- Print a human-readable breakdown of your entire schema
- Write two files in the current directory (see the sketch below):
  - `schema_dump.json`
  - `schema_dump.sql`
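The writing step is plain stdlib I/O. A rough sketch, assuming the collected metadata sits in a dict called `schema` and the generated statements in a list called `ddl_statements` (both names are illustrative, not taken from the script):

```python
import json

# JSON snapshot of the schema metadata; default=str stringifies dates etc.
with open("schema_dump.json", "w") as f:
    json.dump(schema, f, indent=2, default=str)

# Raw DDL, one statement per block, ready for psql -f.
with open("schema_dump.sql", "w") as f:
    f.write("\n\n".join(ddl_statements))
```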
JSON Output
You’ll get structured JSON that looks like this:
```json
{
  "database_info": {
    "name": "liveapi",
    "host": "34.57.96.60",
    "port": "5432",
    "engine": "postgresql",
    "dump_date": "2025-07-18 18:00:00"
  },
  "schemas": {
    "public": {
      "tables": {
        "users": {
          "columns": [...],
          "primary_keys": [...],
          "foreign_keys": [...],
          "indexes": [...],
          "triggers": [...]
        }
      },
      "views": [...],
      "functions": [...],
      "triggers": [...],
      "sequences": [...],
      "constraints": [...]
    }
  }
}
```
Use it for:
- Schema diffs between environments (see the sketch after this list)
- Version control on schema metadata
- Visual schema documentation
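For the diff use case, the JSON layout makes a table-level comparison a few lines of Python. A sketch, assuming you've renamed two dumps to `dev_dump.json` and `prod_dump.json` (hypothetical filenames):

```python
import json

def table_names(path):
    """Return {(schema, table)} pairs from a schema_dump.json file."""
    with open(path) as f:
        dump = json.load(f)
    return {
        (schema, table)
        for schema, body in dump["schemas"].items()
        for table in body.get("tables", {})
    }

dev, prod = table_names("dev_dump.json"), table_names("prod_dump.json")
print("Only in dev: ", sorted(dev - prod))
print("Only in prod:", sorted(prod - dev))
```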
SQL Output
The `schema_dump.sql` file contains raw SQL:
```sql
CREATE TABLE public.users (
    id uuid NOT NULL DEFAULT gen_random_uuid(),
    email text NOT NULL,
    PRIMARY KEY (id)
);

ALTER TABLE public.users ADD CONSTRAINT fk_users_org FOREIGN KEY (org_id) REFERENCES public.org(id);

CREATE UNIQUE INDEX idx_users_email ON public.users(email);
```
You can apply it to another server with `psql`:

```bash
psql -h target_host -U target_user -d target_db -f schema_dump.sql
```
This lets you sync schema from one DB to another in a controlled way — no data, just structure.
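If you'd rather drive that apply step from a script than a shell, a thin `subprocess` wrapper works; `ON_ERROR_STOP=1` makes psql abort on the first failing statement (the host/user/db names are placeholders):

```python
import subprocess

result = subprocess.run(
    ["psql", "-h", "target_host", "-U", "target_user", "-d", "target_db",
     "-v", "ON_ERROR_STOP=1",  # stop at the first failed statement
     "-f", "schema_dump.sql"],
    capture_output=True, text=True,
)
if result.returncode != 0:
    raise RuntimeError(f"Schema apply failed:\n{result.stderr}")
```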
Why Use This Over pg_dump
| Feature | `pg_dump --schema-only` | This Script |
|---|---|---|
| JSON Output | ❌ | ✅ |
| Selective Extraction | ❌ (manual) | ✅ |
| Programmatic Access | ❌ | ✅ |
| Easy to Extend | ❌ | ✅ |
| Custom Logic | ❌ | ✅ |
| Triggers, Views, FKs, Indexes | ✅ | ✅ |
If you're building tooling or dashboards, or want to integrate schema dumping into your CI/CD pipeline, this Python approach is flexible and extensible.
Final Thoughts
You can use this Python script to:
- Mirror your schema across environments
- Export schema snapshots for backup or review
- Debug schema issues with a visual/textual snapshot
- Generate schema migration plans manually
Bonus: Want to Sync Schema Automatically?
Take the `schema_dump.sql` generated here and write an Ansible task or a Flyway/Liquibase wrapper to apply it to staging/prod in your pipeline.
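As one example of the Flyway route, here's a tiny sketch that drops the dump into a migrations folder under Flyway's `V<version>__<description>.sql` naming convention (the folder path and versioning scheme are assumptions):

```python
import shutil
from datetime import datetime

# Timestamp-based version keeps successive syncs ordered for Flyway.
version = datetime.now().strftime("%Y%m%d%H%M%S")
shutil.copy("schema_dump.sql", f"migrations/V{version}__schema_sync.sql")
```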
LiveAPI helps you get all your backend APIs documented in a few minutes.
With LiveAPI, you can generate interactive API docs that allow users to search and execute endpoints directly from the browser.
If you're tired of updating Swagger manually or syncing Postman collections, give it a shot.