We've all been there: you're starting a new project, and the first thing you need is a database schema. You open a text editor, start typing `CREATE TABLE users (...)`, then `CREATE TABLE orders (...)`, then you realize you forgot a foreign key, go back, fix the constraint names, wonder if you should use `UUID` or `BIGINT`... and 45 minutes later you have 6 tables and a headache.
What if you could just describe what you need in plain English and have your schema appear visually — tables, columns, relationships, and all?
That's exactly what I've been doing for the past few months, and in this post I'll walk you through the workflow.
## The Problem with Traditional Database Design
Most developers design databases in one of two ways:
- **SQL-first:** Write `CREATE TABLE` statements by hand, run them, realize you forgot something, write an `ALTER TABLE`, repeat.
- **GUI tools:** Open MySQL Workbench or pgAdmin, click through dozens of dialogs to create each column one by one.
Both approaches share the same problem: they're slow and disconnected from your actual development workflow. Your schema lives in one place, your code lives in another, and your migrations are a third thing you need to keep in sync manually.
## Enter MCP: The Protocol That Changed Everything
MCP (Model Context Protocol) is an open standard that lets AI coding assistants interact with external tools. Think of it as a universal plug that connects your AI assistant to any service — databases, APIs, design tools, you name it.
Here's what this means in practice: your AI assistant can read, create, and modify your database schema directly, and you see the changes visually in real-time.
I use ER Flow as my database design tool. It provides an MCP Server that exposes 25+ tools for schema management. When connected to an AI coding assistant like Cursor, the workflow looks like this:
## The Setup (2 Minutes)
**Step 1:** Create a data model on ER Flow and copy the MCP Server URL from the settings panel.

**Step 2:** Add this to your project's `.cursor/mcp.json` (or the equivalent for your AI tool):
```json
{
  "mcpServers": {
    "erflow": {
      "url": "https://app.erflow.io/api/mcp/YOUR_UUID_HERE"
    }
  }
}
```
That's it. No packages to install, no API keys to configure, no Docker containers to spin up.
## The Workflow in Action

### Designing a SaaS Schema from Scratch
Here's a real example. I'm building a project management SaaS and I tell Cursor:
"Create the database schema for a project management app. I need: users with email and password, organizations that users belong to, projects within organizations, tasks within projects with status, priority, assignee, and due date. Add proper indexes and foreign keys."
The AI reads the current schema (empty), then calls the ER Flow batch operations tool to create everything at once. Within seconds, I see this on my ER Flow canvas:
- `users` table with `id`, `email`, `password_hash`, `name`, `created_at`
- `organizations` table with a pivot table `organization_user`
- `projects` table with `organization_id` FK
- `tasks` table with `project_id` and `assignee_id` FKs, `status` enum, `priority`, `due_date`
- All foreign keys with proper `ON DELETE CASCADE`
- Indexes on `email`, `status`, `assignee_id`, `due_date`
The entire schema appears visually with relationship lines connecting the tables. No typing `CREATE TABLE`. No clicking through dialogs.
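For readers who think in SQL, the generated schema corresponds roughly to DDL like the following. This is a hand-written PostgreSQL sketch based on the list above, not ER Flow's actual export; the enum values and the `SET NULL` rule on `assignee_id` are my assumptions.

```sql
-- Illustrative PostgreSQL DDL for the schema described above (hand-written sketch).
CREATE TABLE users (
  id            BIGSERIAL PRIMARY KEY,
  email         VARCHAR(255) NOT NULL UNIQUE,  -- UNIQUE also creates the email index
  password_hash VARCHAR(255) NOT NULL,
  name          VARCHAR(255),
  created_at    TIMESTAMP NOT NULL DEFAULT now()
);

CREATE TABLE organizations (
  id   BIGSERIAL PRIMARY KEY,
  name VARCHAR(255) NOT NULL
);

-- Pivot table for the many-to-many users <-> organizations relationship.
CREATE TABLE organization_user (
  organization_id BIGINT NOT NULL REFERENCES organizations(id) ON DELETE CASCADE,
  user_id         BIGINT NOT NULL REFERENCES users(id) ON DELETE CASCADE,
  PRIMARY KEY (organization_id, user_id)
);

CREATE TABLE projects (
  id              BIGSERIAL PRIMARY KEY,
  organization_id BIGINT NOT NULL REFERENCES organizations(id) ON DELETE CASCADE,
  name            VARCHAR(255) NOT NULL
);

CREATE TYPE task_status AS ENUM ('todo', 'in_progress', 'done');  -- assumed values

CREATE TABLE tasks (
  id          BIGSERIAL PRIMARY KEY,
  project_id  BIGINT NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
  -- SET NULL rather than CASCADE: deleting a user shouldn't delete their tasks
  assignee_id BIGINT REFERENCES users(id) ON DELETE SET NULL,
  status      task_status NOT NULL DEFAULT 'todo',
  priority    SMALLINT,
  due_date    DATE
);

CREATE INDEX idx_tasks_status      ON tasks(status);
CREATE INDEX idx_tasks_assignee_id ON tasks(assignee_id);
CREATE INDEX idx_tasks_due_date    ON tasks(due_date);
```

Typing all of that by hand is exactly the 45-minute headache from the intro; here it comes from a one-paragraph prompt.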
### Iterating on the Design
The real power shows up when you iterate:
"Add a comments system to tasks. Users should be able to comment on tasks, and comments should support threading (replies to other comments)."
The AI checks the existing schema, understands the relationships, and adds:
- `task_comments` table with `task_id`, `user_id`, `parent_comment_id` (self-referencing for threads), `body`, `created_at`
- Foreign keys to `tasks` and `users`, plus the self-referencing FK
- Index on `task_id` for fast comment loading
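In SQL terms, the addition corresponds roughly to this (again a hand-written sketch; the column types are my assumptions):

```sql
-- Illustrative DDL for the threaded comments addition (hand-written sketch).
CREATE TABLE task_comments (
  id                BIGSERIAL PRIMARY KEY,
  task_id           BIGINT NOT NULL REFERENCES tasks(id) ON DELETE CASCADE,
  user_id           BIGINT NOT NULL REFERENCES users(id) ON DELETE CASCADE,
  -- NULL for top-level comments; deleting a parent removes its whole thread
  parent_comment_id BIGINT REFERENCES task_comments(id) ON DELETE CASCADE,
  body              TEXT NOT NULL,
  created_at        TIMESTAMP NOT NULL DEFAULT now()
);

CREATE INDEX idx_task_comments_task_id ON task_comments(task_id);
```

The self-referencing foreign key is what makes threading work: a reply simply points at another row in the same table.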
### Catching Mistakes Before They Happen
What I love most is that the AI can see my existing schema before making changes. So if I ask:
"Add a notifications table"
It won't just create a generic table. It'll see my existing `users` and `organizations` tables and create something like:
- `user_id` FK pointing to my actual `users` table
- `notifiable_type` and `notifiable_id` for polymorphic relations (if it sees that pattern in my schema)
- Proper indexes based on the queries it anticipates
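One plausible shape for that table, sketched as PostgreSQL DDL. The `read_at` column and the composite index are my guesses at what "anticipated queries" means in practice; the AI's actual output will depend on your schema.

```sql
-- One possible shape for a polymorphic notifications table (illustrative).
CREATE TABLE notifications (
  id              BIGSERIAL PRIMARY KEY,
  user_id         BIGINT NOT NULL REFERENCES users(id) ON DELETE CASCADE,
  notifiable_type VARCHAR(255) NOT NULL, -- e.g. 'Task', 'TaskComment'
  notifiable_id   BIGINT NOT NULL,       -- id within that table; no FK, by design
  read_at         TIMESTAMP,             -- NULL means unread (assumed column)
  created_at      TIMESTAMP NOT NULL DEFAULT now()
);

-- Composite index for the most common query: a user's unread notifications.
CREATE INDEX idx_notifications_user_read ON notifications(user_id, read_at);
```

Note the deliberate absence of a foreign key on `notifiable_id`: a polymorphic reference can point at rows in different tables, so it can't be constrained to one.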
### Generating Migrations
Once my schema looks right on the ER Flow canvas, I create a checkpoint and generate migrations. ER Flow supports Laravel and Phinx migration formats with both `up()` and `down()` methods.
The checkpoint-based diffing detects:
- Table creates, drops, and renames
- Column additions, modifications, and removals
- Index and foreign key changes
So I design visually, iterate with AI, and export production-ready migration files. No manual migration writing.
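ER Flow emits Laravel or Phinx migration code rather than raw SQL, but the diff it computes between two checkpoints boils down to statements like these (the `description` column and `priority` index here are hypothetical examples, not output from the tool):

```sql
-- Suppose checkpoint B adds a description column and a priority index to tasks.
-- The generated migration's up/down pair is equivalent to:

-- up()
ALTER TABLE tasks ADD COLUMN description TEXT;
CREATE INDEX idx_tasks_priority ON tasks(priority);

-- down()
DROP INDEX idx_tasks_priority;
ALTER TABLE tasks DROP COLUMN description;
```

The key property is that `down()` is the exact inverse of `up()`, in reverse order, which is what makes the migration safely reversible.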
## Why This Beats Other Approaches
| Approach | Schema Visibility | AI Integration | Collaboration | Migrations |
|---|---|---|---|---|
| Raw SQL files | None | Copy-paste | Git conflicts | Manual |
| pgAdmin / Workbench | Local GUI | None | Not real-time | Manual |
| dbdiagram.io | Visual | None | View-only | None |
| ER Flow + MCP | Visual, real-time | Native (25+ tools) | Real-time (CRDTs) | Auto-generated |
## Getting Started
- Sign up at erflow.io (free plan available)
- Create a data model and choose your database engine (PostgreSQL, MySQL, Oracle, SQL Server, or SQLite)
- Copy the MCP URL from your model settings
- Add it to your AI tool (`.cursor/mcp.json`, Windsurf config, or Claude Code MCP settings)
- Start describing your schema in natural language
The MCP Server works with any MCP-compatible AI tool: Cursor, Windsurf, Claude Code (Anthropic's official CLI), and more.
## Tips for Better Results
After months of using this workflow, here are my tips:
- **Start broad, then refine.** Describe the full schema first, then iterate on specific tables.
- **Be specific about constraints.** Say "email should be unique" or "cascade delete on tasks when project is deleted" — the AI will set the right constraints.
- **Use `get-data-model-dbml` explicitly** if the AI seems confused about the current state. This tool returns the full schema in DBML format.
- **Batch operations are your friend.** For creating multiple tables at once, the AI should use `batch-operations` instead of individual `create-table` calls.
