Every time you "like" something on any platform, a database somewhere executes:
INSERT INTO likes (user_id, post_id) VALUES (you, that_thing_you_liked);
We wrap it in a heart icon. Make it feel human. Emotional. Meaningful.
It's not. It's a row in a table.
So I built a social network that stops pretending. On SQLNet.cc, you want to post something? You have to type:
INSERT INTO posts (author_id, content)
VALUES (me(), 'I just got fired. Here is what it taught me about B2B sales.');
Hit enter. It posts. That's the interface.
Why build this?
The idea is simple: what if the interface was the query?
Social platforms have spent years abstracting away what's actually happening. Tap a heart, swipe right, scroll infinitely. Underneath, it's all just database operations dressed up in gradients and dopamine loops.
SQLNet works differently. You see the query. You write the query. You understand exactly what you're asking the system to do.
This isn't user-hostile design for its own sake. It's honesty about what social media actually is, and in that honesty, something interesting happens. More on that later.
The architecture of collective delusion
The core problem: give every user their own database while making it feel like one shared social network.
I call it the Three Database Model.
Database 1: System DB (The Bouncer)
Accounts. Passwords. Tenant mappings. The backend queries it, users never see it. Very professional. Very boring. Moving on.
Database 2: Primary DB (The One True Reality)
This is where truth lives. Every post, like, comment, and follow that actually happened. Canonical schema (plus some internal sync fields I'm omitting for sanity):
posts(id, author_id, content, created_at, updated_at)
likes(id, post_id, user_id, created_at)
comments(id, post_id, user_id, content, created_at, updated_at)
follows(follower_id, following_id, created_at)
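For concreteness, here's roughly what that canonical schema looks like carried as a Go migration string (a sketch showing two of the four tables; give or take the exact types and constraints):

// A sketch of the canonical schema as a migration constant.
// Column types and constraints are inferred from the field lists above.
const canonicalSchema = `
CREATE TABLE IF NOT EXISTS posts (
    id         TEXT PRIMARY KEY,
    author_id  TEXT NOT NULL,
    content    TEXT NOT NULL,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
    updated_at DATETIME
);
CREATE TABLE IF NOT EXISTS follows (
    follower_id  TEXT NOT NULL,
    following_id TEXT NOT NULL,
    created_at   DATETIME DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (follower_id, following_id)
);`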
Database 3: Tenant DBs (The Beautiful Lies)
When you register, I provision an entire SQLite database just for you using Turso. Not a schema. Not a namespace. A whole database with its own schema, seeded with everyone else's data.
And YOUR database has extra columns that the primary doesn't:
posts(id, author_id, content, created_at, updated_at, like_count, comment_count)
Why? So you can write ORDER BY like_count DESC without a JOIN. Convenience! User experience! Things I care about, apparently!
The twist: You query YOUR database directly. But your database is a carefully maintained illusion synchronized with everyone else's reality.
You're in the Matrix, but the Matrix runs on SQL.
The sync engine: an undocumented pragma and a Discord cry for help
I needed Change Data Capture—a way to know every INSERT, UPDATE, DELETE happening across potentially thousands of databases. Without polling like a caveman.
The problem: Turso's CDC is not available on the Cloud. I didn't know that. I had originally planned to create branches of the primary instance for each tenant and add some extra columns and tables for convenience.
How on earth is this supposed to work?
The solution: I messaged the Turso team on Discord. They pointed me in the right direction.
Since CDC isn't supported on the Cloud, I could use local SQLite files instead.
PRAGMA unstable_capture_data_changes_conn('full');
See that unstable_ prefix? That's not branding. It's a warning. This feature is:
- Not fully supported
- Might break tomorrow
- Exactly what I needed
When you run this pragma, Turso creates a table called turso_cdc that captures every mutation. You read it like this:
SELECT
  table_name,
  change_type, -- 1=INSERT, 0=UPDATE, -1=DELETE
  id,
  bin_record_json_object(table_columns_json_array(table_name), before) AS before_json,
  bin_record_json_object(table_columns_json_array(table_name), after) AS after_json
FROM turso_cdc
ORDER BY change_id ASC;
Those functions—bin_record_json_object and table_columns_json_array—are Turso internals that convert binary records into JSON. I don't fully understand how they work. I just know they do.
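Here's roughly what consuming that looks like from Go. A minimal sketch, not the production code (the real worker tracks offsets and handles errors less cavalierly), but the shape is this. The CDCChange struct is the decoded form the sync code later in this post works with:

import (
    "database/sql"
    "encoding/json"
)

// CDCChange is one decoded mutation from turso_cdc.
type CDCChange struct {
    Table     string
    Operation string
    OldData   map[string]any
    NewData   map[string]any
}

// readChanges drains the CDC table into Go structs.
func readChanges(db *sql.DB) ([]CDCChange, error) {
    rows, err := db.Query(`
        SELECT table_name, change_type,
               bin_record_json_object(table_columns_json_array(table_name), before),
               bin_record_json_object(table_columns_json_array(table_name), after)
        FROM turso_cdc ORDER BY change_id ASC`)
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    ops := map[int]string{1: "INSERT", 0: "UPDATE", -1: "DELETE"}
    var changes []CDCChange
    for rows.Next() {
        var c CDCChange
        var changeType int
        var before, after sql.NullString // null on INSERT/DELETE respectively
        if err := rows.Scan(&c.Table, &changeType, &before, &after); err != nil {
            return nil, err
        }
        c.Operation = ops[changeType]
        if before.Valid {
            _ = json.Unmarshal([]byte(before.String), &c.OldData)
        }
        if after.Valid {
            _ = json.Unmarshal([]byte(after.String), &c.NewData)
        }
        changes = append(changes, c)
    }
    return changes, rows.Err()
}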
THE CATCH THAT ALMOST KILLED ME:
This pragma must execute on every. single. connection.
Not per database. Per connection. Miss it once, and that connection is blind to changes. I spent four hours debugging why sync "randomly" stopped working before realizing one connection pool wasn't running the pragma.
I harassed the Turso team on Discord to get some help with that, and they told me to run PRAGMA unstable_capture_data_changes_conn('full'); on every established connection.
func openTenantDB(dbPath string) (*sql.DB, error) {
    db, err := sql.Open("turso", dbPath)
    if err != nil {
        return nil, err
    }
    // Forget this line and enjoy debugging for 4 hours (every new pooled connection needs it too)
    if _, err := db.Exec("PRAGMA unstable_capture_data_changes_conn('full');"); err != nil {
        db.Close()
        return nil, err
    }
    return db, nil
}
Bidirectional sync: beautiful in theory, terrifying at scale
Two directions. Two kinds of problems.
UPSTREAM: You → Primary (Publishing Your Thoughts to the Void)
You run an INSERT. NATS fires a trigger. The sync worker picks it up and:
- Reads your turso_cdc table for changes
- Filters out changes that came FROM the primary (loop prevention)
- Strips non-canonical fields (nice try, like_count)
- Writes to primary database
- Recalculates post_stats for affected posts
- Clears your CDC table
Your Database Primary Database
┌──────────────────────────┐ ┌──────────────────────────┐
│ turso_cdc │ │ │
│ ┌──────────────────────┐ │ │ │
│ │ INSERT INTO posts │ │ extract │ │
│ │ content: "hello" │─┼─ canonical─▶ INSERT INTO posts │
│ │ like_count: 0 │ │ fields │ (without like_count) │
│ │ (you tried) │ │ (lol no) │ │
│ └──────────────────────┘ │ │ → recalc post_stats │
└──────────────────────────┘ └──────────────────────────┘
DOWNSTREAM: Primary → Everyone (Your Reality Check)
A background goroutine on an interval:
- Reads primary's turso_cdc table
- Figures out who originated each change
- For EVERY OTHER TENANT: opens a connection, executes INSERT OR REPLACE
- Clears primary's CDC table
- Repeats until heat death of universe (or server restart)
Primary Database Every Single Tenant
┌──────────────────────────┐ ┌─────────────────────────┐
│ turso_cdc │ │ Tenant A │
│ ┌──────────────────────┐ │ write │ INSERT OR REPLACE │
│ │ New post appeared │─┼── to ────▶ ├─────────────────────────┤
│ └──────────────────────┘ │ each │ Tenant B │
│ │ one │ INSERT OR REPLACE │
│ │ (yes ├─────────────────────────┤
│ │ really) │ Tenant C │
│ │ │ INSERT OR REPLACE │
│ │ ├─────────────────────────┤
│ │ │ ... Tenant N │
│ │ │ You get the idea │
└──────────────────────────┘ └─────────────────────────┘
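The write to each tenant is mechanically dumb: take the decoded row, generate an upsert. Something like this helper (a sketch; the name and shape are illustrative, not the real code):

import (
    "fmt"
    "strings"
)

// buildUpsert turns a decoded CDC row into an INSERT OR REPLACE statement.
func buildUpsert(table string, row map[string]any) (string, []any) {
    cols := make([]string, 0, len(row))
    args := make([]any, 0, len(row))
    for col, val := range row { // map order is random, but cols/args stay aligned
        cols = append(cols, col)
        args = append(args, val)
    }
    query := fmt.Sprintf(
        "INSERT OR REPLACE INTO %s (%s) VALUES (%s)",
        table,
        strings.Join(cols, ", "),
        strings.TrimSuffix(strings.Repeat("?, ", len(cols)), ", "),
    )
    return query, args
}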
Loop prevention happens via a marker field. Downstream sync stamps every write with _sync_origin = "primary". Upstream sync checks for this marker and skips anything that came from primary:
// Downstream marks its writes
newDataCopy["_sync_origin"] = "primary"

// Upstream checks: did this change come from primary?
func isReplicatedChange(change CDCChange) bool {
    if change.Operation == "INSERT" {
        if origin, ok := change.NewData["_sync_origin"].(string); ok {
            return origin == "primary"
        }
    }
    // UPDATE: was null before, is "primary" now? That's a sync write.
    if change.Operation == "UPDATE" {
        oldOrigin, _ := change.OldData["_sync_origin"].(string)
        newOrigin, _ := change.NewData["_sync_origin"].(string)
        return oldOrigin == "" && newOrigin == "primary"
    }
    return false
}
It's more sophisticated than I originally planned. Still works. Still not apologizing.
This will absolutely not scale, and I'm telling you now
Let's do napkin math together. It'll be fun. (It won't be fun.)
For every change in the primary database, downstream sync:
- Lists all tenant databases (filesystem read)
- Looks up originator (system DB query)
- For each of N tenants: opens connection, writes data
That's O(changes × tenants) complexity.
Let's plug in numbers:
- 1,000 users
- 100 changes per sync interval
- = 100 changes × 1,000 tenants = 100,000 database writes per interval
The sync interval is a few seconds.
for _, change := range changes {
    for _, tenantName := range tenants { // N iterations, baby
        if tenantName == originatorTenant {
            continue
        }
        d.replicateToTenant(ctx, tenantName, change) // Disk go brrr
    }
}
No parallelization. No batching. No sharding. Two nested for-loops.
At 10,000 users: Slow.
At 100,000 users: Unusable.
At 1,000,000 users: Physically impossible (Didn't check this, just guessing at this point).
But if this ever gets popular enough to break, that's a problem worth having. Optimization can wait for users.
The vandalism feature (yes, it's a feature now)
You know what's fun? This works:
UPDATE posts SET like_count = 999999 WHERE id = 'some-post';
Your database accepts it. You see 999999 likes. You feel powerful. You feel like a god.
For about thirty seconds.
Then downstream sync runs, reads the real post_stats from primary, and overwrites your delusions with cold, hard, normalized reality.
Why this works mechanically:
The upstream filter:
func (r *Replicator) isCanonicalField(field string) bool {
    excludedFields := map[string]bool{
        "like_count":    true,
        "comment_count": true,
        "_sync_origin":  true,
    }
    return !excludedFields[field]
}
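And the strip itself is just a filter over the decoded row. A sketch built on that predicate:

// stripNonCanonical drops tenant-only fields before the upstream write.
func (r *Replicator) stripNonCanonical(row map[string]any) map[string]any {
    out := make(map[string]any, len(row))
    for field, val := range row {
        if r.isCanonicalField(field) {
            out[field] = val // canonical fields pass through untouched
        }
    }
    return out
}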
Your like_count = 999999 never leaves your database. It's stripped on the upstream. Then downstream corrects you on the next cycle.
It's digital graffiti that cleans itself up.
I call it eventual consistency with a side of humiliation.
The terminal aesthetic
sqlnet> SELECT p.content, u.username, p.like_count
FROM posts p
JOIN users u ON p.author_id = u.id
ORDER BY p.created_at DESC LIMIT 5;
┌─────────────────────────────────┬──────────┬────────────┐
│ content │ username │ like_count │
├─────────────────────────────────┼──────────┼────────────┤
│ Just built SQL social network │ vladlen │ 42 │
│ Anyone else debug at 3am │ devghost │ 18 │
│ SELECT * FROM motivation... │ burnout │ 7 │
└─────────────────────────────────┴──────────┴────────────┘
3 rows returned (23ms)
Social media optimizes for engagement. Tap to post. Swipe to scroll. Algorithmic feeds designed to keep you scrolling.
SQLNet is the opposite. You type a query. You get a result. Then silence until you ask for more.
The friction is the point. You have to think before you post because you have to write the query first.
The me() function: three characters that make it personal
My favorite implementation detail. In any query:
SELECT * FROM posts WHERE author_id = me();
me() returns your user ID. Backend intercepts your SQL, injects the value from your JWT, executes. You never memorize a UUID.
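Under the hood it can be as dumb as textual substitution. A minimal sketch (names are illustrative, and a real version has to avoid rewriting me() inside string literals):

import (
    "regexp"
    "strings"
)

// meCallRe finds me() calls; injectMe swaps in the caller's ID from the
// JWT as a quoted SQL literal.
var meCallRe = regexp.MustCompile(`(?i)\bme\(\)`)

func injectMe(query, userID string) string {
    escaped := strings.ReplaceAll(userID, "'", "''") // basic quote escaping
    return meCallRe.ReplaceAllString(query, "'"+escaped+"'")
}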
Three characters. But they make the whole system feel yours.
You're not querying some abstract database. You're querying YOUR world. And me() is always there, a tiny reminder that this space belongs to you.
Local tables: your private chaos
Not everything syncs. Some tables exist only in YOUR database:
-- Your drafts. Yours alone.
CREATE TABLE drafts (
    id TEXT PRIMARY KEY,
    content TEXT,
    created_at DATETIME
);

-- Your saved queries. Your algorithms.
CREATE TABLE saved_queries (
    id TEXT PRIMARY KEY,
    name TEXT,
    query TEXT
);
Upstream sync ignores these tables entirely. Your drafts are yours. Your queries are yours.
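The filter is nothing clever, just an allowlist of canonical tables. A hypothetical version:

// Only canonical tables ever leave a tenant DB. Names are illustrative.
var syncedTables = map[string]bool{
    "posts":    true,
    "likes":    true,
    "comments": true,
    "follows":  true,
}

func shouldSyncTable(name string) bool {
    return syncedTables[name] // drafts, saved_queries, etc. stay local
}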
You're curating your own feed algorithm:
INSERT INTO saved_queries (id, name, query) VALUES (
    uuid(),
    'Hot takes this week',
    'SELECT * FROM posts WHERE created_at > date(''now'', ''-7 days'') ORDER BY like_count DESC'
);
Take that, recommendation systems.
What I learned building this
1. Abstraction has costs.
Every time we hide complexity, we hide control. Sometimes that's good. Sometimes you're building a Skinner box with pretty CSS and calling it "user experience."
2. CDC is magic until you need it to work locally.
Massive shoutout to the Turso team for answering Discord messages during holidays. Open source communities are incredible. Go buy them coffee.
3. Building weird things is its own reward.
Not everything needs market fit. Sometimes you build something just to see if you can.
The stack (for those who care)
- Backend: Go. Fast and boring. Perfect.
- HTTP: Echo framework
- Auth: JWT, nothing fancy
- Databases: Turso (SQLite with superpowers)
- CDC: PRAGMA unstable_capture_data_changes_conn('full') + the turso_cdc table
- Messaging: NATS for sync triggers
- DI: Uber's fx
Who is this for?
People who:
- Read database documentation for fun
- Have looked at an algorithmic feed and thought "I could do this better with a WHERE clause"
If that's you: welcome. Your database is waiting.
Twitter: https://x.com/lenvladyslav
Project Website: https://sqlnet.cc/

