<devtips/>
Postgres for everything? Not really… and here are some of the problems.

The love letter that turns into a reality check (with a few war stories along the way)

Last time I talked about Postgres, I painted it as the ultimate all-in-one database that could replace half your tool stack if you just let it.
That was the dream.

But dreams tend to collide with reality, and reality comes with corporate networks, locked-down policies, and infrastructure that feels like it was designed by someone who hates low latency.
The truth is, Postgres itself isn’t the problem. The ecosystem around it? That’s where things get messy.

Imagine you finally get access to a “serious” production environment.
Twelve network hops to the database. Firewalls like medieval castle gates. Antivirus that treats every query like suspicious activity. And latency numbers that mock your milliseconds budget.

Even when you spin up your own Postgres inside Kubernetes, you’re suddenly managing Helm charts that break, YAML files that betray you, and resource limits that throttle your performance without warning.
Postgres is still Postgres, but now it’s stuck in an environment designed for frustration.

This isn’t about bashing Postgres. It’s about understanding when it shines… and when the environment turns it into just another bottleneck.

Table of contents

  1. The dream vs. the dungeon
  2. Kubernetes: the Postgres performance lottery
  3. Row-level security: when theory trolls reality
  4. Stored procedures & in-database logic: git → guesswork
  5. The real bottleneck isn’t Postgres; it’s policy
  6. When not to use Postgres
  7. The sweet spot: when Postgres is the right answer
  8. Conclusion + resources

1. The dream vs. the dungeon

The dream is simple: one database to rule them all.
You cut down on tech sprawl, ditch half your third-party dependencies, and run your entire backend off the glorious power of Postgres. No juggling five different storage engines, no extra moving parts, just clean SQL and a coffee-fueled smile.

That’s the dream I sold myself and others more than once.
And honestly? It’s not impossible. When you’re running your own infra or hacking away on a side project, Postgres really can be the Swiss Army knife. I even wrote about it before in “You don’t need 20 tools. Just use Postgres (seriously!)” on medium.com: one boring old SQL database might be the best backend in 2025.

But then reality shows up, usually in the form of corporate infrastructure.
You get dropped into a production environment that’s basically a dungeon crawler for database queries:

  • Twelve network hops before your request even touches the DB
  • Firewalls that block you like a bouncer at the world’s most boring nightclub
  • Antivirus software that somehow slows down SQL execution
  • Latency numbers that would make your Redis cry

And the worst part? You’re not the database admin. That means:

  • No installing pgvector for AI-powered similarity search
  • No pg_cron for internal scheduling
  • No pgcrypto for cryptographic functions
  • Sometimes, you don’t even know which region your database is in
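For context, the features you’re locked out of are usually just one-line installs. A hedged sketch of what you’d run with sufficient privileges (extension names from their official docs; note pg_cron also requires `shared_preload_libraries` and a server restart, which you definitely won’t be granted; the `sessions` table is hypothetical):

```sql
-- What a superuser would run; managed providers each keep
-- their own allow-list of permitted extensions.
CREATE EXTENSION IF NOT EXISTS vector;    -- pgvector: vector similarity search
CREATE EXTENSION IF NOT EXISTS pg_cron;   -- cron-style jobs inside the DB
CREATE EXTENSION IF NOT EXISTS pgcrypto;  -- hashing/encryption functions

-- e.g. schedule a nightly cleanup job with pg_cron:
SELECT cron.schedule('nightly-cleanup', '0 3 * * *',
    $$DELETE FROM sessions WHERE expires_at < now()$$);
```

One `CREATE EXTENSION` away, and still out of reach.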

Suddenly, your “one tool for everything” dream feels like playing Elden Ring… with a potato for a GPU.

2. Kubernetes: the Postgres performance lottery

So you can’t get the extensions you want on the company’s “official” Postgres instance. Fine. You spin up your own in Kubernetes. That’s freedom, right?

Wrong.
Now you’re not just running a database; you’re managing a mini-boss fight against Helm charts that only work in documentation screenshots, secrets that refuse to rotate, and resource limits set by someone who thinks 512 MB of RAM is “plenty for testing.”

And let’s not forget YAML, the config format that’s all whitespace and no mercy. It will betray you. Maybe not today, maybe not tomorrow, but one day a missing space will take your cluster down faster than a bad deploy on a Friday afternoon.

The big problem? Kubernetes loves to share resources. That’s great for stateless microservices… but databases? They hate unpredictability.
One minute, Postgres is humming along. The next, it’s fighting for CPU time because some unrelated service decided it was batch-processing day.

It’s not that Postgres is slow; it’s that shared infrastructure makes it behave unpredictably, and databases need consistency like gamers need stable FPS.

In the K8s world, running Postgres often feels like buying a GPU in a scalper market: you might get a good one, or you might get throttled into oblivion.
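If you do have to play that lottery, one mitigation worth knowing: pin the Postgres pod to the Guaranteed QoS class by setting resource requests equal to limits, so the kubelet reserves CPU and memory instead of letting noisy neighbors win. A minimal config sketch (the values are illustrative, not a recommendation):

```yaml
# Container spec fragment for a Postgres pod.
# requests == limits for every resource => Guaranteed QoS class,
# which avoids CPU throttling on overcommitted nodes.
resources:
  requests:
    cpu: "2"
    memory: 4Gi
  limits:
    cpu: "2"
    memory: 4Gi
```

It doesn’t fix shared disks or network, but it removes one big source of the unpredictability.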

3. Row-level security: when theory trolls reality

On paper, Row Level Security (RLS) sounds brilliant.
You set rules so each user only sees the rows they’re allowed to. No more messing around with manual filters in your app layer; the database enforces it for you.

Then you turn it on in production… and suddenly, every query plan looks like it just rolled a critical fail.
What used to be a fast SELECT now feels like it’s doing a 14-table join for fun.
Indexes? Still there, but apparently on vacation.
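What’s happening under the hood is that once a policy is enabled, Postgres appends its predicate to every query against that table. A minimal sketch with a hypothetical multi-tenant `orders` table:

```sql
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;

-- Each session only sees its own tenant's rows:
CREATE POLICY tenant_isolation ON orders
    USING (tenant_id = current_setting('app.tenant_id')::int);

-- This innocent-looking SELECT...
SELECT * FROM orders WHERE status = 'open';
-- ...now effectively plans as:
--   WHERE status = 'open'
--     AND tenant_id = current_setting('app.tenant_id')::int
-- If tenant_id isn't indexed, or the planner can't push the
-- predicate down, that's your "critical fail" query plan.
```

Comparing EXPLAIN ANALYZE output with the policy on and off is the fastest way to see what RLS is actually costing you.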

Debugging it is like trying to solve a puzzle where half the pieces are invisible.
You can’t just check one place for performance issues anymore; the “hidden” RLS rules are baked into every query’s execution, so you’re basically playing detective inside your own database.

I’ve heard people call it Regret Later System, and honestly… fair.
The intention is solid, but unless your data set is small and your traffic is light, RLS can quietly wreck performance in ways that are hard to untangle.

It’s not that you should never use RLS. It’s that you should treat it like production database triggers: powerful, but one wrong move and your DBA is hunting you down with an EXPLAIN ANALYZE report.

4. Stored procedures & in-database logic: git → guesswork

Some developers love the idea of pushing all the business logic into the database.
Stored procedures. Functions. Triggers. Jobs. The works.
The pitch?

  • Centralized logic = less duplication.
  • Runs close to the data = faster performance.

And in theory, it’s not terrible. In a perfect world, you could put your job schedulers, validation rules, and custom data transformations all in Postgres, never touch the app layer for it, and sleep well at night.
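That perfect-world version is easy to write, which is exactly why it’s tempting. A sketch with a hypothetical `users` table:

```sql
-- Validation living in the database instead of the app layer:
CREATE OR REPLACE FUNCTION validate_email() RETURNS trigger AS $$
BEGIN
    -- Crude shape check, not full RFC-compliant validation:
    IF NEW.email !~ '^[^@]+@[^@]+\.[^@]+$' THEN
        RAISE EXCEPTION 'invalid email: %', NEW.email;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_validate_email
    BEFORE INSERT OR UPDATE ON users
    FOR EACH ROW EXECUTE FUNCTION validate_email();
```

If you do go this route, at least keep files like this in versioned migrations (Flyway, sqitch, or similar) so git still knows the logic exists.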

But here’s what actually happens:

  • Debugging becomes a questline from hell: no proper stack traces, no modern logging tools, just squinting at SQL like it’s an ancient scroll.
  • Version control turns into duct tape: your stored procedure updates live in random .sql files or, worse, exist only in production, and nobody remembers who wrote them.
  • Your CI/CD pipeline? Doesn’t even know this logic exists, so testing is… let’s say “optional.”

One developer summed it up perfectly:

“If you maintain jobs and business logic inside Postgres, you’re giving up git for guesswork.”

It’s not that stored procedures are evil; they’re great for certain performance-critical cases. But making them the default home for all your application logic is asking for future pain.
When that pain hits, it won’t matter how elegant your SQL was; you’ll be the one spelunking through functions at 2 AM while prod is down.

5. The real bottleneck isn’t Postgres; it’s policy

If you’ve worked in a big company, you know the pain:
The technology is fine. The database is fine. The policies are the real boss fight.

You want to add an extension?

“We don’t support that in prod.”

Need to adjust a schema?

“Please open a ticket, wait two weeks, and maybe we’ll consider it next quarter.”

Want to experiment with something like pgvector or TimescaleDB?

“Security review says no.”

At that point, Postgres stops being your all-in-one platform and starts acting like another choke point in the pipeline.
Not because it can’t do the job, but because you’re not allowed to make it do the job.

The reality is, in most locked-down environments:

  • Your DB access is read-only for half the things you need
  • Schema changes require a full change management process
  • Any “non-standard” feature is instantly suspect
  • Operations teams hold the keys to the castle, and they’re not keen on letting you inside

By the time you’re done navigating the approvals, you’ve lost the momentum (and sometimes the will) to build the thing you originally wanted.

And that’s why so many teams start looking for workarounds: lighter, more portable solutions that don’t require begging for permission just to try something new.

6. When not to use Postgres

I get it: Postgres is powerful, flexible, and rock-solid. But that doesn’t mean it’s always the right tool for the job. Sometimes, reaching for it is like bringing a full-size gaming rig just to check your email.

Here are a few times when it’s better to leave Postgres on the bench:

Portable dev setups → SQLite wins

If you need something quick, portable, and zero-config, SQLite is a no-brainer. No servers. No passwords. No ops tickets. Your database is just a file in your project folder. For dev tools, small apps, or embedded systems, SQLite is the “throw it in your backpack and go” database.

Simple caching → Redis wins

If all you need is a lightning-fast key-value store for caching, Redis will smoke anything you try to hack together in Postgres.
It’s fast, it’s simple, and it’s built for that exact use case.
Trying to force Postgres into being a cache is like trying to use Excel as a game engine: technically possible, but why would you?
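To be concrete, the “hack it together in Postgres” version usually means an UNLOGGED table (skips the WAL, so it’s faster but not crash-safe), and you still end up owning everything Redis gives you for free:

```sql
-- A hand-rolled cache table: faster than a normal table,
-- but its contents vanish on crash recovery.
CREATE UNLOGGED TABLE cache (
    key        text PRIMARY KEY,
    value      jsonb,
    expires_at timestamptz NOT NULL
);

-- No built-in TTL: you have to schedule eviction yourself...
DELETE FROM cache WHERE expires_at < now();
-- ...and every read/write still pays SQL parsing and MVCC
-- overhead, where Redis does GET/SET with expiry in-memory.
```

It works in a pinch, but it’s a lot of plumbing for something a cache does out of the box.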

Specialized search → Elastic or Meilisearch wins

Yes, Postgres has full-text search. No, it’s not going to outshine Elasticsearch or Meilisearch for large-scale, search-heavy workloads. These tools are built for indexing, ranking, and lightning-fast queries across massive datasets.
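For fairness, built-in FTS is genuinely fine at modest scale. A sketch with a hypothetical `articles` table:

```sql
-- Index once with a GIN index over the generated tsvector...
CREATE INDEX articles_fts ON articles
    USING gin (to_tsvector('english', title || ' ' || body));

-- ...then query with the matching expression:
SELECT id, title
FROM articles
WHERE to_tsvector('english', title || ' ' || body)
      @@ plainto_tsquery('english', 'postgres performance');
```

What you don’t get, compared to Elasticsearch or Meilisearch, is built-in typo tolerance, relevance tuning, and faceting at scale.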

The point isn’t to hate on Postgres; it’s to remind yourself that sometimes a smaller, more focused tool gets you there faster, without fighting your infrastructure.

7. The sweet spot: when Postgres is the right answer

All this talk about limitations doesn’t mean Postgres is only good for small side projects or local dev work. Far from it.
When you control the environment, Postgres can be an absolute monster in the best way possible.

The magic happens when:

  • You can install the extensions you need without begging for approval
  • Your ops team actually supports Postgres in production (and knows how to scale it)
  • You have stable, predictable infrastructure instead of “whatever’s free in the cluster”
  • Your database is treated like a critical service, not an afterthought

That’s when Postgres turns into a real platform, not just a database. You can go from analytics with TimescaleDB to AI-powered search with pgvector without juggling multiple systems.

I’ve seen it firsthand. In one project, scaling issues nearly broke everything: queries slowed to a crawl, replication lag piled up, and the ops team was on edge. The fix didn’t involve ditching Postgres. It involved upgrading, tuning, and enabling the right features. I wrote the whole story in “Scaling broke my stack, but PostgreSQL 18 showed me how to fix it” on medium.com, and the TL;DR was simple: Postgres works wonders when it’s given room to breathe.

In the right conditions, Postgres really can be your single source of truth for relational data, time-series data, geospatial queries, and more, without feeling like you’re pushing it uphill.

8. Conclusion: control beats capability

Postgres is an incredible database. It can handle analytics, geospatial data, time-series workloads, and even AI-driven search, all without breaking a sweat. But the real takeaway here? Capability doesn’t matter if you don’t control the environment.

In a side project or a well-run infrastructure, Postgres can absolutely replace a bunch of other tools and simplify your stack. In a locked-down corporate maze with policy roadblocks, unpredictable clusters, and extension bans, it’s just another bottleneck with a different name.

So next time someone says, “Let’s use Postgres for everything”, just smile and ask:

“Do we actually control it?”

Because if the answer is no, it might be time to pick the right tool for the job instead of forcing the Swiss Army knife to be a chainsaw.

Resources

  • You don’t need 20 tools. Just use Postgres (seriously!) (medium.com)
  • Scaling broke my stack, but PostgreSQL 18 showed me how to fix it (medium.com)


Top comments (1)

Lucas Moreau

Quick tip: run PgBouncer near your app with transaction pooling and keep Postgres max_connections low—it hides connection churn/latency and stabilizes performance in locked-down or K8s setups.