Spencer Pauly

Why is Anthropic's archived Postgres MCP server still getting 312k installs a month?

Last month, @modelcontextprotocol/server-postgres was downloaded 312,391 times. The month before that, 301,440. In November 2025 it was 100,059. Installs have roughly tripled in five months and they're still climbing.

The package is archived.

Anthropic moved it into a repo called servers-archived and told you, in big letters, not to use it in production. The npm command still works. The dev community keeps reaching for it.

So who's installing it, and why?

What the server is

If you've never used it: @modelcontextprotocol/server-postgres is a small Model Context Protocol server that lets a model (through Claude Desktop, Cursor, Windsurf, or any other client that speaks MCP) read your Postgres database. You give it a connection string. It exposes one tool called query that runs read-only SQL, plus a resource for each table's schema so the model knows what columns exist.

It's the official one. Anthropic published it. It's the first hit on Google for "postgres mcp server" and the first answer ChatGPT will give you if you ask. Wire it into your claude_desktop_config.json and you can ask Claude "what were our top 10 customers by revenue last month" and get an actual answer pulled from your actual database.
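If you haven't seen that wiring, it's a few lines of JSON. The connection string below is a placeholder; the rest is the standard MCP server entry:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://user:password@localhost:5432/mydb"
      ]
    }
  }
}
```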

The use case is obvious. Devs have spent the last year hooking agents up to everything else they own: Slack, GitHub, Linear, Datadog. The database is the most useful thing left to connect, and this is the ready-made way to do it.

Why it got archived

In early 2025, Anthropic split their reference server repo. The actively maintained servers repo now contains seven servers: everything, fetch, filesystem, git, memory, sequentialthinking, time. Every one of them is either a protocol demo or a local-only utility.

The other 14 servers — Postgres, GitHub, GitLab, Slack, Sentry, Google Drive, Redis, Puppeteer, and the rest — got moved into servers-archived with a single notice at the top:

NO SECURITY GUARANTEES ARE PROVIDED FOR THESE ARCHIVED SERVERS.
These servers are no longer maintained. No security updates or bug fixes will be provided. Use at your own risk.

There's no specific "we archived Postgres because X" statement. But the pattern is clear when you compare the two lists. Every server that survived is something with no real security surface — it talks to your local filesystem or just demos a protocol feature. Every server that got archived is something that talks to a third-party system with credentials.

The honest read: if a reference server has a real security surface, the team would rather not own it. That's a defensible call. The awkward part is that Anthropic told users to "refer to the actively maintained servers repository" for production use, and the actively maintained repo has nothing that replaces the archived ones. The official recommendation is "use the maintained version." There is no maintained version. The implicit answer is "find something in the community."

Why it's still getting installed

Three reasons, all reasonable.

Connecting an agent to your database is critical. Devs are wiring AI agents to their databases right now. Every team has the same workflow problem: the model is in a chat thread, you need data to debug something, you tab over to your DB client, copy a row, paste it back. After the third time, you'll wire up anything that skips that step, and the path of least resistance is the official-looking package on npm.

The disclaimer is in the wrong place. When you run npx @modelcontextprotocol/server-postgres, the command works. The npm page doesn't say "archived." It says "PostgreSQL." You have to navigate to the GitHub repo to find the warning, and the repo is modelcontextprotocol/servers-archived, a different name and a different URL from the one most tutorials linked to a year ago. If you came in via a YouTube walkthrough or an LLM's suggestion, you may never visit the source at all.

It works. For demos, prototypes, "let me see if this is even useful" experiments, the server does what it says. Read-only queries against a small dev database are fine. The trouble starts when "fine for a demo" turns into "this is now in front of a production database," which is exactly the bridge a lot of teams quietly cross.

So 312k installs a month isn't 312k people making a security mistake. It's a mix of demos, CI runs, fresh installs in new projects, and a slowly growing group of teams running it against something real because they didn't know there was a better option.

A look under the hood

The whole thing is 130 lines of TypeScript. The query handler looks like this:

await client.query("BEGIN TRANSACTION READ ONLY");
const result = await client.query(sql);
// ...
client.query("ROLLBACK").catch(...);
Enter fullscreen mode Exit fullscreen mode

That's the security model. A READ ONLY Postgres transaction stops writes, so the agent can't DROP TABLE. It doesn't stop anything else. There's no statement timeout, so a query like SELECT pg_sleep(3600) ties up a connection for an hour. No row cap either, so SELECT * FROM events returns the whole table as a JSON blob and tries to fit it into the model's context window. And no allow-list of tables or block-list of columns, so if your users table has an api_key or a stripe_secret, the model can read it and pass it on to whatever third-party model provider it's running against.
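None of those guards would take much code. As a rough sketch (nothing the package ships, with limits made up for illustration), a bounded version of the same handler could look like this:

```typescript
// Hypothetical hardened handler body; `client` and `sql` as in the original.
await client.query("BEGIN TRANSACTION READ ONLY");
// SET LOCAL scopes the setting to this transaction: the query is killed
// after 5 seconds instead of holding the connection for an hour.
await client.query("SET LOCAL statement_timeout = '5s'");
const result = await client.query(sql);
// Bound what lands in the model's context window.
const rows = result.rows.slice(0, 500);
await client.query("ROLLBACK");
```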

Schema introspection is two columns: column_name and data_type. No primary keys, no foreign keys, no indexes, no enums, no COMMENT ON COLUMN text. The information_schema query that fetches columns also forgets to filter on table_schema, so if you have a table called users in two schemas, the model gets the columns from both merged together. The discovery query hard-codes WHERE table_schema = 'public', so any table outside public is invisible.
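The missing filter is a one-predicate fix. Here's a sketch of a schema-qualified version of that introspection query (schemaName and tableName are hypothetical variables, not the package's):

```typescript
// Qualify by schema so same-named tables in different schemas
// don't get merged into one column list.
const result = await client.query(
  `SELECT column_name, data_type
     FROM information_schema.columns
    WHERE table_schema = $1 AND table_name = $2
    ORDER BY ordinal_position`,
  [schemaName, tableName]
);
```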

Errors are thrown out of the handler instead of returned as tool results, so the model sees an opaque JSON-RPC error rather than the Postgres message it could use to fix its own query. The inputSchema doesn't mark sql as required, so the agent can call query({}) and crash the server. The connection string is passed as a command-line argument, so anyone who can run ps on the host can read the password.
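For contrast, MCP tool results carry an isError flag for exactly this case. A minimal sketch of returning the failure to the model instead of throwing:

```typescript
import type { Client } from "pg";

// Sketch: surface the Postgres error as a tool result so the agent
// can read the message and retry with corrected SQL.
async function runQuery(client: Client, sql: string) {
  try {
    const result = await client.query(sql);
    return {
      content: [{ type: "text", text: JSON.stringify(result.rows, null, 2) }],
    };
  } catch (error) {
    return {
      content: [{ type: "text", text: `Query failed: ${(error as Error).message}` }],
      isError: true,
    };
  }
}
```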

None of this is interesting on its own. Together it tells you what the file is: a demonstration of how to build an MCP server. Not a production database gateway. Anthropic was right to treat it that way.

The alternative: QueryBear

I'm not a neutral observer here, because I spent the last year building QueryBear for this exact gap. It's the read-only SQL gateway I wanted to exist when I first hooked an agent to my own database, and it ships with the things the official server doesn't.

Real read-only enforcement. Every query is parsed before it runs. Anything that isn't a SELECT (or a WITH resolving to one) is rejected, multi-statement input included. The transaction wrapper is still there as a second line of defense; the parser is the primary one. There's a sketch of the idea after this list.

Table allow-lists and column block-lists. You decide which tables an agent can read and which columns are off-limits. users.password_hash and customers.stripe_secret stay invisible unless you opt them in.

Statement timeouts and row caps. Configurable per connection. A bad query fails fast and returns a bounded result, so no more accidental full-table scans landing in the model's context window.

Schema introspection that includes the real schema. Foreign keys, indexes, enums, column comments, multi-schema. The model writes better SQL when it has more to work with, and your COMMENT ON COLUMN documentation is finally useful to something.

Errors as tool results. When a query fails, the error message comes back to the model in the tool response. The agent can fix its query and try again. The loop actually closes.

Audit log. Every query the agent ran, with timestamp, agent identity, and connection. If something goes sideways you can answer "what ran?" instead of guessing.

Encrypted credentials at rest. Connection strings are encrypted in storage, not sitting in process.argv and not visible to ps.

Drop-in MCP server. QueryBear runs as an MCP server, so it slots into the same claude_desktop_config.json spot the official one occupies. Same shape, different security model.
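For a feel of the parse-first idea from the first item, here's a miniature version. This is an illustrative sketch, not QueryBear's source, and it assumes the AST shape of the pgsql-ast-parser package:

```typescript
import { parse } from "pgsql-ast-parser";

// Reject anything that isn't a single SELECT, or a WITH whose
// final statement is a SELECT. parse() throws on invalid SQL.
function assertReadOnly(sql: string): void {
  const statements = parse(sql);
  if (statements.length !== 1) {
    throw new Error("multi-statement input rejected");
  }
  const stmt = statements[0];
  if (stmt.type === "select") return;
  if (stmt.type === "with" && stmt.in.type === "select") return;
  throw new Error(`statement type "${stmt.type}" rejected: SELECT only`);
}
```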

That's the trade. The official server is a 130-line demo. QueryBear is a service built to do this job for real.

Back to the question

So why does an archived MCP server keep growing? Because the need is real, the disclaimer is invisible, and the official-looking thing on npm works well enough to feel fine. None of that is anyone's fault. It's just the default when a reference implementation gets adopted as production infrastructure.

If you're using @modelcontextprotocol/server-postgres against your dev database for tinkering, carry on. If you're using it against anything that holds real customer data, you should know what you're using — and that you don't have to.
