PostgreSQL is secure by default in many ways, but "secure by default" doesn't mean "secure enough for production." Most breaches happen not because PostgreSQL has vulnerabilities but because of misconfiguration: overly permissive access, weak authentication, unencrypted connections, or accounts with more privileges than they need.
This is a list of practical things to check and fix before your database handles real traffic. Some of these take five minutes. Others require more planning. All of them matter.
1. Switch to SCRAM-SHA-256 authentication
The default authentication method in older PostgreSQL setups is MD5. It's been considered weak for years and provides almost no real protection if someone captures the authentication traffic. SCRAM-SHA-256 is the modern replacement — it uses a proper challenge-response mechanism that never sends the actual password over the wire.
Check what you're currently using:
SHOW password_encryption;
If it returns md5, change it in postgresql.conf:
password_encryption = scram-sha-256
Then update pg_hba.conf to require SCRAM:
host all all 0.0.0.0/0 scram-sha-256
Existing passwords stay as MD5 hashes until users change them. You'll need to reset passwords for all accounts to actually enforce SCRAM. Do it for the application user first, then work through the rest.
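Resetting a password under the new setting re-hashes it; you can confirm which format each login role is stored in by inspecting pg_authid (requires superuser). The role name `myapp` below is a placeholder:

```sql
-- Re-hash the application user's password under the new setting
ALTER ROLE myapp WITH PASSWORD 'new-strong-password';

-- SCRAM hashes start with 'SCRAM-SHA-256$', legacy hashes with 'md5'
SELECT rolname, left(rolpassword, 14) AS hash_prefix
FROM pg_authid
WHERE rolcanlogin;
```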
2. Configure pg_hba.conf carefully
pg_hba.conf is the file that controls who can connect to your database, from where and how. It's one of the most important security controls in PostgreSQL and also one of the most commonly misconfigured.
The authentication methods available are:

| Method | Description | Use in production? |
|---|---|---|
| `trust` | No password required | Never |
| `password` | Plaintext password | No |
| `md5` | MD5-hashed password | Avoid |
| `scram-sha-256` | Secure challenge-response | Yes |
| `peer` | OS user match (Unix sockets) | Yes, for local admin access |
| `ident` | OS user via identd | Rarely |
| `reject` | Always deny | Yes, for explicit blocks |
The trust method is the most dangerous. It means anyone who can reach the socket or port is in. It's often left on for local connections like this:
local all all trust
That's fine on a developer laptop. On a server where multiple users or processes share the same OS, it's a serious problem. Replace it with peer for Unix sockets or scram-sha-256 for TCP connections.
Be specific with IP ranges. Allow only the hosts that actually need access:
host mydb myapp 10.0.1.50/32 scram-sha-256
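Reviewing pg_hba.conf by hand is error-prone, so a simple check in CI can catch weak methods before they reach production. This is a minimal sketch, not a full pg_hba parser: it assumes the auth method is the last field on each record, which holds for common layouts but not for records with trailing options like `clientcert=1`.

```python
# Minimal pg_hba.conf lint sketch: flag auth methods that
# shouldn't appear in production.
WEAK_METHODS = {"trust", "password", "md5"}

def weak_hba_lines(text: str) -> list[str]:
    """Return pg_hba.conf records that use a weak auth method."""
    flagged = []
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # skip blank lines and comments
        fields = stripped.split()
        # Assumes the method is the final field; records with
        # trailing options would need fuller parsing.
        if fields[-1] in WEAK_METHODS:
            flagged.append(stripped)
    return flagged

sample = """\
# TYPE  DATABASE  USER   ADDRESS        METHOD
local   all       all                   trust
host    mydb      myapp  10.0.1.50/32   scram-sha-256
"""
print(weak_hba_lines(sample))  # flags only the 'trust' record
```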
3. Use SSL for all remote connections
PostgreSQL supports SSL natively. Without it, credentials and query results travel in plaintext over the network. On a private network that might seem like a low risk, but enabling SSL costs almost nothing, so there's no reason to skip it.
Set these in postgresql.conf:
ssl = on
ssl_cert_file = 'server.crt'
ssl_key_file = 'server.key'
Then enforce SSL in pg_hba.conf by using hostssl instead of host:
hostssl all all 0.0.0.0/0 scram-sha-256
The hostssl record matches only SSL connections. Any attempt to connect without SSL gets rejected. You can also set ssl_min_protocol_version = 'TLSv1.2' to block older TLS versions.
If you're using a managed PostgreSQL service, SSL is usually already enabled. But check whether it's enforced — some providers allow non-SSL connections unless you explicitly require them.
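One quick way to verify: the pg_stat_ssl view shows whether each backend is using SSL. Checking your own connection looks like this:

```sql
-- Is the current connection encrypted, and with what?
SELECT ssl, version, cipher
FROM pg_stat_ssl
WHERE pid = pg_backend_pid();
```

If `ssl` comes back `f` on a connection you expected to be encrypted, the client or the pg_hba.conf rule isn't enforcing it.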
4. Follow the principle of least privilege
Most applications connect to PostgreSQL with way more permissions than they actually need. A read-heavy reporting app that connects as a superuser, or an API that connects with a role that can DROP TABLE — these are accidents waiting to happen.
Create dedicated roles with only what's required:
-- Read-only role
CREATE ROLE app_readonly;
GRANT CONNECT ON DATABASE mydb TO app_readonly;
GRANT USAGE ON SCHEMA public TO app_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO app_readonly;
-- Read-write role
CREATE ROLE app_readwrite;
GRANT CONNECT ON DATABASE mydb TO app_readwrite;
GRANT USAGE ON SCHEMA public TO app_readwrite;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_readwrite;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO app_readwrite;
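One caveat: `GRANT ... ON ALL TABLES` only covers tables that exist at the time you run it. To cover tables created later, also set default privileges. The `migrator` role below is a hypothetical role that runs schema migrations; adjust to whoever creates objects in your setup:

```sql
-- Extend the same grants to tables created in the future
-- by the (hypothetical) 'migrator' role.
ALTER DEFAULT PRIVILEGES FOR ROLE migrator IN SCHEMA public
    GRANT SELECT ON TABLES TO app_readonly;

ALTER DEFAULT PRIVILEGES FOR ROLE migrator IN SCHEMA public
    GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO app_readwrite;
```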
A few things to pay attention to:
- Never let your application connect as `postgres` or any superuser
- Don't grant `SUPERUSER` to roles that don't need it
- Use `REVOKE` to remove default `PUBLIC` schema permissions if you want tighter isolation
- Consider separate roles per microservice, not one shared role for everything
The table below shows common privilege mistakes and safer alternatives:
| Mistake | Better approach |
|---|---|
| App connects as `postgres` | Create a dedicated app role with minimal permissions |
| Single role for all services | One role per service with only what that service needs |
| `GRANT ALL ON DATABASE` | Grant only `CONNECT`, then schema-level permissions separately |
| Schema accessible by public | `REVOKE CREATE ON SCHEMA public FROM PUBLIC` |
Least privilege doesn't prevent all attacks, but it limits the blast radius when something does go wrong.
5. Restrict network-level access
PostgreSQL by default listens on localhost only. That's good. But it's common to open it up to accept external connections by setting listen_addresses = '*', and then forget about it.
Think carefully about what actually needs network access:
- If your app and database are on the same server, use Unix sockets only
- If they're on separate hosts, restrict `listen_addresses` to the specific network interface and use firewall rules to limit which IPs can reach port 5432
- Never expose PostgreSQL directly to the public internet
Even on private networks, a firewall rule at the OS or cloud security group level adds a useful layer. Something like:
# Allow only the app server
iptables -A INPUT -p tcp --dport 5432 -s 10.0.1.50 -j ACCEPT
iptables -A INPUT -p tcp --dport 5432 -j DROP
Port scanning is trivial. If your database port is visible to the internet, you will get probed. Even if authentication is solid, there's no reason to advertise that a database exists.
6. Enable and monitor audit logging with pgAudit
PostgreSQL's default logging can tell you that a query ran and how long it took. It doesn't tell you who changed what data, which is what you need for security audits and compliance.
pgAudit is an extension that adds detailed audit logging. It hooks into PostgreSQL's executor and logs session-level or object-level activity. To install it:
CREATE EXTENSION pgaudit;
Then configure it in postgresql.conf:
pgaudit.log = 'write, ddl, role'
The logging categories are:
- `read` — SELECT and COPY FROM
- `write` — INSERT, UPDATE, DELETE, TRUNCATE, COPY TO
- `function` — function calls
- `role` — GRANT, REVOKE, CREATE/DROP/ALTER ROLE
- `ddl` — CREATE, DROP, ALTER (anything that changes schema)
- `misc` — DISCARD, FETCH, CHECKPOINT, etc.
For most production databases, logging write, ddl and role is a reasonable starting point. It captures data changes and schema modifications without flooding your logs with SELECT noise.
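A slightly fuller configuration sketch: the pgAudit settings below exist, but the values are just a starting point to tune for your own log volume and sensitivity requirements:

```
pgaudit.log = 'write, ddl, role'
pgaudit.log_catalog = off    # skip statements that touch only system catalogs
pgaudit.log_relation = on    # one audit entry per relation referenced
pgaudit.log_parameter = on   # include bind parameters (may log sensitive values)
```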
Without some form of audit logging, you often can't answer basic questions after an incident: what was deleted, when and by which account. That matters both for debugging and for regulatory requirements.
7. Keep PostgreSQL patched
PostgreSQL releases patch versions regularly. These aren't feature releases — they fix bugs including security vulnerabilities. Running an unpatched version is one of the most common and avoidable risks.
The version numbering is straightforward: 16.2 means major version 16, patch release 2. Patches within the same major version are safe to apply with just a service restart. No dump and restore needed.
The PostgreSQL project maintains each major version for 5 years. After that, no more patches. If you're on PostgreSQL 11 or 12, you're already past or near the end of support. Plan a migration.
Check your current version:
SELECT version();
Compare it against the latest patch releases at postgresql.org/support/versioning. If you're more than one patch behind, schedule an update.
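For automated checks, `SHOW server_version_num` returns a single integer that is easier to compare than the full version string. On PostgreSQL 10 and later the scheme is major times 10000 plus patch, so decoding it is trivial:

```python
# Decode server_version_num (e.g. from SHOW server_version_num)
# into (major, patch). Valid for PostgreSQL 10+, where the
# scheme is major * 10000 + patch.
def decode_version_num(num: int) -> tuple[int, int]:
    return num // 10000, num % 10000

print(decode_version_num(160002))  # (16, 2)
```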
8. Protect backups and test restores
A database with excellent access controls but unencrypted, unprotected backups gives attackers an easier target. Backups often end up in object storage, local disks or shared network paths with more permissive access than the database itself.
A few things to check:
- Are backup files encrypted? If not, anyone with storage access can read your data
- Who has access to the storage bucket or directory?
- Are backups being taken at all? And recently?
- Have you actually tested a restore?
The last one is underrated. Backups you've never tested are just files that might work. A dedicated PostgreSQL backup tool like Databasus can handle this end-to-end: it supports encryption, scheduled backups to S3, Google Drive, FTP and other storages, retention policies and failure notifications, so you're not relying on scripts you wrote and forgot about.
Even without a dedicated tool, make restore testing a regular habit. Pick a backup, restore it to a staging environment, verify the data. Do it at least quarterly.
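A restore drill can be as simple as the sketch below. Hostnames, role names, database names and the table being spot-checked are all placeholders for your environment:

```shell
# Take a custom-format dump from production
pg_dump -Fc -h prod-db -U backup_user -f /backups/mydb.dump mydb

# Restore it into a scratch database on a staging host
createdb -h staging-db -U admin mydb_restore_test
pg_restore -h staging-db -U admin -d mydb_restore_test /backups/mydb.dump

# Spot-check that the data actually came back
psql -h staging-db -U admin -d mydb_restore_test \
     -c "SELECT count(*) FROM important_table;"
```

Custom format (`-Fc`) is worth preferring over plain SQL dumps here because pg_restore can then restore selectively and in parallel.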
Putting it together
Security hardening is not a one-time checklist. Configuration drift, new services connecting to the database, dependency updates — things change. Set a reminder to review your setup periodically.
The biggest wins come from the basics: strong authentication, proper privileges, SSL and network restrictions. Get those right and you've addressed most of the real-world attack surface. The rest — audit logging, patching, backup hygiene — builds on top of that foundation.
Start with what you can fix today. Most of these don't require downtime.