PostgreSQL is a powerful, production-grade relational database, but even the best systems encounter limits. One of the more frustrating and cryptic issues you might face in high-concurrency or large-data workloads is the dreaded:
FATAL: could not open file: Too many open files
This guide walks you through why the “Too Many Open Files” error happens in PostgreSQL on Linux systems, how to diagnose it, and how to fix it.
Understanding the Issue
PostgreSQL uses file descriptors for a variety of internal operations:
- Opening tables, indexes, and system catalogs
- Handling client connections via sockets
- Managing WAL (Write-Ahead Log) files
- Working with temporary files for sorting or hashing
When a PostgreSQL process hits the file descriptor limit imposed by the operating system or by its own configuration, you get errors like:
FATAL: could not open file "base/16384/2600": Too many open files
This is a resource utilization issue, not a bug.
Step 1: Verify the Current Limits
OS-Level File Descriptor Limit
Check the shell limit for open files:
ulimit -n
For PostgreSQL’s active process:
cat /proc/$(pidof postgres | awk '{print $1}')/limits | grep "Max open files"
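If you want to see the soft and hard limits separately, or inspect a running process without digging through /proc, something like the following should also work on most distributions (assuming bash and the util-linux prlimit tool are available):
ulimit -Sn    # soft limit for the current shell
ulimit -Hn    # hard limit for the current shell
sudo prlimit --pid $(pidof postgres | awk '{print $1}') --nofile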
PostgreSQL Internal Limit
Check the database-side maximum files per backend:
SHOW max_files_per_process;
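For a bit more context, such as whether changing the value requires a restart, a query against the pg_settings view like this one should work:
-- context tells you whether the parameter needs a restart ('postmaster') or just a reload
SELECT name, setting, boot_val, context
FROM pg_settings
WHERE name = 'max_files_per_process';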
Step 2: Increase File Descriptor Limits
If the current limits are too low (commonly 1024), it’s time to raise them.
A. Modify System-Wide Limits (Linux)
Edit /etc/security/limits.conf:
postgres soft nofile 4096
postgres hard nofile 65535
Ensure PAM uses these settings:
echo "session required pam_limits.so" | sudo tee -a /etc/pam.d/common-session
B. Configure systemd for PostgreSQL
If PostgreSQL runs as a systemd service (the default on modern Linux distros):
sudo systemctl edit postgresql
Add or modify the override file:
[Service]
LimitNOFILE=65535
Reload systemd and restart PostgreSQL:
sudo systemctl daemon-reload
sudo systemctl restart postgresql
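To confirm the override actually took effect, you can ask systemd directly (note that the unit name varies by distribution, e.g. postgresql@15-main on Debian/Ubuntu):
systemctl show postgresql --property=LimitNOFILE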
Step 3: Tune PostgreSQL Settings
Update the following parameters in postgresql.conf if applicable:
max_files_per_process = 4096
max_connections = 200
Note: A higher max_connections leads to more simultaneous file handles. Use it with care and monitor performance.
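If you prefer not to edit postgresql.conf by hand, roughly the same change can be made from psql with ALTER SYSTEM; the values are written to postgresql.auto.conf and both parameters still require a restart to take effect:
-- written to postgresql.auto.conf; applied only after a restart
ALTER SYSTEM SET max_files_per_process = 4096;
ALTER SYSTEM SET max_connections = 200;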
After changes, restart the PostgreSQL service:
sudo systemctl restart postgresql
Step 4: Analyze and Monitor Open File Usage
Use lsof to examine open files:
sudo lsof -u postgres | wc -l
Or target a specific PostgreSQL process:
sudo lsof -p $(pidof postgres | awk '{print $1}')
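Because lsof also lists things like memory-mapped files, its totals can look inflated. As a cross-check, counting the actual descriptors under /proc for each PostgreSQL process is a useful sketch (assuming bash and sudo):
for pid in $(pidof postgres); do
  echo "$pid: $(sudo ls /proc/$pid/fd | wc -l) open descriptors"
done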
Check for excessive use of:
- Temp files in /tmp or the PostgreSQL temp directories (see the query sketch after this list)
- Frequent or long-running queries
- Poorly tuned connection pools
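To see where temporary-file pressure is coming from, the cumulative counters in pg_stat_database are a reasonable starting point (they reset only when statistics are reset):
-- cumulative temp file usage per database since the last stats reset
SELECT datname, temp_files, temp_bytes
FROM pg_stat_database
ORDER BY temp_bytes DESC;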
Step 5: Best Practices to Prevent Recurrence
- Use a connection pooler (e.g., PgBouncer) to reduce concurrent connections
- Analyze slow queries to prevent unnecessary temp file usage
- Audit extensions or custom code that might open too many handles
- Monitor file usage via pg_stat_activity and pg_stat_file (see the query after this list)
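For the monitoring point above, a simple query that groups current connections by database and state via pg_stat_activity might look like this:
-- current connections grouped by database and state (active, idle, idle in transaction, ...)
SELECT datname, state, count(*) AS connections
FROM pg_stat_activity
GROUP BY datname, state
ORDER BY connections DESC;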
Summary
- Check file limits: ulimit -n, cat /proc/*/limits
- Raise limits in Linux: /etc/security/limits.conf
- Adjust systemd settings: LimitNOFILE=65535
- PostgreSQL file limit: max_files_per_process = 4096
- Restart PostgreSQL: sudo systemctl restart postgresql
Final Thought
The “Too many open files” error is PostgreSQL telling you it needs more breathing room. With just a few configuration updates on both the OS and database sides, you can lift this ceiling and allow PostgreSQL to scale comfortably with your workload.
Got thousands of connections or huge data volume? Consider connection pooling and query optimization as your next frontier.