The Problem
Running a test suite on an older CI machine with slow disks revealed PostgreSQL as a major bottleneck. Each test run was taking over 1 hour to complete. The culprit? Tests performing numerous database operations, with TRUNCATE commands cleaning up data after each test.
With slow disk I/O, PostgreSQL was spending most of its time syncing data to disk - operations that were completely unnecessary in an ephemeral CI environment where data persistence doesn't matter.
Catching PostgreSQL in the Act
Running `top` during test execution revealed the smoking gun:
```
242503 postgres 20 0 184592 49420 39944 R 81.7 0.3 0:15.66 postgres: postgres api_test 10.89.5.6(43216) TRUNCATE TABLE
```
PostgreSQL was consuming 81.7% CPU just to truncate a table! This single TRUNCATE operation ran for over 15 seconds. On a machine with slow disks, PostgreSQL was spending enormous amounts of time on fsync operations, waiting for the kernel to confirm data was written to physical storage - even though we were just emptying tables between tests.
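If you want to verify where the time is going on your own runner, PostgreSQL's `pg_stat_activity` view exposes per-backend wait events. A minimal sketch as a pipeline step, assuming you can reach the database with the credentials from the CI config shown later in this post:

```yaml
steps:
  show-waits:
    image: postgres:16.11-alpine
    environment:
      PGPASSWORD: dbpgpassword
    commands:
      # Wait events such as WALSync or DataFileSync mean the backend is
      # sitting in fsync(), not burning CPU on real work
      - psql -h postgres -U postgres -d api_test -c "SELECT pid, wait_event_type, wait_event, state FROM pg_stat_activity;"
```

Run this while the test suite is hammering the database and the IO-type wait events make the fsync stalls visible.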
The Solution
Three simple PostgreSQL configuration tweaks made a dramatic difference:
```yaml
services:
  postgres:
    image: postgres:16.11-alpine
    environment:
      POSTGRES_INITDB_ARGS: "--nosync"
      POSTGRES_SHARED_BUFFERS: 256MB
    tmpfs:
      - /var/lib/postgresql/data:size=1g
```
1. The `--nosync` Flag
The `--nosync` flag tells `initdb` to skip `fsync()` calls during database initialization. In a CI environment, we don't care about data durability - if the container crashes, we'll just start over. This eliminates the expensive disk sync operations that were slowing down database setup.
2. Increased Shared Buffers
Setting `POSTGRES_SHARED_BUFFERS: 256MB` (up from the default ~128MB) gives PostgreSQL more memory to cache frequently accessed data. This is especially helpful when running tests that repeatedly access the same tables.
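To confirm the setting actually reached the server, you can ask PostgreSQL directly. A small sketch as a Woodpecker step, assuming the service is reachable under the hostname `postgres` as in the full config below:

```yaml
steps:
  check-shared-buffers:
    image: postgres:16.11-alpine
    environment:
      PGPASSWORD: dbpgpassword
    commands:
      # Should print 256MB if POSTGRES_SHARED_BUFFERS was picked up
      - psql -h postgres -U postgres -d api_test -c "SHOW shared_buffers;"
```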
3. tmpfs for Data Directory (The Game Changer)
The biggest performance win came from mounting PostgreSQL's data directory on `tmpfs` - an in-memory filesystem. This completely eliminates disk I/O for database operations:

```yaml
tmpfs:
  - /var/lib/postgresql/data:size=1g
```
With tmpfs, all database operations happen in RAM. This is especially impactful for:
- TRUNCATE operations - instant cleanup between tests
- Index updates - no disk seeks required
- WAL (Write-Ahead Log) writes - purely memory operations
- Checkpoint operations - no waiting for disk flushes
The 1GB size limit is generous for most test databases. Adjust based on your test data volume.
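If you're unsure how big your test database actually gets, one option is to ask PostgreSQL at the end of a run and size the tmpfs from that. A sketch using the same service credentials:

```yaml
steps:
  report-db-size:
    image: postgres:16.11-alpine
    environment:
      PGPASSWORD: dbpgpassword
    commands:
      # Prints the current size of the test database - a lower bound for
      # the tmpfs limit (WAL and temp files are not included, so leave headroom)
      - psql -h postgres -U postgres -d api_test -c "SELECT pg_size_pretty(pg_database_size('api_test'));"
```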
The Impact
- Before: ~60 minutes per test run
- After: ~10 minutes per test run
- Improvement: 6x faster!
Real Test Performance Examples
You should have seen my surprise when I first saw a single test taking 30 seconds in containers.
I knew something was terribly wrong. But when I applied the in-memory optimization and
saw the numbers drop to what you'd expect on a normal machine - I literally got tears in my eyes.
Before tmpfs optimization:

```
API::FilamentSupplierAssortmentsTest#test_create_validation_negative_price = 25.536s
API::FilamentSupplierAssortmentsTest#test_list_with_a_single_assortment = 29.996s
API::FilamentSupplierAssortmentsTest#test_list_missing_token = 25.952s
```
Each test was taking 25-30 seconds even though the actual test logic was minimal!
Most of this time was spent waiting for PostgreSQL to sync data to disk.
After tmpfs optimization:

```
API::FilamentSupplierAssortmentsTest#test_list_as_uber_curator = 0.474s
API::FilamentSupplierAssortmentsTest#test_list_as_assistant = 0.466s
API::FilamentSupplierAssortmentsTest#test_for_pressman_without_filament_supplier = 0.420s
```
These same tests now complete in 0.4-0.5 seconds - a 50-60x improvement per test!
Where the Time Was Going
The biggest gains came from reducing disk I/O during:
- TRUNCATE operations between tests - PostgreSQL was syncing empty table states to disk
- Database initialization at the start of each CI run
- INSERT operations during test setup - creating test fixtures (users, roles, ...)
- Transaction commits - each test runs in a transaction that gets rolled back
- Frequent small writes during test execution
With slow disks, even simple operations like creating a test user or truncating a table would take seconds instead of milliseconds. The top output above shows a single TRUNCATE TABLE operation taking 15+ seconds and consuming 81.7% CPU - most of that was PostgreSQL waiting for disk I/O. Multiply that across hundreds of tests, and you get hour-long CI runs.
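You can reproduce this kind of measurement with psql's `\timing` if you're curious. A sketch - `some_table` is a placeholder for one of your real test tables:

```yaml
steps:
  time-truncate:
    image: postgres:16.11-alpine
    environment:
      PGPASSWORD: dbpgpassword
    commands:
      # \timing prints "Time: ... ms" after each statement; on slow disks
      # without tmpfs, expect seconds rather than milliseconds here
      - printf '\\timing on\nTRUNCATE some_table;\n' | psql -h postgres -U postgres -d api_test
```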
The Math
- 24 tests in this file alone
- Before: ~27 seconds average per test = ~648 seconds (10.8 minutes) for one test file
- After: ~0.45 seconds average per test = ~11 seconds for the same file
- Per-file speedup: 59x faster!
With dozens of test files, the cumulative time savings are massive.
Why This Works for CI
In production, you absolutely want fsync() enabled and conservative settings to ensure data durability. But in CI:
- Data is ephemeral - containers are destroyed after each run
- Speed matters more than durability - faster feedback loops improve developer productivity
- Disk I/O is often the bottleneck - especially on older/slower CI machines
By telling PostgreSQL "don't worry about crashes, we don't need this data forever," we eliminated unnecessary overhead.
Key Takeaways
- Profile your CI pipeline - we discovered disk I/O was the bottleneck, not CPU or memory
- CI databases don't need production settings - optimize for speed, not durability
- `tmpfs` is the ultimate disk I/O eliminator - everything in RAM means zero disk bottleneck
- Small configuration changes can have big impacts - three settings saved us 50 minutes per run
- Consider your hardware - these optimizations were especially important on older machines with slow disks
- Watch your memory usage - `tmpfs` consumes RAM; ensure your CI runners have enough (1GB+ for the database)
Implementation in Woodpecker CI
Here's our complete PostgreSQL service configuration:
```yaml
services:
  postgres:
    image: postgres:16.11-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: dbpgpassword
      POSTGRES_DB: api_test
      POSTGRES_INITDB_ARGS: "--nosync"
      POSTGRES_SHARED_BUFFERS: 256MB
    ports:
      - 5432
    tmpfs:
      - /var/lib/postgresql/data:size=1g
```
Note: The `tmpfs` field is officially supported in Woodpecker CI's backend (defined in `pipeline/backend/types/step.go`). If you see schema validation warnings, they may be from outdated documentation - the feature works perfectly.
Lucky us! Not all CI platforms support `tmpfs` configuration this easily. Woodpecker CI makes it trivial with native Docker support - just add a `tmpfs:` field and you're done. If you're on GitHub Actions, GitLab CI, or other platforms, you might need workarounds like `docker run` with `--tmpfs` flags or custom runner configurations.
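On GitHub Actions, for example, a service container's `options` field is handed to `docker create`, so a tmpfs mount can be smuggled in that way. An untested sketch, not something from this post's setup:

```yaml
# GitHub Actions sketch - assumes `options` is forwarded to `docker create`
services:
  postgres:
    image: postgres:16.11-alpine
    env:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: dbpgpassword
      POSTGRES_DB: api_test
    options: --tmpfs /var/lib/postgresql/data:size=1g
```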
Simple, effective, and no code changes required - just smarter configuration for the CI environment.
Why Not Just Tune PostgreSQL Settings Instead of tmpfs?
TL;DR: I tried. tmpfs is still faster AND simpler.
After seeing the dramatic improvements with tmpfs, I wondered: "Could we achieve similar performance by aggressively tuning PostgreSQL settings instead?" This would be useful for environments where tmpfs isn't available or RAM is limited.
Testing Aggressive Disk-Based Tuning
I experimented with disabling all the durability features:
```yaml
services:
  postgres:
    command:
      - postgres
      - -c
      - fsync=off                # Skip forced disk syncs
      - -c
      - synchronous_commit=off   # Async WAL writes
      - -c
      - wal_level=minimal        # Minimal WAL overhead
      - -c
      - full_page_writes=off     # Less WAL volume
      - -c
      - autovacuum=off           # No background vacuum
      - -c
      - max_wal_size=1GB         # Fewer checkpoints
      - -c
      - shared_buffers=256MB     # More memory cache
```
The Results: tmpfs Still Wins
Even with all these aggressive settings, tmpfs was still faster.
Disk-based (even with `fsync=off`):

- ❌ File system overhead - ext4/xfs metadata operations
- ❌ Disk seeks - mechanical latency on HDDs, limited IOPS on SSDs
- ❌ Kernel buffer cache - memory copies between user/kernel space
- ❌ Docker overlay2 - additional storage driver overhead
- ❌ Complexity - 7+ settings to manage and understand

tmpfs-based:

- ✅ Pure RAM operations - no physical storage involved
- ✅ Zero disk I/O - everything happens in memory
- ✅ Simple configuration - just one `tmpfs` line
- ✅ Maximum performance - nothing faster than RAM
Bonus: Other PostgreSQL CI Optimizations to Consider
If you're still looking for more speed improvements:
- Disable query logging - reduces I/O overhead:

```yaml
command:
  - postgres
  - -c
  - log_statement=none              # Don't log any statements
  - -c
  - log_min_duration_statement=-1   # Don't log slow queries
```

- Use `fsync=off` in postgresql.conf - similar to `--nosync` but for runtime (redundant with `tmpfs`)
- Increase `work_mem` - helps with complex queries in tests
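Combined with the flags from the tuning experiment above, that might look like this (the `work_mem` value is purely illustrative - tune it to your test queries):

```yaml
services:
  postgres:
    command:
      - postgres
      - -c
      - log_statement=none
      - -c
      - log_min_duration_statement=-1
      - -c
      - work_mem=64MB   # illustrative - the default is 4MB
```

Keep in mind that `work_mem` is allocated per sort or hash operation, so modest values go a long way.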

