When Bitnami announced they would stop providing updates to their open-source charts and container images, many enterprises faced a critical decision. At Harness, we provide a Self-Managed Enterprise Edition (SMP) as an on-premises solution for enterprises with strict security and regulatory requirements. We couldn't afford breaking changes—every customer has different overrides, and they all needed to keep working.
This is how we migrated from Bitnami's PostgreSQL chart to our own, using official PostgreSQL images (with RapidFort support) without breaking a single customer deployment.
Helm Charts available here - https://github.com/bansalgokul/open-helm-charts
The Challenge: Enterprise Migration Without Breaking Changes
Creating a simple PostgreSQL Helm chart? That's about an hour of work. But migrating an enterprise-grade deployment off Bitnami while preserving all existing functionality? That's a completely different challenge.
The constraints:
- No breaking changes: Every customer's existing Helm values must continue working
- No custom Docker images: We wanted to use official PostgreSQL images to avoid maintenance overhead
- Full feature parity: All Bitnami-supported features (TLS, LDAP, replication, audit) must work
- Enterprise-grade: Backup jobs, proper probes, configuration management
Bitnami's approach relies heavily on custom Docker images with scripts that parse environment variables and update configuration files. We needed to achieve the same result using standard PostgreSQL images and Helm templates alone.
Why We Migrated
When Bitnami announced their transition to Bitnami Secure Images, enterprises had limited options:
- Buy Bitnami Secure: Continue operating without issues, but at significant cost
- Build your own charts: Requires upfront investment, but provides independence
We chose option 2. The benefits were clear:
- Independence: No dependency on Bitnami's roadmap or pricing
- Security: Support for both official PostgreSQL images and RapidFort's hardened variants
- Standardization: Align with upstream PostgreSQL, not vendor-specific implementations
- Portability: Eliminate vendor lock-in and custom image maintenance overhead
The POC: Proving It Was Possible
I started with a proof-of-concept. The approach seemed feasible:
- Clone Bitnami's charts as a starting point
- Extend ConfigMaps with helper functions to parse old environment variables directly in Helm templates (no Docker image scripts needed)
- Create backup job templates in the chart for backup functionality
- Replace Bitnami's file-based probes with `pg_isready`
- Move configuration to ConfigMaps instead of environment-variable parsing
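The probe swap can be sketched as follows (a minimal illustration; the user, host, port, and timing values are assumptions, not the exact chart defaults):

```yaml
# Illustrative readiness/liveness probes using pg_isready instead of
# Bitnami's file-based checks. User, host, and port are assumptions.
readinessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - pg_isready -U postgres -h 127.0.0.1 -p 5432
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - pg_isready -U postgres -h 127.0.0.1 -p 5432
  initialDelaySeconds: 30
  periodSeconds: 10
```

Because `pg_isready` ships with every official PostgreSQL image, this works without any Bitnami-specific scripts in the container.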
The POC worked. But then I hit the blocker.
The Blocker: Configuration File Mounting
Here's where things got interesting. Bitnami stores configuration files in a separate directory using volume mounts, then uses scripts in their Docker images to:
- Copy config files to PostgreSQL's required directory
- Parse environment variables
- Generate configuration lines
- Append them to `postgresql.conf`
For our use case, I needed to:
- Create ConfigMaps with the configuration
- Mount them directly to PostgreSQL's data directory
The problem:
- During upgrades: This works perfectly. The database is already initialized, so PostgreSQL's `initdb` scripts don't run. Config files mount and work as expected.
- During fresh installs: PostgreSQL runs `initdb`, which requires the data directory to be empty. But since we're mounting config files, the directory isn't empty, and the install fails.
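To make the failure concrete, the direct mount looked roughly like this (illustrative paths and volume names, not the exact chart spec):

```yaml
# Mounting the rendered config straight into the data directory.
# Upgrades work (the directory is already initialized), but on a fresh
# install initdb aborts because the data directory is no longer empty.
volumeMounts:
  - name: data
    mountPath: /bitnami/postgresql/data
  - name: config
    mountPath: /bitnami/postgresql/data/postgresql.conf
    subPath: postgresql.conf
```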
This couldn't be solved with Helm init containers or init scripts because we needed the config files to be absent during `initdb`, then injected afterward. This seemed to require Docker image customization—exactly what we wanted to avoid.
The Breakthrough: PostgreSQL's Hidden Configuration Option
After hitting the blocker, I took a break. The next day, I dove deep into PostgreSQL's documentation, looking for any way to separate the configuration directory from the data directory.
I found that PostgreSQL allows you to specify paths for different components:
- `postgresql.conf` location
- `pg_hba.conf` location
- Data directory
My first attempt: Put `postgresql.conf` in a separate `conf` directory and reference it from the data directory. But PostgreSQL started treating the `conf` directory as the main directory, which broke everything.
Then I found it: PostgreSQL supports specifying the config file path via command-line arguments.
When you pass `-c config_file=/path/to/postgresql.conf` on the container's command line, PostgreSQL will:
- Read `postgresql.conf` from the specified path (our ConfigMap mount)
- Keep the data directory at its standard location
- Allow `initdb` to run successfully (since the data directory itself is empty)
This was the solution. We could:
- Mount config files to a separate `conf` directory
- Pass the config file path via command-line arguments
- Let `initdb` run normally (data directory is empty)
- Have PostgreSQL read config from the mounted ConfigMap
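Wired into the chart, the container spec looks roughly like this (a minimal sketch assuming the official `postgres` image, a `/conf` mount path, and a ConfigMap named `postgresql-config`):

```yaml
# Sketch of the primary container spec. We set `args`, not `command`,
# so the official image's entrypoint still runs and initdb can
# initialize the (empty) data directory on a fresh install.
containers:
  - name: postgresql
    image: postgres:16
    args:
      - postgres
      - -c
      - config_file=/conf/postgresql.conf
      - -c
      - hba_file=/conf/pg_hba.conf
    volumeMounts:
      - name: config
        mountPath: /conf
        readOnly: true
      - name: data
        mountPath: /var/lib/postgresql/data
volumes:
  - name: config
    configMap:
      name: postgresql-config
```

The data directory still gets a default `postgresql.conf` written by `initdb`, but the running server ignores it and reads the ConfigMap-backed file instead.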
The key insight: PostgreSQL's command-line support for `config_file` allows us to decouple configuration from the data directory, solving the fresh-install problem without any Docker image customization.
Implementation Details: What We Built
With the configuration path solution in hand, here's what we implemented:
Configuration Management
- ConfigMaps for all configuration: `postgresql.conf` and `pg_hba.conf` generated from Helm templates
- Command-line config file path: `postgres -c config_file=/conf/postgresql.conf` allows mounting configs separately
- Helper functions in templates: Parse old Bitnami environment variables (`POSTGRESQL_*`) and convert them to ConfigMap entries
- No Docker image scripts: Everything handled in Helm templates
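The template-side parsing can be sketched like this (the helper name, values keys, and handled variables here are hypothetical, not the chart's actual API):

```yaml
{{/*
Hypothetical helper: translate Bitnami-style POSTGRESQL_* env overrides
into postgresql.conf lines so old values files keep working.
*/}}
{{- define "postgresql.compatConfig" -}}
{{- range .Values.primary.extraEnvVars }}
{{- if eq .name "POSTGRESQL_MAX_CONNECTIONS" }}
max_connections = {{ .value }}
{{- end }}
{{- if eq .name "POSTGRESQL_SHARED_PRELOAD_LIBRARIES" }}
shared_preload_libraries = '{{ .value }}'
{{- end }}
{{- end }}
{{- end -}}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgresql-config
data:
  postgresql.conf: |
    listen_addresses = '*'
    {{- include "postgresql.compatConfig" . | nindent 4 }}
```

The point is that the translation happens at render time in Helm, so no script inside the image ever has to parse environment variables.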
Path Compatibility
- Configurable paths: Support both Bitnami (`/bitnami/postgresql/data`) and standard (`/var/lib/postgresql/data`) paths
- Migration support: Detect existing Bitnami data and migrate to new paths
- Secrets and certs: Configurable mount paths for flexibility
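As an illustration, path compatibility might surface in `values.yaml` like this (key names are hypothetical; the chart's actual keys may differ):

```yaml
# Hypothetical values.yaml keys illustrating configurable paths.
persistence:
  # Standard upstream layout (the new default)
  mountPath: /var/lib/postgresql/data
  # For releases created with the Bitnami chart, keep the old layout:
  # mountPath: /bitnami/postgresql/data
tls:
  certificatesMountPath: /opt/postgresql/certs
auth:
  secretMountPath: /opt/postgresql/secrets
```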
Feature Parity
- Backup jobs: Template-based backup CronJobs using `pg_dump`
- Readiness probes: `pg_isready` instead of file-based checks
- Password updates: Direct `psql` commands with `PGPASSWORD` instead of Bitnami scripts
- TLS, LDAP, Replication: All configured via ConfigMaps, not environment variables
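A template-based backup job can be sketched as a standard CronJob (a minimal sketch; the schedule, service name, secret name, and PVC are assumptions):

```yaml
# Illustrative backup CronJob using pg_dump from the official image.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgresql-backup
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: pg-dump
              image: postgres:16
              env:
                - name: PGPASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: postgresql-auth
                      key: password
              command:
                - /bin/sh
                - -c
                - pg_dump -h postgresql -U postgres -d appdb > /backup/dump-$(date +%F).sql
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              persistentVolumeClaim:
                claimName: postgresql-backup
```

Since `pg_dump` and `psql` ship with the upstream image, the same pattern covers password updates and other maintenance tasks without Bitnami's helper scripts.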
Dual Image Support
- Official PostgreSQL images: Standard upstream images
- RapidFort hardened images: Enhanced security with same functionality
- No vendor lock-in: Easy to switch between image providers
Results: Success Without Breaking Changes
The migration was successful. From the user's perspective:
- ✅ Existing Helm values continue working: All Bitnami-style overrides are supported
- ✅ No breaking changes: Upgrades don't require rewriting values
- ✅ Data preserved: All databases and data intact during migration
- ✅ Feature complete: TLS, LDAP, replication, audit all functional
- ✅ Image flexibility: Support for both official and RapidFort images
We achieved independence from Bitnami while maintaining full backward compatibility.
Open Source and Future Plans
I am planning to open-source these charts in a GitHub repository. The same approach can be applied to other databases as well. We have plans to migrate:
- MongoDB: Similar challenges with configuration and initialization
- ClickHouse: Another database in our stack
- Additional databases: As needed
The major benefit is that these charts support both official images and RapidFort images for additional security, providing true independence from any single vendor.
I'd love for others to contribute and extend this approach to whatever images they need. The pattern is proven: use Helm templates to replace Docker image scripts, leverage PostgreSQL's native configuration options, and maintain backward compatibility.
Helm Charts available here - https://github.com/bansalgokul/open-helm-charts
Key Lessons
- Image changes are never "just image changes."
  - Base OS changes (Debian → Alpine) introduce compatibility concerns
  - Extension availability varies by image base
  - Default configurations differ between image families
- Backward compatibility needs explicit design.
  - Authentication methods must be preserved
  - Configuration defaults can silently change
  - Template defaults need security review
- Standard PostgreSQL configs are more portable (even if more verbose).
  - ConfigMaps are more explicit than env vars
  - Easier to audit and version control
  - Less hidden behavior
- Data path mismatches are the #1 risk in Postgres migrations.
  - But authentication and extension availability are close seconds
  - Always verify both data integrity AND configuration integrity
- Comprehensive snapshots are essential.
  - Pre-upgrade: Document everything (databases, tables, rows, configs)
  - Post-upgrade: Compare systematically
  - Automation makes this practical
- Security defaults matter.
  - `trust` vs `md5` authentication is a critical security difference
  - Default templates need security review
  - LDAP templates should also use secure defaults
- Deep documentation dives pay off.
  - PostgreSQL's `-c config_file` option was the key to solving our blocker
  - Sometimes the solution is in the docs, not in custom code