The problem didn’t show up during installation.
It showed up months later.
A container restart happened during routine maintenance.
LDAP came back up. slapd was running. Ports were open.
But authentication started behaving strangely.
Some users could log in.
Others couldn’t.
A few queries were suddenly slow.
Nothing looked broken. But things were clearly different after the restart.
That was the real issue.
Not failure.
Unpredictability.
Why many LDAP Docker setups drift over time
Most OpenLDAP containers are designed for the first startup, not for long-running environments.
They assume things like:
the database directory starts empty
initialization scripts only run once
container restarts don’t change filesystem ownership
configuration stored in slapd.d always matches the environment
Those assumptions slowly break down.
For example:
Mounted volumes can keep old ownership after restarts.
Initialization scripts may try to recreate base objects that already exist.
Schema loading might run twice and fail silently.
Attributes used in authentication filters may not be indexed.
Nothing crashes.
But authentication and searches start behaving differently from what you expect.
The problem we focused on
We didn’t try to add features.
We focused on one thing:
make OpenLDAP behave the same way every time the container starts.
That meant removing the common sources of drift.
Permission drift after container restarts
One issue appears when volumes are reused.
If the database directory was created with a different user or UID, a restart can leave the ldap process without proper access.
So before slapd starts, our container reconciles permissions on mounted directories.
chown -R ldap:ldap /var/lib/ldap
This simple step removes a surprising number of “LDAP started but authentication fails” situations.
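As a minimal sketch, the reconciliation step can be made conditional so a large database volume is not recursively chowned on every restart. The helper name and the example path/user are assumptions for illustration, not the container's actual entrypoint:

```shell
#!/bin/sh
# Hypothetical pre-start ownership check (sketch, not the real entrypoint).
reconcile_owner() {
  dir=$1
  user=$2
  want=$(id -u "$user")             # UID the ldap process will run as
  have=$(stat -c '%u' "$dir")       # UID that currently owns the volume
  if [ "$have" != "$want" ]; then
    # Only chown when ownership actually drifted; this keeps restarts
    # fast when the volume is already correct.
    chown -R "$user:$user" "$dir"
  fi
}

# In a real entrypoint this would run before slapd starts, e.g.:
# reconcile_owner /var/lib/ldap ldap
```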
Initialization that can run more than once
Many setups treat initialization as a one-time action.
That works only when the database is empty.
In our container, initialization is idempotent.
Instead of blindly applying configuration, startup checks whether:
the database already exists
the base DN is already present
schemas have already been loaded
If those elements exist, configuration is validated rather than recreated.
This prevents duplicate objects, schema conflicts, and partial state.
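A rough sketch of that decision logic, assuming an MDB backend whose data directory is non-empty once initialized (the paths and commands in the comments are illustrative, not the container's actual script):

```shell
#!/bin/sh
# Hypothetical idempotent-init guard (sketch).
db_exists() {
  # An initialized MDB database directory contains data files;
  # an empty directory means this is the first startup.
  [ -n "$(ls -A "$1" 2>/dev/null)" ]
}

# Sketch of how an entrypoint might branch on this:
# if db_exists /var/lib/ldap; then
#   # Existing database: validate configuration, do not re-import.
#   slaptest -F /etc/ldap/slapd.d
# else
#   # First boot: load schemas and the base DN once.
#   slapadd -F /etc/ldap/slapd.d -l /bootstrap/base.ldif
# fi
```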
Replication that is explicit, not assumed
Replication problems often come from unclear node roles.
Our configuration requires explicit settings:
a unique SERVER_ID per node
an explicit list of replication peers
This keeps cluster configuration predictable and avoids situations where nodes silently stop syncing.
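As a sketch, explicit node roles in cn=config look roughly like this. The server ID, provider URL, bind DN, and credentials below are placeholders, not values from the actual container:

```ldif
# Give this node an explicit identity (placeholder value).
dn: cn=config
changetype: modify
replace: olcServerID
olcServerID: 1

# Declare the replication peer explicitly on the database.
dn: olcDatabase={1}mdb,cn=config
changetype: modify
add: olcSyncrepl
olcSyncrepl: rid=001
  provider=ldap://ldap-peer.example.com
  bindmethod=simple
  binddn="cn=replicator,dc=example,dc=com"
  credentials=CHANGE_ME
  searchbase="dc=example,dc=com"
  type=refreshAndPersist
  retry="30 +"
-
add: olcMirrorMode
olcMirrorMode: TRUE
```

With roles written down like this, a node that cannot reach its peer fails visibly at configuration time instead of silently diverging.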
Preventing slow authentication later
Authentication queries often depend on attributes like:
uid
cn
member
memberOf
If those attributes are not indexed, directories work fine when small but degrade as usage grows.
So the container applies those indices early, along with query limits and connection timeouts, to avoid slow searches turning into authentication delays.
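A sketch of what such index and limit settings can look like in cn=config. The database DN and the exact limit values are assumptions; memberOf is only indexable this way when the memberof overlay maintains it:

```ldif
dn: olcDatabase={1}mdb,cn=config
changetype: modify
add: olcDbIndex
olcDbIndex: uid eq
olcDbIndex: cn eq,sub
olcDbIndex: member eq
olcDbIndex: memberOf eq
-
# Cap result size and search time so one expensive query
# cannot stall authentication (values are illustrative).
add: olcSizeLimit
olcSizeLimit: 500
-
add: olcTimeLimit
olcTimeLimit: 30
```

If indexes are added to a database that already holds data, the existing entries need reindexing (for example with slapindex while slapd is stopped) before the indexes take effect.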
What “predictable” actually means
For us, predictable LDAP means something simple:
restarting the container does not change behavior
existing databases are validated, not overwritten
authentication queries behave the same after deployment as they did before
When directory infrastructure becomes predictable, it fades into the background.
And that’s exactly where identity systems belong.
If you run OpenLDAP in containers today, try a simple test:
restart the container and watch authentication.
Does it behave exactly the same as before?
Reference
If you're curious about the implementation details, the container setup is available here:
https://vibhuvioio.com/openldap-docker/getting-started/
