Nithin Bharadwaj

How to Build a Dynamic Configuration Management System in Go Without Downtime


Let’s talk about what happens when your application needs to change its behavior without stopping. Imagine you’re running a website, and you need to switch to a different database, change a timeout setting, or update an API key. In the old way, you’d change a config file, then restart your entire service. That means downtime, even if it’s just for a few seconds. For a modern system that should be always available, that’s not good enough.

I need a way to update settings on the fly, while the app is running. I also need these settings to come from different places: a file, environment variables, maybe even a central database or an HTTP endpoint. And when something changes, the parts of my app that care about that setting should know right away. That’s what a dynamic configuration management system does.

Let me build one in Go. Go is great for this because of its strong support for concurrency and its standard library. I’ll create a central manager that can pull settings from multiple sources, watch them for changes, validate new values, and let other parts of my program subscribe to updates.

First, I need a structure to hold everything together. I’ll call it ConfigManager. It’s the brain of the operation.

type ConfigManager struct {
    sources    []ConfigSource
    store      *ConfigStore
    validator  *ConfigValidator
    notifier   *ChangeNotifier
    reloadLock sync.RWMutex
}

The store is where current configuration values live in memory. The validator checks whether a new value is allowed. The notifier tells subscribers when something changes. The reloadLock is a read-write mutex; it lets many parts of my app read configs at the same time while only one goroutine updates them, so readers never observe a half-applied update.

A ConfigSource tells the manager where to look for settings.

type ConfigSource struct {
    Type     SourceType  // "file", "environment", "http", "database"
    Location string      // like "/app/config.yaml" or "https://config.server.com"
    Priority int         // Higher number wins if two sources have the same key
    Format   ConfigFormat // "yaml", "json"
    Watcher  *SourceWatcher // Watches this source for changes
}

Priority is important. If I have a setting in a file with priority 10, and the same setting as an environment variable with priority 20, the environment variable’s value will be used. This lets me override defaults for specific deployments.
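
The resolution rule can be sketched as a small standalone helper. The names here (`sourceValue`, `resolve`) are illustrative, not from the article's full implementation:

```go
package main

import "fmt"

// sourceValue pairs a value with the priority of the source it came from.
type sourceValue struct {
	Value    interface{}
	Priority int
}

// resolve picks the value supplied by the highest-priority source for a key.
func resolve(candidates []sourceValue) interface{} {
	var best *sourceValue
	for i := range candidates {
		if best == nil || candidates[i].Priority > best.Priority {
			best = &candidates[i]
		}
	}
	if best == nil {
		return nil
	}
	return best.Value
}

func main() {
	// The file (priority 10) says 8080; the environment (priority 20) says 9090.
	v := resolve([]sourceValue{
		{Value: 8080, Priority: 10},
		{Value: 9090, Priority: 20},
	})
	fmt.Println(v) // the environment value wins
}
```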

When I start the manager, I add sources. Here’s how I might set it up:

func main() {
    config := NewConfigManager()

    // Add a config file. It has lower priority.
    config.AddSource(SourceTypeFile, "configs/app.yaml", 10)
    // Environment variables override the file. They have higher priority.
    config.AddSource(SourceTypeEnv, "", 20)

    // Now I can get a value
    port, err := config.Get("server.port")
    if err != nil {
        port = 8080 // a sensible default
    }
    fmt.Printf("Starting server on port %v\n", port)
}

The AddSource method does a few things. It figures out the file format from the extension (.yaml, .json). It creates the right type of watcher. It loads the settings from that source for the first time. Then, if the source can be watched (like a file), it starts a background goroutine to monitor it.
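
The format-detection step might look like this. `detectFormat` is a hypothetical helper name, assuming AddSource keys off the file extension:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

type ConfigFormat string

const (
	FormatYAML    ConfigFormat = "yaml"
	FormatJSON    ConfigFormat = "json"
	FormatUnknown ConfigFormat = "unknown"
)

// detectFormat guesses the config format from the file extension,
// as AddSource might do internally before picking a parser.
func detectFormat(path string) ConfigFormat {
	switch strings.ToLower(filepath.Ext(path)) {
	case ".yaml", ".yml":
		return FormatYAML
	case ".json":
		return FormatJSON
	default:
		return FormatUnknown
	}
}

func main() {
	fmt.Println(detectFormat("configs/app.yaml")) // yaml
	fmt.Println(detectFormat("settings.JSON"))    // json
}
```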

Loading from a file looks like this:

func loadFromFile(path string, format ConfigFormat) (map[string]interface{}, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return nil, err
    }

    var rawConfig map[string]interface{}
    switch format {
    case FormatYAML:
        err = yaml.Unmarshal(data, &rawConfig)
    case FormatJSON:
        err = json.Unmarshal(data, &rawConfig)
    }
    if err != nil {
        return nil, err
    }
    // Flatten nested maps into dot-separated keys
    return flattenConfig(rawConfig), nil
}

A YAML file might have nested sections:

server:
  port: 8080
  host: "0.0.0.0"
database:
  url: "postgres://localhost/app"

My flattenConfig function turns this into a map where the key "server.port" points to 8080. This dot-notation is a common, convenient way to handle hierarchy.
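
One way flattenConfig could be written, recursing into nested maps and joining keys with dots (a minimal sketch, not the article's exact code):

```go
package main

import "fmt"

// flattenConfig converts nested maps into a flat map with dot-separated keys,
// so {"server": {"port": 8080}} becomes {"server.port": 8080}.
func flattenConfig(raw map[string]interface{}) map[string]interface{} {
	flat := make(map[string]interface{})
	flatten("", raw, flat)
	return flat
}

func flatten(prefix string, node map[string]interface{}, out map[string]interface{}) {
	for k, v := range node {
		key := k
		if prefix != "" {
			key = prefix + "." + k
		}
		if nested, ok := v.(map[string]interface{}); ok {
			flatten(key, nested, out) // descend into the nested section
		} else {
			out[key] = v // leaf value: store under the dotted key
		}
	}
}

func main() {
	raw := map[string]interface{}{
		"server": map[string]interface{}{"port": 8080, "host": "0.0.0.0"},
	}
	fmt.Println(flattenConfig(raw)["server.port"])
}
```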

Now, what about environment variables? A common pattern is to prefix them, like APP_SERVER_PORT. My loader transforms that APP_SERVER_PORT into the key server.port.

func loadFromEnvironment() map[string]interface{} {
    config := make(map[string]interface{})
    for _, env := range os.Environ() {
        if strings.HasPrefix(env, "APP_") {
            parts := strings.SplitN(env, "=", 2)
            key := strings.ToLower(strings.TrimPrefix(parts[0], "APP_"))
            key = strings.ReplaceAll(key, "_", ".")
            config[key] = parts[1]
        }
    }
    return config
}

So setting APP_SERVER_PORT=9090 in the shell would override the 8080 from the YAML file.

Once values are loaded, they go into the ConfigStore. This is a thread-safe in-memory store.

type ConfigStore struct {
    configs   map[string]*ConfigEntry
    versions  map[string]int64
    checksums map[string]string
    mu        sync.RWMutex
}

type ConfigEntry struct {
    Key       string
    Value     interface{}
    Source    string
    Timestamp time.Time
    Version   int64
}

I keep a version number for each key. Every time it updates, the version goes up. I also keep a checksum (a SHA-256 hash) of the value. Before I update anything, I calculate the checksum of the new value. If it’s the same as the old one, I skip the update. This avoids sending unnecessary “change” notifications if the file is saved with identical content.
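
The checksum comparison might look like this sketch. Hashing the JSON encoding is one deterministic option (Go's json.Marshal sorts map keys); `valueChecksum` is an illustrative name:

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

// valueChecksum hashes a value's JSON encoding. Identical values produce
// identical checksums, so a no-op save can be detected and skipped.
func valueChecksum(v interface{}) string {
	data, _ := json.Marshal(v)
	return fmt.Sprintf("%x", sha256.Sum256(data))
}

func main() {
	old := valueChecksum(map[string]int{"port": 8080})
	same := valueChecksum(map[string]int{"port": 8080})
	changed := valueChecksum(map[string]int{"port": 9090})
	fmt.Println(old == same)    // identical content: skip the update
	fmt.Println(old == changed) // changed content: update and notify
}
```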

The real magic for “dynamic” updates is the watcher. For files, I use the fsnotify package.

type FileWatcher struct {
    watcher *fsnotify.Watcher
    path    string
    events  chan struct{}
}

func (fw *FileWatcher) watch() {
    for {
        select {
        case event, ok := <-fw.watcher.Events:
            if !ok { return }
            // Check if our specific file was written to
            if event.Name == fw.path && event.Op&fsnotify.Write == fsnotify.Write {
                select {
                case fw.events <- struct{}{}:
                default: // channel is full, drop the event
                }
            }
        case err, ok := <-fw.watcher.Errors:
            if !ok { return }
            log.Printf("File watcher error: %v", err)
        }
    }
}

When the watcher detects a change, it sends a signal on its events channel. The ConfigManager has a goroutine listening:

func (cm *ConfigManager) watchSource(source ConfigSource) {
    changes := source.Watcher.Watch()
    for range changes { // When a change signal arrives
        if err := cm.loadSource(source); err != nil {
            log.Printf("Failed to reload source %s: %v", source.Location, err)
        }
    }
}

It reloads the entire source file, then merges the new values. The merge respects priority: an existing value is only overwritten when the incoming source's priority is at least as high as the one that set it, and identical values are skipped so no spurious notifications go out.
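
A minimal sketch of that merge rule, assuming the store tracks which priority produced each value (the `entry` and `merge` names are illustrative):

```go
package main

import "fmt"

type entry struct {
	Value    interface{}
	Priority int
}

// merge applies freshly loaded values from one source into the store,
// overwriting only when the incoming source's priority is at least as
// high as the priority that produced the current value.
func merge(store map[string]entry, incoming map[string]interface{}, priority int) {
	for key, val := range incoming {
		cur, exists := store[key]
		if !exists || priority >= cur.Priority {
			store[key] = entry{Value: val, Priority: priority}
		}
	}
}

func main() {
	store := map[string]entry{}
	merge(store, map[string]interface{}{"server.port": 8080}, 10) // file load
	merge(store, map[string]interface{}{"server.port": 9090}, 20) // env override
	merge(store, map[string]interface{}{"server.port": 7070}, 10) // file reload
	fmt.Println(store["server.port"].Value) // env value still wins
}
```

Using `>=` rather than `>` lets a source overwrite its own earlier values on reload while still losing to higher-priority sources.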

Before a new value enters the store, it must pass validation. I don’t want someone accidentally setting server.port to "not-a-number" and breaking the app. So I define schemas.

type ConfigSchema struct {
    Type        string // "string", "integer", "boolean"
    Required    bool
    Default     interface{}
    Min         interface{} // for numbers
    Max         interface{}
    Pattern     string      // a regex for strings
}

func (cm *ConfigManager) AddSchema(key string, schema *ConfigSchema) {
    cm.validator.AddSchema(key, schema)
}

In my main setup, I might do:

config.validator.AddSchema("server.port", &ConfigSchema{
    Type:     "integer",
    Required: true,
    Min:      1024,
    Max:      65535,
    Default:  8080,
})

When a new value comes from any source, the validator checks it. If it’s a string like "9090" for an integer field, the validator can convert it. If it’s "99999", that’s above the max, so the update is rejected. The app keeps using the old, valid value.
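
The convert-then-range-check step could look like this sketch for the "integer" case (`validateInt` is a hypothetical helper, not the article's full validator):

```go
package main

import (
	"fmt"
	"strconv"
)

// validateInt coerces a raw value to an int and range-checks it,
// mirroring what a validator might do for an "integer" schema.
func validateInt(raw interface{}, min, max int) (int, error) {
	var n int
	switch v := raw.(type) {
	case int:
		n = v
	case string:
		// Environment variables arrive as strings like "9090"; convert them.
		parsed, err := strconv.Atoi(v)
		if err != nil {
			return 0, fmt.Errorf("not an integer: %q", v)
		}
		n = parsed
	default:
		return 0, fmt.Errorf("unsupported type %T", raw)
	}
	if n < min || n > max {
		return 0, fmt.Errorf("%d outside [%d, %d]", n, min, max)
	}
	return n, nil
}

func main() {
	if port, err := validateInt("9090", 1024, 65535); err == nil {
		fmt.Println("accepted:", port)
	}
	if _, err := validateInt("99999", 1024, 65535); err != nil {
		fmt.Println("rejected:", err) // the app keeps the old value
	}
}
```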

Now, the other half of the system: letting other components know about changes. This is the ChangeNotifier.

type ChangeNotifier struct {
    subscribers map[string][]ConfigChangeHandler
    changes     chan ConfigChange
    mu          sync.RWMutex
}

type ConfigChangeHandler func(ConfigChange)

type ConfigChange struct {
    Key      string
    OldValue interface{}
    NewValue interface{}
    Source   string
    Time     time.Time
}

A component, like my HTTP server, can subscribe to changes for a specific key.

// Subscribe to changes for server.port
subID := config.Subscribe("server.port", func(change ConfigChange) {
    fmt.Printf("Port changed from %v to %v\n", change.OldValue, change.NewValue)
    // Here, I could gracefully restart the HTTP listener on the new port
})

// Subscribe to ALL changes with a wildcard
config.Subscribe("*", func(change ConfigChange) {
    fmt.Printf("Something changed: %s\n", change.Key)
})

The notifier has a goroutine that processes a channel of changes and calls the relevant handler functions. I run handlers in their own goroutines (go handler(change)) so a slow subscriber doesn’t block notifications for everyone else.

Let’s see all the pieces work together in a more complete example.

func main() {
    config := NewConfigManager()

    // Define what my important settings should look like
    config.validator.AddSchema("server.port", &ConfigSchema{Type: "integer", Min: 1024, Max: 65535, Default: 8080})
    config.validator.AddSchema("feature.enabled", &ConfigSchema{Type: "boolean", Default: false})

    // Set up sources
    config.AddSource(SourceTypeFile, "./config.yaml", 10)
    config.AddSource(SourceTypeEnv, "", 20) // Env vars override files

    // My web server subscribes to port changes
    config.Subscribe("server.port", func(c ConfigChange) {
        log.Printf("TODO: Rebind HTTP server to new port: %v", c.NewValue)
    })

    // A feature flag subscriber
    config.Subscribe("feature.enabled", func(c ConfigChange) {
        if enabled, ok := c.NewValue.(bool); ok && enabled {
            log.Printf("Enabling new experimental feature!")
        } else {
            log.Printf("Turning off the experimental feature.")
        }
    })

    // Main application loop
    for {
        // In a real app, this would be an HTTP server or a worker
        time.Sleep(1 * time.Second)
        // App logic reads configs
        if config.GetWithDefault("feature.enabled", false).(bool) {
            fmt.Println("Experimental feature is active.")
        }
    }
}

What about sources that aren’t files? The design is extensible. I could write an HTTPWatcher that polls a URL every 30 seconds. A DatabaseWatcher might use a LISTEN/NOTIFY command in PostgreSQL or poll a table for changes. The ConfigManager treats them all the same through the SourceWatcher interface.

Performance is a key concern. Configuration reads happen all the time, potentially on every web request. My store uses a read-write lock (sync.RWMutex). Many goroutines can read at the same time (store.mu.RLock()). Only an update needs the full lock (store.mu.Lock()). Reads are very fast: just a map lookup.
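
Stripped to its essentials, the read path looks like this sketch (a simplified stand-in for the ConfigStore above):

```go
package main

import (
	"fmt"
	"sync"
)

type store struct {
	mu      sync.RWMutex
	configs map[string]interface{}
}

// Get takes only a read lock, so many goroutines can read concurrently.
func (s *store) Get(key string) (interface{}, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.configs[key]
	return v, ok
}

// Set takes the exclusive write lock; readers wait only for this brief window.
func (s *store) Set(key string, value interface{}) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.configs[key] = value
}

func main() {
	s := &store{configs: make(map[string]interface{})}
	s.Set("server.port", 8080)

	var wg sync.WaitGroup
	for i := 0; i < 10; i++ { // concurrent readers don't block each other
		wg.Add(1)
		go func() {
			defer wg.Done()
			if v, ok := s.Get("server.port"); ok {
				_ = v
			}
		}()
	}
	wg.Wait()
	v, _ := s.Get("server.port")
	fmt.Println(v)
}
```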

For safety, the entire reload process is guarded by the manager’s reloadLock. This ensures I don’t try to merge configs from two sources at the exact same time, which could leave the store in a weird, inconsistent state.

In a production system, I’d add metrics. How many reads per second? How many updates? How many validation errors? I’d add a health check that verifies all configured sources are reachable. I’d also add a way to dump the current configuration state (with sources and versions) for debugging.

The beauty of this system is that it’s transparent. The rest of my application just calls config.Get("some.key"). It doesn’t need to know if the value came from a file, an environment variable, or a database. It doesn’t need to poll or check for changes. It just subscribes and reacts. This separation of concerns makes the application code simpler and more robust.

Building this taught me a lot about concurrency patterns in Go. Channels for watcher events, goroutines for background monitoring, mutexes for protecting shared state, and handlers for callbacks. It’s a practical example of how Go’s features come together to build a reliable, real-time system.

You can start simple. Maybe just file and environment variable support with no validation. Then, as your needs grow, you can add validation, then notifications, then more source types. Each piece is independent. That’s the Go philosophy: build small, composable parts that work together.

I find systems like this indispensable. They turn configuration from a static, deployment-time concern into a dynamic part of the application’s runtime behavior. It gives operators and developers tremendous flexibility. They can tune performance, toggle features, or respond to emergencies without the risk and delay of a full restart. In a world where uptime is critical, that’s a powerful capability to have.
