
Kevin Mack

Originally published at welldocumentednerd.com

Configuration can be a big stumbling block when it comes to availability.

So let’s face it, when we build projects, we make trade-offs. And many times those trade-offs come in the form of time and effort. We would all build the most perfect software ever… if time and budget were never a concern.

So along those lines, one thing that I find gets glossed over quickly, especially with Kubernetes and microservices … configuration.

Configuration is probably the point where you're looking at this and saying, "That's the most ridiculous thing I've ever heard." We put our configuration in a YAML file, or a web.config, and manage those values through our build pipelines. And while that might seem like a great practice, in my experience it can cause a lot more headaches in the long run than you're probably expecting.

The problem with storing configuration in YAML files, or web.configs, is that they create an illusion of being able to change these settings on the fly, an illusion that can actually cause significant headaches when you start reaching for higher availability.

The problems these configuration files can cause are the following:

Changing these files is a deployment activity

If you need to change a value for these applications, it requires changing a configuration file, and changes to configuration files are usually tied to some kind of restart process. Take App Service as a primary example: if you store your configuration in a web.config and make a change to that file, App Service will automatically trigger a restart, which causes a downtime event for you and/or your customers.

This is even more difficult in a Kubernetes cluster: if you store configuration in a YAML manifest, changing it requires the deployment pipeline to update the cluster. That makes it very hard to change these values in response to a change in application behavior.

For example, say you wanted to switch your SQL database connection if performance degrades below a certain point. That is a lot harder to do when you're referencing a connection string in a config file on pods that are deployed across a cluster.

Storing Sensitive Configuration is a problem

Let’s face it, people make mistakes. One of the biggest problems I’ve seen come up several times is hearing the following statement: “We store normal configuration in a YAML file, and then sensitive configuration in a key vault.”

The problem here is that “sensitive” means different things to different people, so the odds of something being misclassified are high. It’s much easier to tell your team to treat all settings as sensitive. That makes management a lot easier and limits you to a single store.

So what do we do…

The best way I’ve found to mitigate these issues is to use an outside service, such as Azure Key Vault or the Azure App Configuration service, to store your configuration settings.

But that’s just step one. Step two is to have each microservice cache those configuration settings in memory in the container at startup, and make sure that cache is configured to expire after a set amount of time.

This provides a model whereby your microservices start up after deployment, reach out to a secure store, and cache the configuration settings in memory.

This also gains us several benefits that mitigate the problems above.

  • It allows for changing configuration settings on the fly: for example, if I wanted to change a connection string over to a read replica, that can be done by simply updating the configuration store and letting the applications move over as their caches expire. Or, if you want even further control, you could build in a webhook that forces them to dump the configuration and re-pull it (a sketch of such an endpoint follows this list).
  • By treating all configuration as sensitive, you ensure there are no accidental leaks. It also means these keys can be managed at deployment time and never seen by human eyes.
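To make the webhook idea above concrete, here is a minimal sketch of an endpoint a configuration store or an operator could call to force a service to dump its cached settings. The route, port, and cache shape are all hypothetical, and this assumes an ASP.NET Core minimal API; it is an illustration, not the post's original implementation.

using System.Collections.Concurrent;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// The in-memory settings cache each service instance holds (hypothetical shape).
builder.Services.AddSingleton<ConcurrentDictionary<string, string>>();

var app = builder.Build();

// A configuration-store webhook (or an operator) POSTs here to force a refresh.
app.MapPost("/config/refresh", (ConcurrentDictionary<string, string> cache) =>
{
    cache.Clear();
    return Results.Ok("configuration cache cleared; settings will be re-pulled on next read");
});

app.Run();

On the next read, the service falls through the now-empty cache and re-pulls everything from the store.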

So this is all great, but what does it actually look like from an architecture standpoint?

For AKS, it’s a fairly easy implementation: create a sidecar for retrieving configuration, and then deploy that sidecar with every pod.

Given this, it’s easy to see how you would implement a separate sidecar to handle configuration. Each service within the pod is completely oblivious to how it gets its configuration; it just calls a microservice to get it.

I personally favor the sidecar implementation here, because it allows you to easily bundle this with your other containers and minimizes latency and excessive network communication.

Latency will be low because it’s local to every pod, and if you ever decide to change your configuration store, it’s easy to do.
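To illustrate the pattern, here is a rough sketch of how an application container might ask the configuration sidecar in the same pod for a setting. The port and route are assumptions for the example; the important part is that the application only ever talks to localhost and has no idea which store the sidecar uses behind the scenes.

using System;
using System.Net.Http;
using System.Threading.Tasks;

public class SidecarConfigurationClient
{
    private static readonly HttpClient Http = new HttpClient
    {
        // The sidecar listens inside the pod, so this call never leaves the node.
        BaseAddress = new Uri("http://localhost:8081/")
    };

    public async Task<string> GetSettingAsync(string key)
    {
        // e.g. GET http://localhost:8081/config/sql-connection-string
        return await Http.GetStringAsync($"config/{Uri.EscapeDataString(key)}");
    }
}

Swapping Key Vault for App Configuration (or anything else) then becomes a change to the sidecar image alone.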

Let’s take a sample here using Azure Key Vault. If you look at the following code sample, you can see how configuration could be managed.

Here’s some sample code that could easily be wrapped in a container to back your configuration with Key Vault:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

public class KeyVaultConfigurationProvider : IConfigurationProvider
{
    // Fall back to environment variables, but prefer whatever settings are injected below.
    private string _clientId = Environment.GetEnvironmentVariable("clientId");
    private string _clientSecret = Environment.GetEnvironmentVariable("clientSecret");
    private string _kvUrl = Environment.GetEnvironmentVariable("kvUrl");

    public KeyVaultConfigurationProvider(IKeyVaultConfigurationSettings kvConfigurationSettings)
    {
        _clientId = kvConfigurationSettings.ClientID;
        _clientSecret = kvConfigurationSettings.ClientSecret;
        _kvUrl = kvConfigurationSettings.KeyVaultUrl;
    }

    public async Task<string> GetSetting(string key)
    {
        // Authenticate to Key Vault with the service principal's client ID and secret.
        KeyVaultClient kvClient = new KeyVaultClient(async (authority, resource, scope) =>
        {
            var adCredential = new ClientCredential(_clientId, _clientSecret);
            var authenticationContext = new AuthenticationContext(authority, null);
            return (await authenticationContext.AcquireTokenAsync(resource, adCredential)).AccessToken;
        });

        // Secrets live at {vault-url}/secrets/{name}.
        var path = $"{this._kvUrl}/secrets/{key}";
        var ret = await kvClient.GetSecretAsync(path);
        return ret.Value;
    }
}
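For completeness, the provider above references two interfaces that aren’t shown in the post. Something along these lines would satisfy it (note this IConfigurationProvider is the post’s own abstraction, not the one in Microsoft.Extensions.Configuration; the member names simply mirror what the constructor reads):

using System.Threading.Tasks;

// Minimal sketch of the configuration abstraction the provider implements.
public interface IConfigurationProvider
{
    Task<string> GetSetting(string key);
}

// Minimal sketch of the settings object handed to the provider's constructor.
public interface IKeyVaultConfigurationSettings
{
    string ClientID { get; }
    string ClientSecret { get; }
    string KeyVaultUrl { get; }
}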

Now, the above code uses a single service principal to call Key Vault and pull configuration information. This could be modified to leverage pod-specific identities for even greater security and a cleaner implementation; a sketch of that variation follows.
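Here is roughly what that looks like on the newer Azure SDKs (Azure.Identity and Azure.Security.KeyVault.Secrets), where DefaultAzureCredential can resolve a managed identity or an AKS workload/pod identity instead of a client ID and secret handed to the container. The class name is hypothetical and this is a sketch, not necessarily what the original implementation used.

using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

public class ManagedIdentityConfigurationProvider
{
    private readonly SecretClient _client;

    public ManagedIdentityConfigurationProvider(string keyVaultUrl)
    {
        // No secrets in environment variables; the credential chain resolves the pod's identity.
        _client = new SecretClient(new Uri(keyVaultUrl), new DefaultAzureCredential());
    }

    public async Task<string> GetSetting(string key)
    {
        KeyVaultSecret secret = await _client.GetSecretAsync(key);
        return secret.Value;
    }
}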

The next step of the above implementation would be to leverage a cache for your configuration. This could be done piecemeal as needed or as a group, and there are a lot of directions you could take it, but it will ultimately make configuration easier to manage.
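As a rough sketch of that caching layer, the following wraps the Key Vault provider from earlier with an in-memory cache and an expiry window. The TTL-based approach and the Clear hook are assumptions for illustration, not the post's definitive design.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class CachedConfigurationProvider
{
    private readonly KeyVaultConfigurationProvider _inner;
    private readonly TimeSpan _ttl;
    private readonly ConcurrentDictionary<string, (string Value, DateTimeOffset FetchedAt)> _cache
        = new ConcurrentDictionary<string, (string Value, DateTimeOffset FetchedAt)>();

    public CachedConfigurationProvider(KeyVaultConfigurationProvider inner, TimeSpan ttl)
    {
        _inner = inner;
        _ttl = ttl;
    }

    public async Task<string> GetSetting(string key)
    {
        // Serve from memory while the entry is still fresh.
        if (_cache.TryGetValue(key, out var entry) && DateTimeOffset.UtcNow - entry.FetchedAt < _ttl)
        {
            return entry.Value;
        }

        // Expired or never fetched: go back to Key Vault and refresh the cached copy.
        var value = await _inner.GetSetting(key);
        _cache[key] = (value, DateTimeOffset.UtcNow);
        return value;
    }

    // Lets a webhook or health probe force the next reads to re-pull everything.
    public void Clear() => _cache.Clear();
}

Wired up at startup, each service constructs the provider once and holds the cached wrapper for its lifetime, which is what lets changes made in the store flow through without a redeployment.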
