Last month a customer came to us confused about why their credentials kept leaking between services in their OSC stack. They had put an API key into the parameter store, wired up three service instances, and couldn't figure out why a service that had no business seeing those credentials could read them anyway.
The confusion is understandable. The parameter store UI looks a lot like a .env file. But it isn't one — and that mental model leads directly to the mistake this customer made.
Here's how the two secret mechanisms in OSC actually work, why they're different, and how to pick the right one.
## The parameter store: workspace-wide config with an encryption option
The parameter store in OSC (backed by app-config-svc, which uses Valkey under the hood) is a key-value store for environment variables that get passed to your service instances and MyApp deployments. You add keys in the UI, and they show up as env vars at runtime.
The critical word is shared. The parameter store is scoped to your workspace, not to individual service instances. Any service running in that workspace can read any key in the store.
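To make the scoping concrete, here is a small simulation (illustrative Python, not OSC code) of what workspace-level injection means: every service process gets the same environment, so every service reads the same value. The variable name `CDN_API_KEY` mirrors the customer example later in the post.

```python
import os

# Simulate OSC injecting a workspace-level parameter store
# variable into every service instance's environment.
os.environ["CDN_API_KEY"] = "sk-example-not-a-real-key"

def ingest_service_config():
    # The service that legitimately needs the key...
    return os.environ.get("CDN_API_KEY")

def packaging_service_config():
    # ...and a sibling service that has no business with it
    # both read the exact same value: the store is workspace-scoped.
    return os.environ.get("CDN_API_KEY")

print(ingest_service_config() == packaging_service_config())
```

Nothing in the non-secure path distinguishes the two callers; that is exactly the property the rest of this post is about.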
OSC now supports marking individual variables as secure. When you flag a variable as secure, the value is encrypted at rest. But to read a secure variable, the caller needs a param store API key. When you create a MyApp, you provide this key and it gets stored as a service secret on that MyApp instance. That keeps the API key itself protected.
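The shape of a secure-variable read looks roughly like this. The endpoint URL and header layout below are hypothetical placeholders, not the real OSC API (check the OSC docs for that); the sketch only illustrates the one fact that matters here: the caller must present the param store API key to get the decrypted value.

```python
import os
from urllib.request import Request

# Hypothetical endpoint -- a stand-in, not the real OSC API.
PARAM_STORE_URL = "https://api.example.osc/params/secure/DB_PASSWORD"

def secure_param_request(api_key: str) -> Request:
    # Without this key, the encrypted value cannot be read back.
    return Request(PARAM_STORE_URL,
                   headers={"Authorization": f"Bearer {api_key}"})

req = secure_param_request(os.environ.get("OSC_PARAM_STORE_KEY", "demo-key"))
print(req.get_header("Authorization"))
```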
So the parameter store gives you two modes:
- **Non-secure variables:** plaintext, available to any service in the workspace. Good for feature flags, timeouts, base URLs — configuration you'd be comfortable putting in a `docker-compose.yml` on a dev machine.
- **Secure variables:** encrypted at rest, readable only with the param store API key. Better for values that need protection but are genuinely shared across multiple services in the workspace.
The key thing both modes share: workspace scoping. Neither mode is per-service. If five services are running in your workspace and they all have (or are given) the param store API key, all five can read all secure variables.
## Service secrets: instance-scoped and encrypted
Service secrets work differently. They are scoped to a specific service instance, encrypted at rest, masked in logs, and audited. When you bind a secret to a service, only that instance can use it.
The canonical use case is service-to-service wiring. Say you have a transcoder service and an auth service running in the same workspace. The transcoder needs a bearer token to call the auth service. You create a secret binding on the transcoder, referencing the credential from the auth service. The transcoder gets the token; nothing else in the workspace does.
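A toy model of that binding (illustrative Python, not OSC internals) makes the contrast with the parameter store visible: each secret is attached to exactly one instance, and there is no workspace-wide fallback to leak through.

```python
# Toy model of instance-scoped secret bindings. Instance and
# secret names are hypothetical, chosen to match the example above.
secret_bindings = {
    "transcoder-1": {"AUTH_BEARER_TOKEN": "token-abc"},
    # "packaging-1" has no binding, so it never sees the token.
}

def read_secret(instance: str, name: str):
    # Lookup is scoped to the calling instance; a miss returns
    # nothing rather than falling back to a shared store.
    return secret_bindings.get(instance, {}).get(name)

print(read_secret("transcoder-1", "AUTH_BEARER_TOKEN"))  # the bound instance gets it
print(read_secret("packaging-1", "AUTH_BEARER_TOKEN"))   # everyone else gets None
```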
That isolation is the point. If a service instance is compromised or misconfigured, the blast radius of a leaked credential is bounded to that instance, not your entire workspace.
The param store API key itself is a good example of this pattern in action: OSC stores it as a service secret on your MyApp instance, not as a plain variable, precisely because it should be scoped to that app.
## The mistake: wrong tool for the threat model
Back to the customer. They were running a multi-service media stack: an ingest service, a packaging service, and a third-party CDN integration. They needed the CDN API key in the ingest service.
They opened the parameter store, added CDN_API_KEY, and moved on. It worked, so they didn't think twice about it.
The problem: the packaging service and every other service in the workspace also had access to that key. It was workspace-scoped with no restriction on which instance could read it. From their perspective the parameter store looked like a .env — keys in, env vars out — so they treated it like one.
The fix was to move the CDN key to a service secret scoped to the ingest instance. If they had used a secure parameter store variable instead, the value would be encrypted — but still readable by any service in the workspace with the param store API key. That might be acceptable in some setups, but it wasn't the right call when only one service needed that credential.
## The mental model in one picture
The parameter store sits at the workspace level — every service in the workspace can reach non-secure variables, and secure variables are readable by anyone who has the param store API key. Service secrets live inside a specific instance and never leave it.
## How to decide which to use
The decision comes down to two questions: does this value need to be encrypted, and how tightly do you need to control which services can see it?
| What you're storing | Where it goes |
|---|---|
| Feature flags, timeouts, base URLs, non-sensitive config | Parameter store (non-secure) |
| Credentials for a MyApp deployment or agent task | Parameter store (secure variable) |
| Shared config that needs encryption, readable by multiple services | Parameter store (secure variable) |
| Credentials scoped to one specific service instance | Service secrets |
| Credentials passed between two service instances | Service secrets |
| The param store API key itself | Service secret (MyApp stores it automatically) |
If a value is consumed by MyApp or an agent task, the secure parameter store is the right fit. If a value needs to be scoped strictly to one service instance, use a service secret — the param store is workspace-wide regardless of the secure flag.
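The table boils down to two booleans, which you can sketch as a helper (illustrative only; the return strings echo the table's wording):

```python
def where_to_store(sensitive: bool, shared: bool) -> str:
    """Encode the decision table: does the value need encryption,
    and must multiple services be able to read it?"""
    if not sensitive:
        return "parameter store (non-secure)"
    if shared:
        return "parameter store (secure variable)"
    return "service secret"

print(where_to_store(sensitive=False, shared=True))   # feature flags, base URLs
print(where_to_store(sensitive=True, shared=True))    # shared encrypted config
print(where_to_store(sensitive=True, shared=False))   # one-instance credentials
```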
## Try it
If you're running multi-service stacks on OSC and want to review how your credentials are currently stored, the parameter store and service secrets are both available in the OSC dashboard. If you hit a question the UI doesn't answer, reach us via the community Slack at slack.osaas.io or use the chat bubble in the web console — one of our human handlers will get back to you. We'd rather help you get it right before there's a problem than after.