Eyas

Originally published at blog.eyas.sh

Server Environments

When maintaining large software systems, you will likely have multiple environments with names like Prod, Staging, Dev, Eval, UAT, Daily, Nightly, or some remix of these names. To distinguish this type of environment from the dozen other things in software development that we give that same name, these are often formally referred to as Deployment Environments.

One question I've never been asked directly is: "What is an Environment?" This is surprising, because not understanding what Deployment Environments actually are is one of the most common pitfalls I see in my day-to-day work.

What is a Deployment Environment?

A Deployment Environment is a consistently connected set of

  1. processes
  2. datastores, and
  3. any ecosystem around them (e.g., cron jobs, analytics, etc.)¹

making up a fully functioning software system.

This definition is quite intuitive, but the devil is in the details. In this case, that detail is the phrase "consistently connected".

What "consistently connected" entails

Ideally, environments should be perfectly isolated; no data or RPC should leak between processes and datastores across environments during normal operation².
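As a rough sketch of what that means in practice, you can picture each environment as a single configuration object whose entries only ever point at addresses within that same environment. Everything below (names, URIs, connection strings) is invented for illustration:

```typescript
// A minimal sketch, assuming processes find each other via configured
// addresses. All names and addresses here are hypothetical.
interface Environment {
  processes: Record<string, string>; // service name -> base URI
  datastores: Record<string, string>; // store name -> connection string
}

const prod: Environment = {
  processes: { api: "https://api.example.com" },
  datastores: { db: "postgres://prod-db.internal:5432/app" },
};

const dev: Environment = {
  processes: { api: "https://api.dev.example.com" },
  datastores: { db: "postgres://dev-db.internal:5432/app" },
};

// "Consistently connected" means a process deployed into `dev` is only ever
// handed addresses from `dev`, never from `prod`, and vice versa.
```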

A Case Study

This concrete example should convince the uninitiated that you should never mix dependencies across environments (e.g., Dev instances should only ever call other Dev instances, never Prod instances). If this is already intuitive to you, you can skip this section and go straight to the principles of consistent connectedness.

An architecture diagram representing a simple to-do system: an HTTP backend (HttpBackend) calls TodoApi and PeopleApi, which also call each other. TodoApi is connected to a database called TodoStore, while PeopleApi is connected to a people graph service called PeopleStore. There is also a ReminderService that reads TodoStore and sends push notifications through HttpBackend. The user communicates only with HttpBackend via a browser.

Imagine the architecture above represents a system you maintain. Each box represents either a service or a datastore. Right now, you have one instance of each of these components, and they're connected as you see above. Let's say these components are connected by directly addressing each other (e.g., by calling specific URIs for each service). Real people are about to use this system, but you want to keep deploying newer versions of it.
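To make "directly addressing each other" concrete, TodoApi might hold a fixed base URI for PeopleApi and call it over HTTP. This is only a hedged sketch; the address and endpoint path below are made up:

```typescript
// Hypothetical sketch of direct addressing: TodoApi is told exactly which
// PeopleApi instance to talk to via a baked-in URI (a placeholder here).
const PEOPLE_API_URI = "https://people-api.example.com";

async function lookupOwnerName(ownerId: string): Promise<string> {
  // TodoApi reaches out to one specific PeopleApi instance by its address.
  const res = await fetch(`${PEOPLE_API_URI}/people/${ownerId}`);
  if (!res.ok) {
    throw new Error(`PeopleApi returned ${res.status}`);
  }
  const person = (await res.json()) as { displayName: string };
  return person.displayName;
}
```

The point is that every deployed instance has to be told explicitly which peer instances it talks to, and that choice is what the rest of this case study is about.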

You might decide to create a set of Dev instances to help you out.

You might wonder: How many instances do I need to set up and configure to have a viable Dev environment?

What counts as viable will certainly depend on which services you care about testing.

Let's assume you want to test TodoApi. This service:

  • Calls PeopleApi
  • Reads and writes to TodoStore
  • Is called by PeopleApi
  • Is called by HttpBackend

Generally, if you can exercise the service you're interested in directly, you might not care about its callers.

Next, you have to decide (see the configuration sketch after this list):

  • which PeopleApi should this instance call, and
  • which TodoStore should this instance read and write to?
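Put differently, those two questions are exactly the Dev instance's configuration surface. A minimal sketch, again with hypothetical names:

```typescript
// Hypothetical configuration surface for a TodoApi instance. Choosing values
// for a Dev instance is precisely the decision described above.
interface TodoApiConfig {
  peopleApiUri: string; // which PeopleApi should this instance call?
  todoStoreUri: string; // which TodoStore should this instance read and write?
}
```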

Remember that PeopleApi will call TodoApi back. It's very tough (and almost always wrong) to try to get away with calling the Production instance of PeopleApi from your Dev instance of TodoApi; the production instance you call might mutate the production state, or it might call back the production instance of TodoApi instead of you. You might convince yourself it's harmless, but more often than not, you'll be met with subtle glitchy behavior at best and serious bugs or user data leaks at worst.

Instead, you'll want an entirely separate Dev instance of PeopleApi in its own right. As you configure this Dev instance of PeopleApi, you will have only one correct choice for which TodoApi to call: the Dev instance we just created.

We will also likely want a separate TodoStore database to be available to the Dev instance of TodoApi, with totally separate tasks, etc. This allows us to make sure none of our read/write testing has the potential to affect production users.
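Putting it together, the only consistent choice is for every Dev component to point at other Dev components, mirroring how the Prod components point only at each other. Again, a hedged sketch with invented addresses:

```typescript
// Hypothetical wiring for the two environments: every address inside `dev`
// points at another Dev instance, just as `prod` points only at Prod instances.
const prod = {
  todoApi: {
    peopleApiUri: "https://people-api.example.com",
    todoStoreUri: "postgres://todo-store.internal/todos",
  },
  peopleApi: {
    todoApiUri: "https://todo-api.example.com",
    peopleStoreUri: "grpc://people-store.internal",
  },
};

const dev = {
  todoApi: {
    peopleApiUri: "https://people-api.dev.example.com",
    todoStoreUri: "postgres://todo-store.dev.internal/todos",
  },
  peopleApi: {
    todoApiUri: "https://todo-api.dev.example.com",
    peopleStoreUri: "grpc://people-store.dev.internal",
  },
};

// Cross-wiring, e.g. pointing dev.todoApi.peopleApiUri at the Prod PeopleApi,
// is exactly the kind of environment mixing this case study warns against.
```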

This line of reasoning applies recursively and can help you arrive at some general principles.

Principles of consistent connectedness

Read more about the principles of consistent connectedness in the full post.


  1. I include the rest of the ecosystem around this for completeness, but often it's sufficient to think of a Deployment Environment simply as a consistently connected set of processes & datastores. 

  2. Explicit processes that exist beyond the bounds of any environment may purposely interact with multiple environments. For example, it might be desirable to sync or seed some test data between environments, etc. 
