Ben Crinion

Make onboarding simple using VS Code Remote Containers

Note: this article was written before the Docker Desktop licensing change, but I still think this is a valuable technique. I believe a Docker Desktop license will still be good value for money compared with the time it takes to set up a dev environment.

Over the last few weeks our team has grown rapidly. Each time a new engineer joins the team (or an existing engineer gets a new machine) we dig out the laptop onboarding guide and spend a chunk of time installing the right frameworks and tools to get our teammate up and running. This can be fairly painful: the onboarding doc isn't always up to date, links die and toolchains evolve. To add to this, we have a mix of Apple, Windows and Linux users, which means we might be trying to support someone on a platform we're not familiar with.

Another issue is that our squad is responsible for multiple services, and these have slightly different dependencies: different versions of Node.js, Python, the Serverless Framework or the CDK, different test runners, and so on. Add consultancy into the mix and we might have people working on several services at multiple clients, and managing the dependency mix gets difficult.

Wouldn't it be useful if we had some lightweight, isolated operating systems? Something we could run on any machine and configure separately, without them impacting each other?

Luckily for us, Docker exists and can do exactly this. Even better, Microsoft have created the Visual Studio Code Remote - Containers extension, which lets you use a Docker container as a full-featured development environment within VS Code.

This is how we solved some of the problems we came up against using dev containers and the Serverless Framework.

Not using dev containers

The first problem we have is that not everyone on our team wants to use VS Code. Because of this, everything we change to enable dev containers also needs to work natively and with our CI/CD pipeline. This basically boils down to replacing localhost with the container hostname, which is available by default inside a Docker container.

const hostname = process.env.HOSTNAME || 'localhost'
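We can then use that value anywhere we previously hard-coded localhost, for example to build a base URL for tests or local clients. A tiny sketch using the variable above (the port is just illustrative):

// e.g. point integration tests at a service running in the dev container
const baseUrl = `http://${hostname}:3000`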

Using Docker

We use LocalStack for integration testing so we need to be able to run containers from within our dev container.

It's possible to install a container engine within a container and create "child" containers but it's complex and there's a simpler solution.

We can use Docker on the host machine to create "sibling" containers by installing the Docker CLI in the dev container and mounting /var/run/docker.sock. The devcontainer.json settings file has a mounts property, which gives us some control over the dev container's file system.

  "mounts": [
    "source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind",
  ],

Docker Sock Permissions

If you're using a non-root user inside your dev container (and you probably should be) then you need to give that user permission to use docker.sock.

You could run the chown below with sudo and it would persist until you rebuild the container, or it can be automated using the postCreateCommand property in the devcontainer.json file, which means no one has to remember to do it.

  "postCreateCommand": "sudo chown vscode:vscode /var/run/docker.sock",

Using AWS and Git

We need to use the AWS CLI and GitHub. We could duplicate the credentials and keys in our dev container's file system, but they would not persist if we had to rebuild the container and they aren't reusable between different projects.

We can share the host's SSH keys and AWS credentials by mounting the relevant host directories into the container (again using the mounts property in devcontainer.json).

  "mounts": [
    ...
    "source=${localEnv:HOME}${localEnv:USERPROFILE}/.aws,target=/home/vscode/.aws,type=bind",
    "source=${localEnv:HOME}${localEnv:USERPROFILE}/.ssh,target=/home/vscode/.ssh,type=bind"
  ],

Filesystem Performance Issues

We're using the serverless-webpack plugin, but we were getting errors during packaging.

Serverless: Packing external modules: .....

 Error ---------------------------------------------------

  Error: npm install failed with code 1
      at ChildProcess.<anonymous> (/workspace/node_modules/serverless-webpack/lib/utils.js:91:16)
      at ChildProcess.emit (events.js:314:20)
      at ChildProcess.EventEmitter.emit (domain.js:483:12)
      at maybeClose (internal/child_process.js:1022:16)
      at Process.ChildProcess._handle.onexit (internal/child_process.js:287:5)

The error message doesn't give any pointers to what's going wrong, but there were some clues when we tried to clean up the .webpack folder. Running ls from inside the container showed it to be empty, but we couldn't delete it because it wasn't empty on the host.

This is because the default source code mount uses the cached consistency model. The cached consistency model is more appropriate for files which the host modifies. There's a good description of the different modes in this StackOverflow answer.

Our solution was to use volumes for the .webpack and node_modules folders, since "volumes are the preferred mechanism for persisting data generated by and used by Docker containers". The mounts property to the rescue again.

  "mounts": [
    ...
    "source=node_modules,target=${containerWorkspaceFolder}/node_modules,type=volume",
    "source=webpack,target=${containerWorkspaceFolder}/.webpack,type=volume",
  ],

These folders will be owned by root, so we'll use postCreateCommand again to change their ownership back to the vscode user.

  "postCreateCommand": "sudo chown vscode:vscode node_modules && sudo chown vscode:vscode .webpack",

Finally, we need to modify the webpack config slightly. It's not possible for the container to delete the mounted volume folder itself, so we've set the webpack output path to a subfolder in webpack.config.js.

  ...
  output: {
    libraryTarget: 'commonjs',
    path: path.join(__dirname, '.webpack/build'),
    filename: '[name].js',
  },
  ...

Another option would be to use a delegated mount, which is more appropriate when the container's view of the filesystem is authoritative, or to clone the whole repo into a container volume.
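If you go down the delegated route, the default workspace mount can be overridden in devcontainer.json; something along these lines (the target path is illustrative):

  "workspaceMount": "source=${localWorkspaceFolder},target=/workspace,type=bind,consistency=delegated",
  "workspaceFolder": "/workspace",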

Docker Networking

As I mentioned earlier, we're using LocalStack for integration testing and we have a bash script which uses docker-compose to manage that container. Docker Compose creates a network for the workload; this allows all the containers in the workload to communicate easily, but isolates them from other workloads and individual containers. This meant that Serverless Offline and the tests running in the dev container couldn't access the database running in LocalStack.

Docker containers can be attached to more than one network at a time, so we've solved this by creating a dedicated network and attaching the dev container and the LocalStack container to it. There are another couple of properties in the settings file which can help us with this. We can ensure the network exists before we start the dev container using the initializeCommand property, and use runArgs to provide additional arguments to the dev container (we append || true to the initializeCommand so the command succeeds if the network already exists).

  "initializeCommand": "docker network create payment_network || true",
  "runArgs": ["--network=payment_network"],

This is only half the job. We also need to attach the LocalStack container to the network, and we still can't use localhost for addressing. This is another area where we've had to consider the CI/CD pipeline and users who don't want to use VS Code.

In our test setup shell script we inspect an environment variable which will only be present in our dev container and, when it's set, combine settings from more than one YAML file using the -f parameter. We can set environment variables in the dev container using the containerEnv property in devcontainer.json.

if [ -z "$LOCALSTACK_HOST" ]
then
    docker-compose -f docker-compose.yml up -d localstack
else
    docker-compose -f docker-compose.yml -f docker-compose.devcontainer.yml up -d localstack
fi

# docker-compose.yml
version: '3.5'
services:
  localstack:
    image: localstack/localstack:0.12.15
    environment:
      - DEFAULT_REGION=eu-west-1
      - DEBUG=true
      - LAMBDA_EXECUTOR=docker
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock'
    ports:
      - '4567:4566'

# docker-compose.devcontainer.yml
version: '3.5'
services:
  localstack:
    container_name: paymentslocalstack
    environment:
      - HOSTNAME_EXTERNAL=paymentslocalstack
networks:
  default:
    external:
      name: payment_network

  "containerEnv": { "LOCALSTACK_HOST": "paymentslocalstack", "LOCALSTACK_PORT": "4566" },

Specifying the container_name in the devcontainer compose file means we've got a consistent hostname we can use to address the LocalStack container, and we expose that inside the dev container using an environment variable. Another thing to remember about container networking is that containers on the same network don't need to use the mapped external port; that's only required for host-to-container communication. We've also added the internal port as an environment variable so we can use it in our tests.
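In the tests that looks roughly like the sketch below; the SQS client is purely an illustration, and we assume the AWS SDK for JavaScript v2:

// Inside the dev container the env vars point at the sibling LocalStack container;
// everywhere else we fall back to localhost and the host-mapped port.
const host = process.env.LOCALSTACK_HOST || 'localhost'
const port = process.env.LOCALSTACK_PORT || '4567'
const localstackEndpoint = `http://${host}:${port}`

const AWS = require('aws-sdk')
const sqs = new AWS.SQS({ endpoint: localstackEndpoint, region: 'eu-west-1' })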

The final issue we had with networking was LocalStack specific. Many AWS services publish metadata which includes the hostname, e.g. SQS queue URLs, and this metadata is fundamental to how they operate. We need to tell LocalStack the new hostname by setting the HOSTNAME_EXTERNAL environment variable in that container, which you can see in the second docker-compose YAML file.

Summary

Now we've got a repeatable way to onboard new team members, and no one should ever install the wrong version of Python again.

Instead of taking hours or even days to get their system set up, possibly guided by someone else on the squad, new team members can get themselves up and running in minutes.

Hopefully some of these fixes will be useful for you when you set up a dev container for your project.
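For reference, pulling the snippets above into a single devcontainer.json gives something along these lines; the base image and user are illustrative (and the image would also need the Docker CLI installed), but the rest comes straight from the sections above:

{
  // Illustrative base image; the Docker CLI also needs to be installed (e.g. via a Dockerfile).
  "image": "mcr.microsoft.com/vscode/devcontainers/javascript-node",
  "remoteUser": "vscode",
  "initializeCommand": "docker network create payment_network || true",
  "runArgs": ["--network=payment_network"],
  "mounts": [
    "source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind",
    "source=${localEnv:HOME}${localEnv:USERPROFILE}/.aws,target=/home/vscode/.aws,type=bind",
    "source=${localEnv:HOME}${localEnv:USERPROFILE}/.ssh,target=/home/vscode/.ssh,type=bind",
    "source=node_modules,target=${containerWorkspaceFolder}/node_modules,type=volume",
    "source=webpack,target=${containerWorkspaceFolder}/.webpack,type=volume"
  ],
  // devcontainer.json has a single postCreateCommand, so the chowns shown earlier are combined here.
  "postCreateCommand": "sudo chown vscode:vscode /var/run/docker.sock node_modules .webpack",
  "containerEnv": { "LOCALSTACK_HOST": "paymentslocalstack", "LOCALSTACK_PORT": "4566" }
}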

The next step for us is to investigate how we can use this with GitHub Codespaces.
