Stefan Bauckmeier

unburden your machine by moving local Docker containers into the cloud

Our local machines are limited: we have only a limited amount of memory and CPU. At the same time we run some heavier apps on them, like several instances of Chrome, an editor to write code, a mail client, a calendar, and so on. On top of that we need to run some Docker containers holding data for the project we are currently working on. Having everything on your local machine can become a burden when the machine starts swapping memory onto the hard disk and the CPU fans go crazy.

We can avoid this by moving some Docker containers to other machines while they still feel local.

The basic example used here is a docker-compose file with an Elasticsearch and a MariaDB container.

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.0
    ports:
      - '9200:9200'
      - '9300:9300'
  mariadb:
    image: mariadb:10.1
    ports:
      - '3306:3306'

We want to move the Elasticsearch container to an external machine while keeping the MariaDB container local. Our app still runs locally and still connects to 127.0.0.1 for both Elasticsearch and MariaDB.
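Since the remote Elasticsearch will be reachable through an SSH tunnel on 127.0.0.1 (set up below), only the way the containers are started changes; the app configuration can stay as it is. Locally we then only start MariaDB, for example:

docker-compose up -d mariadb

The app keeps pointing at 127.0.0.1:3306 for MariaDB and 127.0.0.1:9200 for Elasticsearch.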

set up an Ubuntu server for Docker

(all commands in this section to be applied on the remote machine)

  • you can use any machine running Ubuntu; I used a cloud server at Hetzner. Any other cloud provider offering Ubuntu instances, like Amazon EC2 or DigitalOcean, will do the job as well.

  • first we log into the server and update it

    • apt update
    • apt upgrade
    • reboot
  • the next step is installing Docker on the server; we simply follow the official Docker CE documentation

  • we want to prevent outside users from accessing our running Docker containers:

    • we install the ufw firewall:
    • apt install ufw
    • then we do some basic configuration:
    • allow SSH traffic to the instance:
      • ufw allow 22
    • deny all other incoming connections:
      • ufw default deny incoming
    • but enable traffic from the machine to the outside world:
      • ufw default allow outgoing
    • enable the firewall:
      • ufw enable
    • by default Docker manages iptables rules itself, which would bypass our ufw rules, so we disable that:
      • echo '{ "iptables" : false }' > /etc/docker/daemon.json
      • service docker restart
  • also there is an SSH limit on sessions per connection that we should raise when we want to run multiple containers remotely (the whole setup is summarized as a sketch after this list):

    • edit /etc/ssh/sshd_config and put this line in there:
    • MaxSessions 100
    • afterwards restart the SSH daemon: service ssh restart
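For reference, here is the whole server setup from the list above as one shell sketch. The Docker installation line uses the get.docker.com convenience script as a stand-in for following the official documentation; adapt it to your preferred install method.

apt update && apt upgrade -y        # reboot afterwards before continuing

# install Docker CE (stand-in for the official installation steps)
curl -fsSL https://get.docker.com | sh

# firewall: allow SSH in, deny everything else, allow all outgoing
apt install -y ufw
ufw allow 22
ufw default deny incoming
ufw default allow outgoing
ufw enable

# keep Docker from managing iptables itself
echo '{ "iptables": false }' > /etc/docker/daemon.json
service docker restart

# raise the SSH session limit and restart sshd
echo 'MaxSessions 100' >> /etc/ssh/sshd_config
service ssh restart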

launch and tunnel

(this happens on the local machine)

Now we are ready to launch the elasticsearch container remotely with docker-compose:

docker-compose -H 'ssh://root@[remote IP]' up elasticsearch

This builds and launches the container on the remote server. If the service is configured to copy files into the container, that will happen there as well.
You can use basically all docker-compose commands this way: with the -H option they are executed against the remote server instead of your local Docker daemon.
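For example, the same -H option works for inspecting and tearing things down again:

docker-compose -H 'ssh://root@[remote IP]' logs -f elasticsearch
docker-compose -H 'ssh://root@[remote IP]' ps
docker-compose -H 'ssh://root@[remote IP]' down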

The only missing part is getting access to the container on the remote machine. The easiest solution is an SSH tunnel. Elasticsearch is configured to expose two ports, 9200 and 9300, so we forward both of them:

ssh -L 9200:127.0.0.1:9200 -L 9300:127.0.0.1:9300 root@[remote IP]

Now you can connect to the remote Elasticsearch instance as if it were running on your own machine.
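If you don't want to type the port forwardings every time, the tunnel can also be defined in ~/.ssh/config (the host alias docker-remote is just an example):

Host docker-remote
    HostName [remote IP]
    User root
    LocalForward 9200 127.0.0.1:9200
    LocalForward 9300 127.0.0.1:9300

Then ssh docker-remote opens the tunnel, and a quick curl http://127.0.0.1:9200 should answer with the usual Elasticsearch JSON greeting.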

things to note

  • you will have some lag when talking to the remote machine, so consider keeping services that are used a lot (e.g. for database connections) on your local machine. Another option is to use a machine in your local network instead of a cloud server.
  • sharing a remote server with other people is not advised, since the Docker daemon on that machine runs with root privileges and the other people could do not-so-nice things

Top comments (2)

Tim Bachmann

Thanks for that article, really interesting. I have just one question: if I use docker-compose with custom images (a build: tag), will this still work? If yes, will the image be built on the local machine and then somehow uploaded to the server, or do I also have to have the source code on the server?

Stefan Bauckmeier

I haven't tried this so far, but I made a quick test. The image is built remotely. If you have e.g. a COPY command in your Dockerfile, Docker will copy those files from your local environment to the remote server, so you don't have to upload/sync them manually.
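A minimal sketch of such a setup (the service name myapp, the base image and the file names are made up for illustration); the build context, including the files referenced by COPY, is sent from your local machine to the remote daemon:

# docker-compose.yml (excerpt)
services:
  myapp:
    build: .

# Dockerfile
FROM node:10
WORKDIR /app
COPY . /app
CMD ["node", "index.js"]

# build and run remotely
docker-compose -H 'ssh://root@[remote IP]' up --build myapp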