DEV Community

Hiromi


Connecting Docker to Digital Ocean Droplet using Git Actions

After deploying the Sushi Project to a DigitalOcean Droplet and setting up the environment inside the virtual machine, it was essential to add a CI/CD process to keep the application updated for all the developers involved, especially since they work in different timezones.

The platform chosen for our CI/CD was GitHub Actions. It allows us to automate everything from the build process to the production pipeline using event-driven workflows. In simple terms, changes that I push to my repository (from my local machine) will automatically update my app on my virtual machine (the remote machine).

Building on our previous experience, you should already be familiar with the concepts of a Docker container and a Droplet. In this section, I will explain the concept of a Container Registry and how to connect GitHub Actions to our Droplet.

The setup described above is based on Thao Truong's logic:


source: Thao Truong, 04/12/2021, post.

As you can see in the image, there is a component called a Container Registry. At first, I didn't understand why we should use one; if you feel the same, I recommend watching this video. The definition is:

Container Registry is a repository/collection of repositories used to store and access container images.

In other words, a container registry is a store of images (snapshots of static files on disk) that can be easily copied into a container. Our goal is to use the registry to keep our Docker image updated and rebuilt inside our virtual machine every time we update our repository.
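To make the flow concrete, here is a rough command-line sketch of pushing a local image to a DigitalOcean Container Registry. The registry name `example` and image name `sushi-app` are placeholders, not the project's real values:

```shell
# Authenticate doctl with your DigitalOcean API token (placeholder env var)
doctl auth init --access-token "$DIGITALOCEAN_ACCESS_TOKEN"

# Let Docker log in to the DO registry with short-lived credentials
doctl registry login --expiry-seconds 600

# Tag a locally built image for the registry ("example" and "sushi-app" are placeholders)
docker tag sushi-app:latest registry.digitalocean.com/example/sushi-app:latest

# Push the snapshot so the droplet can pull it later
docker push registry.digitalocean.com/example/sushi-app:latest
```

The GitHub Actions workflow later in this post automates exactly these steps on every push.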

Creating a Container Repository in Digital Ocean

I recommend following these tutorials:

In my experience, these were easy to follow without issues, so I will focus on the GitHub Actions step.

Connecting GitHub Actions to our Droplet:

1) To create an Action workflow, you must have a repository containing your app files and a Dockerfile. Then, in the Actions tab, you have two paths:

a) Choose a pre-built workflow that generates a .yml file, which you can then adapt to your needs.

b) Choose the "Simple Workflow" option, which generates a basic .yml file, allowing you to write your script from scratch.

Both will create the path .github/workflows in your repository.

For this guide, I will choose path "b" and develop our .yml file based on the .yml file from Thao Truong's post.

2) Creating the .yml/.yaml file:

  • Following Thao Truong's structure as-is may fail due to outdated action versions and keys, so here is my updated version:
name: CI

# 1
# Controls when the workflow will run
on:
  # Triggers the workflow on push events but only for the master branch
  push:
    branches: **NAME OF YOUR MAIN BRANCH (usually master or main)**

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
    inputs:
      version:
        description: 'Image version'
        required: true
#2
env:
  REGISTRY: ${{ vars.REGISTRY }}
  IMAGE_NAME: ${{ vars.IMAGE_NAME }}

#3
jobs:
  build_and_push:
    runs-on: ubuntu-latest
    outputs:
      sha: ${{ steps.short-sha.outputs.sha }}
    steps:
      - name: Checkout the repo 
        uses: actions/checkout@v4

      #We will create this step to create automatic tags for our images
      - uses: benjlevesque/short-sha@v3.0
        id: short-sha
        with:
          length: 6

      - run: echo $SHA
        env:
          SHA: ${{ steps.short-sha.outputs.sha }}

      - name: Build container image
        run: docker build -t ${{ vars.REGISTRY }}/${{ vars.IMAGE_NAME }}:${{ steps.short-sha.outputs.sha }} **{path of your Dockerfile inside your container}**

      - name: Install doctl
        uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}

      - name: Log in to DigitalOcean Container Registry with short-lived credentials
        run: doctl registry login --expiry-seconds 600

      - name: Remove all old images
        run: |
            doctl registry repository delete-tag ${{ vars.IMAGE_NAME }} ${{ steps.short-sha.outputs.sha }} --force ||
            echo "Tag doesn't exist yet"

      - name: Push image to DigitalOcean Container Registry
        run: docker push ${{ vars.REGISTRY }}/${{ vars.IMAGE_NAME }}:${{ steps.short-sha.outputs.sha }}

  deploy:
    runs-on: ubuntu-latest
    needs: build_and_push

    steps:
      #recommended step to test your SSH connection within the VM
      - name: Test SSH connection
        uses: appleboy/ssh-action@v0.1.6
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.SSHKEY }}
          port: 22
          script: |
              whoami
              hostname

      - name: Deploy to Digital Ocean droplet via SSH action
        uses: appleboy/ssh-action@v0.1.6
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.SSHKEY }}
          port: 22
          script: |
            # Login to registry
            #docker login -u ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }} -p ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }} registry.digitalocean.com
            echo "${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}" | docker login registry.digitalocean.com -u ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }} --password-stdin
            # Stop running container
            docker stop ${{ vars.IMAGE_NAME }} || true
            # Remove old container
            docker rm ${{ vars.IMAGE_NAME }} || true
            # Pull the new image
            docker pull ${{ vars.REGISTRY }}/${{ vars.IMAGE_NAME }}:${{ needs.build_and_push.outputs.sha }}
            # Run a new container from a new image
            docker run -d \
            --restart always \
            --name ${{ vars.IMAGE_NAME }} \
            -p 8080:80 \
            ${{ vars.REGISTRY }}/${{ vars.IMAGE_NAME }}:${{ needs.build_and_push.outputs.sha }}

I strongly recommend AVOIDING copying and pasting this code directly into your project. There are elements you will need to adapt, such as the branch name and the Dockerfile path in the build step.

Before committing and pushing your project, you must add your project variables to GitHub:

2.a) Adding the variables:

  • Go to your repository -> Settings -> Secrets and Variables -> Actions

After that, add the corresponding information:

Secrets tab:

  • DIGITALOCEAN_ACCESS_TOKEN: The token generated during the Container Registry setup.
  • HOST: The IPv4 address of your virtual machine.
  • USERNAME: Username used to access your virtual machine (this is likely root if you are using DigitalOcean).
  • SSHKEY: This part was tricky because my machine did not allow an RSA key. Therefore, based on Appleboy/ssh-action, I generated an ED25519 Key. Using this version, I didn't need to adjust any permissions, though this can often be a blocker depending on your specific case. Once the keys were created, I copied the Public Key (the one with .pub) into authorized_keys inside my VM, and I copied the Private Key into the SSHKEY variable in GitHub.
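The key setup described above can be sketched as follows. This is a minimal example; the file name `gha_ed25519` and the key comment are my own placeholders, and `-N ""` sets an empty passphrase (appleboy/ssh-action does support passphrases if you prefer one):

```shell
# Generate an ED25519 keypair dedicated to GitHub Actions
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keygen -t ed25519 -C "github-actions" -f ~/.ssh/gha_ed25519 -N ""

# On the droplet: append the PUBLIC key (the .pub file) to authorized_keys
cat ~/.ssh/gha_ed25519.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# The PRIVATE key (~/.ssh/gha_ed25519, no .pub extension) is what goes
# into the SSHKEY secret on GitHub, pasted in full including the
# "-----BEGIN OPENSSH PRIVATE KEY-----" header and footer lines.
```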

Additionally, based on this StackOverflow answer, I had to update my action version from appleboy/ssh-action@v0.1.3 to appleboy/ssh-action@v0.1.6.

Variables tab:

  • IMAGE_NAME: The name given to the image.
  • REGISTRY: The DigitalOcean Container Registry URL generated by the platform.

After completing these steps, I just needed to commit and push the .yml file and we were done. Since the CI/CD is triggered by pushes to the repository, the workflow automatically starts the pipeline. You can monitor the progress in the Actions tab of your repository.
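If you prefer the terminal over the web UI, the GitHub CLI can watch the same pipeline. This is an optional sketch assuming `gh` is installed and authenticated; the workflow name `CI` matches the `name:` field of the file above:

```shell
# List the most recent runs of the CI workflow
gh run list --workflow=CI --limit 5

# Follow the latest run live; --exit-status makes the command
# fail if the run fails, which is handy in scripts
gh run watch --exit-status
```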

I feel it’s important to share that I tried using AI to debug my code, but I actually lost more time than if I had just searched for the solution online. I truly recommend avoiding AI for debugging in this specific case.

Top comments (2)

Alexey Melezhik

Good explanation of your setup. You may consider some improvements.

Why not reverse the logic: keep your source code on a self-hosted Forgejo instance on your VPS and mirror it to GitHub, then trigger builds on GH and pull the image back to your host when it is ready. Dead simple ci may help with that, check it out - deadsimpleci.sparrowhub.io/doc/README

On the self-hosted Forgejo side, the dsci pipeline just needs to run this code in a loop until it succeeds (prototype version):

sha=$(config DSCI_COMMIT)
while true
do
  gh api repos/{owner}/{repo}/actions/artifacts \
    --jq ".artifacts[] | select(.workflow_run.head_sha == \"$sha\")" \
    && echo "do something with artifact" \
    && break
  sleep 5 # wait before trying to load the artifact from GitHub again
done

Pros:

  • Your VPS instance does not expose ssh/scp publicly

  • You still use free GH cycles to build heavy things (the original intention)

  • Your internal stuff and deployment logic are kept private; you don't need to add any keys or secrets to your GH account, since you just pull artifacts from the public GH API and deploy them in localhost mode

Hiromi

Hi Alexey,

Thank you for the insight! I didn't know about this option, I will explore and see if it is valid for my project :)