Gayan Hewa


Pushing your Go app to production

I recently built Pulse Loyalty, a Loyalty API for developers. The idea is to help app developers (mobile, extensions, etc.) quickly integrate some form of a loyalty program for their users.

I decided to build the app using Go, with React for the frontend. I made a couple of posts on the decisions I made and the tools I chose for development of the product. A few days back I pushed the app to production, taking it into a semi-live preview mode for app developers to test it out.

I decided to deploy the app on Vultr (referral link with free credits), purely because it gave me free credits on sign-up and let me see how the product turned out with little investment on my end.

I decided to go with a single-host Docker deployment using docker-compose. I didn't want to make the deployment process overly complicated. The docker-compose file I had spins up MySQL and the app, using the image I pushed to my private Docker Hub registry.

I initially wanted to serve the React frontend from the Go web server, which is pretty straightforward and works. The downside was that I wanted to split the domains, pulseloyalty.com for the static site and api.pulseloyalty.com for the APIs, and I also wanted SSL configured. That seemed like too much work, so I quickly pulled in Caddy Server. Caddy is an interesting product; I first came across it during a PHP conference talk by Kai Hendry, and it was pretty impressive. They have since released v2.

With all of this in hand, my Caddyfile looked like:


api.pulseloyalty.com {
    reverse_proxy * localhost:8011
}

pulseloyalty.com {
    root * /home/kaboom/web/web/build
    try_files {path} /index.html
    file_server
}

With this in hand, a simple caddy start gets the server running and provisions SSL using Let's Encrypt. You need to make sure port 80 is open for provisioning to happen without a problem.
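Caddy handles the certificate provisioning automatically, but it's worth setting a contact email in the Caddyfile's global options block so Let's Encrypt can notify you about certificate issues. The address below is a placeholder:

```
{
    email admin@example.com
}
```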

One of the problems I came across, which I didn't anticipate, was running migrations in production. Since the MySQL server runs on the private network and isn't exposed to the host, I would work around it by spinning up a temporary container with mysql-client, connecting it to the same network, and running the migrations. I was using goose for this. It was not that bad, but I can improve.
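One way to avoid the throwaway mysql-client container is a short-lived compose service that runs goose on the same network. This is a sketch, assuming the goose binary and the SQL migration files are baked into the app image; the service name, credentials, paths, and database name are placeholders:

```yaml
services:
  migrate:
    image: gayanhewa/pulseloyalty
    # One-off job: invoke with `docker-compose run --rm migrate`
    command: goose -dir /app/migrations mysql "user:password@tcp(db__service:3306)/pulse?parseTime=true" up
    depends_on:
      - db__service
    networks:
      - api
```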

pressly / goose

Goose database migration tool - fork of https://bitbucket.org/liamstask/goose

goose

Goose is a database migration tool. Manage your database schema by creating incremental SQL changes or Go functions.


Goals of this fork

github.com/pressly/goose is a fork of bitbucket.org/liamstask/goose with the following changes:

  • No config files
  • Default goose binary can migrate SQL files only
  • Go migrations
    • We don't go build Go migrations functions on-the-fly from within the goose binary
    • Instead, we let you create your own custom goose binary, register your Go migration functions explicitly, and run complex migrations with your own *sql.DB connection
    • Go migration functions let you run your code within an SQL transaction, if you use the *sql.Tx argument
  • The goose pkg is decoupled from the binary
    • goose pkg doesn't register any SQL drivers anymore thus no driver panic() conflict within your codebase!
    • goose pkg doesn't have any vendor dependencies anymore
  • We use timestamped migrations by default but recommend a hybrid approach of using timestamps in…

The docker-compose file I used looked something like this:

version: "3.1"

networks:
  api:

services:
  db__service:
    image: mysql:8.0
    container_name: api-mysql
    restart: unless-stopped
    tty: true
    environment:
      MYSQL_USER: *****
      MYSQL_PASSWORD: *****
      MYSQL_ROOT_PASSWORD: ****
      SERVICE_TAGS: dev
      SERVICE_NAME: api-mysql
    volumes:
      - data-volume:/var/lib/mysql
    networks:
      - api
  adminer:
    image: adminer:latest
    container_name: db_client
    ports:
      - "9090:8080"
    networks:
      - api
  api__service:
    image: gayanhewa/pulseloyalty
    container_name: api-service
    ports:
      - "8080:8080"
    depends_on:
      - db__service
    networks:
      - api
volumes:
  data-volume:

I decided to hook up Adminer so I can spin it up and down on demand and access the DB through a GUI :D

To sum it up, the experience was fine, not out of this world, but that's because I don't have the tooling in place yet. I found this much faster and easier than having to spin up a LAMP stack. I guess the experience would be much the same for any single-host Docker deployment.

A few things that caught me off guard:

  1. docker-compose and managing secrets for production is a pain. I am still digging around; Docker Swarm has native integration with secrets management, so maybe docker-compose was not the best choice for this.

  2. Vultr servers block all ports by default except for 22. There is the ability to manage the firewall using the UI, but for some reason the changes didn't propagate to my server. I had to open port 80 using Uncomplicated Firewall (ufw) on the server itself.
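For the secrets pain in point 1, a common stopgap with plain docker-compose is to pull credentials out of the compose file into an env file that never enters version control. A sketch; the filename is just a convention:

```yaml
services:
  db__service:
    env_file:
      - .env.production   # add to .gitignore; holds MYSQL_PASSWORD etc.
```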

To conclude, this is just my experience deploying Go web apps on a fairly simple setup. For some reason it felt much more difficult than pushing to k8s.
