Vitalii Popov

Roadmap To Self-Hosted

Cara.app received an invoice from Vercel for $96,280. Many startups begin with Vercel and Firebase, then, reluctant to keep paying Google, move to their own servers

Let's talk about the nuances of the technology stack, particularly the choice of language, and estimate the effort needed to migrate to your own servers. We'll use a pet project with Golang, a monitoring infrastructure, and Kubernetes as the example (Github)

Client → Server → Monitoring → K8S

Demo with monitoring infrastructure:


Client

Thanks to Firebase Rules, working with the database from the client can be secure. However, if the rules are misconfigured, any authorized user can read the entire database with a script run from the browser console. The Firebase config is easy to find in the site's source code.

const script = document.createElement('script');
script.type = 'module';
script.textContent = `
  import { initializeApp } from "https://www.gstatic.com/firebasejs/10.3.1/firebase-app.js";
  import { getAuth }       from 'https://www.gstatic.com/firebasejs/10.3.1/firebase-auth.js'
  import { getAnalytics }  from "https://www.gstatic.com/firebasejs/10.3.1/firebase-analytics.js";
  import { getFirestore, collection, getDocs, addDoc }  from 'https://www.gstatic.com/firebasejs/10.3.1/firebase-firestore.js'

// TODO: search for it in source code
  const firebaseConfig = {
    apiKey: "<>",
    authDomain: "<>.firebaseapp.com",
    projectId: "<>",
    storageBucket: "<>.appspot.com",
    messagingSenderId: "<>",
    appId: "<>",
    measurementId: "G-<>"
  };

  const app = initializeApp(firebaseConfig);
  const analytics = getAnalytics(app);
  window.app = app
  window.analytics = analytics
  window.db = getFirestore(app)
  window.collection = collection
  window.getDocs = getDocs
  window.addDoc = addDoc
  window.auth = getAuth(app)

  alert("Houston, we have a problem!")
`;
document.body.appendChild(script);

With your own server, database interactions are never handled on the client: the client relies on the server for authentication and business logic. A good practice is to keep the Firebase operations in a separate file, so each function can be replaced with an API call. The example uses Axios and TanStack Query

Deploy with Docker

We build the Vite app with the build command from package.json and serve the result with Nginx. This is the simplest way to deploy the application remotely

# Build stage
FROM node:21.6.2-alpine as build
WORKDIR /client
COPY package.json yarn.lock ./
RUN yarn config set registry https://registry.yarnpkg.com && \
    yarn install
COPY . .
RUN yarn build

# Serve stage
FROM nginx:alpine
COPY --from=build /client/build /usr/share/nginx/html
COPY --from=build /client/nginx/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 3000
CMD ["nginx", "-g", "daemon off;"]

Server

I chose Golang for practice; let's see where it took me. All popular languages have database clients and request-handling libraries, so the differences between languages will become apparent later

Authentication

As elsewhere, users get the choice of signing up through providers or by email. In the example I used JWT tokens and Google sign-in, for which SDKs have already been written

For login and registration via Google, two handlers are defined (a rough sketch follows the list):

  • /api/v1/google/login: the “Sign in with Google” button points here
  • /api/v1/google/callback: on a successful login Google calls this URL back; here the user is saved in the database and a JWT token is issued for them. The URL is registered in Google Cloud (localhost is fine, local domains are not)
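
A rough, hedged sketch of what these two handlers can look like with Gin and golang.org/x/oauth2; the config values, URLs, and helper names are illustrative, not the project's actual code:

package auth

import (
    "net/http"

    "github.com/gin-gonic/gin"
    "golang.org/x/oauth2"
    "golang.org/x/oauth2/google"
)

// Illustrative OAuth config; in the real project the values come from env variables.
var googleOAuth = &oauth2.Config{
    ClientID:     "<client-id>",
    ClientSecret: "<client-secret>",
    RedirectURL:  "https://api.chemnitz-map.local/api/v1/google/callback",
    Scopes:       []string{"openid", "email", "profile"},
    Endpoint:     google.Endpoint,
}

// /api/v1/google/login: redirect the browser to Google's consent screen.
func GoogleLogin(c *gin.Context) {
    // In production the state parameter must be random and verified in the callback.
    c.Redirect(http.StatusTemporaryRedirect, googleOAuth.AuthCodeURL("state"))
}

// /api/v1/google/callback: exchange the code, upsert the user, issue the JWT.
func GoogleCallback(c *gin.Context) {
    token, err := googleOAuth.Exchange(c.Request.Context(), c.Query("code"))
    if err != nil {
        c.AbortWithStatus(http.StatusUnauthorized)
        return
    }
    _ = token // fetch the Google profile, save the user, set the httpOnly JWT cookie here
    c.Redirect(http.StatusTemporaryRedirect, "https://chemnitz-map.local")
}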

Each user keeps an array of Providers in the database; it shows whether the user signed up via Google, email, or both

As usual with JWT, tokens cannot be revoked. The “log out” button therefore adds the token to a blacklist: Redis is connected for this, and the key's lifetime is set to the remaining lifetime of the token
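
A minimal sketch of that blacklist, assuming go-redis v9; the type and key names are illustrative:

package auth

import (
    "context"
    "time"

    "github.com/redis/go-redis/v9"
)

// Blacklist stores revoked JWT IDs in Redis until the tokens expire on their own.
type Blacklist struct {
    rdb *redis.Client
}

// Revoke is called from the "log out" handler: the key's TTL equals the
// token's remaining lifetime, so the blacklist cleans itself up.
func (b *Blacklist) Revoke(ctx context.Context, tokenID string, expiresAt time.Time) error {
    ttl := time.Until(expiresAt)
    if ttl <= 0 {
        return nil // already expired, nothing to do
    }
    return b.rdb.Set(ctx, "jwt:blacklist:"+tokenID, "1", ttl).Err()
}

// IsRevoked is checked in the auth middleware before trusting a token.
func (b *Blacklist) IsRevoked(ctx context.Context, tokenID string) (bool, error) {
    n, err := b.rdb.Exists(ctx, "jwt:blacklist:"+tokenID).Result()
    return n > 0, err
}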

I store JWT tokens in httpOnly cookies; I chose this path after weighing the alternatives:

  • because of the redirect from Google I cannot return the token in a response header, and React without SSR would not be able to read it
  • I did not want to pass the token in the URL, because then the frontend would have to extract and store it
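
Setting the cookie in the callback handler could then look like this (a sketch with Gin; the cookie name, domain, and lifetime are illustrative):

package auth

import (
    "time"

    "github.com/gin-gonic/gin"
)

// setAuthCookie stores the signed JWT in an httpOnly cookie,
// so browser scripts (including the Firebase console script above) cannot read it.
func setAuthCookie(c *gin.Context, signedJWT string, ttl time.Duration) {
    c.SetCookie(
        "access_token",       // cookie name (illustrative)
        signedJWT,            // the signed token
        int(ttl.Seconds()),   // max-age in seconds
        "/",                  // path
        "chemnitz-map.local", // domain
        true,                 // secure: HTTPS only
        true,                 // httpOnly: invisible to JavaScript
    )
}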

CORS

To make cookies work, I allow Access-Control-Allow-Credentials and put the local domains, localhost, and the infrastructure addresses into Access-Control-Allow-Origin

corsCfg := cors.DefaultConfig()
corsCfg.AllowOrigins = []string{
    cfg.FrontendUrl,
    "http://prometheus:9090",
    "https://chemnitz-map.local",
    "https://api.chemnitz-map.local",
    "http://localhost:8080",
}
corsCfg.AllowCredentials = true
corsCfg.AddExposeHeaders(telemetry.TraceHeader)
corsCfg.AddAllowHeaders(jwt.AuthorizationHeader)
r.Use(cors.New(corsCfg))

Environment Variables

The problem with env files: the variables cannot be stored in the GitHub/GitLab repository. Working alone, you can keep everything locally, but with two people you already have to share the variables and keep them in sync whenever they change

I solved this with a script that pulls the variables from GitLab CI/CD variables and writes .env.production. This ties me to GitLab; ideally a tool like Vault is connected for this purpose
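
A sketch of the same idea in Go, using the GitLab API endpoint for project-level CI/CD variables (/projects/:id/variables); the project ID and token handling here are illustrative:

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "os"
)

// Pull CI/CD variables from the GitLab API and write them to .env.production.
func main() {
    projectID := os.Getenv("GITLAB_PROJECT_ID")
    token := os.Getenv("GITLAB_TOKEN")

    req, _ := http.NewRequest("GET",
        "https://gitlab.com/api/v4/projects/"+projectID+"/variables?per_page=100", nil)
    req.Header.Set("PRIVATE-TOKEN", token)

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    var vars []struct {
        Key   string `json:"key"`
        Value string `json:"value"`
    }
    if err := json.NewDecoder(resp.Body).Decode(&vars); err != nil {
        panic(err)
    }

    f, err := os.Create(".env.production")
    if err != nil {
        panic(err)
    }
    defer f.Close()
    for _, v := range vars {
        fmt.Fprintf(f, "%s=%s\n", v.Key, v.Value)
    }
}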

Tests

You can't hide from them, whether you're on Firebase or your own server. Their job is to give confidence and reduce manual testing

I covered the business logic with unit tests and felt the difference: late in the project I changed a field of the user entity. The change was minor, but the entity appears in the code 27 times: the field is encrypted for storage, the database works with a separate user DBO, and in requests it is serialized to JSON and back. To verify such a change manually, you would have to poke every request a couple of times with different parameters
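
A hedged sketch of such a table-driven unit test; the entity, mapper, and field names below are hypothetical stand-ins, not the project's real code:

package user

import "testing"

// Hypothetical entity and mapper used only for illustration.
type User struct {
    Email     string
    Providers []string
}

func encrypt(s string) string { return "enc:" + s }

// toDBO pretends that the email is encrypted before it reaches MongoDB.
func toDBO(u User) map[string]string {
    return map[string]string{"email": encrypt(u.Email)}
}

func TestUserToDBO(t *testing.T) {
    tests := []struct {
        name string
        in   User
        want string
    }{
        {"plain email", User{Email: "a@b.c"}, "enc:a@b.c"},
        {"empty email", User{}, "enc:"},
    }
    for _, tc := range tests {
        t.Run(tc.name, func(t *testing.T) {
            if got := toDBO(tc.in)["email"]; got != tc.want {
                t.Errorf("got %q, want %q", got, tc.want)
            }
        })
    }
}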

Swagger Query Documentation

Swagger documentation: here you can try out the requests

Swagger in Golang is inconvenient: the annotations are written in code comments, without validation or hints:

// GetUser godoc
//
//  @Summary        Retrieves a user by its ID
//  @Description    Retrieves a user from the MongoDB database by its ID.
//  @Tags           users
//  @Produce        json
//  @Security       BearerAuth
//  @Param          id  path        string                      true    "ID of the user to retrieve"
//  @Success        200 {object}    dto.GetUserResponse         "Successful response"
//  @Failure        401 {object}    dto.UnauthorizedResponse    "Unauthorized"
//  @Failure        404 {object}    lib.ErrorResponse           "User not found"
//  @Failure        500 {object}    lib.ErrorResponse           "Internal server error"
//  @Router         /api/v1/user/{id} [get]
func (s *userService) GetUser(c *gin.Context){...}

Unlike .NET or Java, where Swagger is configured with attributes such as [SwaggerResponse(200, message, type)]

Moreover, Swagger generation in Go does not happen automatically, so the spec has to be regenerated after every API change. The IDE makes life easier here: a call to the Swagger generation script is configured to run before the application is built

#!/usr/bin/env sh
export PATH=$(go env GOPATH)/bin:$PATH

swag fmt && swag init -g ./cmd/main.go -o ./docs

Maintaining Swagger in Golang is more work, but there is no alternative with the same characteristics: request collections such as Postman, Insomnia, or Hoppscotch lose out to Swagger because of the manual labor needed to create the requests

Moreover, from the resulting swagger.json you can generate a TypeScript file with all the requests by choosing the desired generator from the list

Docker

Like the client, the server is built in two stages. For Go, don't forget to specify the target operating system for the build and to run go mod download in its own layer, so dependencies are not downloaded again on every build

# build stage
FROM golang:1.22.3-alpine3.19 AS builder
WORKDIR /app
COPY go.mod .
COPY go.sum .
RUN go mod download

COPY . .
RUN GOOS=linux GOARCH=amd64 go build -ldflags="-s -w" -o main ./cmd/main.go

# run stage
FROM alpine:3.19
WORKDIR /app
COPY --from=builder /app/main .
COPY --from=builder /app/app.yml .
COPY --from=builder /app/resources/datasets/ ./resources/datasets/

EXPOSE 8080
CMD ["/app/main"]

Monitoring

We want to match the experience we had with Firebase, so we need to understand what is happening with our data and requests. For this we set up third-party infrastructure

Prometheus & Grafana Metrics

Metrics show the server load. For Go there is the penglongli/gin-metrics library, which collects request metrics. From these metrics you can immediately draw graphs using the ready-made Grafana config in the repository
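
Registering the middleware takes a few lines; a sketch, assuming the library's standard setup (the metric path, slow-request threshold, and buckets are illustrative):

package main

import (
    "github.com/gin-gonic/gin"
    "github.com/penglongli/gin-metrics/ginmetrics"
)

func main() {
    r := gin.Default()

    // The middleware records request counts and latencies and exposes them
    // for Prometheus to scrape at /metrics.
    m := ginmetrics.GetMonitor()
    m.SetMetricPath("/metrics")
    m.SetSlowTime(10)                              // requests slower than 10s count as slow
    m.SetDuration([]float64{0.1, 0.3, 1.2, 5, 10}) // latency histogram buckets, seconds
    m.Use(r)

    r.Run(":8080")
}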

Metrics architecture

Grafana

Logs in Loki

Best practice is to collect logs directly from the Docker containers rather than through an HTTP logger, but I didn't do that in this example

Either way, we write logs in a structured JSON format so that Loki can chew through them and provide filtering tools. A structured logger is needed for this; I used Zap
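
A minimal sketch of such a logger with Zap: NewProduction writes JSON to stdout, which the Loki pipeline can then parse (the fields are examples):

package main

import "go.uber.org/zap"

func main() {
    // zap.NewProduction emits structured JSON suitable for Loki to index.
    logger, _ := zap.NewProduction()
    defer logger.Sync()

    logger.Info("user created",
        zap.String("user_id", "42"),
        zap.String("provider", "google"),
    )
    // Output is roughly:
    // {"level":"info","ts":...,"msg":"user created","user_id":"42","provider":"google"}
}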

Logs architecture

Loki

OpenTelemetry and tracing via Jaeger

An x-trace-id header is attached to each request; with it you can see the entire path of the request through the system. This is especially relevant for microservices
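
A sketch of how such a header can be attached with the OpenTelemetry Go SDK and Gin; the constant mirrors the telemetry.TraceHeader referenced in the CORS config above, the rest is illustrative:

package telemetry

import (
    "github.com/gin-gonic/gin"
    "go.opentelemetry.io/otel/trace"
)

const TraceHeader = "x-trace-id"

// TraceIDMiddleware copies the current span's trace ID into a response header,
// so a request can be looked up in Jaeger by the value the client saw.
func TraceIDMiddleware() gin.HandlerFunc {
    return func(c *gin.Context) {
        span := trace.SpanFromContext(c.Request.Context())
        if span.SpanContext().HasTraceID() {
            c.Header(TraceHeader, span.SpanContext().TraceID().String())
        }
        c.Next()
    }
}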

Trace Architecture

One query path in Jaeger

Here the choice of programming language plays an important role: the popular enterprise languages Java and C# support the OpenTelemetry standard well. Golang is younger and its log collection support is still in beta, so tracing is less convenient and it is harder to see the context of a request in the system

Pyroscope

You can run load or stress tests, or you can connect Pyroscope and watch CPU load, memory, and threads in real time. Although, of course, Pyroscope itself eats up a few percent of performance
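
Connecting the agent is only a few lines; a sketch, assuming the pyroscope-io Go client and the Pyroscope server from the compose file (application name and address are illustrative):

package main

import "github.com/pyroscope-io/client/pyroscope"

func main() {
    // Continuous profiling: the agent pushes CPU and memory profiles to the
    // Pyroscope server started in docker-compose (port 4040).
    _, err := pyroscope.Start(pyroscope.Config{
        ApplicationName: "chemnitz-map-server",
        ServerAddress:   "http://pyroscope:4040",
    })
    if err != nil {
        panic(err)
    }
    // ... start the HTTP server as usual
}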

Pyroscope and application memory allocation

In the context of optimization we choose a programming language for its potential, because comparing the raw speed of Go, Rust, Java, C#, and JS makes little sense without it. But optimization takes many man-hours, and for a business it may be more relevant to look at out-of-the-box performance, the availability of specialists, and how actively the language is developed

Sentry

Server errors lead to losses, so there is Sentry: it collects the path of an error from the frontend to the backend, letting you see the user's clicks and the full context of what happened
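
A minimal sketch of the backend side with the official Go SDK; the DSN and sample rate are placeholders:

package main

import (
    "errors"
    "time"

    "github.com/getsentry/sentry-go"
)

func main() {
    err := sentry.Init(sentry.ClientOptions{
        Dsn:              "<your-dsn>", // project DSN from the Sentry UI
        TracesSampleRate: 0.2,          // share of requests to trace
    })
    if err != nil {
        panic(err)
    }
    defer sentry.Flush(2 * time.Second)

    // Anywhere in the handlers: captured errors show up in Sentry alongside
    // the frontend breadcrumbs for the same user session.
    sentry.CaptureException(errors.New("example error"))
}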

Sentry with errors

Deploying monitoring via Docker Compose

This is the easiest way to bring everything together. Don't forget to configure healthchecks, volumes, and security for all the connected services

services:
  # ----------------------------------- APPS
  chemnitz-map-server:
    build: .
    develop:
      watch:
        - action: rebuild
          path: .
    env_file:
      - .env.production
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "<http://localhost:80/api/v1/healthcheck>"]
      interval: 15s
      timeout: 3s
      start_period: 1s
      retries: 3
    ports:
      - "8080:8080"
    networks:
      - dwt_network
    depends_on:
      mongo:
        condition: service_healthy
      loki:
        condition: service_started

  # ----------------------------------- DATABASES
  mongo:
    image: mongo
    healthcheck:
      test: mongosh --eval 'db.runCommand("ping").ok' --quiet
      interval: 15s
      retries: 3
      start_period: 15s
    ports:
      - 27017:27017
    volumes:
      - mongodb-data:/data/db
      - ./resources/mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js
    networks:
      - dwt_network
    env_file: .env.production
    command: ["--auth"]

  # ----------------------------------- INFRA
  # [MONITORING] Prometheus
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./resources/prometheus.yml:/etc/prometheus/prometheus.yml
    networks:
      - dwt_network

  # [MONITORING] Grafana
  grafana:
    image: grafana/grafana
    ports:
      - "3030:3000"
    networks:
      - dwt_network
    env_file: .env.production
    environment:
      - GF_FEATURE_TOGGLES_ENABLE=flameGraph
    volumes:
      - ./resources/grafana.yml:/etc/grafana/provisioning/datasources/datasources.yaml
      - ./resources/grafana-provisioning:/etc/grafana/provisioning
      - grafana:/var/lib/grafana
      - ./resources/grafana-dashboards:/var/lib/grafana/dashboards

  # [profiling] - Pyroscope
  pyroscope:
    image: pyroscope/pyroscope:latest
    deploy:
      restart_policy:
        condition: on-failure
    ports:
      - "4040:4040"
    networks:
      - dwt_network
    environment:
      - PYROSCOPE_STORAGE_PATH=/var/lib/pyroscope
    command:
      - "server"

  # [TRACING] Jaeger
  jaeger:
    image: jaegertracing/all-in-one:latest
    networks:
      - dwt_network
    env_file: .env.production
    ports:
      - "16686:16686"
      - "14269:14269"
      - "${JAEGER_PORT:-14268}:14268"

  # [LOGGING] loki
  loki:
    image: grafana/loki:latest
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/local-config.yaml
    volumes:
      - ./resources/loki-config.yaml:/etc/loki/local-config.yaml
    networks:
      - dwt_network

# ----------------------------------- OTHER
networks:
  dwt_network:
    driver: bridge

# Persistent data stores
volumes:
  mongodb-data:
  chemnitz-map-server:
  grafana:

It will work! But only on a single machine


Deployment with K8S

If one machine can handle your load, I assume you won't go far beyond the Firebase free plan either, and there will be little economic incentive to pay for migrating the entire system and scaling it yourself

If we take an average load of 100 requests per second, which a $40 server handles easily, Firebase will charge about $100 per month for functions alone, plus fees for the database and storage, plus Vercel hosting; on the other hand, it scales out of the box

To scale on your own servers, Docker Compose is no longer enough, and the whole monitoring infrastructure complicates the move to several machines. This is where we bring in k8s

Fortunately, k8s doesn't care what the server is written in; it takes images from a registry and works with them. Usually you create your own private registry, but I used the public Docker Hub

For every service we write our own Deployment and Service manifests, wire in the configuration and secrets, give the database storage with a PersistentVolume and PersistentVolumeClaim, and describe routing in an Ingress. For development we add local domains to /etc/hosts (code below) and create our own certificates for the browser; for production we connect Let's Encrypt certificates for the real domain, and voilà! A trimmed manifest sketch follows the hosts snippet

127.0.0.1 grafana.local pyroscope.local jaeger.local prometheus.local loki.local mongo.local chemnitz-map.local api.chemnitz-map.local
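
For orientation, a trimmed sketch of one Deployment/Service pair; the names, image, and secret mirror the compose file above but are illustrative rather than the repository's actual manifests:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: chemnitz-map-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: chemnitz-map-server
  template:
    metadata:
      labels:
        app: chemnitz-map-server
    spec:
      containers:
        - name: server
          image: docker.io/<user>/chemnitz-map-server:latest
          ports:
            - containerPort: 8080
          envFrom:
            - secretRef:
                name: server-env   # the former .env.production
---
apiVersion: v1
kind: Service
metadata:
  name: chemnitz-map-server
spec:
  selector:
    app: chemnitz-map-server
  ports:
    - port: 8080
      targetPort: 8080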

Then, if you need to administer several machines, connect Terraform or Ansible

We also get options to configure blue/green deployments, stage/prod environments via Helm, or connect an nginx mesh, which is harder to do with Firebase, if it is possible at all. But with Firebase it is easier to route users to the geographically closest server and to protect against DDoS attacks


Almost every topic above revolves around infrastructure and the ability to work with it, so here are some questions to think about:

  • The topics of deployment, infrastructure, optimization and scaling are rarely raised in tutorials; can Junior developers who are comfortable working with Firebase cope with this?
  • How much money and time will all this work cost?
  • What is the cost of a mistake?
  • Is it possible to just cut features without cutting into the bone?
  • What pricing plan would make Firebase and Vercel viable under high load?
