Introduction 📖
Two months into my internship at Trell (a visual blogging platform for explorers) as a full-stack developer, I dived headfirst into the world of Go and, more specifically, the Gin web framework. For my first major project, I was handed the team's holy-grail module: a one-stop solution for any scripting or HTTP server requirement, with integrations for a SQL database, Redis, and Elasticsearch. We mostly used it for scripting, and I'll elaborate on how below.
TL;DR 🚶♂️
If you already know the basics of scripting and how modules work in Go, and you just want bootstrap code for the scripting part, head over to this GitHub repository: uds5501/go-arch
Use Case 💡
- This module can be used interchangeably whenever you need a web server or a script.
- You can use it to build a database-backed scheduled task dequeuer (which is what we did).
- It can power a custom task queue when coupled with Redis (see this post: Implement Job queue in Redis[0]).
- You can couple it with the go-Olivere[1] package to integrate the ELK stack.
Now, we can get right into our project structure!
What is go-arch and what are we building? 👨🏻💻
Go-arch is a small module composed of several packages; the script's goal is simply to perform hardcoded arithmetic operations.
It is a condensed version of the project we actually built at Trell. That project used Redis, Elasticsearch, and Kubernetes (for deployment), but what I'll show here is a scaled-down version with only a database and a logging module. The main module (Operations) handles only simple arithmetic (+, -, and *, to be specific), and the focus is on the structural outline: which method in which package calls which struct in which package.
Project Structure (For Scripts) 🔧
A rookie hackathon mistake in Go is to bundle everything into a single main.go and completely forget about it, until the next time you change a single line of logging and bam! your IDE lights up with yellow and red squiggles while you hunt for where things went wrong.
Bundling around 5k lines of code into one file was technically an option for me, but making your first Go project a monolithic single-directory layout? That should prick your conscience.
So here is a much more distributed layout and a scaled-down version of the boilerplate we actually use!
Package definition
- config : takes care of project setup; this is where we read our environment files and return a config object.
- db : as the name suggests, manages the database connection handler for the project and returns a db handler.
- logger : helps with logging errors and debug messages in both production and development.
- pkg/common : utility functions meant to be accessible for trivial operations in our main script, aka checkAdder.go.
- pkg/scripts : this is the money shot. Your main script lives here, and server.go should take all the scripting logic it needs from right here.
- redis : again, as the name suggests, handles the local Redis connection.
- server : the layer between main.go and your business logic (pkg/scripts). Here you decide whether your package runs as a Gin web server or as a simple script-hosting module; it's all your choice!
- main.go : this bad boy seriously needs no introduction!
- go.mod : stores the external module dependencies required for a successful build.
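Put together, the layout looks roughly like this (the exact tree is in the repo):

```
go-arch/
├── config/
├── db/
├── logger/
├── pkg/
│   ├── common/
│   └── scripts/
├── redis/
├── server/
├── go.mod
└── main.go
```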
Now that our layout is defined, let's go step by step to our package setup!
[1/8] Basic Environment Configuration 🔧
```go
package config

import (
	"fmt"
	"os"
	"strconv"
	"strings"

	"github.com/joho/godotenv"
)

type Config struct {
	AppName              string
	AppEnv               string
	SqlPrefix            string
	RedisAddr            string
	DBUserName           string
	DBPassword           string
	DBHostWriter         string
	DBHostReader         string
	DBPort               string
	DBName               string
	DBMaxOpenConnections int
	DBMaxIdleConnections int
	ServerPort           string
	EsURL                string
	EsPort               int
}

var config Config

// init runs when this package is first imported, before any
// other package or code that depends on it.
func init() {
	appEnv := os.Getenv("APP_ENV")
	if len(appEnv) == 0 {
		appEnv = "dev"
	}
	configFilePath := ".env"
	// Go switch cases don't fall through, so no break is needed.
	switch appEnv {
	case "production":
		configFilePath = ".env.prod"
	case "stage":
		configFilePath = ".env.stage"
	}
	fmt.Println("reading env from: ", configFilePath)
	e := godotenv.Load(configFilePath)
	if e != nil {
		fmt.Println("error loading env: ", e)
		panic(e.Error())
	}
	config.AppName = os.Getenv("ELASTIC_APM_SERVICE_NAME")
	config.AppEnv = appEnv
	config.SqlPrefix = "/* " + config.AppName + " - " + config.AppEnv + " */"
	config.RedisAddr = os.Getenv("REDIS_ADDR")
	config.DBUserName = os.Getenv("DB_USERNAME")
	config.DBHostReader = os.Getenv("DB_HOST_READER")
	config.DBHostWriter = os.Getenv("DB_HOST_WRITER")
	config.DBPort = os.Getenv("DB_PORT")
	config.DBPassword = strings.ReplaceAll(os.Getenv("DB_PASSWORD"), "--", "#")
	config.DBName = os.Getenv("DB_NAME")
	config.DBMaxIdleConnections, _ = strconv.Atoi(os.Getenv("DB_MAX_IDLE_CONNECTIONS"))
	config.DBMaxOpenConnections, _ = strconv.Atoi(os.Getenv("DB_MAX_OPEN_CONNECTIONS"))
	config.ServerPort = os.Getenv("SERVER_PORT")
	config.EsURL = os.Getenv("ES_URL")
	config.EsPort, _ = strconv.Atoi(os.Getenv("ES_PORT"))
}

func Get() Config {
	return config
}

func IsProduction() bool {
	return config.AppEnv == "production"
}
```
We use a Config struct to grab all the configuration parameters from the .env file using godotenv and return the struct for further use. Which env file gets read is decided by the APP_ENV environment variable.
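For reference, a minimal .env file matching the variables this package reads might look like the following; every value here is a placeholder, not a real credential:

```
APP_ENV=dev
ELASTIC_APM_SERVICE_NAME=go-arch
REDIS_ADDR=localhost:6379
DB_USERNAME=root
DB_PASSWORD=secret
DB_HOST_WRITER=localhost
DB_HOST_READER=localhost
DB_PORT=3306
DB_NAME=arch_demo
DB_MAX_IDLE_CONNECTIONS=5
DB_MAX_OPEN_CONNECTIONS=10
SERVER_PORT=4000
ES_URL=localhost
ES_PORT=9200
```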
[2/8] Setup the database handler ⛓
Though the database isn't strictly required for what we're building, it is still a handy exercise in how to seamlessly pass a configured database module into business modules without any re-initialisation.
```go
package db

import (
	"database/sql"
	"sync"
	"time"

	"trell/go-arch/config"
	"trell/go-arch/logger"

	"go.elastic.co/apm/module/apmsql"
	_ "go.elastic.co/apm/module/apmsql/mysql"
	"go.uber.org/zap"
)

var reader *sql.DB
var writer *sql.DB
var once sync.Once

type DBConfig struct {
	DBUserName           string
	DBPassword           string
	DBHost               string
	DBPort               string
	DBName               string
	DBMaxIdleConnections int
	DBMaxOpenConnections int
	DBConnMaxLifetime    time.Duration
}

func NewDBClient(config *DBConfig) *sql.DB {
	// Build the DSN from the config; never hardcode hosts or
	// print the URL, since it contains credentials.
	url := config.DBUserName + ":" + config.DBPassword + "@tcp(" + config.DBHost + ":" + config.DBPort + ")/" + config.DBName + "?multiStatements=true&parseTime=true"
	client, err := apmsql.Open("mysql", url)
	if err != nil {
		panic(err.Error())
	}
	client.SetMaxIdleConns(config.DBMaxIdleConnections)
	client.SetMaxOpenConns(config.DBMaxOpenConnections)
	client.SetConnMaxLifetime(config.DBConnMaxLifetime)
	return client
}

func Init() {
	once.Do(func() {
		config := config.Get()
		writerConfig := &DBConfig{
			DBUserName:           config.DBUserName,
			DBPassword:           config.DBPassword,
			DBHost:               config.DBHostWriter,
			DBPort:               config.DBPort,
			DBName:               config.DBName,
			DBMaxIdleConnections: config.DBMaxIdleConnections,
			DBMaxOpenConnections: config.DBMaxOpenConnections,
			DBConnMaxLifetime:    time.Minute * 10,
		}
		// Copy the struct by value: copying the pointer would make
		// the next line overwrite the writer's host as well.
		readerConfig := *writerConfig
		readerConfig.DBHost = config.DBHostReader
		reader = NewDBClient(&readerConfig)
		writer = NewDBClient(writerConfig)
		logger.Client().Info("writer connected", zap.String("host", config.DBHostWriter))
		logger.Client().Info("reader connected", zap.String("host", config.DBHostReader))
	})
}

func Factory(typ string) *sql.DB {
	switch typ {
	case "reader":
		return reader
	case "writer":
		return writer
	default:
		panic("no such db")
	}
}

func WrapQuery(query string) string {
	return config.Get().SqlPrefix + query
}

type DBFactory func(t string) *sql.DB
```
Here we reuse the config module's struct inside our own DBConfig struct to set up the database connection handler. Notice the combination of the database/sql and apmsql packages handling the connection.
What are readers and writers? 🤔
The writer handler takes the write-heavy queries (inserts, batch inserts, updates) into the database, while the reader handles the SELECT queries. In our case this split mattered mostly alongside Elasticsearch rather than vanilla SQL, so you can skip it here.
[3/8] Setup the Logger 🧾
With the database set up, let's take care of how we show output to you, the developer, because we absolutely need a debugging module in place for production. You can get away with a series of fmt.Println() calls while developing, but that won't cut it in a production environment.
```go
package logger

import (
	"sync"

	"trell/go-arch/config"

	"go.uber.org/zap"
)

var logger *zap.Logger
var once sync.Once

func Init() {
	once.Do(func() {
		if config.IsProduction() {
			logger, _ = zap.NewProduction()
		} else {
			logger, _ = zap.NewDevelopment()
		}
		// Note: call logger.Sync() at shutdown (e.g. deferred in
		// main) to flush buffered entries. A defer here would run
		// as soon as this closure returns, which defeats the point.
	})
}

func Client() *zap.Logger {
	return logger
}
```
At Trell, we use Uber's zap logger and it's fast. No, really fast when it comes to logging. Don't believe me? Check this performance comparison[2] and see for yourself!
[4/8] Implementing the Operations Script Logic ✨
In pkg/scripts, let's create checkAdder.go, which will be part of the scripts package. Create a struct named Operations, which is the initialized script.
It holds a DB factory (the DBFactory type exported from the trell/go-arch/db package), and we define the methods on it accordingly.
Exported and unexported methods ✨
I have created Init() and ExportableFunction() as examples. Their names are capitalized, which in Go makes them exported: they can be accessed from other packages, e.g. when another script calls these utility functions.
Functions like getAddition() and getSubtraction() start with a lowercase letter (camelCase), so they are unexported: accessible anywhere within the scripts package, but not outside it.
What does NewOperations() do? 🤔
Its sole purpose is to initialize a new Operations script in module.go, which in turn can be used for almost any operation (in our case, initialization only).
[5/8] Making sure the script is initialized only Once 🙇🏻♂️
```go
package scripts

import (
	"sync"

	"trell/go-arch/db"
)

type Module struct {
	script *Operations
}

var moduleSingleton *Module
var moduleSingletonOnce sync.Once

func NewScriptsModuleSingleton(db db.DBFactory) *Module {
	moduleSingletonOnce.Do(func() {
		script := NewOperations(db)
		moduleSingleton = &Module{
			script: script,
		}
	})
	return moduleSingleton
}

func (m *Module) GetScript() *Operations {
	return m.script
}
```
To make sure the script is initialized only once per application run, NewScriptsModuleSingleton() uses sync.Once (which executes a function at most once) and returns a struct holding a pointer to the Operations module.
PS: you should really read more about concurrency and process syncing to get the idea behind sync.Once, either from the official docs or this post.[3]
You will call GetScript() to fetch this fully initialized script inside your long-running code (in this case, server.go).
[6/8] Setup the server 🖥
```go
package server

import (
	"trell/go-arch/db"
	"trell/go-arch/logger"
	"trell/go-arch/pkg/scripts"
)

func Init() {
	logger.Init()
	db.Init()
	// es.Init()
	// redis.Init()
	scriptsModule := scripts.NewScriptsModuleSingleton(db.Factory)
	scriptsModule.GetScript().Init()
	// To run as a Gin web server instead of a one-off script,
	// uncomment the router below:
	// r := NewRouter()
	// r.Run(":" + "4000")
}
```
This is a simple initialization of the server. It initializes the logger and the database, and once that's done it asks the scripts module to initialize the business logic (in our case, pkg/scripts/module.go gets called). Uncomment the router lines and it keeps running as a web server instead of exiting after the script finishes.
[7/8] Last but not least, main.go 🌠
```go
package main

import (
	// Blank-import config so its init() runs first and loads the env.
	_ "trell/go-arch/config"

	"trell/go-arch/server"
)

func main() {
	server.Init()
}
```
That's it! Just call server.Init() and the rest is a series of procedural calls, as shown in the diagram below. To run your app, use go run main.go and see the magic!
[8/8] One more thing, external modules? 🤦
go run main.go didn't work? Oops, one last thing. So far we have taken care of the modules we defined ourselves, but what about the ones we use externally (go-redis, apmsql, etc.)? That's what the go.mod file is for, and it is what really makes go-arch a module.
It's basically like package.json: you specify your external dependencies and their versions, and on each build a go.sum file is created storing their hashes for future use, just like package-lock.json.
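For illustration, a go.mod for this project might look like the following; the module path and dependency list come from the imports shown earlier, but the version numbers here are placeholders, so treat the repo's actual go.mod as authoritative:

```
module trell/go-arch

go 1.16

require (
	github.com/joho/godotenv v1.3.0
	go.elastic.co/apm/module/apmsql v1.11.0
	go.uber.org/zap v1.16.0
)
```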
And that's a wrap! This setup makes sure your Go module is packaged and runs consistently on every build, so go ahead and give it a try. The GitHub repository can be found here: uds5501/go-arch
Links Provided 🗣
- [0] Job Queues: @mhewedy's post
- [1] Go-Olivere: Go-Olivere docs
- [2] Go-Zap: Go Zap performance
- [3] Sync.Once: @martyer's post