
Mitchell

Posted on • Updated on • Originally published at verystrongfingers.github.io

Installing ERPNext with k3d v4

ERPNext is a beautiful piece of open source (GPL licensed) work, intended as an alternative to the encumbered, very enterprise ERP solutions like SAP that typically dominate the business world.

However, given the nature of an OSS ERP project, a slow but natural rate of adoption & popularity will ensue, and for now the Kubernetes/Helm documentation is quite raw. Community and developer resources are also lacking, though this will likely resolve itself as the project matures and gets more widely adopted.

We've had a very recent release of k3d v4.0 (mid-Jan 2021), which I had been keenly waiting to try out. It just felt like a perfect match, though understandably there are few resources on pairing the two.


If you just want to blindly copy and paste commands, getting your instance operational ASAP:



Tonight's plan

  • create a local k3s cluster by using k3d
  • add some namespaces to said cluster
  • prepare kubernetes resources & helm values on our filesystem
  • install some helm charts to said cluster
  • declare a persistent volume claim & secret for our cluster
  • run a kubernetes job to create ERPNext site
  • set up an ingress to route our ERPNext site through the LoadBalancer
  • suss out the joys of ERPNext

Install Tooling

It is assumed you already have kubectl & Docker installed, and that you're running a Unix-based OS.

k3d

k3d is a helper that lets you easily create k3s clusters using a Docker daemon

curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash

v4.1.0 was the latest at the time of writing


helm

We will be installing the ERPNext stack using their official Helm chart

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash

v3.5.1 was the latest at the time of writing


Optionally

Consider using kubectx, or setting your own zsh/bash aliases, to easily switch kubectl contexts
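As a sketch, a couple of shell aliases (the names here are my own invention, pick whatever you like) that make context switching less of a chore:

```shell
# Hypothetical convenience aliases -- name them however suits you.
alias k='kubectl'
alias kcx='kubectl config use-context'

# Example usage (once the cluster below exists):
#   kcx k3d-erpnext
#   k get pods --namespace erpnext
```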

Create the cluster

k3d cluster create erpnext -v /opt/local-path-provisioner:/opt/local-path-provisioner
kubectl config use-context k3d-erpnext

A volume mapping is required for the cluster because we will be using the k3s built-in local-path persistence

The second command updates your kubecontext, meaning that when we run kubectl it will now default to our new k3s cluster.

Prepare resources & environment

First, we'll work within a newly created directory, because ~ is already a mess

mkdir erpnext-stuff && cd erpnext-stuff

Helm repos

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add frappe https://helm.erpnext.com
helm repo update

Namespaces

Create two namespaces to separate our database from ERPNext.

kubectl create ns mariadb
kubectl create ns erpnext

Namespaces are a good way to separate concerns, but not a hard requirement

Resources

Save each Kubernetes resource inside the recently created erpnext-stuff directory

Do not run kubectl apply for now

pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: erpnext
  name: erpnext-pvc
  namespace: erpnext
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  storageClassName: local-path

Although the ERPNext Helm chart creates a PVC of its own, we want to avoid using it because its accessMode is hardcoded to RWX - ReadWriteMany.

When using k3s, RWX is not possible with the built-in 'local-path' storage provisioner. It only supports RWO - ReadWriteOnce, and this is sufficient for our needs.

Put simply, RWO means the volume can be mounted read-write by one node at a time.

Given we've provisioned our k3s cluster with a single node (the default), RWO will suffice in place of RWX.

The Kubernetes documentation provides a list of volume types along with their supported access modes, but this is mostly unrelated to our goal today.


site-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: traefik
  rules:
  - host: "localhost"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: erpnext
            port:
              number: 80

The Ingress resource makes our built-in ingress controller (ie. web server), Traefik, aware of a routing rule.

In our case we're telling Traefik that http://localhost/ should route through a Service called erpnext on port 80.

erpnext is a Service which will be provisioned when we install the ERPNext Helm chart.

Still unsure what an Ingress is? Think of it like a VirtualHost when using Apache, or an nginx server {} configuration
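If you later serve a site under a real hostname instead of localhost, only the host rule needs to change. A sketch, assuming a made-up hostname erp.example.com:

```yaml
# Hypothetical variation: the same ingress, but for a site
# served at erp.example.com (a made-up hostname).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: erp-example-ingress
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: traefik
  rules:
  - host: "erp.example.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: erpnext
            port:
              number: 80
```

Note that the SITE_NAME used when creating the site would also need to match that hostname for ERPNext to resolve it to the right database.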


erpnext-db-secret.yaml

apiVersion: v1
data:
  password: c29tZVNlY3VyZVBhc3N3b3Jk
kind: Secret
metadata:
  name: mariadb-root-password
type: Opaque

This secret will hold the password of the database user which ERPNext will use to create sites, and perform queries with.

c29tZVNlY3VyZVBhc3N3b3Jk is base64 for someSecurePassword, and it's the same password MariaDB will be told to use as its root password when we install it later.

Feel free to change as you see fit, and obviously do not use my defaults in a real or production environment.
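To generate your own value, base64-encode your chosen password yourself. The -n flag matters: without it, a trailing newline gets encoded into the secret.

```shell
# Encode a password for the Secret's data field.
# -n prevents echo from appending a newline to the encoded value.
echo -n 'someSecurePassword' | base64
# c29tZVNlY3VyZVBhc3N3b3Jk

# Sanity check: decode it back.
echo 'c29tZVNlY3VyZVBhc3N3b3Jk' | base64 --decode
# someSecurePassword
```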


create-site-job.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: create-erp-site
spec:
  backoffLimit: 0
  template:
    spec:
      securityContext:
        supplementalGroups: [1000]
      containers:
      - name: create-site
        image: frappe/erpnext-worker:v12.17.0
        args: ["new"]
        imagePullPolicy: IfNotPresent
        volumeMounts:
          - name: sites-dir
            mountPath: /home/frappe/frappe-bench/sites
        env:
          - name: "SITE_NAME"
            value: "localhost"
          - name: "DB_ROOT_USER"
            value: root
          - name: "MYSQL_ROOT_PASSWORD"
            valueFrom:
              secretKeyRef:
                key: password
                name: mariadb-root-password
          - name: "ADMIN_PASSWORD"
            value: "bigchungus"
          - name: "INSTALL_APPS"
            value: "erpnext"
      restartPolicy: Never
      volumes:
        - name: sites-dir
          persistentVolumeClaim:
            claimName: erpnext-pvc
            readOnly: false

As mentioned before, ERPNext is multi-tenanted. You can run many sites, and sites can have many companies.

One database is created per site, and there are also some configuration files created so the ERPNext setup can resolve sites to databases and beyond.

The 'create-site' job is the recommended way to provision a new 'site' to your ERPNext setup.

Paths of interest

  • spec.template.spec.containers[0].image - should match the image version used by the Helm chart
  • spec.template.spec.containers[0].volumeMounts - volume required for ERPNext to resolve hostnames to databases, and other meta
  • spec.template.spec.containers[0].env[0] - SITE_NAME is the FQDN this ERPNext site is destined for
  • spec.template.spec.containers[0].env[3] - ADMIN_PASSWORD, which we will use to log in later
  • spec.template.spec.volumes[0] - volume mount based on our pvc.yaml



maria-db-values.yaml

auth:
  rootPassword: "someSecurePassword"

primary:
  configuration: |-
    [mysqld]
    character-set-client-handshake=FALSE
    skip-name-resolve
    explicit_defaults_for_timestamp
    basedir=/opt/bitnami/mariadb
    plugin_dir=/opt/bitnami/mariadb/plugin
    port=3306
    socket=/opt/bitnami/mariadb/tmp/mysql.sock
    tmpdir=/opt/bitnami/mariadb/tmp
    max_allowed_packet=16M
    bind-address=0.0.0.0
    pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid
    log-error=/opt/bitnami/mariadb/logs/mysqld.log
    character-set-server=utf8mb4
    collation-server=utf8mb4_unicode_ci

    [client]
    port=3306
    socket=/opt/bitnami/mariadb/tmp/mysql.sock
    default-character-set=utf8mb4
    plugin_dir=/opt/bitnami/mariadb/plugin

    [manager]
    port=3306
    socket=/opt/bitnami/mariadb/tmp/mysql.sock
    pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid

The ERPNext docs indicate your MariaDB instance should explicitly use this configuration. I'm going to assume they primarily want you to have a utf8mb4-friendly setup.

You may notice the ERPNext helm chart instructions at https://helm.erpnext.com/prepare-kubernetes/mariadb are slightly different to my values above.

This is because their documentation targets an older mariadb chart version. The newer chart version also no longer enables slave replication by default, so our config is simplified.


ERPNext

erpnext-values.yaml

replicaCount: 1

mariadbHost: "mariadb.mariadb.svc.cluster.local"

persistence:
  enabled: false
  existingClaim: "erpnext-pvc"


Bringing it all together

  1. Declare the PVC in your cluster. By default the PVC will not be provisioned until it is required (ie. a container mounts it)
kubectl apply --namespace erpnext -f pvc.yaml
  2. Install MariaDB with our specific server configuration and root password. --wait makes the command block until all pods & services are healthy
helm install mariadb --namespace mariadb bitnami/mariadb --version 9.3.1 -f maria-db-values.yaml --wait
  3. Install ERPNext. All services and pods will be deployed, essentially as scaffolding for any sites provisioned afterward
helm install erpnext --namespace erpnext frappe/erpnext --version 2.0.11 -f erpnext-values.yaml --wait
  4. Declare the MariaDB user account password secret for our upcoming job
kubectl apply --namespace erpnext -f erpnext-db-secret.yaml
  5. Run the 'create site' job and stream the job's pod logs until completion
kubectl apply --namespace erpnext -f create-site-job.yaml && kubectl logs --namespace erpnext -f job/create-erp-site

A successful completion will look something like:

> kubectl apply --namespace erpnext -f create-site-job.yaml && kubectl logs --namespace erpnext job/create-erp-site

Attempt 1 to connect to mariadb.mariadb.svc.cluster.local:3306
Attempt 1 to connect to erpnext-redis-queue:12000
Attempt 1 to connect to erpnext-redis-cache:13000
Attempt 1 to connect to erpnext-redis-socketio:11000
Connections OK
Created user _334389048b872a53
Created database _334389048b872a53
Granted privileges to user _334389048b872a53 and database _334389048b872a53
Starting database import...
Imported from database /home/frappe/frappe-bench/apps/frappe/frappe/database/mariadb/framework_mariadb.sql

Installing frappe...
Updating DocTypes for frappe        : [========================================]
Updating country info               : [========================================]

Installing erpnext...
Updating DocTypes for erpnext       : [========================================]
Updating customizations for Address
*** Scheduler is disabled ***

Now your ERPNext instance is operational and you have a site set up. The final step is to declare the Ingress, so we can route our ERPNext site's hostname to the erpnext Service.

kubectl apply --namespace erpnext -f site-ingress.yaml

Usage

kubectl port-forward --namespace kube-system svc/traefik 8080:80

Now visit http://localhost:8080 and you should be greeted by the ERPNext login page.

u: administrator

p: bigchungus

Top comments (2)

Türker TUNALI ⚡

I don't quite get Kubernetes, Docker, etc. I am using LXD for ERPNext. How does Kubernetes relate to LXD?

Mitchell • Edited

Kubernetes is a container orchestrator: basically, it's in charge of provisioning and managing containers, an abstraction that builds on container technologies such as Docker or LXD.