Alexey Zimarev

Deploy GitLab CE on a new Azure Kubernetes cluster

I would like to share my experience of creating a small Kubernetes cluster on
Azure Container Service (AKS Preview) and deploying GitLab CE on it using the Helm chart.

Originally published on Medium.


Creating a cluster in AKS should be an easy task, but sometimes things don’t go as they are supposed to. These are the main issues I found:

  • When using the Azure Portal, there is a big chance that something will go wrong and the cluster will get stuck in the “creating” state, in which case you can only remove it using the Azure CLI.
  • There is no way to find out from the Azure CLI which VM sizes are available in which region. Not all VM sizes can be used in AKS, and the set of available sizes differs from region to region. This is fine when someone else pays for your Azure subscription, but on a freemium subscription, or when paying out of your own pocket, it might not end so well. A quick way to at least list the VM sizes in a region is sketched right after this list.
  • You need a cluster with at least three nodes, since smaller VMs have limits on how many data volumes can be attached to them, and GitLab uses quite a few persistent volumes.
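
The Azure CLI can at least list the VM sizes that exist in a region, even though it does not tell you which of them AKS accepts. This is a minimal sketch, using eastus only as an example:

az vm list-sizes --location eastus --output table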

So, I managed to create the cluster with three nodes of a recent and still quite cheap VM size: the relatively new B-series. These machines, according to Microsoft, are best suited for irregular CPU load, such as web sites, development or test environments. This suits my purpose ideally.

I used the B2s machine size with 2 vCPUs, 8 GB of RAM and 8 GB of local SSD.

You need the Azure CLI if you want to repeat what I have done. If you don’t have it, the installation is described on this page. You also need kubectl, the command-line management tool for Kubernetes.

After logging in to my Azure account, I had to create a new resource group,
because AKS requires a separate resource group. To do this, I executed this command:

az group create --name ubiquitous --location eastus

Although I am not from the US, I chose this region because it has the most VM sizes available. As you can imagine, ubiquitous is my own resource group name and you should use something of your own.

Next, the cluster creation. Doing it with the Azure CLI is rather simple.

az aks create --resource-group ubiquitous --name ubiquitous --node-count 3 --generate-ssh-keys --node-vm-size Standard_B2s

The cluster name can be anything; I chose to use my company name, which matches the resource group name. As you can see, I specified three nodes for my cluster and the B2s VM size.

The process is quite lengthy, so leave it running for 20 minutes or more. At the end, it will produce JSON output, which you might want to record, although I have not used it.
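
While you wait, you can check how the provisioning is going with the Azure CLI; this is just a sketch with my resource group and cluster names:

az aks show --resource-group ubiquitous --name ubiquitous --output table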

In order to be able to use kubectl with your new cluster, you need to add the cluster context to your .kube/config file. It can be easily done using Azure CLI.

az aks get-credentials -g ubiquitous -n ubiquitous

Here, -g is the shortcut for --resource-group and -n for --name, where you
specify the name of your cluster.
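
To verify that the new context works, ask the cluster for its nodes; with the setup above you should see three nodes in the Ready state:

kubectl get nodes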


The next thing is to install GitLab CE. The best way of installing it on Kubernetes is to use the Omnibus Helm chart. If you don’t know about Helm, learn more here.

Note: it will be replaced by a new “cloud ready” chart, which is in alpha stage
now.

First things first: if you don’t have Helm, you need to install it. Installation differs per platform, so the best way is to refer to the installation instructions.
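
Just as an illustration, on macOS one way to get Helm is via Homebrew; the package name below is the one used for Helm 2 at the time of writing, so treat it as an assumption and check the instructions for your own platform:

brew install kubernetes-helm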

After Helm is installed, we need to install Tiller, the Helm agent in Kubernetes. It is very easy to do by running this command:

helm init

Since your current Kubernetes context is already switched to the new cluster, Helm will just go there and install Tiller in its default namespace, kube-system.
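
Before moving on, you can check that the Tiller pod has started; look for a tiller-deploy pod in the kube-system namespace:

kubectl get pods --namespace kube-system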

Then, we need to add the GitLab Helm repository:

helm repo add gitlab https://charts.gitlab.io/

Before installing, remember that GitLab will use kube-lego to create SSL certificates for your Ingress using Let’s Encrypt. But you need to have a domain that you will use for your GitLab instance, and you also need to control DNS for this domain, because it is necessary to add a wildcard DNS entry and point it to the Ingress, which will be configured by GitLab.

Here you have two options. You can get an external IP address for your cluster in advance, create the DNS entry, and specify this address in the Helm deployment command (a sketch of this option follows below). Or, as I did, you can rely on Azure to assign the IP address, but in this case you will have just a few minutes to create the DNS entry: you need to do it as soon as the Ingress gets the address, but before the GitLab deployment is finished. The latter is the way I went.
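
For the first option, a static public IP can be reserved up front. The sketch below assumes the address has to be created in the resource group that AKS generated for the cluster nodes (the one whose name starts with MC_), so check the exact group name in your subscription before running it:

az network public-ip create --resource-group MC_ubiquitous_ubiquitous_eastus --name gitlab-ingress-ip --allocation-method Static

The resulting address would then go into the DNS entry and into the Helm command, presumably via something like --set baseIP=<address>; verify the parameter name against the chart documentation.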

So, the installation is executed by this command (remember to use your own domain name):

helm install --name gitlab --set baseDomain=app.ubiquitous.no --set provider=acs gitlab/gitlab-omnibus

As you can see, I chose to use the app.ubiquitous.no subdomain, so I needed to add a DNS entry for it.

In order to see whether the IP address is already available, I used this command:

kubectl get svc -w --namespace nginx-ingress nginx

At first it produces output where the EXTERNAL-IP is shown as <pending>, but let it run for a few minutes until a new line appears with the newly assigned external IP address.

Now, quickly add a DNS A record for your (sub)domain pointing to this new address. In my case, the DNS entry was:

A   *.app.ubiquitous.no   23.96.2.189

I intentionally expose the real details here since it is a test cluster with non-production data. In addition, it is quite secure.
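
You can check that the wildcard record has propagated by resolving one of the host names it covers, for example:

dig +short gitlab.app.ubiquitous.no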

At this moment, the GitLab instance is still being deployed. Azure is not very fast at fulfilling persistent volume claims, and until this is done the GitLab pods will not be operational. In my case it took about 20 minutes for everything to be set up.
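
You can watch the persistent volume claims being bound and the pods coming up with the usual kubectl commands:

kubectl get pvc --watch
kubectl get pods --watch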


Now, after all is done, I have a GitLab CE instance running on a Kubernetes cluster in Azure. The Helm deployment told the Ingress to use a few host names: gitlab, mattermost and registry. These host names live under the subdomain specified for the installation, so in my case I access GitLab by going to https://gitlab.app.ubiquitous.no. When I went there the first time, I was able to specify a new password for the root user, so my advice is to log in for the first time as fast as you can.
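
If you want to see the exact host names that were configured, you can list the Ingress resources in the namespace where the chart was installed:

kubectl get ingress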

What is included by default? Well, quite a lot:

  • GitLab CE itself, which hosts your Git repositories and handles issues and wikis.
  • GitLab Runner for Kubernetes, which is used to build and deploy your applications using the CI/CD pipeline.
  • GitLab Docker Registry, allowing you to host Docker images in individual Docker repositories for each project.
  • Mattermost chat system.
  • Prometheus time-series database, which will collect metrics for GitLab itself and for pods that the CI pipeline will deploy.

Note that Kubernetes integration is enabled and configured out of the box, so you can use the default Auto Deploy CI/CD configuration and start building and deploying your apps to Kubernetes, as long as you have a Dockerfile in your repository or your app can be built using one of the default Heroku buildpacks. Read more about Auto Deploy
here.

Prometheus is not exposed to the outside world, so if you want to reach it, you can use this command:

kubectl port-forward svc/gitlab-gitlab 9090:9090

to set up port forwarding to the GitLab service, then open http://localhost:9090 to reach the Prometheus UI.

Later, I will describe how to use Auto Deploy in GitLab to build and deploy ASP.NET Core applications.


Finally, if some extra configuration of GitLab is required, you need to change the Ruby file that GitLab uses for configuration. Since the configuration is stored on a persistent volume, it can safely be changed, and the changes will survive a restart of the pod or of the whole cluster.

But since there is neither a root login nor an SSH key for the pod, the easiest way to configure GitLab is to use kubectl exec. In order to use it, you have to know the name of the pod where GitLab runs. To find it, you can list all pods:

$ kubectl get pods
NAME                                        READY     STATUS    RESTARTS   AGE
gitlab-gitlab-764bd7665-w94t6               1/1       Running   1          22h
gitlab-gitlab-postgresql-5fff4f67bb-lp74p   1/1       Running   0          22h
gitlab-gitlab-redis-6c88945d56-gm5qc        1/1       Running   0          22h
gitlab-gitlab-runner-f7c85548-bl8c5         1/1       Running   7          22h

You can see the pods running PostgreSQL, Redis and the GitLab Runner, but the first pod is the one we need. So, if we run this command:

kubectl exec -it gitlab-gitlab-764bd7665-w94t6 -- /bin/bash

we will get a bash prompt inside the pod, running as root. From there everything is easy. To change the configuration, just edit the Ruby file:

vim /etc/gitlab/gitlab.rb

You can of course use something like nano if you aren’t familiar with vim.

When all changes are done, save the file, quit the editor and run these commands:

gitlab-ctl reconfigure
gitlab-ctl restart
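
If you want to make sure everything came back up after the restart, gitlab-ctl can also report the state of the individual GitLab services:

gitlab-ctl status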

Happy GitLabbing!
