MyCareersFuture

Gitlab MR Bot: Getting people to do code reviews

Kai Hong ・5 min read

Hello World, recently at the MyCareersFuture team, we have a growing stack of merge requests (MRs) that are missing the code review attention they need to go into the main branch. This slows down our delivery process, as our working agreement is that every merge requires approval from at least two other developers first.

It's a long weekend, let's do something about it.

Goal: Get more eyes on the MRs ready for review.

MR reviews in MCF

Yes, we really do review everything that goes in.

High-quality code is important to us, and code reviews are one of the ways we uphold that standard. By ensuring at least two developers review each MR, we can catch potential bugs and inefficiencies early in the cycle.

This also facilitates knowledge transfer between devs, which translates to better code being written subsequently. All parties, including the stakeholders, understand the additional overhead of this process and have accepted it, as we are all aligned on the goal of delivering a quality product.

Why aren't more people reviewing?

Slack is our current communication channel. When an MR is ready for review, it is labelled for review and posted to the channel for anyone to pick it up.

  • Sometimes all the devs are just busy and miss the messages
  • Sometimes the request gets buried among other conversations
  • Other factors, like unfamiliarity with the various codebases

Temporary solution

For the past two sprints or so, our kind scrum master has been manually consolidating the various MRs and pinging the channel for people to take action on them.

And it works! Devs have been noticeably more active in MR reviews ever since she started doing that.

So... let's automate that! 😉

Introducing the MR Bot

Objective: once a day, consolidate a list of open merge requests that carry the label Review Me across multiple repositories, and notify the team on Slack.

We are using a self-hosted version of Gitlab, so we rely on labels for approval. For example, if I want to review a specific MR, I will add my label Review by Kai Hong, and subsequently Approved by Kai Hong.
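Under that label convention, a MR's review status can be derived entirely from its labels. A minimal sketch (the helper name and the required-approvals count of two are taken from our working agreement; the exact field names are my own):

```javascript
// Hypothetical helper: derive review status from a MR's labels, assuming
// the conventions "Review by <name>" and "Approved by <name>".
const REQUIRED_APPROVALS = 2;

const reviewStatus = (labels) => {
  const approvals = labels.filter(l => l.startsWith('Approved by')).length;
  const inReview = labels.filter(l => l.startsWith('Review by')).length;
  return {
    approvals,
    inReview,
    ready: approvals >= REQUIRED_APPROVALS,
  };
};

// Example:
// reviewStatus(['Review Me', 'Review by Kai Hong', 'Approved by Jane'])
//   -> { approvals: 1, inReview: 1, ready: false }
```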

The bot only runs once a day to prevent it from being spammy, because we wouldn't want people muting the channel, would we? Since it runs daily, it sounds a lot more like a cronjob than a long-running service. So let's model it as a batch process and build it!

A typical batch process chains a bunch of processors, where the output of one processor becomes the input of the next.

Input -> Process -> Output

Let's take a look at the overview of the entire process before going into details. This "bot" is built with native NodeJS without any frameworks because it's only dealing with network requests.

const fetchTasks = GITLAB_PROJECT_ID_LIST.map(id => fetchMergeRequests(id));
Promise.all(fetchTasks)
  // Part 1
  // Input: list of fetch tasks
  // Process: resolve all requests
  // Output: list of json responses
  .then(res => {
    return Promise.all(res.map(r => r.json()));
  })
  // Part 2
  // Input: list of json responses
  // Process: extract/transform relevant data
  // Output: list of processed MR
  .then(listOfMrList => {
    // flatten the per-project lists into one list of MRs
    const mergeRequests = listOfMrList.flat();
    return mergeRequests.map(mr => processMr(mr));
  })
  // Part 3
  // Input: list of processed MR
  // Process: send to slack
  // Output: none
  .then(processedMrList => {
    sendToSlack(processedMrList);
  })
  .catch(err => console.error(`stub error`, err))
  .finally(() => {
    console.log(`stub end`);
  });

Part 1: Fetch data

The magic Gitlab API URL that allows me to fetch all the merge requests from a repo.
https://<gitlab_url>/api/v4/projects/<id>/merge_requests?state=opened&labels=Review+Me

In order to fetch from a list of repos, I had to brush up on my async/await/promises codefu to make this work, as it's not every day that I try to synchronise a bunch of asynchronous calls in a batch process. And it was all solved with Promise.all().

Promise all diagram

It consolidates all of the promises into one promise, so you only have to handle the output of that one, which makes the batch process a lot simpler.
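For completeness, here is a sketch of what fetchMergeRequests might look like. This is an assumed shape, not our exact code: the env var names come from the cronjob manifest further down, and Node 18+'s built-in fetch is assumed.

```javascript
// Build the per-project GitLab API URL from base URL, project id and
// a pre-formatted query string (e.g. "state=opened&labels=Review+Me").
const buildMrUrl = (baseUrl, projectId, options) =>
  `${baseUrl}/api/v4/projects/${projectId}/merge_requests?${options}`;

// One GET per project, authenticated with a personal access token.
const fetchMergeRequests = (projectId) =>
  fetch(buildMrUrl(process.env.GITLAB_URL, projectId, process.env.GITLAB_MR_OPTIONS), {
    headers: { 'PRIVATE-TOKEN': process.env.GITLAB_TOKEN },
  });
```

Keeping the URL construction in its own function makes it trivial to unit-test without hitting the network.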

Part 2: Transform data

processMr() is a very simple function that helps to extract and transform the data into the relevant fields.

const processMr = (mr) => {
  const updatedOn = new Date(mr.updated_at).toDateString();
  const reviewers = mr.labels.filter(x => x !== 'Review Me');
  const props = {
    mergeRequestName: mr.title,
    mergeRequestUrl: mr.web_url,
    updatedOn,
    reviewers,
    // ...
  };
  return props;
};

For batch process design, it's important to keep your processors decoupled so that it's easy to switch them out when required. Imagine the power you could wield if you had a collection of processors that you can combine and tear apart as you wish.
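One way to get that decoupling (a sketch, not our exact code) is to keep each processor a plain function and chain them with a tiny helper, so steps can be swapped or reordered freely:

```javascript
// Chain processors left to right; the output of one becomes the input of
// the next. Works with both sync and async (promise-returning) processors.
const pipeline = (...processors) => (input) =>
  processors.reduce((acc, p) => acc.then(p), Promise.resolve(input));

// Hypothetical usage: same fetch/transform steps, different output step.
// const notifySlack = pipeline(fetchAll, parseJson, transform, sendToSlack);
// const printReport = pipeline(fetchAll, parseJson, transform, console.log);
```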

Part 3: Send notification

Slack uses webhooks for posting to channels. Given the processed data, it's simple to build a POST request according to Slack's Block Kit design.

fetch(`${SLACK_WEBHOOK_URL}`, {
  body: JSON.stringify(message),
  ...slackPostOptions
});

However, the troublesome part is actually building the payload itself. I won't go into the details as they depend on Slack's documentation, but it's a fun exercise in composing JSON objects.

Example payload

{
  blocks: [
    { type: 'section', text: [Object] },
    { type: 'section', text: [Object] },
    { type: 'section', text: [Object] }
  ]
}
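A sketch of how such a payload could be composed from the processed MR list. The field names follow processMr above; buildMessage is a hypothetical helper, and the mrkdwn link formatting is just one option.

```javascript
// Turn each processed MR into one Block Kit section, with the MR title
// rendered as a link: <url|title>.
const buildMessage = (processedMrList) => ({
  blocks: processedMrList.map(mr => ({
    type: 'section',
    text: {
      type: 'mrkdwn',
      text: `<${mr.mergeRequestUrl}|${mr.mergeRequestName}>`,
    },
  })),
});
```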

Deploying it

Nearly all of our services are hosted on AWS EKS (Kubernetes), so we have to dockerize this service, which is as simple as this:

FROM node:lts-alpine as base

RUN apk update --no-cache \
  && apk upgrade --no-cache
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
ENV TZ="UTC"
COPY . .

CMD ["node", "app.js"]

This goes in as a CronJob on our EKS cluster. It's set to run every weekday at 1pm our time (the schedule below is 5am UTC).

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  # ...
spec:
  schedule: 0 5 * * 1-5
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 2
  successfulJobsHistoryLimit: 3
  jobTemplate:
    metadata:
      # ...
    spec:
      backoffLimit: 2
      template:
        metadata:
          # ...
        spec:
          restartPolicy: Never
          imagePullSecrets:
            # ...
          containers:
            - name: gitlab-slackbot
              image: mycfsg/gitlab-slackbot:latest
              imagePullPolicy: Always
              env:
                - name: GITLAB_URL
                  value: <URL>
                - name: GITLAB_PROJECT_ID_LIST
                  value: "[<ID_LIST>]"
                - name: GITLAB_MR_OPTIONS
                  value: state=opened&labels=Review+Me
                - name: GITLAB_MR_REVIEWERS_NUM
                  value: "2"
                - name: SLACK_WEBHOOK_URL
                  value: <WEBHOOK_URL>
                - name: GITLAB_TOKEN
                  valueFrom:
                    secretKeyRef:
                      name: gitlab-slackbot
                      key: GITLAB_TOKEN
              resources:
                limits:
                  cpu: 150m
                  memory: 150Mi
                requests:
                  cpu: 100m
                  memory: 100Mi

What does it look like?

There are two designs: one that includes more details, and a compact one that gets straight to the point. It's just a POC/MVP at this point, and it will be further refined based on feedback from the team.

Normal design

Compact design

Moving on

Remember what I mentioned earlier about how the MR process is just one of the ways we maintain a high-quality codebase?

So what happens after the MR is approved? For that, we have something called the "chicken" process to help us keep a more stable pipeline.

Extra notes:

  • The MR stack is growing because our team is growing in size
  • This is really more of a glorified reminder than a bot
