<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: atikahe</title>
    <description>The latest articles on DEV Community by atikahe (@atikahe).</description>
    <link>https://dev.to/atikahe</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F932416%2Fe18117db-d57e-4c23-a7a9-9b9faedd59ed.jpg</url>
      <title>DEV Community: atikahe</title>
      <link>https://dev.to/atikahe</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/atikahe"/>
    <language>en</language>
    <item>
      <title>How to Automatically Generate Test File with AI</title>
      <dc:creator>atikahe</dc:creator>
      <pubDate>Wed, 18 Jan 2023 16:17:36 +0000</pubDate>
      <link>https://dev.to/atikahe/how-to-automatically-generate-unit-test-using-ai-3c12</link>
      <guid>https://dev.to/atikahe/how-to-automatically-generate-unit-test-using-ai-3c12</guid>
      <description>&lt;p&gt;Talks around unit tests and TDD in general have been quite polarizing. On one hand, it makes sense to have a measurable gatekeeper in place especially for large projects. On the other hand, it's just an awfully lot of work for such an unproductive task. Not to mention the learning curve that can vary for beginners, and often lack of documentation or learning sources for some language. But no matter one's strong opinion on the matter, at the end of the day, it's either gonna be a job done or a tech debt looming over us all.&lt;/p&gt;

&lt;p&gt;Thankfully, recent developments in AI have made it possible to automate part of the test-writing process. Keep in mind that many of these tools are still research previews or usable prototypes, which require signing up for access.&lt;/p&gt;

&lt;p&gt;Here are some tools you can use to write tests from scratch or refine existing tests:&lt;/p&gt;

&lt;h2&gt;1. GitHub's Testpilot&lt;/h2&gt;

&lt;p&gt;After launching their AI pair programmer, &lt;a href="https://github.com/features/copilot"&gt;Copilot&lt;/a&gt;, last year, GitHub cited &lt;a href="https://githubnext.com/projects/testpilot/"&gt;Testpilot&lt;/a&gt; as one of its many &lt;a href="https://githubnext.com/"&gt;next exciting products&lt;/a&gt;. Like Copilot, Testpilot comes as part of a VSCode extension and is still a usable prototype.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4NwylxZh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d4wx1gdkyc9tkvlpnd07.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4NwylxZh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d4wx1gdkyc9tkvlpnd07.png" alt="Using Testpilot on Github Copilot Lab's Extension" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To use it, you need to sign up for &lt;a href="https://github.com/github-copilot/signup"&gt;GitHub Copilot&lt;/a&gt; access, install the &lt;a href="https://marketplace.visualstudio.com/items?itemName=GitHub.copilot"&gt;GitHub Copilot extension&lt;/a&gt;, and then install the &lt;a href="https://marketplace.visualstudio.com/items?itemName=GitHub.copilot-labs"&gt;GitHub Copilot Labs extension&lt;/a&gt; in VSCode. Keep in mind that GitHub Copilot access costs $10 per month at the time of writing.&lt;/p&gt;

&lt;h2&gt;2. OpenAI Codex&lt;/h2&gt;

&lt;p&gt;A free alternative to Testpilot is &lt;a href="https://openai.com/blog/openai-codex/"&gt;OpenAI Codex&lt;/a&gt;, currently in limited beta. It doesn't come as a VSCode extension, but as a website where you can paste your code and enter a prompt.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--25MSH_LV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xz4ibljriwj75qk1qoi4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--25MSH_LV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xz4ibljriwj75qk1qoi4.png" alt="Using OpenAI Beta Playground" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To access this page, you need to &lt;a href="https://beta.openai.com/signup"&gt;sign up&lt;/a&gt; for an OpenAI account and then go to its &lt;a href="https://beta.openai.com/playground"&gt;playground&lt;/a&gt;, as seen in the image above. The page provides many tuning options you can tinker with, though they can be overwhelming for beginners. This is suitable when you want to experiment and need more freedom with your prompts.&lt;/p&gt;
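&lt;p&gt;If you prefer calling Codex programmatically instead of through the playground, a minimal sketch might look like the following. This assumes the 0.x-era &lt;code&gt;openai&lt;/code&gt; Python library and the &lt;code&gt;code-davinci-002&lt;/code&gt; model name, both of which were current at the time of writing; check OpenAI's documentation for the current equivalents.&lt;/p&gt;

```python
import os


def build_test_prompt(source_code: str) -> str:
    """Compose a prompt asking the model to write unit tests for the given code."""
    return (
        "# Write unit tests for the following code:\n"
        + source_code
        + "\n# Unit tests:\n"
    )


def generate_tests(source_code: str) -> str:
    """Send the prompt to Codex and return the completion (needs OPENAI_API_KEY)."""
    import openai  # pip install openai (0.x-era client; an assumption)

    openai.api_key = os.environ["OPENAI_API_KEY"]
    response = openai.Completion.create(
        model="code-davinci-002",  # Codex model name at the time of writing
        prompt=build_test_prompt(source_code),
        max_tokens=256,
        temperature=0,
    )
    return response["choices"][0]["text"]
```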

&lt;h2&gt;3. Auto-test CLI&lt;/h2&gt;

&lt;p&gt;The next one is a shameless plug from myself. Auto-test is a CLI tool that uses OpenAI Codex under the hood, but hopefully provides a better experience since you can run the whole test generation process from the CLI. You only need to provide the filename, and it will generate a test file complete with all the test cases inside.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f20HTbhS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4zvqjnqa3ddp5w64aafd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f20HTbhS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4zvqjnqa3ddp5w64aafd.png" alt="Running auto-test from terminal" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To use it, you need to &lt;a href="https://github.com/atikahe/auto-test/blob/v0.1.0/README.md"&gt;install auto-test&lt;/a&gt; and include it in the &lt;code&gt;$PATH&lt;/code&gt; of your machine. Then, get your &lt;a href="https://beta.openai.com/account/api-keys"&gt;API Key&lt;/a&gt; from OpenAI's account page and export it to your terminal profile. And that's it! You can use &lt;code&gt;auto-test&lt;/code&gt; command and point it to any file you want to be tested.&lt;/p&gt;

&lt;p&gt;The results may vary. If the output is unsatisfactory, you can add a custom prompt that helps the AI better understand your code and generate better tests. What has worked for me is explaining a little bit about my code and what I want it to accomplish, usually with an example of input and output. Just providing a summary of the code can also suffice. You can add a custom prompt using the &lt;code&gt;--prompt&lt;/code&gt; or &lt;code&gt;-p&lt;/code&gt; flag.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aS5lV1JV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2bujxtjikz0pqy7af6bl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aS5lV1JV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2bujxtjikz0pqy7af6bl.png" alt="Running auto-test with custom prompt" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want more freedom with the prompt and want to completely override the default one, you can add the &lt;code&gt;--override&lt;/code&gt; or &lt;code&gt;-o&lt;/code&gt; flag at the end of the command.&lt;/p&gt;
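&lt;p&gt;Putting the flags together, a typical session might look like this; the file path and prompt text below are just examples, not part of the tool:&lt;/p&gt;

```shell
# Generate a test file for a given source file
auto-test src/calculator.js

# Give the AI extra context with a custom prompt
auto-test src/calculator.js -p "Adds two integers, e.g. add(2, 3) returns 5"

# Completely override the default prompt
auto-test src/calculator.js -p "Write jest tests covering edge cases" -o
```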

&lt;p&gt;Keep in mind that in v0.1, the generated test file might lack important things such as dependency imports or mocks for external functions. You would still need to add these yourself, but on the bright side, at least we're no longer writing tests from scratch :)&lt;/p&gt;

</description>
      <category>testing</category>
      <category>ai</category>
      <category>tooling</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>A Novice's Guide to Integrating Code Coverage Report with SonarQube and Gitlab Runner</title>
      <dc:creator>atikahe</dc:creator>
      <pubDate>Sat, 14 Jan 2023 12:20:21 +0000</pubDate>
      <link>https://dev.to/atikahe/a-novices-guide-to-integrating-code-coverage-report-with-sonarqube-and-gitlab-runner-1gj</link>
      <guid>https://dev.to/atikahe/a-novices-guide-to-integrating-code-coverage-report-with-sonarqube-and-gitlab-runner-1gj</guid>
      <description>&lt;p&gt;Integrating SonarQube is crucial for identifying breaking changes and technical debt in large codebases. Automating unit tests and setting up a SonarQube job for code analysis can help with this. However, configuring the pipeline can be difficult, and available documentation may not be intuitive for those new to DevOps like myself. This guide aims to assist developers in setting up and testing CI/CD jobs on their own machine. One case example is to integrate code coverage report to SonarQube analysis in a gitlab job that could be tested locally.&lt;/p&gt;

&lt;h2&gt;1. Installing SonarQube&lt;/h2&gt;

&lt;p&gt;Visit &lt;a href="https://www.sonarqube.org/downloads/?gads_campaign=Class-1-Brand-SQ&amp;amp;gads_ad_group=SonarQube&amp;amp;gads_keyword=sonarqube&amp;amp;gclid=CjwKCAiA7vWcBhBUEiwAXieItj4MVTyimpSwjCGjxehdEI6byxh0Lk-uPXdoiXn0PU7Usc1Z9b30MRoC4w8QAvD_BwE" rel="noopener noreferrer"&gt;here&lt;/a&gt; for a straightforward SonarQube installation guide. Enterprises usually have a centralized SonarQube dashboard you can connect to. However, for learning purposes we will install and run it on our own machine instead.&lt;/p&gt;
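&lt;p&gt;As a quick sketch, one common way to run SonarQube locally is through Docker. The image tag below is an assumption; pick whichever edition the download page recommends:&lt;/p&gt;

```shell
# Start a throwaway local SonarQube instance
docker run -d --name sonarqube -p 9000:9000 sonarqube:community

# The dashboard becomes available at http://localhost:9000
# (default credentials are admin/admin; you'll be asked to change them)
```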

&lt;h2&gt;2. Set up the sonar config at the project's root&lt;/h2&gt;

&lt;p&gt;Go to the root of your project and create a new &lt;code&gt;sonar-project.properties&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight properties"&gt;&lt;code&gt;&lt;span class="c"&gt;# Project specification
&lt;/span&gt;&lt;span class="py"&gt;sonar.projectKey&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;project-key-example&lt;/span&gt;
&lt;span class="py"&gt;sonar.projectName&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;project-name-example&lt;/span&gt;
&lt;span class="py"&gt;sonar.projectVersion&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;project-version-example&lt;/span&gt;

&lt;span class="c"&gt;# Analysis specification
&lt;/span&gt;&lt;span class="py"&gt;sonar.language&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;go&lt;/span&gt;
&lt;span class="py"&gt;sonar.exclusions&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;**/mocks/**,**/vendor/**&lt;/span&gt;
&lt;span class="py"&gt;sonar.qualitygate.wait&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;true&lt;/span&gt;

&lt;span class="c"&gt;# Testing specification
&lt;/span&gt;&lt;span class="py"&gt;sonar.go.coverage.reportPaths&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;./coverage.out&lt;/span&gt;
&lt;span class="py"&gt;sonar.go.coverage.minimumCoverage&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;80&lt;/span&gt;
&lt;span class="py"&gt;sonar.test.exclusions&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;**/*_test.go,**/mocks/**,**/vendor/**&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The project specification is necessary to identify your project among the others that share your SonarQube dashboard, so make sure the key is unique and the name is descriptive.&lt;/p&gt;

&lt;p&gt;The analysis specification tells sonar which framework to use when running the analysis. By default it analyzes code smells, code duplication, and bugs. Coverage analysis won't be done unless we feed a report into it, which we'll do by the end of this article.&lt;/p&gt;

&lt;p&gt;Specify the language you're using in &lt;code&gt;sonar.language&lt;/code&gt;; in this case we'll use Go. Exclude the folders that don't need analysis, such as mock files or vendored files, under &lt;code&gt;sonar.exclusions&lt;/code&gt;. You can also use &lt;code&gt;sonar.inclusions&lt;/code&gt; to specify the folders that should be analyzed, but since we want everything we write to be put under scrutiny by default, specifying only exclusions suffices. Finally, &lt;code&gt;sonar.qualitygate.wait=true&lt;/code&gt; tells the SonarQube analysis to wait for the quality gate evaluation to finish before completing.&lt;/p&gt;

&lt;p&gt;Quality gates could be a topic for another time, but put simply, a quality gate is a set of thresholds your code must meet to qualify as production-ready. These thresholds can include the percentage of code coverage, the number of code smells, and so on. Waiting for this evaluation may not be necessary at all, unless you want to make sure every deploy is production-ready.&lt;/p&gt;

&lt;p&gt;Next is specifying the tests. &lt;code&gt;sonar.go.coverage.reportPaths=./coverage.out&lt;/code&gt; tells sonar where to read the coverage report, while &lt;code&gt;sonar.go.coverage.minimumCoverage=80&lt;/code&gt; sets the minimum coverage percentage. You'd also want to list untested files, including the test files themselves, under &lt;code&gt;sonar.test.exclusions&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;That's it! We're done configuring the sonar file.&lt;/p&gt;
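&lt;p&gt;Before wiring this into CI, you can sanity-check the report path locally with the standard Go toolchain:&lt;/p&gt;

```shell
# Produce the report that sonar.go.coverage.reportPaths points at
go test -coverprofile=coverage.out -covermode=set ./...

# Optional: inspect per-function and total coverage before pushing
go tool cover -func=coverage.out
```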

&lt;h2&gt;3. Set up GitLab CI jobs&lt;/h2&gt;

&lt;p&gt;Find these stages in the existing &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; in your project, or create the file if it doesn't exist.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;stages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;deploy&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Usually, there are three main stages in CI: build, test, and deploy. But now we not only want to run tests as part of a stage, but also analyze the codebase as a whole. You can add a new &lt;code&gt;analyze&lt;/code&gt; stage after &lt;code&gt;test&lt;/code&gt;, or you can put both under one stage. In my case, I preferred grouping them together under an &lt;code&gt;analyze&lt;/code&gt; stage.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;stages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;analyze&lt;/span&gt;
   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;deploy&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we're going to create two jobs: &lt;code&gt;unit-test&lt;/code&gt; and &lt;code&gt;sonar-analysis&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;unit-test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;analyze&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;golang:1.18&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;go test -v -coverprofile=coverage.out -covermode=set ./...&lt;/span&gt;

&lt;span class="na"&gt;sonar-analysis&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;analyze&lt;/span&gt;
   &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sonarsource/sonar-scanner-cli:latest&lt;/span&gt;
   &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;sonar-scanner&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that both of these jobs belong to the same &lt;code&gt;analyze&lt;/code&gt; stage while running on different Docker images, which allows each job to have its own dependencies. Set up this way, however, both jobs run simultaneously, which won't let us integrate the coverage report into the sonar analysis. Instead, we want to run &lt;code&gt;unit-test&lt;/code&gt; first, generate the coverage report, and then run the overall code analysis on top of that report.&lt;/p&gt;

&lt;p&gt;To do this, we add &lt;code&gt;artifacts.paths&lt;/code&gt; to save the generated coverage report. Then we tell the &lt;code&gt;sonar-analysis&lt;/code&gt; job to wait for &lt;code&gt;unit-test&lt;/code&gt; by adding &lt;code&gt;needs.job&lt;/code&gt;, and to download and use the artifact produced by &lt;code&gt;unit-test&lt;/code&gt; by adding &lt;code&gt;needs.artifacts&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;unit-test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;analyze&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;golang:1.18&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;go test -v -coverprofile=coverage.out -covermode=set ./...&lt;/span&gt;
  &lt;span class="na"&gt;artifacts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;coverage.out&lt;/span&gt;

&lt;span class="na"&gt;sonar-analysis&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;analyze&lt;/span&gt;
   &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sonarsource/sonar-scanner-cli:latest&lt;/span&gt;
   &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;sonar-scanner&lt;/span&gt;
   &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unit-test&lt;/span&gt;
       &lt;span class="na"&gt;artifacts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can add other configurations too, such as &lt;code&gt;allow_failure: true&lt;/code&gt; so failures won't block the rest of the pipeline. You can also restrict which branches run these jobs by adding &lt;code&gt;only&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;   &lt;span class="s"&gt;...&lt;/span&gt;
   &lt;span class="s"&gt;allow_failure&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
   &lt;span class="s"&gt;only&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; 
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;develop&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;master&lt;/span&gt;
  &lt;span class="s"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One important thing to note: this example does not cover everything. For instance, you'll still need to connect the &lt;code&gt;sonar-analysis&lt;/code&gt; job to your SonarQube dashboard, which we don't cover in this article.&lt;/p&gt;
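&lt;p&gt;As a hedged sketch, that connection usually boils down to giving the scanner a host URL and a token. &lt;code&gt;sonar.host.url&lt;/code&gt; and &lt;code&gt;sonar.login&lt;/code&gt; are standard scanner properties, but the &lt;code&gt;SONAR_HOST_URL&lt;/code&gt; and &lt;code&gt;SONAR_TOKEN&lt;/code&gt; variable names below are assumptions; define them in your GitLab project's CI/CD settings:&lt;/p&gt;

```yaml
sonar-analysis:
   stage: analyze
   image: sonarsource/sonar-scanner-cli:latest
   script:
     # SONAR_HOST_URL and SONAR_TOKEN are hypothetical CI/CD variables
     - sonar-scanner -Dsonar.host.url=$SONAR_HOST_URL -Dsonar.login=$SONAR_TOKEN
```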

&lt;h2&gt;4. Testing GitLab CI jobs in a local environment&lt;/h2&gt;

&lt;p&gt;Testing &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; changes is the trickier part, since the file runs on GitLab Runner and can't be tested with regular Docker commands. To do this, we need to install GitLab Runner on our machine and then test the &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file on top of it.&lt;/p&gt;

&lt;h3&gt;GitLab Runner Installation&lt;/h3&gt;

&lt;p&gt;Install GitLab Runner following the instructions &lt;a href="https://docs.gitlab.com/runner/install/" rel="noopener noreferrer"&gt;here&lt;/a&gt;, or install it from the &lt;a href="https://docs.gitlab.com/runner/install/linux-repository.html" rel="noopener noreferrer"&gt;Linux repository&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;Setting up GitLab Runner&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Make sure the installation succeeded by running &lt;code&gt;gitlab-runner help&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Register gitlab-runner
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;gitlab-runner register
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will be prompted to input your GitLab instance URL and a token, which can be found in your repo's &lt;code&gt;Settings&lt;/code&gt; under the &lt;code&gt;CI/CD&lt;/code&gt; menu. Expand the &lt;code&gt;Runners&lt;/code&gt; section and find the token under &lt;code&gt;Specific runners&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enter a runner description and tags&lt;/li&gt;
&lt;li&gt;Choose an executor; we'll use Docker for our project. &lt;a href="https://docs.gitlab.com/runner/executors/index.html" rel="noopener noreferrer"&gt;See here&lt;/a&gt; for other options&lt;/li&gt;
&lt;li&gt;For detailed instructions, &lt;a href="https://docs.gitlab.com/runner/register/index.html" rel="noopener noreferrer"&gt;see here&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
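&lt;p&gt;For scripting, registration can also be done non-interactively with flags instead of prompts; the URL and token below are placeholders:&lt;/p&gt;

```shell
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "YOUR_REGISTRATION_TOKEN" \
  --description "local-docker-runner" \
  --executor "docker" \
  --docker-image "alpine:latest"
```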

&lt;h3&gt;Run test&lt;/h3&gt;

&lt;p&gt;After successfully setting up the runner, you can start the service and then go to the project's root directory. To test one specific job, run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gitlab-runner &lt;span class="nb"&gt;exec &lt;/span&gt;docker job_name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To run the runner as a service that picks up and processes the whole pipeline, use&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gitlab-runner run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that the &lt;code&gt;--config&lt;/code&gt; flag points to the runner's own &lt;code&gt;config.toml&lt;/code&gt;, not to your &lt;code&gt;.gitlab-ci.yml&lt;/code&gt;. If your runner configuration isn't detected, you can specify its location explicitly&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gitlab-runner run &lt;span class="nt"&gt;--config&lt;/span&gt; /path/to/config.toml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it! You should now be able to execute GitLab CI jobs on your machine and see the SonarQube analysis running in real time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.freepik.com/free-vector/programmers-working-project-website-development-methodology-technical-support_11669314.htm#query=devops&amp;amp;position=4&amp;amp;from_view=keyword" rel="noopener noreferrer"&gt;Image by vectorjuice&lt;/a&gt; on Freepik&lt;/p&gt;

</description>
      <category>gitlab</category>
      <category>sonarqube</category>
      <category>cicd</category>
    </item>
  </channel>
</rss>
