<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: ujwal dhakal</title>
    <description>The latest articles on DEV Community by ujwal dhakal (@ujwaldhakal).</description>
    <link>https://dev.to/ujwaldhakal</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F99000%2Fb5a8bbff-add3-47f1-b3fa-9ed4a62c9b48.jpg</url>
      <title>DEV Community: ujwal dhakal</title>
      <link>https://dev.to/ujwaldhakal</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ujwaldhakal"/>
    <language>en</language>
    <item>
      <title>Application Resilience: 4 Ways to Build It</title>
      <dc:creator>ujwal dhakal</dc:creator>
      <pubDate>Tue, 22 Nov 2022 13:01:02 +0000</pubDate>
      <link>https://dev.to/ujwaldhakal/application-resilience-4-ways-to-build-it-1klh</link>
      <guid>https://dev.to/ujwaldhakal/application-resilience-4-ways-to-build-it-1klh</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Do you ever wonder whether you could keep your application from experiencing any failures or downtime at all?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The answer is "No." You can't prevent every single failure and every minute of downtime that could ever happen to an application.&lt;/p&gt;

&lt;p&gt;Don't worry, though: there are a few areas you can improve to develop a robust system and mindset.&lt;/p&gt;

&lt;p&gt;Systems that recover quickly after failure are considered resilient. Quick recovery does not always mean that a problem is fixed within a short period of time. It means that whatever can be recovered immediately is recovered immediately, and for everything else there is a simple, well-defined procedure to restore the system.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to build a resilient system
&lt;/h2&gt;

&lt;p&gt;There are multiple ways to build a resilient system, but in this article we will discuss four of them.&lt;/p&gt;

&lt;h4&gt;
  
  
  1) Auto healing
&lt;/h4&gt;

&lt;p&gt;There should be some means of detecting downtime if your program or a particular component goes down. After a set amount of time, the program should be able to continue the task from where it left off.&lt;/p&gt;

&lt;p&gt;Let's talk about the ways to introduce auto-healing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Retry: If at all possible, you should have manual or automatic techniques to reattempt things that, for a variety of reasons, were not successful the first time. For instance, if you are running an eCommerce site and a customer wants to pay using PayPal but PayPal is down, you can allow a few retries.&lt;/li&gt;
&lt;li&gt;Proper UX: In some circumstances, you can also design the user experience (UX) of the application so that the end user is aware of any unfinished business and can try again with less effort. For instance, whenever you post a video to Facebook, it will let you know after a short while whether the upload succeeded or failed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  2) Monitoring
&lt;/h4&gt;

&lt;p&gt;You can track the behavior of your application over time by using monitoring. By "behavior," I mean everything that is necessary for a business to function properly, including major errors, resource consumption, feature utilization, revenue numbers, and cost numbers.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Logs: If handled properly, logs can be a highly valuable source of information. They serve as proof that a certain event occurred. For instance, you keep track of every time a customer tries to pay and every time they are unable to pay; these records can later be used to determine when and why failures happen.&lt;/li&gt;
&lt;li&gt;Metrics: Metrics are a method for quantifying anything. Anything that is significant to you can be measured. For instance, the total number of 4xx status codes, the total number of registered users, the total number of users who were unable to make a payment using PayPal, etc.&lt;/li&gt;
&lt;li&gt;Alerting: Even with solid metrics and logs, what if you always need to check manually whether something is working as expected? Alerting means being informed automatically when an expectation you have set is violated. For example, you can set up an alert for whenever someone is unable to pay using PayPal, so that you are notified through email, Slack, or SMS that the payment has failed for a specific user.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  3) Testing
&lt;/h4&gt;

&lt;p&gt;Testing is a method of determining what would occur if, for example, you passed specific data to a specific function, clicked certain elements before or after the page loaded, or a third-party service went down. It involves confirming your hypotheses regarding specific features, techniques, scenarios, and so forth.&lt;/p&gt;

&lt;p&gt;Your application can be used or abused in a variety of ways, so by validating your assumptions you are already protecting yourself. It enables you to make wiser decisions on what to do next. Writing unit, integration, and, if applicable, end-to-end tests are a few examples.&lt;/p&gt;

&lt;h4&gt;
  
  
  4) Incident Retros
&lt;/h4&gt;

&lt;p&gt;Failures and other potential negative outcomes are unavoidable, as we've already stated. Whenever anything goes wrong, there should be a discussion or report covering what happened, when, and how.&lt;/p&gt;

&lt;p&gt;In incident retros, make sure not to blame anyone: things happen not because of a person but because the right process was not in place to begin with. After getting insights into how things happened, try to agree on a process that covers this kind of uncertainty next time. For example, "User A was not able to pay because the payment service was down at that time" could be one such discussion. The outcome might be: set up a retry on the first failure; if the payment still fails, notify the user that their action has failed; and create a simple way for users to retry the payment themselves.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Always remember that failure is inevitable; in light of this, keep three questions in mind: What would happen, how would it affect us, and how could we handle it if something or some part broke down? With this attitude, you'll start building resilience into your work. Building a robust system takes time and effort, and the key is to start small and keep going.&lt;/p&gt;

</description>
      <category>resilient</category>
      <category>resiliency</category>
      <category>stable</category>
      <category>zerodowntime</category>
    </item>
    <item>
      <title>How to use Protobuf with Go?</title>
      <dc:creator>ujwal dhakal</dc:creator>
      <pubDate>Wed, 06 Apr 2022 19:08:47 +0000</pubDate>
      <link>https://dev.to/ujwaldhakal/how-to-use-protobuf-with-go-4ncf</link>
      <guid>https://dev.to/ujwaldhakal/how-to-use-protobuf-with-go-4ncf</guid>
      <description>&lt;h1&gt;
  
  
  How to use Protobuf with Go?
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;Have you ever been in a situation where you do not know the structure of data that services are either consuming or publishing?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Did you know that with &lt;a href="https://developers.google.com/protocol-buffers"&gt;Protocol Buffers&lt;/a&gt; you have to write a schema only once, which defines the structure of the data, and the same schema can be used for validation before sending &amp;amp; receiving data?&lt;/p&gt;

&lt;p&gt;In this tutorial, you will learn about using Protobuf. You will send &amp;amp; receive a message asynchronously using Protobuf with &lt;a href="https://www.rabbitmq.com/"&gt;RabbitMQ&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Table Of Contents
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;Protobuf&lt;/li&gt;
&lt;li&gt;Bootstrapping Project&lt;/li&gt;
&lt;li&gt;Creating Protobuf Messages&lt;/li&gt;
&lt;li&gt;Publishing a Message&lt;/li&gt;
&lt;li&gt;Consuming a Message&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;p&gt;This tutorial will be a hands-on demonstration. If you’d like to follow along, be sure you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have &lt;a href="https://docs.docker.com/compose/"&gt;Docker Compose&lt;/a&gt; running on your machine.&lt;/li&gt;
&lt;li&gt;You have a basic understanding of &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt;, &lt;a href="https://go.dev/"&gt;Go&lt;/a&gt;, and &lt;a href="https://www.rabbitmq.com/"&gt;RabbitMQ&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Protobuf
&lt;/h1&gt;

&lt;p&gt;Protobufs are Messaging Contracts: schemas that describe the structure of a message. These files use the &lt;code&gt;.proto&lt;/code&gt; extension and are language-neutral, i.e. write once &amp;amp; use anywhere. Proto Messages are the structured schemas inside &lt;code&gt;.proto&lt;/code&gt; files.&lt;/p&gt;

&lt;p&gt;You will write Proto Messages, and with the &lt;a href="https://grpc.io/docs/protoc-installation/"&gt;Proto Compiler&lt;/a&gt; you can generate code that knows how to serialize &amp;amp; deserialize data in that format.&lt;/p&gt;

&lt;p&gt;There are many benefits of using Messaging Contracts like Protobuf:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;With these Proto Messages, you know what type of data you are sending &amp;amp; receiving&lt;/li&gt;
&lt;li&gt;They serve as documentation&lt;/li&gt;
&lt;li&gt;They support validation while serializing &amp;amp; deserializing&lt;/li&gt;
&lt;li&gt;Proto Messages are language-neutral and support various programming languages, which means you can publish messages from Go &amp;amp; start consuming them from PHP, Node, etc.&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Bootstrapping Project
&lt;/h1&gt;

&lt;p&gt;For creating a new test project follow the steps below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copy &lt;code&gt;Dockerfile&lt;/code&gt; &amp;amp; &lt;code&gt;docker-compose.yaml&lt;/code&gt; from &lt;a href="https://github.com/ujwaldhakal/golang-protobuf"&gt;this repository&lt;/a&gt;, which will dockerize the Go source code, the Proto Compiler &amp;amp; RabbitMQ&lt;/li&gt;
&lt;li&gt;Create a file &lt;code&gt;publish.go&lt;/code&gt; where you will write some code to publish events&lt;/li&gt;
&lt;li&gt;Create a file &lt;code&gt;consume.go&lt;/code&gt; where you will write some code to consume events&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Creating Protobuf Messages
&lt;/h1&gt;

&lt;p&gt;In an application, when sending messages you serialize data using the definition from Proto Messages, and when consuming messages you deserialize them using the same Proto Messages. Let's create a Proto Message so your services can use it to send and receive messages.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a folder &lt;code&gt;messages&lt;/code&gt; and inside that folder create a file called &lt;code&gt;user_registered.proto&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Inside &lt;code&gt;user_registered.proto&lt;/code&gt; copy the following code. Let's understand it&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;syntax&lt;/code&gt; defines which proto version you are going to use&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;go_package&lt;/code&gt; defines the path where all your generated Protobuf code will be placed&lt;/li&gt;
&lt;li&gt;`&lt;code&gt;import “validate/validate.proto”;&lt;/code&gt; it is importing the validator so that you can validate message using various rules without writing it by yourself&lt;/li&gt;
&lt;li&gt;Every message starts with the &lt;code&gt;message&lt;/code&gt; keyword followed by the name of the message. Inside the block, you write fields where each line defines a name &amp;amp; type, and the value after &lt;code&gt;=&lt;/code&gt; defines its order &amp;amp; any rules. In our case, the first field is &lt;code&gt;userId&lt;/code&gt;, an integer that should be greater than 0, and the next is &lt;code&gt;email&lt;/code&gt;, a string that should not be empty.&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
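The message described above could look something like the sketch below. It is an illustration only: the package name, field numbers, and exact protoc-gen-validate rule syntax are assumptions, not copied from the repository.

```proto
syntax = "proto3";

package messages;

option go_package = "messages/";

// Pulls in protoc-gen-validate rules so fields can be validated
// without hand-written checks.
import "validate/validate.proto";

message UserRegistered {
  // userId is an integer that must be greater than 0.
  int64 userId = 1 [(validate.rules).int64.gt = 0];
  // email is a string that must not be empty.
  string email = 2 [(validate.rules).string.min_len = 1];
}
```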
&lt;br&gt;
3. Run &lt;code&gt;make build-proto&lt;/code&gt; in your project root. This command will generate two files, &lt;code&gt;user_registered.pb.go&lt;/code&gt; and &lt;code&gt;user_registered.pb.validate.go&lt;/code&gt;, that define the types &amp;amp; include all the validation logic that you specified in the proto file, which you can later use in your code while serializing &amp;amp; deserializing.
&lt;h1&gt;
  
  
  Publishing a Message
&lt;/h1&gt;

&lt;p&gt;You have successfully created a Proto Message. Let's use the auto-generated code while sending messages to other services; for this you need a way to publish messages. Follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a directory &lt;code&gt;pkg&lt;/code&gt; and a file &lt;code&gt;rabbitmq.go&lt;/code&gt; inside it, where you will make a connection with RabbitMQ. In the following code, the function &lt;code&gt;getConnection&lt;/code&gt; imports the library and makes a connection with the RabbitMQ server, and upon error the function &lt;code&gt;failOnError&lt;/code&gt; terminates the application with an error.&lt;/li&gt;
&lt;/ol&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
2. After successfully making a connection, write a function that will publish a message to RabbitMQ. The following code accepts a &lt;code&gt;queueName&lt;/code&gt; and a payload as &lt;code&gt;message&lt;/code&gt;, and writes the message to the default exchange in JSON format.


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;3. Create &lt;code&gt;publish.go&lt;/code&gt;, which will import the publish function from &lt;code&gt;rabbitmq.go&lt;/code&gt; to send data. The following code uses a struct that was generated by the Proto Compiler, encodes it into JSON, and then calls the publish function that you created earlier to send data to the &lt;code&gt;UserRegistered&lt;/code&gt; queue.&lt;/p&gt;

&lt;p&gt;Try running &lt;code&gt;make publish&lt;/code&gt; to verify that your code is actually working; if you see the output &lt;code&gt;Data has been published&lt;/code&gt;, you were successful in publishing a message.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h1&gt;
  
  
  Consuming a Message
&lt;/h1&gt;

&lt;p&gt;Until now you defined a Proto Message, then you created &lt;code&gt;rabbitmq.go&lt;/code&gt; to provide a way to publish messages, &amp;amp; you used that in &lt;code&gt;publish.go&lt;/code&gt; to publish messages to RabbitMQ. Now let's consume the message that you published earlier.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Inside &lt;code&gt;rabbitmq.go&lt;/code&gt; add a function that will help in consuming a message. The following code makes a connection to RabbitMQ &amp;amp; returns a channel that you can use to listen for incoming messages.
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;2. Inside &lt;code&gt;consume.go&lt;/code&gt;, listen to all the messages and then print them. Let's understand the following code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The main function listens for messages by creating a consuming client. While deserializing a message, it imports the &lt;code&gt;UserRegistered&lt;/code&gt; struct from the auto-generated code to define the type of the data. There might be cases where the producer sends invalid data for various reasons, so it is better to validate that the data you are consuming matches the contract you specified.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;validateMessage&lt;/code&gt; accepts the JSON data and tries to validate it as per the &lt;code&gt;user_registered.proto&lt;/code&gt; definition, and if the data breaks any validation rules it will throw a validation error&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
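Here is a hand-rolled sketch of that validation step. The real Validate() method comes from the generated user_registered.pb.validate.go; the local struct and checks below (userId greater than 0, non-empty email) are stand-ins that mirror the contract described earlier, so the sketch stays self-contained.

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// UserRegistered stands in for the struct generated from
// user_registered.proto; field names are illustrative.
type UserRegistered struct {
	UserId int64  `json:"userId"`
	Email  string `json:"email"`
}

// validateMessage deserializes a consumed message and checks it against
// the contract: userId must be greater than 0, email must not be empty.
func validateMessage(body []byte) (*UserRegistered, error) {
	msg := new(UserRegistered)
	if err := json.Unmarshal(body, msg); err != nil {
		return nil, err
	}
	if !(msg.UserId > 0) {
		return nil, errors.New("userId must be greater than 0")
	}
	if msg.Email == "" {
		return nil, errors.New("email must not be empty")
	}
	return msg, nil
}

func main() {
	good := []byte(`{"userId":1,"email":"user@example.com"}`)
	bad := []byte(`{"userId":0,"email":""}`)

	if msg, err := validateMessage(good); err == nil {
		fmt.Println("valid message from", msg.Email)
	}
	if _, err := validateMessage(bad); err != nil {
		fmt.Println("rejected:", err)
	}
}
```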


&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Throughout this tutorial, you have learned what Protobuf is &amp;amp; how to use it with Go &amp;amp; RabbitMQ.&lt;/p&gt;

&lt;p&gt;Protobuf is one way to implement Messaging Contracts; there are more options like &lt;a href="https://docs.pact.io/"&gt;Pact&lt;/a&gt; and &lt;a href="https://www.asyncapi.com/"&gt;AsyncAPI&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now with this knowledge, you can use Protobuf in any programming language to send &amp;amp; receive data. You have also seen that it becomes easier to predict the data that you are consuming or publishing.&lt;/p&gt;

&lt;p&gt;Let me know if you have any comments/suggestions/feedback.&lt;/p&gt;

&lt;p&gt;Source code: &lt;a href="https://github.com/ujwaldhakal/golang-protobuf"&gt;https://github.com/ujwaldhakal/golang-protobuf&lt;/a&gt;&lt;br&gt;
Ref: &lt;a href="https://stackshare.io/protobuf"&gt;https://stackshare.io/protobuf&lt;/a&gt;&lt;/p&gt;

</description>
      <category>protobuf</category>
      <category>contracts</category>
      <category>go</category>
      <category>messaging</category>
    </item>
    <item>
      <title>Manage Multiple Cron with Helm Flow Control</title>
      <dc:creator>ujwal dhakal</dc:creator>
      <pubDate>Mon, 04 Apr 2022 12:39:00 +0000</pubDate>
      <link>https://dev.to/ujwaldhakal/manage-multiple-cron-with-helm-flow-control-32i7</link>
      <guid>https://dev.to/ujwaldhakal/manage-multiple-cron-with-helm-flow-control-32i7</guid>
      <description>&lt;p&gt;If you want to set up a &lt;a href="https://www.hivelocity.net/kb/what-is-cron-job/"&gt;Cron&lt;/a&gt; on your application, using a cron in Kubernetes is straightforward. All you need to do is copy the &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/"&gt;CronJob template&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you have one cronjob it won't matter much, but if you have many, creating a separate file per cron is not much fun.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://helm.sh/"&gt;Helm&lt;/a&gt; has &lt;a href="https://helm.sh/docs/chart_template_guide/control_structures/"&gt;Flow Control &lt;/a&gt;which can be used to manipulate dynamic values in the template.&lt;/p&gt;

&lt;p&gt;In this article, we will create a single cron and then loop through a collection of cronjob commands to create multiple CronJob resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a single CronJob:
&lt;/h2&gt;

&lt;p&gt;Create a cronjob by copying the following code.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;The file above creates a cron that runs every minute and prints some text.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating multiple Cronjob Commands:
&lt;/h2&gt;

&lt;p&gt;Now let's create multiple cronjob commands which will hold all the unique placeholders. We will create a separate file that holds the values of each cron, i.e. &lt;code&gt;name&lt;/code&gt;, &lt;code&gt;frequency&lt;/code&gt;, &lt;code&gt;command&lt;/code&gt;, etc. Let's create &lt;code&gt;cronjobs.yaml&lt;/code&gt; at &lt;code&gt;.helm/values/cronjobs.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We will create four cronjobs, each with its own id, name, command, and schedule, but you can add more dynamic values if you want.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
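Such a values file could look like the sketch below; the ids, names, commands, and schedules are placeholders, not the repository's actual values:

```yaml
# .helm/values/cronjobs.yaml (illustrative)
cronjobs:
  - id: 1
    name: print-date
    command: "date; echo Hello from cron 1"
    schedule: "* * * * *"
  - id: 2
    name: cleanup
    command: "echo cleaning up temp files"
    schedule: "*/5 * * * *"
  - id: 3
    name: report
    command: "echo generating report"
    schedule: "0 8 * * *"
  - id: 4
    name: backup
    command: "echo running backup"
    schedule: "0 2 * * *"
```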


&lt;h2&gt;
  
  
  Pass dynamic values to helm template:
&lt;/h2&gt;

&lt;p&gt;So far we have created a single cronjob and then multiple cronjob commands. Now we want to loop those commands through the single cronjob template so that we do not have to create multiple files manually.&lt;/p&gt;

&lt;p&gt;There are multiple constructs we can use from &lt;a href="https://helm.sh/docs/chart_template_guide/control_structures/"&gt;Helm Flow Control&lt;/a&gt;. Here we will use &lt;code&gt;{{- range $cronjob := $.Values.cronjobs }}&lt;/code&gt; to loop over the values and access the dynamic values like &lt;code&gt;{{$cronjob.id}}&lt;/code&gt;, &lt;code&gt;{{$cronjob.name}}&lt;/code&gt;, &lt;code&gt;{{$cronjob.schedule}}&lt;/code&gt;, as shown in the code below:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
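The looped template could look roughly like this sketch. Only the range construct and the $cronjob fields come from the text; the container image and the exact CronJob spec fields are assumptions:

```yaml
# templates/cronjob.yaml (illustrative)
{{- range $cronjob := $.Values.cronjobs }}
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: {{ $cronjob.name }}-{{ $cronjob.id }}
spec:
  schedule: {{ $cronjob.schedule | quote }}
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: {{ $cronjob.name }}
              image: busybox
              command: ["/bin/sh", "-c", {{ $cronjob.command | quote }}]
{{- end }}
```

The leading `---` inside the loop makes each iteration render as a separate Kubernetes resource in the manifest stream.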


&lt;p&gt;The final piece of the puzzle is to pass those dynamic values while installing/upgrading Helm. To do so, pass them using the &lt;code&gt;--values&lt;/code&gt; flag with the full path of the cronjobs values file, i.e. &lt;code&gt;.helm/values/cronjobs.yaml&lt;/code&gt;. The final command will look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm upgrade — install — values “.helm/values/cronjobs.yaml” multiple-cronjobs .helm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;We set up multiple cronjobs using two files: one defining the spec of the whole Kubernetes CronJob and the other defining the dynamic values, i.e. command, frequency, name, etc. You can build and manage complex configurations using Helm Flow Control.&lt;/p&gt;

&lt;p&gt;Github Link: &lt;a href="https://github.com/ujwaldhakal/multiple-cronjob-helm"&gt;https://github.com/ujwaldhakal/multiple-cronjob-helm&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cronjobs</category>
      <category>kubernetes</category>
      <category>helm</category>
    </item>
    <item>
      <title>Build a simple automated deployment pipeline for Cloud Run (Step by step tutorial)</title>
      <dc:creator>ujwal dhakal</dc:creator>
      <pubDate>Wed, 05 May 2021 02:59:00 +0000</pubDate>
      <link>https://dev.to/ujwaldhakal/build-a-simple-automated-deployment-pipeline-for-cloud-run-step-by-step-tutorial-211d</link>
      <guid>https://dev.to/ujwaldhakal/build-a-simple-automated-deployment-pipeline-for-cloud-run-step-by-step-tutorial-211d</guid>
      <description>&lt;p&gt;Automating the deployment pipeline is always a challenging thing. It requires a lot of effort in setting up the server for multiple environments, preparing the build, and deploying the same build into the server.&lt;/p&gt;

&lt;p&gt;We will go step by step through configuring a simple deployment pipeline so you can build further from there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AOlMfSybcMWDvVMeW_aMX6A.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AOlMfSybcMWDvVMeW_aMX6A.jpeg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;In this post, I am going to show you how easily one can build a deployment pipeline for their next app by just copying the source code. We will be using a simple dockerized NodeJs application hosted on Cloud Run. By the end of this read, you will be able to build a deployment pipeline for an app in any language.&lt;/p&gt;

&lt;p&gt;In this tutorial, we will be using the following stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://cloud.google.com/container-registry" rel="noopener noreferrer"&gt;Container Registry&lt;/a&gt;: stores all the Docker images to be used in Cloud Run&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://cloud.google.com/storage" rel="noopener noreferrer"&gt;Google Storage&lt;/a&gt;: saves our Terraform state.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://cloud.google.com/run" rel="noopener noreferrer"&gt;Cloud Run&lt;/a&gt;: the serverless platform where our final app will be hosted&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt;: helps us spin up the Cloud Run instance and create multiple working environments like staging and production&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://golang.org/" rel="noopener noreferrer"&gt;Go&lt;/a&gt;: lets us trigger Terraform commands whenever we want&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.github.com/en/actions" rel="noopener noreferrer"&gt;Github Actions&lt;/a&gt;: serves as the entry point that triggers the Go commands, and Go in turn triggers Terraform&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why Cloud Run, Go and Terraform?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Cloud Run
&lt;/h3&gt;

&lt;p&gt;Cloud Run is a managed compute platform that enables you to run containers; Google scales the containers per request and you pay per usage. The infrastructure overhead is handled by Google itself, so we can focus on building apps. Google has great docs at &lt;a href="https://cloud.google.com/run" rel="noopener noreferrer"&gt;Cloud Run&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Go
&lt;/h3&gt;

&lt;p&gt;Go is a very popular programming language these days thanks to its fast, reliable, and simple design. With Go, we can generate binaries that execute without installing anything else, which is handy when working across operating systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  Terraform
&lt;/h3&gt;

&lt;p&gt;Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. So with Terraform, we can build the infrastructure like we build an app with Code i.e &lt;strong&gt;Infrastructure as Code&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Assumptions
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;You are familiar with how Go works&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You are familiar with how Cloud Run works i.e specifying container image. If not check this article &lt;a href="https://ujwaldhakal.medium.com/automate-cloud-run-deployment-in-a-minute-cb85e7db9f82" rel="noopener noreferrer"&gt;https://ujwaldhakal.medium.com/automate-cloud-run-deployment-in-a-minute-cb85e7db9f82&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You are familiar with how Terraform spins up a new server and how Terraform manages state&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You are familiar with Dockerizing the application.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Deployment tutorial steps
&lt;/h2&gt;

&lt;p&gt;We will be building the deployment pipeline in the following four steps:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Dockerize application
&lt;/h3&gt;

&lt;p&gt;First, we will create an application and dockerize it. Let's create an &lt;code&gt;index.js&lt;/code&gt; file inside the &lt;code&gt;src&lt;/code&gt; directory with a hello world response.&lt;/p&gt;

&lt;p&gt;index.js&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
Now let's dockerize it by adding a Dockerfile. This is a simple Dockerfile with a multi-stage build, and I have added a production target so that we can use the same Dockerfile for local and production. The same target will be used while building the Docker image in the coming steps, where we will build and push images to Google Container Registry.&lt;br&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h3&gt;
  
  
  2. Infrastructure setup with Terraform
&lt;/h3&gt;

&lt;p&gt;First, we will write a Terraform file where we tell Terraform to spin up the Cloud Run instance. Let's create a GitHub project and a folder &lt;code&gt;cicd&lt;/code&gt; containing the files Terraform needs to create a Cloud Run instance with a given image.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
You only need to replace &lt;code&gt;dev.tfvars&lt;/code&gt; and &lt;code&gt;credentials/dev-cred.json&lt;/code&gt; with your actual credentials. We can test this by running &lt;code&gt;terraform apply&lt;/code&gt;; for it to work you need an image in Google Container Registry:

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply -var image_tag=docker_image_tag -var-file=dev.tfvars -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Once we are able to push the Docker image to Container Registry, we can use this. But we want to make it simpler and automated: what if we could deploy by running &lt;code&gt;./cicd/deploy dev master&lt;/code&gt;, where &lt;code&gt;dev&lt;/code&gt; is the environment and &lt;code&gt;master&lt;/code&gt; is the branch name in Git? Let's work on a few files to achieve this.&lt;/p&gt;
&lt;h3&gt;
  
  
  3. Triggering Terraform with Go
&lt;/h3&gt;

&lt;p&gt;With Go, we can create binaries that will run without installing any further dependencies. Go will build a Docker image, push it to the Container Registry if the image does not already exist there, and trigger Terraform to deploy the new image to Cloud Run.&lt;/p&gt;

&lt;p&gt;I am using &lt;a href="https://github.com/ujwaldhakal/gcp-deployment-utils" rel="noopener noreferrer"&gt;https://github.com/ujwaldhakal/gcp-deployment-utils&lt;/a&gt; this package for all utilities in this demo. Getting a commit hash of the current branch, logging into Container Registry, and Pushing images are all done by the &lt;strong&gt;go-gcp-docker-utils&lt;/strong&gt; package.&lt;/p&gt;

&lt;p&gt;If you look closely at the Build function at &lt;a href="https://github.com/ujwaldhakal/gcp-deployment-utils/blob/master/docker/docker.go#LC24" rel="noopener noreferrer"&gt;https://github.com/ujwaldhakal/gcp-deployment-utils/blob/master/docker/docker.go#LC24&lt;/a&gt;, there is a &lt;a href="https://docs.docker.com/engine/reference/commandline/build/#specifying-target-build-stage---target" rel="noopener noreferrer"&gt;target&lt;/a&gt; being used in the Dockerfile, because often the build will differ between the production and local environments, so make sure you use a &lt;a href="https://docs.docker.com/engine/reference/commandline/build/#specifying-target-build-stage---target" rel="noopener noreferrer"&gt;target&lt;/a&gt; in your Dockerfile.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;
Inside function initTerraform&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cmd := exec.Command(“terraform”, “init”, “-backend-config”, “bucket=tf-test-app”)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;tf-test-app&lt;/strong&gt; is the name of our bucket, which has to be created manually in Cloud Storage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3740%2F1%2Ax1GFqi8HFWxd7Y3ZLG-hjQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3740%2F1%2Ax1GFqi8HFWxd7Y3ZLG-hjQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This &lt;code&gt;deployer.go&lt;/code&gt; produces a &lt;code&gt;deployer&lt;/code&gt; binary via &lt;code&gt;go build deployer.go&lt;/code&gt;. The deployer binary expects one parameter, the name of the environment, and it will prepare the service account JSON &lt;code&gt;dev-cred.json&lt;/code&gt;, &lt;code&gt;dev.tfvars&lt;/code&gt;, and the Git commit hash to be used in &lt;code&gt;terraform apply&lt;/code&gt;. After that, it will check whether the image we are trying to build already exists. If it already exists, it won't build; if not, it will build and push. Finally, it will run &lt;code&gt;terraform apply&lt;/code&gt; with the given credentials.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Since it will run inside a GitHub Action after checking out the branch given by &lt;code&gt;./cicd/deploy dev master&lt;/code&gt;, we will use the same commit hash to tag the Docker image and deploy that same image.&lt;br&gt;
 Note: Make sure your IAM JSON has permissions to read the bucket, create &amp;amp; destroy Cloud Run services, and read &amp;amp; add container images to Google Container Registry&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you want to modify anything, you can copy the same function from the utils package into your actual project and use your own version instead of the package.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Glue everything together with GitHub Actions and a bash file
&lt;/h3&gt;

&lt;p&gt;Finally, these two pieces run inside a GitHub Actions container upon a manual command trigger, i.e. deploy dev master. The deploy bash script we created inside cicd processes our deploy dev master command and triggers a &lt;a href="https://docs.github.com/en/actions/reference/events-that-trigger-workflows" rel="noopener noreferrer"&gt;GitHub Repository Dispatch&lt;/a&gt;, which in turn triggers deploy.yml with payloads, i.e. the name of the environment and the branch. GitHub Actions will check out the branch given in the deploy command and run the deployer binary with the environment name; we have already discussed how &lt;code&gt;deployer.go&lt;/code&gt; handles the incoming request.&lt;/p&gt;

&lt;p&gt;In order to run this, you need to obtain a &lt;a href="https://docs.github.com/en/github/authenticating-to-github/creating-a-personal-access-token" rel="noopener noreferrer"&gt;GitHub Personal Access Token&lt;/a&gt; and put it inside the .env file. Usually we don't track the .env file in git, but for this purpose I am pushing it to GitHub.&lt;/p&gt;

&lt;p&gt;Once you trigger ./cicd/deploy dev master, the deploy file will trigger the GitHub Action deploy.yml and pass the dev and master arguments; the GitHub Action will check out the given branch, i.e. master, and pass the name of the environment to the deployer.go binary. Once the binary knows the commit hash and the environment name, it will load the necessary credentials and trigger terraform apply with the new image tag, and it will take a few minutes for the change to be reflected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3702%2F1%2Ard0pkbqlnADJtZc5asFcug.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3702%2F1%2Ard0pkbqlnADJtZc5asFcug.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will see a green checkmark next to the deploy text once your deployment succeeds. You can find the action at &lt;a href="https://github.com/rucaHQ/yorba-nodejs/actions" rel="noopener noreferrer"&gt;https://github.com/username/reponame/actions&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Add More Environments
&lt;/h2&gt;

&lt;p&gt;We have only seen the setup of one environment, i.e. &lt;strong&gt;dev&lt;/strong&gt;. But with more team members we will need more environments; they could be production, staging, test1, test2, based on your preference.&lt;/p&gt;

&lt;p&gt;Adding more environments is as simple as copying the config files and editing them, as the most challenging part, getting up and running with one environment, is already done. Now we just need to create separate service account credentials by making a separate project under the same Google Cloud account, and make them accessible in &lt;strong&gt;deployer.go&lt;/strong&gt; as we did for dev and production. You can add more environments either with an if-else pattern or by passing the environment name as a dynamic variable, i.e. ${environment}.tfvars and credentials/${environment}-cred.json.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note -: If we don't want to create multiple Google Cloud Projects to handle multiple environments, we can use &lt;a href="https://www.terraform.io/docs/language/state/workspaces.html" rel="noopener noreferrer"&gt;Terraform Workspaces&lt;/a&gt; to handle multiple environments within the same project.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func getTfVarFileName(env string) string {

if env == "dev" {

return "dev.tfvars"

}

if env == "production" {

return "prof.tfvars"

}

panic("Please select correct environment only dev &amp;amp; production available at the moment")

}

func getCredentialsFilePath(env string) string {

if env == "dev" {

return "credentials/dev-cred.json"

}

if env == "production" {

return "credentials/prod-cred.json"

}

panic("error on loading credentials")

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
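
&lt;p&gt;The if-else pattern above works, but as mentioned, you can also derive both file names directly from the environment name. A minimal sketch of that dynamic variant; the allowed-environments whitelist is my own addition:&lt;/p&gt;

```go
package main

import "fmt"

// allowedEnvs is the whitelist of environments we actually have
// credentials for (an assumption; extend it as you add environments).
var allowedEnvs = map[string]bool{"dev": true, "production": true}

// tfVarFile returns ENV.tfvars for any allowed environment.
func tfVarFile(env string) string {
	if !allowedEnvs[env] {
		panic("unknown environment: " + env)
	}
	return fmt.Sprintf("%s.tfvars", env)
}

// credentialsFile returns credentials/ENV-cred.json for any allowed environment.
func credentialsFile(env string) string {
	if !allowedEnvs[env] {
		panic("unknown environment: " + env)
	}
	return fmt.Sprintf("credentials/%s-cred.json", env)
}

func main() {
	fmt.Println(tfVarFile("dev"))
	fmt.Println(credentialsFile("production"))
}
```

With this shape, adding a staging environment means adding one map entry and the matching .tfvars and credential files, with no new branches in the code.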

&lt;p&gt;This demo does not cover any DNS automation for adding a domain &amp;amp; verifying it in Cloud Run. Since the exposed credentials are not encrypted, please do not use this approach in public repositories; only use it in private repositories.&lt;/p&gt;

&lt;p&gt;If you want to encrypt the credentials, you can use &lt;a href="https://cloud.google.com/security-key-management" rel="noopener noreferrer"&gt;Google KMS&lt;/a&gt; to encrypt them with a key and decrypt them with the same key when authenticating to Google Cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This is just a basic idea of how to build CI/CD with minimal tools. With this approach, one can build CI/CD for a typical application in a company and attach other Google services in Terraform too, like Cloud SQL, Redis, etc.&lt;/p&gt;

&lt;p&gt;Special thanks and credits to Ujjwal Ojha for helping me understand &amp;amp; build this CI/CD.&lt;/p&gt;

&lt;p&gt;Let me know your thoughts!&lt;/p&gt;

&lt;p&gt;Source -: &lt;a href="https://github.com/ujwaldhakal/cloud-run-cicd-boilerplate" rel="noopener noreferrer"&gt;https://github.com/ujwaldhakal/cloud-run-cicd-boilerplate&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>cloudrun</category>
      <category>severless</category>
      <category>deployment</category>
    </item>
    <item>
      <title>Easily Manage Dot Files (Config Files)</title>
      <dc:creator>ujwal dhakal</dc:creator>
      <pubDate>Tue, 13 Apr 2021 02:27:22 +0000</pubDate>
      <link>https://dev.to/ujwaldhakal/easily-manage-dot-files-config-files-5fhj</link>
      <guid>https://dev.to/ujwaldhakal/easily-manage-dot-files-config-files-5fhj</guid>
      <description>&lt;p&gt;Whenever I migrated from an old laptop to a new one or re-installed the whole OS, I always ended up copying all the config files (IDE configuration, bash history, app profiles, etc.) to a hard disk.&lt;/p&gt;

&lt;p&gt;I either ended up copying all the unnecessary configs from the home folder (by zipping everything), or, if I picked particular configs, I kept missing things.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68e5ru668x7zolg7sqzs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68e5ru668x7zolg7sqzs.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even if I managed to zip all those things, storing them was another big challenge. Storing data comes with a cost, as one needs to preserve it locally or in the cloud. Updating the configs is even harder, as I had to manually repeat all these steps.&lt;/p&gt;

&lt;p&gt;So I started using Git to track my config files: I created a private repo where I could push all of them.&lt;/p&gt;

&lt;p&gt;At first, I created a bash script named &lt;code&gt;sync.sh&lt;/code&gt; inside a directory called dotfiles.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sync.sh&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#
#!/bin/bash
declare -a filesToSync=(".bashrc" ".bash_history" ".zshrc" ".zsh_history" ".gitignore_global" ".gitconfig")
## now loop through the above array
for i in "${filesToSync[@]}"
do
   cp ~/$i ~/dotfiles/$i
   # or do whatever with individual element of the array
done
declare -a foldersToSync=(".config/Terminator",".config/JetBrains") 
## now loop through the above array
for i in "${foldersToSync[@]}"
do
  cp -r ~/$i ~/dotfiles
   ## or do whatever with individual element of the array
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So after running bash &lt;code&gt;sync.sh&lt;/code&gt;, the script copies all my local config files and folders into the dotfiles directory.&lt;/p&gt;

&lt;p&gt;Now all I need to do is run &lt;code&gt;git add . &amp;amp;&amp;amp; git commit -m "message" &amp;amp;&amp;amp; git push -u origin master&lt;/code&gt; to push everything to my Git repository.&lt;/p&gt;

&lt;p&gt;Then I wanted to automate the above manual process on a weekly basis, so I created another file inside the same dotfiles directory with the name&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cronscript.sh&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#
#!/bin/bash
cd ~/dotfiles
bash syncfile.sh
git add .
git commit -m “weekly changes”
git push -u origin master -f 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, I created a weekly cron job with &lt;code&gt;crontab -e&lt;/code&gt; that runs every Sunday at midnight:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;0 0 * * 0 bash /home/ujwal/dotfiles/cronscript.sh&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Every week my config files get synced automatically, and I don't need to remember to do it at all.&lt;/p&gt;

&lt;p&gt;There are other great tools for managing dotfiles with many more features than the above script, like &lt;a href="https://github.com/jbernard/dotfiles" rel="noopener noreferrer"&gt;https://github.com/jbernard/dotfiles&lt;/a&gt;.&lt;br&gt;
I simply made my own script so I could learn bash scripting along the way.&lt;/p&gt;

</description>
      <category>configmanagement</category>
      <category>configs</category>
      <category>dotfiles</category>
      <category>dotfilesmanagement</category>
    </item>
    <item>
      <title>7 Easiest Ways To Introduce Technical Debts</title>
      <dc:creator>ujwal dhakal</dc:creator>
      <pubDate>Mon, 03 Aug 2020 15:59:09 +0000</pubDate>
      <link>https://dev.to/ujwaldhakal/7-easiest-ways-to-introduce-technical-debts-4ag5</link>
      <guid>https://dev.to/ujwaldhakal/7-easiest-ways-to-introduce-technical-debts-4ag5</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Technical debt is the work left undone in making quality software due to various circumstances. Poor management, decision making &amp;amp; process are the major factors behind it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It is just like financial debt: the longer you keep not paying it, the worse it gets.&lt;/p&gt;

&lt;p&gt;So in this read, I will guide you on how one can introduce “Technical Debt” effectively &amp;amp; efficiently.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QYWJcfPz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2360/0%2A1s4jyQf13u-ABb0A" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QYWJcfPz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2360/0%2A1s4jyQf13u-ABb0A" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here are the 7 quickest ways to get the job done.&lt;/p&gt;

&lt;p&gt;1)   &lt;strong&gt;Introduce multiple languages&lt;/strong&gt; -:&lt;/p&gt;

&lt;p&gt;Most companies these days already follow microservices. Introduce multiple languages across your microservices so that it becomes difficult for the company to hire newcomers and for existing members to collaborate.&lt;/p&gt;

&lt;p&gt;2)  &lt;strong&gt;Don’t follow any coding principles&lt;/strong&gt; -:&lt;/p&gt;

&lt;p&gt;I repeat: please do not follow any coding principles, and try to violate principles like &lt;a href="https://www.makeuseof.com/tag/basic-programming-principles/"&gt;YAGNI, SOLID, KISS, DRY, etc&lt;/a&gt;. After some time, you won't be able to extend your project with new features easily. It will take many days to get the job done, and if you do manage to get it done, be ready for more and more bugs.&lt;/p&gt;

&lt;p&gt;3) &lt;strong&gt;Do not refactor &amp;amp; write tests&lt;/strong&gt; -:&lt;/p&gt;

&lt;p&gt;Keep working on more features; just don't look back to refactor the code, and do not write a single test for any functionality. Eventually, you will feel your productivity decrease. This will help introduce more bugs &amp;amp; make features take more time to release.&lt;/p&gt;

&lt;p&gt;4) &lt;strong&gt;Do not document a single thing&lt;/strong&gt; -:&lt;/p&gt;

&lt;p&gt;Whether it is complex logic you have written or a complex app you have built, just don't share a single word in any written format. Act like you are the single source of truth. Let your company &amp;amp; colleagues suffer to find the truth.&lt;/p&gt;

&lt;p&gt;5) &lt;strong&gt;Do not monitor any projects&lt;/strong&gt; -:&lt;/p&gt;

&lt;p&gt;Let your projects grow wild in nature. Don't put any logs &amp;amp; analytics into your application; this way you won't actually know where to refactor &amp;amp; which features to work on next. If you are a company planning to bring in new members, just don't analyze your team's strengths &amp;amp; capabilities. Let the new members do their part and let them leave without any handover.&lt;/p&gt;

&lt;p&gt;6) &lt;strong&gt;Use multiple cloud services provider&lt;/strong&gt; -:&lt;/p&gt;

&lt;p&gt;Try to use as many cloud service providers as you can. Try out a new provider per project until you start feeling you have really made the worst decision. Your dependencies will spread across multiple providers, so you will spend most of your time figuring things out among them.&lt;/p&gt;

&lt;p&gt;7) &lt;strong&gt;Always rush&lt;/strong&gt; -:&lt;/p&gt;

&lt;p&gt;If a feature takes 3 weeks to build, try to build it within 1.5 weeks with all the shortcuts and hack fixes you know, and don't refactor later on. This way the complexity of the code will keep increasing until the project becomes difficult to work with, i.e. taking a long time to push new features, more time to fix bugs, bugs mating with bugs &amp;amp; more bugs.&lt;/p&gt;

&lt;p&gt;Don't take ownership of any work you do, and don't be responsible for any mistakes you make, so that you never feel like there is room for improvement. This way the fire can keep burning.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There are other indirect ways you could eventually introduce &lt;strong&gt;Technical Debt&lt;/strong&gt;, like demotivating your employees, mental burnout, lack of freedom, micro-management, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--C-suZkL5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AGId8Q0Fj6okpG_VM.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--C-suZkL5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AGId8Q0Fj6okpG_VM.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>technicaldebt</category>
    </item>
    <item>
      <title>What are you doing for keeping your Mental Health green ?</title>
      <dc:creator>ujwal dhakal</dc:creator>
      <pubDate>Sat, 25 Jul 2020 17:38:42 +0000</pubDate>
      <link>https://dev.to/ujwaldhakal/what-are-you-doing-for-keeping-your-mental-health-green-484j</link>
      <guid>https://dev.to/ujwaldhakal/what-are-you-doing-for-keeping-your-mental-health-green-484j</guid>
      <description></description>
      <category>discuss</category>
    </item>
    <item>
      <title>Fast Prototyping with Lighthouse Laravel</title>
      <dc:creator>ujwal dhakal</dc:creator>
      <pubDate>Sat, 25 Jul 2020 17:14:42 +0000</pubDate>
      <link>https://dev.to/ujwaldhakal/fast-prototyping-with-lighthouse-laravel-2kbn</link>
      <guid>https://dev.to/ujwaldhakal/fast-prototyping-with-lighthouse-laravel-2kbn</guid>
      <description>&lt;p&gt;&lt;a href="https://laravel.com/" rel="noopener noreferrer"&gt;Laravel&lt;/a&gt; is a full-fledged framework suitable for any kind of application, from small and medium to large. What is special about Laravel is the development process: they claim rapid web development with it, and there is no doubt about that.&lt;/p&gt;

&lt;p&gt;What if I told you that you can create an app a lot faster than before by using &lt;a href="https://lighthouse-php.com/master/api-reference/directives.html#all" rel="noopener noreferrer"&gt;Lighthouse&lt;/a&gt; on Laravel? &lt;a href="https://lighthouse-php.com/master/api-reference/directives.html#all" rel="noopener noreferrer"&gt;Lighthouse&lt;/a&gt; is a GraphQL wrapper for Laravel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2A516sp8efFuC8244h.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2A516sp8efFuC8244h.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have used this on a side project, the &lt;a href="http://www.daani.life" rel="noopener noreferrer"&gt;DAANI&lt;/a&gt; API. &lt;a href="https://daani.life" rel="noopener noreferrer"&gt;DAANI&lt;/a&gt; is a crowdsourcing app built on &lt;a href="https://svelte.dev/" rel="noopener noreferrer"&gt;Svelte&lt;/a&gt; &amp;amp; &lt;a href="https://laravel.com/" rel="noopener noreferrer"&gt;Laravel&lt;/a&gt; with &lt;a href="https://lighthouse-php.com/" rel="noopener noreferrer"&gt;Lighthouse&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Building on GraphQL was easier than the normal REST approach. Here are some reasons why I felt this could speed up the development process -:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Let's say you want to create a record: just define a schema and an action in a single line like this, and it will create the record, with no need to create separate routes, controllers, and requests.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Mutation {
  createPost(title: String!): Post @create
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;and if you want to read the data, it can be as simple as&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Query {
  posts: [Post!]! @paginate
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Note -: Mutations are for the write part (update, create, delete), whereas Queries are for reads.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Most of the CRUD actions are handled with default &lt;a href="https://lighthouse-php.com/master/api-reference/directives.html#all" rel="noopener noreferrer"&gt;Directives&lt;/a&gt; provided by Lighthouse.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Most of the time, we spend a lot of effort documenting APIs. Lighthouse provides a default Playground for testing your API, along with documentation of your schemas.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3744%2F1%2Asmx1rgyXalIWb_YbTLdrtg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3744%2F1%2Asmx1rgyXalIWb_YbTLdrtg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you feel you need a custom way to meet your requirements, extending is easy: all you need to do is create a &lt;a href="https://lighthouse-php.com/master/api-reference/resolvers.html#resolver-function-signature" rel="noopener noreferrer"&gt;Resolver&lt;/a&gt; (a mutation or query); it acts as a controller, like we are used to.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you haven't worked with GraphQL yet, this could be a fun approach to learn how GraphQL works. And if you have already tried it, let me know your thoughts!&lt;/p&gt;

&lt;p&gt;Happy Coding!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ujwaldhakal/daani-api" rel="noopener noreferrer"&gt;Daani Source Code&lt;/a&gt;&lt;/p&gt;

</description>
      <category>graphql</category>
      <category>laravel</category>
      <category>php</category>
      <category>lighthouse</category>
    </item>
    <item>
      <title>Deploy Sapper on Google Cloud Run</title>
      <dc:creator>ujwal dhakal</dc:creator>
      <pubDate>Mon, 18 May 2020 06:33:21 +0000</pubDate>
      <link>https://dev.to/ujwaldhakal/deploy-sapper-on-google-cloud-run-5gda</link>
      <guid>https://dev.to/ujwaldhakal/deploy-sapper-on-google-cloud-run-5gda</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gtRWx9x2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2400/1%2AQ8rbENeM6VIecriTpMrpkQ.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gtRWx9x2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2400/1%2AQ8rbENeM6VIecriTpMrpkQ.jpeg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://svelte.dev/"&gt;Svelte&lt;/a&gt; is the next cool thing for frontend if you don't believe me just try it out and feel the experience of developing web applications with minimal efforts.&lt;/p&gt;

&lt;p&gt;So I tried &lt;a href="https://sapper.svelte.dev/"&gt;Sapper&lt;/a&gt;, which is built on top of Svelte with SSR support (just like Nuxt or Next), for my pet project. This blog post is about how to deploy apps built with Sapper on Google Cloud Run, as I faced issues while deploying.&lt;/p&gt;

&lt;p&gt;This post assumes you are already familiar with &lt;a href="https://sapper.svelte.dev/"&gt;Sapper&lt;/a&gt;, &lt;a href="https://cloud.google.com/run"&gt;Google Cloud Run&lt;/a&gt;, &lt;a href="https://github.com/features/actions"&gt;Github Actions&lt;/a&gt;, and &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;There are two ways of deploying Sapper apps -:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://sapper.svelte.dev/docs#Exporting"&gt;Exporting&lt;/a&gt; -: By running npm export on the root, it will generate a bunch of static files and now upload those to any web server, link domain and it will run.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://sapper.svelte.dev/docs#Deployment"&gt;Running Server&lt;/a&gt; -: By running npm run buildit will generate production build and then run npm run startwhich will start serving those build files you made. So we will be doing same thing on our Cloud Run and serve them. Here are some steps we will be doing -:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Dockerization&lt;/strong&gt; -: As we will be deploying a Docker container, we need to dockerize our app, so create a file called Dockerfile in the root.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    *# Use the official lightweight Node.js 12 image.*

    *# https://hub.docker.com/_/node*

    *FROM* node:12-slim

    *# Create and change to the app directory.*

    *WORKDIR* /usr/src/app

    *# Copy application dependency manifests to the container image.*

    *# A wildcard is used to ensure both package.json AND package-lock.json are copied.*

    *# Copying this separately prevents re-running npm install on every code change.*

    *COPY* / ./

    *# Install dependencies.*

    *RUN* npm install

    *# Compile to production

    *RUN* npm run build

    *EXPOSE* 8080

    *# Run the web service on container startup.*

    *CMD* [ "npm", "start" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will set up an environment with Node.js pre-installed, set up your app, and serve it on the desired port.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;GitHub Actions &amp;amp; Cloud Run setup -: Create a folder in the root named .github, inside it create workflows, and then create a YML file named deploy.yml with the configs (you can use any file name, but you must strictly follow the directory convention). The path should look like .github/workflows/deploy.yml&lt;/li&gt;
&lt;/ol&gt;



&lt;p&gt;It will log in to your Google account, build the image, and deploy it to Cloud Run. Three variables are used in GitHub Secrets:&lt;/p&gt;

&lt;p&gt;GCLOUD_AUTH -: This should be a base64-encoded string of your JSON credentials. On Linux, you can run base64 path/to/jsonFile to convert the JSON file. Ref &lt;a href="https://cloud.google.com/docs/authentication/getting-started"&gt;https://cloud.google.com/docs/authentication/getting-started&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GCLOUD_PROJECT_ID -: The project id of your Google account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x63d0nIV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2884/1%2AFDnUZIAsCLXCrIUGMCd5bw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x63d0nIV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2884/1%2AFDnUZIAsCLXCrIUGMCd5bw.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CLOUD_RUN_SERVICE_NAME -: Create a Cloud Run service with the same name you have added to GitHub Secrets.&lt;/p&gt;

&lt;p&gt;Make sure you have added your GitHub repo on Google Cloud.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XGCPdl_A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3214/1%2Ao2vYJ4Xsd2Nb3dnjdENBjg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XGCPdl_A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3214/1%2Ao2vYJ4Xsd2Nb3dnjdENBjg.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On every push, it will build the image and deploy code to the Cloud Run.&lt;/p&gt;

&lt;p&gt;Link for Github repo -: &lt;a href="https://github.com/ujwaldhakal/sapper-on-cloud-run"&gt;https://github.com/ujwaldhakal/sapper-on-cloud-run&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Conclusion -:&lt;/p&gt;

&lt;p&gt;In my opinion, working with Svelte was easy compared to React and Angular, both of which I have worked with. Just give it a try for your next pet project.&lt;/p&gt;

&lt;p&gt;Keep Coding!&lt;/p&gt;

</description>
      <category>sapper</category>
      <category>svelte</category>
    </item>
    <item>
      <title>Why do we microservice?</title>
      <dc:creator>ujwal dhakal</dc:creator>
      <pubDate>Mon, 17 Feb 2020 18:47:54 +0000</pubDate>
      <link>https://dev.to/ujwaldhakal/why-do-we-microservice-2jna</link>
      <guid>https://dev.to/ujwaldhakal/why-do-we-microservice-2jna</guid>
      <description>&lt;p&gt;Well, the word microservice itself explains micro + service: the breaking down of services into individual, independent ones. It's an architectural pattern for designing a system and its flow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--B4qKTkcN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/1%2A00Z-XBAFDiVKkr3M4xTwaA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--B4qKTkcN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/1%2A00Z-XBAFDiVKkr3M4xTwaA.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But why do we need microservices? Well, let me explain with an example.&lt;/p&gt;

&lt;p&gt;Let's say someone wants to build an eCommerce application for a single country with no shipping. Initially, he wants to launch the store with only 100+ products and with Stripe &amp;amp; cash on delivery as payment methods. He wants to build the app fast, with less money &amp;amp; time, because he wants to know whether this business idea is going to work or not.&lt;/p&gt;

&lt;p&gt;Note that the resources Time, Money, and Quality are limited. If you focus on quality, you need more money and time. If you want to finish in less time with less money, the quality will be lower.&lt;/p&gt;

&lt;p&gt;After 1 month of hard work, the eCommerce app was ready: people could buy the listed products with a limited payment gateway in a single country. The developer built the application assuming the site would only ever handle 1000 users per second or fewer, since the owner wanted his MVP built fast. Now the store is booming; people love the idea and use it in their daily lives. More people using it means more traffic, which means the limited computing resources have to serve many more users. As the business booms, the owner wants to build further: make it global, with a variety of payment gateways, shipping, and inventory management to work on. As the traffic starts to increase -:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; The site became extremely slow; people had to wait longer to choose products.&lt;/li&gt;
&lt;li&gt; Adding a new payment gateway, shipping method, etc. was taking too much time.&lt;/li&gt;
&lt;li&gt; A bug in one feature would take the whole system down, and it was often reported first by customers.&lt;/li&gt;
&lt;li&gt; Deploying was taking much more time than it used to.&lt;/li&gt;
&lt;li&gt; The server was in the US region, and people from Asia were facing latency issues.&lt;/li&gt;
&lt;li&gt; Since they were using PHP everywhere, and they wanted languages like Go and Node for concurrency and parallelism for more efficient computation, it was difficult for them to adopt a new stack.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now he wants his tech team to solve these problems, and the tech team has one idea in mind: restructuring into microservices (product listing service, payment service, order processing service, inventory management service, automation testing service, etc.). After they successfully implemented microservices -:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Scaling the site was easy; they could rewrite each service in the latest
stack.&lt;/li&gt;
&lt;li&gt; Adding a new payment gateway, shipping method, or feature was easy: with
the code base split up, multiple developers could work autonomously.&lt;/li&gt;
&lt;li&gt; Downtime in one service no longer brought down the others.&lt;/li&gt;
&lt;li&gt; Deploying was much easier because they used Kubernetes/EKS and could
deploy or revert just the specific services they had worked on.&lt;/li&gt;
&lt;li&gt; Thanks to this flexibility, they set up dedicated DevOps for multi-region
serving, and they have been using Kubernetes to scale their services according
to traffic.&lt;/li&gt;
&lt;li&gt; Their frontend now relies on React, while PHP still powers product listing
and checkout. Order processing is done in Go, leveraging concurrency, and they
use Cypress for UI automation.&lt;/li&gt;
&lt;/ol&gt;
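&lt;p&gt;To make the deployment point concrete, here is a minimal sketch of a&lt;br&gt;
per-service Kubernetes Deployment. The service name, image, and replica count&lt;br&gt;
are hypothetical; the point is that each service gets its own manifest, so&lt;br&gt;
deploying, scaling, or rolling it back never touches the other services:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Deployment for one microservice; other services have their own
# Deployments and are rolled out and scaled independently.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service        # hypothetical service name
spec:
  replicas: 3                  # scaled independently of other services
  selector:
    matchLabels:
      app: payment-service
  template:
    metadata:
      labels:
        app: payment-service
    spec:
      containers:
        - name: payment-service
          image: registry.example.com/payment-service:1.0.0  # hypothetical image
          ports:
            - containerPort: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Reverting just this one service is then a single command, e.g.&lt;br&gt;
&lt;code&gt;kubectl rollout undo deployment/payment-service&lt;/code&gt;.&lt;/p&gt;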

&lt;p&gt;Like everything else, microservices come with tradeoffs: you need a&lt;br&gt;
specialized team to handle multiple services, tracing and monitoring are&lt;br&gt;
harder, consistency is harder to maintain, etc., which I will describe in a&lt;br&gt;
future post. So decide which one you really need. Happy Coding!&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>monolith</category>
    </item>
    <item>
      <title>I think this is non technical but lets share our work playlist?</title>
      <dc:creator>ujwal dhakal</dc:creator>
      <pubDate>Thu, 07 Nov 2019 04:05:36 +0000</pubDate>
      <link>https://dev.to/ujwaldhakal/i-think-this-is-non-technical-but-lets-share-our-work-playlist-1ppk</link>
      <guid>https://dev.to/ujwaldhakal/i-think-this-is-non-technical-but-lets-share-our-work-playlist-1ppk</guid>
      <description></description>
      <category>discuss</category>
      <category>watercooler</category>
    </item>
    <item>
      <title>What does a good company culture really mean?</title>
      <dc:creator>ujwal dhakal</dc:creator>
      <pubDate>Sun, 03 Nov 2019 09:37:31 +0000</pubDate>
      <link>https://dev.to/ujwaldhakal/what-does-a-good-company-culture-really-mean-5f13</link>
      <guid>https://dev.to/ujwaldhakal/what-does-a-good-company-culture-really-mean-5f13</guid>
      <description></description>
      <category>discuss</category>
    </item>
  </channel>
</rss>
