<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Iman Tumorang</title>
    <description>The latest articles on DEV Community by Iman Tumorang (@bxcodec).</description>
    <link>https://dev.to/bxcodec</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F38925%2F9748233d-c6aa-4289-ac7f-fd42a8567d29.jpg</url>
      <title>DEV Community: Iman Tumorang</title>
      <link>https://dev.to/bxcodec</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bxcodec"/>
    <language>en</language>
    <item>
      <title>Hosting Internal Proxy and Internal pkg.go.dev for Internal Libraries Documentation</title>
      <dc:creator>Iman Tumorang</dc:creator>
      <pubDate>Mon, 20 Sep 2021 05:12:21 +0000</pubDate>
      <link>https://dev.to/bxcodec/hosting-internal-proxy-and-internal-pkggodev-for-internal-libraries-documentation-27d4</link>
      <guid>https://dev.to/bxcodec/hosting-internal-proxy-and-internal-pkggodev-for-internal-libraries-documentation-27d4</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Es1xFLLd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Adl-1QMs8yP3TA2zhnd5yUw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Es1xFLLd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Adl-1QMs8yP3TA2zhnd5yUw.jpeg" alt="" width="880" height="460"&gt;&lt;/a&gt;Gopher reading docs and download library&lt;/p&gt;

&lt;p&gt;Hey everyone, it’s been a while since I last published a blog post. I think it’s been 6 months already haha. I was so busy for the past few months! There were just too many things happening, and they kept me too busy to write.&lt;/p&gt;

&lt;p&gt;So today, I’m sharing one of my experiments from last year. Yes, it was &lt;strong&gt;last year&lt;/strong&gt;. But hopefully the content is still relevant and will help you with a pain I previously had in my job. The topic is hosting our own pkg.go.dev for our internal libraries at Xendit.&lt;/p&gt;

&lt;h3&gt;
  
  
  Background
&lt;/h3&gt;

&lt;p&gt;It’s no secret that most companies have private libraries meant only for internal use. They could be for internal logging, internal tracing, internal error handling, or anything else their systems need.&lt;/p&gt;

&lt;p&gt;It is the same for us at Xendit. My current team, DevPlatform, is responsible for helping engineers move faster. We want our product engineers to work faster by providing and maintaining the tools and libraries that are commonly used in every codebase. For instance, we built a CLI generator for code templates, logger libraries, and other common tools, written in both NodeJS and Golang.&lt;/p&gt;

&lt;p&gt;Specifically for Golang, we have various libraries for internal use. In the beginning, we only had 2–3 Go libraries, which were not hard to maintain. But as the team grew, the need for Go libraries grew with it. We built more and more libraries until we realized we already had quite a handful written in Go.&lt;/p&gt;

&lt;p&gt;Looking at the situation, we observed a few problems that could cause developer toil as the team grows. We wanted to solve these problems early so that our engineers would have a positive experience working with our internal Go libraries.&lt;/p&gt;

&lt;h4&gt;
  
  
  Problem 1: Looking for Documentation Website for Private Libraries
&lt;/h4&gt;

&lt;p&gt;As a Gopher, I’ve been using pkg.go.dev since its first release. It’s a portal for public Golang libraries. It really helps to see the details of a library without having to check the source code. It generates a documentation site for any public library available on GitHub or any other public Git provider (GitLab, Bitbucket). So, say I want to see what functions are available in the &lt;em&gt;Logrus&lt;/em&gt; library: I can easily see the complete list on pkg.go.dev, like below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--t7oQrXcG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/808/1%2AztVbZhS9GmTwUJOzMQ3ShQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--t7oQrXcG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/808/1%2AztVbZhS9GmTwUJOzMQ3ShQ.png" alt="" width="808" height="724"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Having realized that, I saw that pkg.go.dev only covers public libraries; private/internal libraries are not available there. Because of that, I immediately tried to figure out how we could host our own internal documentation site.&lt;/p&gt;

&lt;p&gt;After looking for solutions, I finally found that the pkg.go.dev website is already open-sourced at &lt;a href="https://github.com/golang/pkgsite"&gt;https://github.com/golang/pkgsite&lt;/a&gt;. That’s where we dug deeper in our exploration.&lt;/p&gt;

&lt;h4&gt;
  
  
  Problem 2: Easily Pull Private Libraries without Complex Configuration
&lt;/h4&gt;

&lt;p&gt;Another problem we started to notice is that onboarding engineers to Golang takes quite a bit longer than onboarding them to NodeJS. For a NodeJS project, we only need to create an .npmrc file (the configuration for a private NPM registry), and then you can pull all internal libraries.&lt;/p&gt;
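&lt;p&gt;For comparison, a minimal .npmrc for a scoped private registry looks roughly like this (the registry URL, scope, and token variable here are made up for illustration):&lt;/p&gt;

```ini
; route the @xendit scope to a private registry (hypothetical URL)
@xendit:registry=https://npm.internal.example.com/
; authenticate against that registry with a token from the environment
//npm.internal.example.com/:_authToken=${NPM_TOKEN}
```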

&lt;p&gt;But for Golang, on the other hand, there isn’t anything like .npmrc for configuring a private library registry. Hence, it’s difficult, especially for engineers new to Go. Sometimes the error looks like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ go get [github.com/](https://sum.golang.org/lookup/bitbucket.org/gokar/lucifer@v0.0.0-20190921175342-61a76c096369)xendit[/internal-library-name](https://sum.golang.org/lookup/bitbucket.org/gokar/lucifer@v0.0.0-20190921175342-61a76c096369)

[https://sum.golang.org/lookup/github.com/](https://sum.golang.org/lookup/bitbucket.org/gokar/lucifer@v0.0.0-20190921175342-61a76c096369)xendit[/internal-library-name@v0.0.0-20190921175342-61a76c096369](https://sum.golang.org/lookup/bitbucket.org/gokar/lucifer@v0.0.0-20190921175342-61a76c096369): **410 Gone**
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Or sometimes something like this&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ go get [github.com/](https://sum.golang.org/lookup/bitbucket.org/gokar/lucifer@v0.0.0-20190921175342-61a76c096369)xendit[/internal-library-name](https://sum.golang.org/lookup/bitbucket.org/gokar/lucifer@v0.0.0-20190921175342-61a76c096369)
# cd .; git ls-remote [https://github.com/xendit/i](https://bitbucket.org/bxcodec/math)nternal-library
**fatal: could not read Username for '** [**https://github.com'**](https://bitbucket.org%27/) **: terminal prompts disabled**
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Even if you’re an experienced Golang engineer, solving this can be tricky. You need to do one of the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure your local id_rsa so Git downloads from GitHub over SSH.&lt;/li&gt;
&lt;li&gt;Set your GOPRIVATE environment variable so those modules are fetched from the GitHub repo directly.&lt;/li&gt;
&lt;li&gt;Or use your personal GitHub token to pull the private libraries.&lt;/li&gt;
&lt;/ul&gt;
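&lt;p&gt;As a sketch, those workarounds look something like the commands below (the module path is hypothetical, and each command has to be repeated on every engineer’s machine):&lt;/p&gt;

```shell
# 1. Fetch over SSH instead of HTTPS, using your local SSH key:
git config --global url."git@github.com:".insteadOf "https://github.com/"

# 2. Tell the go command to skip the public proxy and checksum DB for these paths:
go env -w GOPRIVATE="github.com/xendit/*"

# 3. Or embed a personal access token for HTTPS fetches (USER/TOKEN are placeholders):
git config --global url."https://USER:TOKEN@github.com/".insteadOf "https://github.com/"
```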

&lt;p&gt;And as DevPlatform engineers, we realized this could become a real problem. We saw that a private proxy could help engineers download their private libraries without having to configure a lot of things locally.&lt;/p&gt;

&lt;p&gt;They will only need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect to our internal VPN&lt;/li&gt;
&lt;li&gt;And set the GOPROXY environment variable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And they will be able to download all internal libraries.&lt;/p&gt;

&lt;p&gt;And because of these two problems (easy library pulls and an internal documentation site), we decided to host our own internal proxy and documentation site on our own infrastructure.&lt;/p&gt;
&lt;h3&gt;
  
  
  pkg.go.dev Architecture
&lt;/h3&gt;

&lt;p&gt;On the GitHub repo of pkgsite, we can see how pkgsite works. This is the complete architecture of pkgsite behind the scenes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ArpyHo4j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ac4C4E-mlwMR5hUU2EtF85g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ArpyHo4j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ac4C4E-mlwMR5hUU2EtF85g.png" alt="" width="880" height="433"&gt;&lt;/a&gt;pkgsite architecture from pkgsite repository&lt;/p&gt;

&lt;p&gt;That’s a quite complicated setup. When I saw it for the first time, I was overwhelmed, like, WTF! The time I would spend maintaining all of this would not be worth the impact on the company.&lt;/p&gt;

&lt;p&gt;Then I decided to spend a few weeks researching this in parallel. Whenever I was free, I tried to digest the code and the architecture, read the docs, and even asked many Go experts.&lt;/p&gt;

&lt;p&gt;After a lot of discussions with many people, I came to a basic conclusion: to host an internal pkgsite, we don’t have to deploy everything in that diagram.&lt;/p&gt;

&lt;p&gt;In fact, we only need the following two items:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The frontend application, which we can get from the pkgsite repo&lt;/li&gt;
&lt;li&gt;The Proxy server&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  The Frontend Application
&lt;/h4&gt;

&lt;p&gt;The frontend app can be found in the pkgsite repository. Deploying it is simple: since we’re using Kubernetes, we only need to create the Dockerfile plus the Deployment and Service configs. We also need to register the site under our internal domain.&lt;/p&gt;

&lt;p&gt;This is what our Dockerfile looks like:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
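&lt;p&gt;In case the embedded gist doesn’t render, here is a minimal sketch of such a Dockerfile, in the spirit of MicahParks/private-pkgsite. The Go version, build path, and asset layout below are assumptions; check the pkgsite repo for its current structure:&lt;/p&gt;

```dockerfile
# Sketch: build the pkgsite frontend from source (versions and paths are assumptions)
FROM golang:1.17 AS build
RUN git clone https://github.com/golang/pkgsite.git /pkgsite
WORKDIR /pkgsite
RUN go build -o /frontend ./cmd/frontend

# Ship the binary together with the repo's static assets
FROM golang:1.17
COPY --from=build /frontend /frontend
COPY --from=build /pkgsite/static /static
ENTRYPOINT ["/frontend"]
```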



&lt;p&gt;As we can see from the Dockerfile, we pull the pkgsite code directly and compile it for Docker distribution. This Dockerfile was originally copied from Micah Parks’s GitHub repo (&lt;a href="https://github.com/MicahParks/private-pkgsite"&gt;https://github.com/MicahParks/private-pkgsite&lt;/a&gt;). If you want to explore deeper, feel free to raise a PR or an issue on that repository.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Proxy Server
&lt;/h4&gt;

&lt;p&gt;Since the full pkg.go.dev stack is too big to host for internal needs, the only practical option we could think of was using a proxy server.&lt;/p&gt;

&lt;p&gt;We found a few candidates for the proxy server. But after trying them, we decided to use &lt;strong&gt;Athens&lt;/strong&gt; as our proxy.&lt;/p&gt;

&lt;p&gt;The reasons for this were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It’s configurable: we can use a custom DB and a custom cache&lt;/li&gt;
&lt;li&gt;It is used by a lot of companies, so we believe it will be easier to debug and to ask about issues in the community.&lt;/li&gt;
&lt;li&gt;The maintainers are active and part of the Gophers Slack community, so if we find an issue, we can ask them directly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To see the architecture of how we use Athens end-to-end, have a look at the diagram below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JJrlVhHd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AzZjRVZHUqtaGx5NIy1FPrg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JJrlVhHd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AzZjRVZHUqtaGx5NIy1FPrg.png" alt="" width="880" height="648"&gt;&lt;/a&gt;How we use internal proxy for Go private package&lt;/p&gt;

&lt;p&gt;Deploying the proxy server turned out to be pretty fast; we only needed to follow the documentation.&lt;/p&gt;

&lt;p&gt;This is our Dockerfile example.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
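&lt;p&gt;In case the gist doesn’t render here: Athens ships an official Docker image, so the file can stay tiny. The image tag and the disk-storage settings below are illustrative assumptions; see the Athens docs for the options you actually need:&lt;/p&gt;

```dockerfile
# Sketch: wrap the official Athens image with our storage config (values are assumptions)
FROM gomods/athens:v0.11.0
ENV ATHENS_STORAGE_TYPE=disk
ENV ATHENS_DISK_STORAGE_ROOT=/var/lib/athens
```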


&lt;p&gt;For the deployment, we run it on top of Kubernetes. Since the proxy is already dockerized, deploying it to our infrastructure was easy.&lt;/p&gt;

&lt;p&gt;More documentation on how to deploy &lt;strong&gt;Athens&lt;/strong&gt; proxy in the internal systems can be seen &lt;a href="https://docs.gomods.io/"&gt;here&lt;/a&gt;. It may differ depending on the cloud/environment you use.&lt;/p&gt;

&lt;h4&gt;
  
  
  Security Concerns
&lt;/h4&gt;

&lt;p&gt;The repo doesn’t provide anything for handling authentication. So, to avoid leaking the internal libraries to the public, the solution we chose was to protect the domain behind an internal VPN.&lt;/p&gt;

&lt;p&gt;Both the documentation and the proxy server are only accessible through the VPN.&lt;/p&gt;

&lt;h4&gt;
  
  
  How to Use
&lt;/h4&gt;

&lt;p&gt;For our internal Gopher engineers, using the proxy is really straightforward. As I said earlier in this article, they only need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect to our internal VPN&lt;/li&gt;
&lt;li&gt;And set the GOPROXY and GONOSUMDB environment variables
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ go env -w GOPROXY="[https://proxy.golang.org,https://go-proxy.xendit.com,direct](https://proxy.golang.org,https://gomod.tidnex.com,direct)"

$ go env -w GONOSUMDB="github.com/xendit/*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Some important notes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The order of the values in GOPROXY matters. In the example above, the go command first tries to pull a library from the public Go proxy; if the proxy answers 404 or 410, it falls back to our internal proxy, and if the library still cannot be found, it fetches directly from GitHub.&lt;/li&gt;
&lt;li&gt;When we use GOPROXY, &lt;strong&gt;we need to remove&lt;/strong&gt; the GOPRIVATE value. The go command checks GOPRIVATE first, and any module matching it bypasses the proxy, so the proxy won’t be used if GOPRIVATE is set for these paths.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After that, we will be able to download all internal libraries through the internal proxy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusions
&lt;/h3&gt;

&lt;p&gt;After deploying both the proxy and the documentation site (pkgsite) in our internal systems, we are now able to download internal libraries without a complex local setup. Moreover, all of our private libraries are now available on our internal documentation site (pkgsite). Engineers can browse the documentation of every library with the same experience as for any public library.&lt;/p&gt;

&lt;p&gt;Exploring all of this took me around 1 month. Thanks to the infra engineer as well, who helped me experiment on this. I think that, done intensively, it should only take a few days. I did not manage to finish it faster because I only worked on it in my spare time, so there was no urgency or focus time ;). Although the priority was low, this documentation &amp;amp; library system is still helpful for future development and our internal Go ecosystem.&lt;/p&gt;

&lt;p&gt;What’s next?&lt;/p&gt;

&lt;p&gt;We’re planning to explore the same thing with TS/JS for our Node ecosystem at Xendit. The key is to reduce developer toil as much as possible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References and Thanks&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Micah Parks (&lt;a href="https://github.com/MicahParks/private-pkgsite"&gt;https://github.com/MicahParks/private-pkgsite&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Patrick Juen (our infra engineer, who helped me explore this)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://medium.com/u/9508cde08448"&gt;Stanley Nguyen&lt;/a&gt; for PRs review and design review&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>golangtutorial</category>
      <category>programming</category>
      <category>go</category>
      <category>coding</category>
    </item>
    <item>
      <title>How I was performing Row Locking for Read-Write Transactions in Postgres</title>
      <dc:creator>Iman Tumorang</dc:creator>
      <pubDate>Tue, 30 Mar 2021 03:43:20 +0000</pubDate>
      <link>https://dev.to/bxcodec/how-i-was-performing-row-locking-for-read-write-transactions-in-postgres-1gh9</link>
      <guid>https://dev.to/bxcodec/how-i-was-performing-row-locking-for-read-write-transactions-in-postgres-1gh9</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;It took me 2 days to attempt to solve this, but the solution is actually very simple and elegant.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IJXEf-Dq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A_KTXQPPO9qI5UgTyrZKkBQ.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IJXEf-Dq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A_KTXQPPO9qI5UgTyrZKkBQ.jpeg" alt="" width="880" height="460"&gt;&lt;/a&gt;Concurrent Acces by Xendit&lt;/p&gt;

&lt;p&gt;Today's problem was complex, with an obscure solution. I have been working on refactoring our core payment system, which handles the payment transactions to/from each bank that we support.&lt;/p&gt;

&lt;p&gt;While developing this new core service, one piece is related to generating a payment code; let’s call it the Payment Code Generator service. The payment_code needs to be random but also unique. In short, generating one requires a counter, which means we need to store the counter.&lt;/p&gt;

&lt;p&gt;To simplify the scenario, let’s say I’m building a service that will generate a payment code.&lt;/p&gt;

&lt;p&gt;In this service, I have 2 tables, master_counter and payment_code.&lt;/p&gt;

&lt;p&gt;The master_counter table schema&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BEGIN;
CREATE TABLE IF NOT EXISTS master_counter(
  id varchar(255) NOT NULL PRIMARY KEY, 
  user_id VARCHAR(255) NOT NULL, 
  counter bigint NOT NULL, 
  created_at timestamptz NOT NULL, 
  updated_at timestamptz NOT NULL, 
  deleted_at timestamptz, 
  CONSTRAINT master_counter_user_id_unique_idx UNIQUE (user_id)
);
COMMIT;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the payment_code schema&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BEGIN;
CREATE TABLE IF NOT EXISTS payment_code(
  id varchar(255) NOT NULL PRIMARY KEY, 
  payment_code varchar(255) NOT NULL, 
  user_id varchar(255) NOT NULL, 
  created_at timestamptz NOT NULL, 
  updated_at timestamptz NOT NULL, 
  CONSTRAINT payment_code_unique_idx UNIQUE(payment_code, user_id)
);
COMMIT;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In master_counter, there is a counter column that always increases as requests come in. If we draw the flow as a diagram, it looks something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1mx0VoiP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AmxDww1X_77Zutq2u22VoiQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1mx0VoiP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AmxDww1X_77Zutq2u22VoiQ.png" alt="" width="880" height="210"&gt;&lt;/a&gt;Flow the Generating Payment Code&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When the user requests a new payment code, we will get its current counter value.&lt;/li&gt;
&lt;li&gt;The counter value is then increased by 1 and the algorithm is applied. (simply: hash the counter value to be a random character)&lt;/li&gt;
&lt;li&gt;After the new counter value is hashed, the hashed value is stored to the payment_code table with the unique constraint.&lt;/li&gt;
&lt;li&gt;The new counter value is written back to master_counter for that user id.&lt;/li&gt;
&lt;li&gt;The hashed counter value is returned to the user. This is a payment code that will be used by the user.&lt;/li&gt;
&lt;/ol&gt;
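&lt;p&gt;The steps above can be sketched in Go. The hashing function here is a made-up stand-in for the real generator (the actual algorithm is not shown in this post); it only illustrates that an increasing counter, once hashed, yields a random-looking but deterministic code:&lt;/p&gt;

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strconv"
)

// applyAlgorithm is a hypothetical stand-in for the real generator:
// it hashes the counter, so distinct counters yield distinct-looking codes.
func applyAlgorithm(counter int64) string {
	sum := sha256.Sum256([]byte(strconv.FormatInt(counter, 10)))
	return hex.EncodeToString(sum[:8]) // keep the code short and human-sized
}

func main() {
	counter := int64(41) // step 1: read the current counter for this user
	counter++            // step 2: increase it
	code := applyAlgorithm(counter)
	fmt.Println(code) // steps 3-5: store the code, update the counter, return it
}
```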

&lt;p&gt;While there are many algorithms that could achieve this, the flow above sufficiently represents the simplified overall process.&lt;/p&gt;

&lt;h4&gt;
  
  
  Problems Started
&lt;/h4&gt;

&lt;p&gt;Initially, when written in SQL, the query looks similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BEGIN;
SELECT counter FROM master_counter;

-- Do addition to counter in application
-- Apply generator logic in application

INSERT INTO payment_code(payment_code) VALUES(generated_value_with_counter);

UPDATE master_counter SET counter=&amp;lt;new_value_from_application&amp;gt;;

COMMIT;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, I’m using Golang, which has native support for transactions. This can be seen in the example below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tx,_:=db.Begin() // Begin the transaction

counter, err:= SelectCounter(tx) // Select the `counter`
if err != nil {
 tx.Rollback() // Rollback if any error occurred
}

counter++ // increase the counter
generatedCode:=ApplyAlgorithm(counter) // apply the algorithm

err:= SaveGeneratedCode(tx, generatedCode) // save the payment_code
if err != nil {
 tx.Rollback() // Rollback if any error occurred
}

err:=UpdateCounter(tx, counter) // update the counter to DB
if err != nil {
 tx.Rollback() // Rollback if any error occurred
}

tx.Commit() // commit the transaction
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This function works perfectly for a single request. However, load testing with only 2 concurrent users produced the following error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pq: duplicate key value violates unique constraint \"my_unique_payment_code_index\"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The “duplicate key” error on payment_code happens because the generated payment_code must be unique.&lt;/p&gt;

&lt;p&gt;While the application is behaving correctly by returning the error, from a business point of view this is a problem: why should the user get a duplication error when requesting a “&lt;strong&gt;new&lt;/strong&gt;” payment_code that has never been used?&lt;/p&gt;

&lt;p&gt;My first assumption was that a duplication means there’s a race condition, occurring between the read and the update of the counter value. The counter is hashed into the payment code, and the payment code must be unique; but because of the race, two requests read the same counter value, which causes the duplicate key violation.&lt;/p&gt;

&lt;p&gt;Simply put, the error occurs because of a race condition: the counter is not yet updated when concurrent requests come in, so parallel requests derive the same value and create duplicate payment codes.&lt;/p&gt;
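&lt;p&gt;This lost-update pattern is easy to reproduce in-process. The sketch below is an analogy, not database code: the mutex plays the role that row locking will later play in Postgres. Remove the Lock/Unlock pair and a run will usually produce fewer than n distinct “codes”:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"sync"
)

// generateCodes runs n concurrent "requests" against a shared counter and
// returns how many distinct codes were produced. The mutex serializes the
// read-increment-write sequence, just like row locking does in the database.
func generateCodes(n int) int {
	var mu sync.Mutex
	var wg sync.WaitGroup
	counter := 0
	seen := make(map[int]bool) // stands in for the unique payment_code table
	for i := 0; i != n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock() // analogous to locking the counter row
			counter++
			seen[counter] = true
			mu.Unlock()
		}()
	}
	wg.Wait()
	return len(seen)
}

func main() {
	fmt.Println(generateCodes(100)) // prints 100: every request got a unique code
}
```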

&lt;h4&gt;
  
  
  First Attempt! — Using Isolation Level
&lt;/h4&gt;

&lt;p&gt;Knowing this, I took to the internet to search and learn about transactions, which brought me to these articles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://pgdash.io/blog/postgres-transactions.html"&gt;https://pgdash.io/blog/postgres-transactions.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://shiroyasha.io/transaction-isolation-levels-in-postgresql.html"&gt;http://shiroyasha.io/transaction-isolation-levels-in-postgresql.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.gojekengineering.com/on-concurrency-control-in-databases-1e34c95d396e"&gt;https://blog.gojekengineering.com/on-concurrency-control-in-databases-1e34c95d396e&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I realized that transactions also have an isolation level. Reading all those articles reminded me of my college days. I remember learning this, but I had never hit a case where I really needed to understand or apply it in practice.&lt;/p&gt;

&lt;p&gt;From this research, I concluded that the default isolation level is not enough to protect the concurrent read-write-update flow on the counter in the database.&lt;/p&gt;

&lt;p&gt;To address this, I set a stricter isolation level in the application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;_, err := tx.ExecContext(ctx, "SET TRANSACTION ISOLATION LEVEL REPEATABLE READ")

if err != nil {
   return
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this fix, I re-ran the application, which returned a new error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pq: could not serialize access due to concurrent update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This fix avoided the duplication error, but it also effectively blocked concurrent requests: conflicting transactions simply failed, which is no better than handling one request at a time.&lt;/p&gt;

&lt;p&gt;Since, from a business standpoint, we need this system to handle requests concurrently, I broadened my search for advice: colleagues, friends outside of work, and even several senior DBAs from other companies.&lt;/p&gt;

&lt;p&gt;Thanks to this crowdsourced knowledge sharing, I realized that the solution was row-level locking. Since I only need to read and update the counter value, a row-level locking mechanism is all that’s required.&lt;/p&gt;

&lt;h4&gt;
  
  
  Second Attempt! — Using SELECT FOR UPDATE
&lt;/h4&gt;

&lt;p&gt;Since I just want row-level locking, I didn’t need a stricter isolation level: an isolation level applies to the whole transaction, while I only care about one row.&lt;/p&gt;

&lt;p&gt;So my second try is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Isolation level: READ COMMITTED (Postgres default isolation level)&lt;/li&gt;
&lt;li&gt;Row-level locking: SELECT FOR UPDATE&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The only change is adding FOR UPDATE to the SELECT query for the counter value. The new SQL query looks as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BEGIN;
SELECT counter FROM master_counter FOR UPDATE; -- notice this

-- Do addition to counter in application
-- Apply generator logic in application

INSERT INTO payment_code(payment_code) VALUES(generated_value_with_counter);

UPDATE master_counter SET counter=&amp;lt;new_value_from_application&amp;gt;;

COMMIT;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This worked successfully; in the end, all it took to solve the problem was adding FOR UPDATE. While this may not be the right end-state solution, for now at least, it solves our current problem.&lt;/p&gt;

&lt;p&gt;I discovered that there are a lot of things you need to understand about SELECT ... FOR UPDATE. I found this article most helpful: &lt;a href="http://shiroyasha.io/selecting-for-share-and-update-in-postgresql.html"&gt;http://shiroyasha.io/selecting-for-share-and-update-in-postgresql.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s amazing that this issue could be solved with just a simple SELECT ... FOR UPDATE. Maybe later there will be a new issue, but we’ll leave that for our future selves 😎&lt;/p&gt;




</description>
      <category>softwaredevelopment</category>
      <category>programming</category>
      <category>rowlocking</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Moving On: Building SEA Payment Infrastructures with Xendit</title>
      <dc:creator>Iman Tumorang</dc:creator>
      <pubDate>Sat, 26 Sep 2020 03:03:02 +0000</pubDate>
      <link>https://dev.to/bxcodec/moving-on-building-sea-payment-infrastructures-with-xendit-2bio</link>
      <guid>https://dev.to/bxcodec/moving-on-building-sea-payment-infrastructures-with-xendit-2bio</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TwGyM04b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AcR8lfTvJpiR9O33l" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TwGyM04b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AcR8lfTvJpiR9O33l" alt="" width="880" height="880"&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@evieshaffer?utm_source=medium&amp;amp;utm_medium=referral"&gt;Evie S.&lt;/a&gt; on &lt;a href="https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s been a year since I moved on from Kurio, my previous company. There are so many things I learned while working at Kurio.&lt;/p&gt;

&lt;p&gt;I got to know products and metrics pretty well, learning many buzzwords of the product world like DAU, MAU, retention rate, push-notification rates, etc., along with making data-driven decisions about product features, and many other things.&lt;/p&gt;

&lt;p&gt;I also learned many engineering practices. Take CI/CD: I learned how to build a CI/CD pipeline from scratch.&lt;/p&gt;

&lt;p&gt;I learned about migrating from PHP to Golang. I learned about microservices, Domain-Driven Design, API-driven development, software testing, semantic versioning, Docker, Kubernetes, and many more buzzwords in software engineering; building a push-notification service, a logging service, profiling microservices, and much more.&lt;/p&gt;

&lt;p&gt;I would say the current me is pretty confident about building whatever a product needs on the backend side. It’s thanks to Kurio.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lCOKIN-F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/384/1%2A4KjzMC5wDw5E356MybHOiw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lCOKIN-F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/384/1%2A4KjzMC5wDw5E356MybHOiw.png" alt="" width="384" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I must say, it was hard to leave Kurio. Over the past year, I have still been meeting my old friends there, asking how Kurio is doing. Any new issues, and why? Is my legacy code having bugs or problems? And so on.&lt;/p&gt;

&lt;p&gt;So, a few of my friends asked me: why did I leave Kurio? There are a lot of reasons, but one thing is for sure: I really love scaling problems. I talked to my manager about how excited I was about handling large-scale services. I was hungry for more challenges.&lt;/p&gt;

&lt;p&gt;I applied to many companies later on, I don’t even remember which ones. But most were at the scaling stage.&lt;/p&gt;

&lt;h3&gt;
  
  
  Moving to Xendit
&lt;/h3&gt;

&lt;p&gt;Long story short, I finally chose Xendit. It’s not because I knew someone here; to be honest, I didn’t have any connections at Xendit back then. I didn’t know what they were doing. I only learned what Xendit’s products were during the interview process. But I did know that Xendit was quite active in the community.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CN7coslg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/900/1%2Aed_OFiWAVWX096WZlnwdSg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CN7coslg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/900/1%2Aed_OFiWAVWX096WZlnwdSg.png" alt="" width="880" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And I like Xendit’s vision: &lt;strong&gt;Building SEA Payment Infrastructures&lt;/strong&gt;. Infrastructure means the foundation, the base. So Xendit is aiming to be the base of payments in SEA. Not only Indonesia, but all of SEA, or even the world. And talking about payments, every business has to deal with them. So we’re aiming to help every company, from startup to enterprise, get paid and grow. We also want another unicorn to be born in SEA by helping them handle their payments. So I’m not surprised that many of our users are startups. And we want to grow together with them.&lt;/p&gt;

&lt;p&gt;*the above paragraph is sponsored content 😂&lt;/p&gt;

&lt;p&gt;And Xendit back then (and still now) is in the scaling stage, like, massively growing. I thought this was the right company for me to learn the pains of helping a company scale.&lt;/p&gt;

&lt;p&gt;And this is also the reason (why) I didn’t try to apply to any unicorn company. The unicorns are already too big! There’d be no significant impact from me if I joined them, except maybe if I got a higher position, but I’m not confident enough for a higher position yet LOL. Looking at Xendit, I could still have a bigger impact, maybe bigger than at any other company.&lt;/p&gt;

&lt;h4&gt;
  
  
  Interview and Test to Xendit!
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--n6M2Z2wd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AVLwtaD_Dc9O0EshZ" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--n6M2Z2wd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AVLwtaD_Dc9O0EshZ" alt="" width="880" height="586"&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@vantaymedia?utm_source=medium&amp;amp;utm_medium=referral"&gt;Van Tay Media&lt;/a&gt; on &lt;a href="https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It took one week for me to pass all of the tests. In my time, there were 5 steps.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Screening with HR Manager&lt;/li&gt;
&lt;li&gt;Pre-Assessment&lt;/li&gt;
&lt;li&gt;Interview with Teams&lt;/li&gt;
&lt;li&gt;Trials with the full team&lt;/li&gt;
&lt;li&gt;Offered / Rejected&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Screening is just a simple interview with the hiring manager. The questions are just about who I am, what my previous job was, and so on.&lt;/p&gt;

&lt;p&gt;The next step is, obviously, the coding test. HR gave me the link for the test, and it was a coding test. Luckily, the test was so easy for me 😂. I was so lucky at the time.&lt;/p&gt;

&lt;p&gt;After passing the pre-assessment, the next step was also a simple one: an interview with the teams. No technical things here, only explaining things. And surprisingly, the interview was done over Google Hangouts, so no coding on a whiteboard or anything. So simple, I thought. I was beginning to lose respect for this company. Why are the tests just like that? 😂&lt;/p&gt;

&lt;p&gt;But the next step was the hardest part. They asked me to do a two-day trial at their company. So I joined the trial, and they gave me a task to design the architecture of one of the company’s payment channels. I remember they asked me to design a full architecture for the BRI (one of Indonesia’s banks) Virtual Account (VA).&lt;/p&gt;

&lt;p&gt;In two days, I needed to design a full architecture for the BRI Virtual Account, starting from the bank side, including how Xendit should handle it. With the limited time and resources, I tried my best, not to mention I didn’t have any previous experience in payments or fintech.&lt;/p&gt;

&lt;p&gt;Not to mention, on the trial day I also had an interview with the CTO. I was starting to feel afraid; my respect for this company rose again. Now I understood why the previous steps were so easy. I fell into the trap 😱&lt;/p&gt;

&lt;p&gt;And later, on the final day of my trial, they asked me to present my architecture design to the team. They also asked me to write some pseudo-code for a random VA generator, in a room crowded with software engineers, QA, the tech lead, the Director of Engineering, and HR. I started to get nervous and sweaty 😂.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Mjz_-NVj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2A734Cow0_GMDBQqf0" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Mjz_-NVj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2A734Cow0_GMDBQqf0" alt="" width="880" height="586"&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@headwayio?utm_source=medium&amp;amp;utm_medium=referral"&gt;Headway&lt;/a&gt; on &lt;a href="https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There was so much pressure in that room. Not to mention, again, everyone was speaking English. My English is not so good, so it was really hard to understand what the hell they were talking about 😂 *no offense, teams ✌️.&lt;/p&gt;

&lt;p&gt;And then I presented all of my findings on the BRI architecture. So many questions came from the engineers. I don’t remember every question they asked, but I do remember the questions from the Director of Engineering (who became my Engineering Manager later when I got accepted): he asked me a few things about databases and algorithms.&lt;/p&gt;

&lt;p&gt;Luckily, I passed that hell 😂. That was a 1-hour hell for me.&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges so far
&lt;/h3&gt;

&lt;p&gt;So, now I’m working as a Software Engineer at Xendit. Xendit is my third company since I graduated in 2016: 6 months at Bornevia, 2 years at Kurio, and now I’m here, in my third year since graduating from college.&lt;/p&gt;

&lt;p&gt;First impression on Xendit? WOW! Just WOW!&lt;/p&gt;

&lt;p&gt;So here are the challenges I faced in my first year.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Culture Shock!&lt;/li&gt;
&lt;li&gt;Security and Policies!&lt;/li&gt;
&lt;li&gt;Node JS world&lt;/li&gt;
&lt;li&gt;Re-Architect and Re-Factor&lt;/li&gt;
&lt;li&gt;Scaling Across Southeast Asia Region&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Culture Shock!
&lt;/h4&gt;

&lt;p&gt;It’s obvious I’d face this one. I was really shocked and surprised. The culture is really, really different from my previous company. Here, we’re able to work remotely without limits (even before COVID).&lt;/p&gt;

&lt;p&gt;Our teams are spread across 5 locations: Jakarta, Singapore, Kuala Lumpur, the Philippines, and the US, and a few engineers work remotely from the UK and Canada as well. Because of this, we need to use Google Hangouts really well.&lt;/p&gt;

&lt;p&gt;So sometimes it’s hard for me to find the flow. When I need to discuss something with someone, I must book time on their calendar. If she/he is in the office (Jakarta), we discuss it in the office; if not, we do it over Google Hangouts.&lt;/p&gt;

&lt;p&gt;So, in the first week I was surely lost. I didn’t know what to do, because I still needed to get familiar with the culture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Z8uBWzG1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2A5HOup--Nc9_jq9gA" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Z8uBWzG1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2A5HOup--Nc9_jq9gA" alt="" width="880" height="586"&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@jonathanrados?utm_source=medium&amp;amp;utm_medium=referral"&gt;Jonathan Rados&lt;/a&gt; on &lt;a href="https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not to mention, in my first week everyone was really busy. I wondered why, but later I found out we had 2 issues that needed to be solved fast, and those issues affected all of Xendit’s products.&lt;/p&gt;

&lt;p&gt;So I was totally lost. None of the engineers could onboard me properly, so I had to learn all of the systems by myself. When I got stuck, I needed to book one of the engineers’ time through Google Calendar. That was really stressful for me in the first week.&lt;/p&gt;

&lt;h4&gt;
  
  
  Security and Policies
&lt;/h4&gt;

&lt;p&gt;This is another challenge I have here compared to my previous company. Due to the need for security/compliance, new technology has to get buy-in from several stakeholders to make sure we can keep serving customers in an internationally secure way.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g4ycepUZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AzWzsVdXRJuP--goK" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g4ycepUZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AzWzsVdXRJuP--goK" alt="" width="880" height="586"&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@gmalhotra?utm_source=medium&amp;amp;utm_medium=referral"&gt;Gayatri Malhotra&lt;/a&gt; on &lt;a href="https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So it’s a bit hard to try new technology. Whatever the tool is, I can propose anything, but it must follow PCI (Payment Card Industry) compliance, which is standardized internationally. So it’s hard to find good tools that also follow PCI compliance. There are so many things we need to check and follow.&lt;/p&gt;

&lt;p&gt;Another challenge is that I need to do a PoC and write an RFC document, which is pretty new for me. In my previous job, whenever I wanted to try new technology, I just implemented it directly without doing any PoC or RFC. Just do it! And deploy it!&lt;/p&gt;

&lt;h4&gt;
  
  
  Node Js World!
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cZ5CEqkn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/1%2ArJMK8tvHK_puvddk5j2_sg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cZ5CEqkn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/1%2ArJMK8tvHK_puvddk5j2_sg.jpeg" alt="" width="800" height="2650"&gt;&lt;/a&gt;If the world created by a Programmer. Comic by Toggl.com&lt;/p&gt;

&lt;p&gt;When I joined Xendit, I came from a Golang background, so I didn’t have much experience in JS. At my first company I used JS for 6 months, but I had already forgotten how to code in it. My first challenge here was simply to re-learn everything, and it was really hard for me. Even now, I’m still trying my best to learn JavaScript and TypeScript.&lt;/p&gt;

&lt;p&gt;And after joining Xendit, it was like déjà vu, reminding me of my first company: everyone is a freestyler, full texas mode, especially when building features and products.&lt;/p&gt;

&lt;p&gt;Even so, I’m also trying to propose Golang here. I even ran a crash course about Golang. To be honest, it’s really, really hard: not only evangelizing Golang, but also helping the team make the flows clearer and more standardized, like refactoring the flows and the microservices.&lt;/p&gt;

&lt;p&gt;Right now everything is really freestyle, especially the engineering practices, so no wonder there are so many things that still need to improve here. And that’s what made me more interested in joining Xendit. This is a scaling problem, and if we can solve it, I believe Xendit will grow even faster than it is now.&lt;/p&gt;

&lt;h4&gt;
  
  
  Re-Architect and Re-Factor
&lt;/h4&gt;

&lt;p&gt;Another challenge: in my second month, I proposed re-architecting our current system for the payment services that handle Virtual Accounts and retail outlets.&lt;/p&gt;

&lt;p&gt;Even though I had already done plenty of refactoring, re-architecting, and rewriting of services at my previous company, this was a different story. At my previous company, if even a small thing made the system complicated, or it didn’t reach our (selfish) standards, we would just rewrite it.&lt;/p&gt;

&lt;p&gt;And later, when we had a new product, if the system wasn’t good enough to support the new version, we would just rewrite another service with a similar function. At least that’s what I felt when I worked there; I’m not sure about now.&lt;/p&gt;

&lt;p&gt;That was possible because all of the services at my previous company were really well maintained and connected. The flow was clear from top to bottom, and vice versa, so we could easily refactor and rewrite.&lt;/p&gt;

&lt;p&gt;It’s different at Xendit. Due to the large codebase and the high interconnectivity between services, we need to think creatively when refactoring to roll out a new system, while making sure we can still serve the scale we currently have to support all merchants.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gZZsIWu9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AseGit37kjA74EQMT" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gZZsIWu9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AseGit37kjA74EQMT" alt="" width="880" height="586"&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@vikasananddev?utm_source=medium&amp;amp;utm_medium=referral"&gt;Vikas Anand Dev&lt;/a&gt; on &lt;a href="https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we change a service’s flow carelessly, it can break everything. The services are that fragile.&lt;/p&gt;

&lt;p&gt;So the solution we decided on was to build an entirely new flow of microservices, to avoid breaking anything in current production. And the good thing is, since I’m the one who proposed it, I made all the new services written in Golang. There’s no particular reason for choosing Golang over Node; it’s partly a matter of interest, and partly performance, since a compiled language tends to be chosen over an interpreted one when performance matters. The system I proposed to re-architect is a core product, so I think it’s fair to choose performance over delivery time. We’re not rushing development, like chasing market fit, so it’s okay to choose Go over Node. Besides, the team approved it.&lt;/p&gt;

&lt;p&gt;So it was really, really hectic for me at first: I needed to deliver the new architecture, and I also needed to mentor the teams on Golang. This was a totally new challenge that I never faced at Kurio.&lt;/p&gt;

&lt;h4&gt;
  
  
  Scaling Across Southeast Asia Region
&lt;/h4&gt;

&lt;p&gt;Another big challenge that I’m looking forward to at Xendit is scaling across the SEA region: the Philippines, Singapore, Malaysia, Indonesia, and other countries.&lt;/p&gt;

&lt;p&gt;There are a few products we want to replicate across SEA, for example the retail-outlet payments that are currently available in Indonesia and the Philippines, direct-debit payments, and many more.&lt;/p&gt;

&lt;p&gt;And to support this, the engineering problems will be challenging in every aspect: maintaining the legacy, refactoring to support multiple countries, and security and compliance requirements that can differ between countries. It’s a whole new level of game for me.&lt;/p&gt;

&lt;p&gt;So sometimes I dream about it and almost feel regret, as if leaving Kurio were a bad decision. But for now, I believe it wasn’t, and I keep telling myself: I left Kurio for a better me, to solve another level of challenges. And I’m looking forward to solving other challenges in the future, wherever my journey takes me :)&lt;/p&gt;

&lt;p&gt;&lt;em&gt;*PS, I’m hiring for my team, email me at iman[at]xendit[dot]co, in case you’re interested in joining me to solve the scaling problem and the many other challenges of building payment infrastructure in SEA&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I drafted this after my probation ended in October 2019, and I haven’t had time to check and publish it until now. So I’ve updated a few words, like “been a year” or “past year”; originally it was “been 3 months” or “past 3 months”. So when reading this, imagine it is still October 2019.&lt;/p&gt;
&lt;/blockquote&gt;




</description>
      <category>selfimprovement</category>
      <category>startuplife</category>
      <category>programming</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Today I Learned: Golang Live Reload for Development using Docker Compose + Air</title>
      <dc:creator>Iman Tumorang</dc:creator>
      <pubDate>Wed, 23 Sep 2020 11:20:40 +0000</pubDate>
      <link>https://dev.to/bxcodec/today-i-learned-golang-live-reload-for-development-using-docker-compose-air-3f7f</link>
      <guid>https://dev.to/bxcodec/today-i-learned-golang-live-reload-for-development-using-docker-compose-air-3f7f</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wn2Jx9Bo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2Aa6j_E_LMTNzXwwJ_" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wn2Jx9Bo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2Aa6j_E_LMTNzXwwJ_" alt="" width="880" height="809"&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@jeremyperkins?utm_source=medium&amp;amp;utm_medium=referral"&gt;Jeremy Perkins&lt;/a&gt; on &lt;a href="https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today, I’m trying to set up live-reload for my Golang application. It’s just a simple REST API application.&lt;/p&gt;

&lt;p&gt;For your information, &lt;strong&gt;&lt;em&gt;Live-Reload&lt;/em&gt;&lt;/strong&gt; is a mechanism that reloads our application on every file change, so it stays up to date with your code. I’m not sure it’s the same as hot reload. People say live-reload restarts the application when a file changes, while hot reload only refreshes the changed file without losing the state of the application, so it doesn’t restart the entire application.&lt;/p&gt;

&lt;p&gt;Also based on &lt;a href="https://stackoverflow.com/a/41429055/4075313"&gt;this Stackoverflow answer&lt;/a&gt;,&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Live reloading&lt;/strong&gt; reloads or refreshes the entire app when a file changes. For example, if you were four links deep into your navigation and saved a change, live reloading would restart the app and load the app back to the initial route.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hot reloading&lt;/strong&gt; only refreshes the files that were changed without losing the state of the app. For example, if you were four links deep into your navigation and saved a change to some styling, the state would not change, but the new styles would appear on the page without having to navigate back to the page you are on because you would still be on the same page.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I guess for this article I can call this live-reload, because I’m not sure hot reload is possible for Golang. In Go, for every change we need to re-compile and re-run the application, so the most feasible option is live reloading instead of hot reloading.&lt;/p&gt;

&lt;h3&gt;
  
  
  Background
&lt;/h3&gt;

&lt;p&gt;But before getting into the details, I want to talk about the background: why did I only learn about live-reload after so many years?&lt;/p&gt;

&lt;p&gt;I’ve been working with Golang for 3 years now, but I never used live-reload when working on my projects. It’s not because I’m ignorant; I did want to use live-reload for development.&lt;/p&gt;

&lt;p&gt;But in Golang’s case, I had a few reasons back then for why I hadn’t used it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Golang is a compiled programming language.&lt;/strong&gt; My mindset was that live-reload is impossible for a compiled language, because we need to compile the application and then run it. Well, technically I wasn’t wrong, but we can still create a live-reload tool that watches for changes and re-compiles the application (in a way, it’s live-compile-run).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Golang builds and runs quite fast&lt;/strong&gt; compared to other compiled languages on the market, which was another reason not to consider live-reload at that time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Making my own live-reload tool.&lt;/strong&gt; Well, that would just be reinventing the wheel, not to mention the time needed to build it wasn’t worth it back then. Though sometimes, when working on a larger project, I regretted not having started making my own live-reload tool.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;I tried a few tools but was disappointed by the performance.&lt;/strong&gt; I actually tried one back then, I forget which tool, but it consumed a lot of my laptop’s resources and made my laptop lag. I also tried some frameworks that have a live-reload feature, but again, they consumed a lot of resources every time I saved a file. Instead of increasing my productivity, it made me more stressed because of the lagging laptop.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;My projects had relatively few dependencies&lt;/strong&gt; (libraries, frameworks, databases, any extra layers), so compiling and running the application wasn’t an issue yet at that time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So now I’m looking at live-reload because my projects are getting bigger, and so are their dependencies: Google Pub/Sub, Firebase, Mongo, Postgres, Redis, and all the other libraries. Just starting the app takes time, so when developing new features, it really takes time just to run it locally. That’s when I started to consider using live-reload for my productivity, or at least to cut the time spent on manual steps like compiling and building the application. Even milliseconds matter when you’re in the zone LOL.&lt;/p&gt;

&lt;p&gt;So I have a few criteria for a live-reload tool:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The most important: it must not make my laptop lag. Performance matters most; I don’t want to get stressed by a lagging laptop. I had a bad experience with one of the Go frameworks that has a live-reload feature, and it frustrated me. In the beginning it worked just fine, but later, when I made frequent changes, it made my laptop lag like hell. Even typing in the editor got delayed.&lt;/li&gt;
&lt;li&gt;Configurable! I want the tool to be configurable with my own settings for the live-reload.&lt;/li&gt;
&lt;li&gt;Easy to use and portable, so everyone can use it no matter what their environment is. Generally speaking, I’d say it should at least be dockerized, since Docker has become a standard tool for engineers now.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Based on those criteria, I tried to find tools on the internet. I found a few good ones, but then I remembered that a colleague of mine uses a live-reload tool for one of our internal projects at the company.&lt;/p&gt;

&lt;p&gt;Out of curiosity, I then explored that tool. It’s named “Air”. The repository can be found here: &lt;a href="https://github.com/cosmtrek/air"&gt;&lt;strong&gt;Air&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Live Reload With Air
&lt;/h3&gt;

&lt;p&gt;My first impression was skepticism, because it seemed like just another live-reload tool: install it, then run it in the project. But when I checked the GitHub repo, the winning features it provides are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Configurable.&lt;/strong&gt; I can configure the settings based on my needs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Portable.&lt;/strong&gt; Since it’s just a binary, I can use it in Docker and make it portable.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Setting Up My Live Reload Environment
&lt;/h4&gt;

&lt;p&gt;To use the Air live-reload tool, you can follow the instructions in the GitHub repository. But in my case, I use Docker Compose to manage my live-reload development environment.&lt;/p&gt;

&lt;p&gt;So here’s what I will need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The application I will develop.&lt;/li&gt;
&lt;li&gt;Docker installed on my laptop.&lt;/li&gt;
&lt;li&gt;A Dockerfile for development.&lt;/li&gt;
&lt;li&gt;A docker-compose file for development.&lt;/li&gt;
&lt;li&gt;Customized configurations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So the idea is, I will use docker-compose to manage the live-reload using Air.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GTtieLLm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/751/1%2AhOHq_HZ75aRigVmZJ2zWog.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GTtieLLm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/751/1%2AhOHq_HZ75aRigVmZJ2zWog.png" alt="" width="751" height="331"&gt;&lt;/a&gt;Live reload idea with Docker compose + Air&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Making the Dockerfile for Development&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
The first step is to make the Dockerfile for development.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
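&lt;p&gt;For reference, here is a minimal sketch of what such a &lt;strong&gt;dev.Dockerfile&lt;/strong&gt; could look like. The Go base image version and the &lt;strong&gt;/app&lt;/strong&gt; working directory are assumptions based on the rest of this setup, so adjust them to your project:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dockerfile"&gt;&lt;code&gt;FROM golang:1.15

# Install the Air binary for live reload
RUN curl -fLo install.sh https://raw.githubusercontent.com/cosmtrek/air/master/install.sh \
    &amp;amp;&amp;amp; chmod +x install.sh &amp;amp;&amp;amp; sh install.sh &amp;amp;&amp;amp; cp ./bin/air /bin/air

# The project will be mounted here by docker-compose
WORKDIR /app

# Air watches the mounted source and re-builds on every change
ENTRYPOINT ["air", "-c", ".air.toml"]
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;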


&lt;p&gt;The Dockerfile basically just downloads and installs the Air binary and makes it the entry point for the container.&lt;/p&gt;

&lt;p&gt;If you look at the above Dockerfile, you will see this line&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;RUN curl &lt;span class="nt"&gt;-fLo&lt;/span&gt; install.sh https://raw.githubusercontent.com/cosmtrek/air/master/install.sh &lt;span class="se"&gt;\ &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;chmod&lt;/span&gt; +x install.sh &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; sh install.sh &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cp&lt;/span&gt; ./bin/air /bin/air
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This just fetches the installation script, runs it, and copies the binary into the &lt;strong&gt;/bin&lt;/strong&gt; folder.&lt;/p&gt;

&lt;p&gt;After that, we will make the docker-compose file for development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Making the Docker-Compose file for Development&lt;/strong&gt;&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
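&lt;p&gt;Putting the pieces together, the development docker-compose file is roughly like this. Only the &lt;strong&gt;dockerfile&lt;/strong&gt; and &lt;strong&gt;volumes&lt;/strong&gt; entries come from the snippets in this article; the port and the &lt;strong&gt;mysql&lt;/strong&gt; service are assumptions matching the &lt;strong&gt;config.toml&lt;/strong&gt; shown later:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;version: "3"
services:
  web:
    build:
      context: .
      dockerfile: dev.Dockerfile
    volumes:
      # Mount the source so Air sees every file change
      - ./:/app
    ports:
      - "9090:9090"
    depends_on:
      - mysql
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: article
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;With this in place, a single &lt;strong&gt;docker-compose up&lt;/strong&gt; builds the image, starts Air, and keeps re-compiling and restarting the app on every save.&lt;/p&gt;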



&lt;p&gt;The next step is to create a docker-compose file based on the previous Dockerfile. As you can see, in that docker-compose file I set the dockerfile to &lt;strong&gt;dev.Dockerfile&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;web&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
      &lt;span class="na"&gt;dockerfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev.Dockerfile&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Also, the other important thing: I mount my current directory as a volume,&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./:/app&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This means the container attaches my current directory to &lt;strong&gt;/app&lt;/strong&gt; inside the container. So if there are any changes in my directory, the file in the container changes as well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Set your application config&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Since we will do live-reload, we also need to provide a config file so our application is able to run. The config file can differ: usually people use ENV variables or a &lt;strong&gt;.env&lt;/strong&gt; file, or just a config file like &lt;strong&gt;config.json&lt;/strong&gt; or &lt;strong&gt;config.toml&lt;/strong&gt; etc.&lt;/p&gt;

&lt;p&gt;You need to define it if you want to live-reload the application. Just think about what config you would need to run the application normally.&lt;/p&gt;

&lt;p&gt;In my case, on my local, I will create a file named &lt;strong&gt;config.toml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="py"&gt;title&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Configuration File for Menekel"&lt;/span&gt;
&lt;span class="py"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="py"&gt;contextTimeout&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"2"&lt;/span&gt;
&lt;span class="nn"&gt;[server]&lt;/span&gt;
  &lt;span class="py"&gt;address&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;":9090"&lt;/span&gt;
&lt;span class="nn"&gt;[database]&lt;/span&gt;
  &lt;span class="py"&gt;host&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"mysql"&lt;/span&gt;
  &lt;span class="py"&gt;port&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"3306"&lt;/span&gt;
  &lt;span class="py"&gt;user&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"root"&lt;/span&gt;
  &lt;span class="py"&gt;pass&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"root"&lt;/span&gt;
  &lt;span class="py"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"article"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This config will be copied into the container as well, so we can run the application there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Set the Air configurations&lt;/strong&gt;&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;The next step is to create the Air config. Create a new file named &lt;strong&gt;.air.toml&lt;/strong&gt; and configure it based on your needs. I’ll break down the important configuration terms below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;a. CMD syntax&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[build]&lt;/span&gt;
&lt;span class="err"&gt;//...&lt;/span&gt;
&lt;span class="py"&gt;cmd&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"go build -o ./tmp/app/engine app/main.go"&lt;/span&gt;
&lt;span class="err"&gt;//....&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;cmd&lt;/strong&gt; value above is the command needed to compile your application. In my case, that is &lt;strong&gt;go build -o ./tmp/app/engine app/main.go&lt;/strong&gt;. So if you want to copy this into your application, make sure the build command matches your project layout.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;b. BIN syntax&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[build]&lt;/span&gt;
&lt;span class="err"&gt;//....&lt;/span&gt;
&lt;span class="py"&gt;bin&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"tmp/app"&lt;/span&gt; 
&lt;span class="err"&gt;//....&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;bin&lt;/strong&gt; value above is simply the directory where the compiled binary lives.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;c. FULL_BIN syntax&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[build]&lt;/span&gt;
&lt;span class="err"&gt;//...&lt;/span&gt;
&lt;span class="py"&gt;full_bin&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"./tmp/app/engine http"&lt;/span&gt;  
&lt;span class="err"&gt;//...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;full_bin&lt;/strong&gt; value above is how your application will be started after it is compiled. In my case, I need to pass the argument &lt;strong&gt;http&lt;/strong&gt; to tell the compiled binary to run the HTTP server.&lt;/p&gt;

&lt;p&gt;So if your application doesn’t need any arguments, you can leave them out.&lt;/p&gt;

&lt;p&gt;If your application uses ENV variables, you will also need to pass them, like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="py"&gt;full_bin&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"ENV1=mysql ENV2=localhost . /tmp/app/engine http"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Tricks:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make sure your application is able to read a  &lt;strong&gt;.env&lt;/strong&gt; file, so you don’t have to list every ENV variable in the Air config.
&lt;/li&gt;
&lt;li&gt;Or use a config file, but bear the consequences.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;d. INCLUDE_EXT and EXCLUDE_DIR syntax&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[build]&lt;/span&gt;
&lt;span class="err"&gt;//...&lt;/span&gt;
&lt;span class="py"&gt;include_ext&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"go"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"yaml"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="py"&gt;exclude_dir&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;["tmp"]&lt;/span&gt;
&lt;span class="err"&gt;//...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;include_ext&lt;/strong&gt; option makes Air watch every file that has one of the listed extensions. Any change to such a file in the project will trigger Air to re-compile the application.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;exclude_dir&lt;/strong&gt; option tells Air that changes in those directories won’t trigger a recompile of the application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;e. DELAY syntax&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[build]
//...
delay = 1000 # ms
//...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sometimes we make changes very frequently, so to avoid an instant build on every saved file, we can add a delay; Air then won’t recompile immediately on each change. This is pretty helpful, since compiling and re-running the application consumes CPU, and recompiling too frequently can make your laptop lag. With this setting we can tune how often the app gets rebuilt.&lt;/p&gt;
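&lt;p&gt;Putting the settings above together, a minimal &lt;strong&gt;.air.toml&lt;/strong&gt; might look roughly like this. This is a sketch assembled from the snippets in this article; the &lt;strong&gt;root&lt;/strong&gt; and &lt;strong&gt;tmp_dir&lt;/strong&gt; keys are my assumptions about common Air defaults, so check them against your Air version:&lt;/p&gt;

```toml
# Sketch of a minimal .air.toml assembled from the snippets in this
# article. root and tmp_dir are assumed defaults; verify against your
# Air version.
root = "."
tmp_dir = "tmp"

[build]
cmd = "go build -o ./tmp/app/engine app/main.go"
bin = "tmp/app"
full_bin = "./tmp/app/engine http"
include_ext = ["go", "yaml"]
exclude_dir = ["tmp"]
delay = 1000 # ms
```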

&lt;p&gt;&lt;strong&gt;5. Final Steps: Making It All Run&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The final step is to run the application using docker-compose:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
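&lt;p&gt;The actual docker-compose.yml was covered earlier in this article; as a rough reminder of its shape, a compose file consistent with the config.toml above (a &lt;strong&gt;mysql&lt;/strong&gt; service, the server on &lt;strong&gt;:9090&lt;/strong&gt;) might look like the sketch below. Service names, image tags, and paths here are my assumptions, not the article’s exact file:&lt;/p&gt;

```yaml
# Hypothetical docker-compose.yml sketch; names and paths are assumptions.
version: "3"
services:
  app:
    build: .                  # Dockerfile that installs Air and copies config.toml
    command: air -c .air.toml # run Air so the app live-reloads inside the container
    volumes:
      - .:/app                # mount the source so Air sees file changes
    ports:
      - "9090:9090"           # matches [server] address in config.toml
    depends_on:
      - mysql
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: root # matches [database] pass in config.toml
      MYSQL_DATABASE: article   # matches [database] name in config.toml
```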



&lt;p&gt;And check the logs in your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker-compose logs &lt;span class="nt"&gt;-f&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See the demo here,&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/o3bhZPNpCus"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusions
&lt;/h3&gt;

&lt;p&gt;After trying this, I can at least work faster. If my changes become too frequent, I just add a delay in the config, so it won’t eat my laptop’s resources.&lt;/p&gt;

&lt;p&gt;The example repository can be seen here: &lt;a href="https://github.com/golangid/menekel"&gt;&lt;strong&gt;Menekel&lt;/strong&gt;&lt;/a&gt;, or check this &lt;a href="https://github.com/golangid/menekel/pull/4"&gt;PR&lt;/a&gt; if you want to see exactly what I changed to add the live-reload.&lt;/p&gt;

&lt;p&gt;Also, if you have any better ideas or tools, let me know in the comments below so other people can try them too.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you guys found this useful, like, and share so other people can reach this, spread the love, and knowledge.&lt;/em&gt;&lt;/p&gt;




</description>
      <category>golangtools</category>
      <category>go</category>
      <category>softwaredevelopment</category>
      <category>programming</category>
    </item>
    <item>
      <title>How to Add Your Recently Published Medium Articles to Your GitHub Readme</title>
      <dc:creator>Iman Tumorang</dc:creator>
      <pubDate>Sun, 16 Aug 2020 16:32:37 +0000</pubDate>
      <link>https://dev.to/bxcodec/how-to-add-your-recently-published-medium-articles-to-your-github-readme-1dg2</link>
      <guid>https://dev.to/bxcodec/how-to-add-your-recently-published-medium-articles-to-your-github-readme-1dg2</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Show off your latest Medium work on GitHub&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XxEY9xMD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AbfiChcHfUSEr-cXQ" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XxEY9xMD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AbfiChcHfUSEr-cXQ" alt="" width="880" height="586"&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@christinhumephoto?utm_source=medium&amp;amp;utm_medium=referral"&gt;Christin Hume&lt;/a&gt; on &lt;a href="https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral"&gt;Unsplash&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;GitHub recently released a new feature that allows you to create a Readme profile, so you can now customize your GitHub profile page.&lt;/p&gt;

&lt;p&gt;You can see an example on &lt;a href="https://github.com/bxcodec"&gt;my GitHub profile&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Mb4s1AZS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AiqOkHcV05JuxKtqEdV0FkA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Mb4s1AZS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AiqOkHcV05JuxKtqEdV0FkA.png" alt="" width="880" height="481"&gt;&lt;/a&gt;My GitHub profile.&lt;/p&gt;

&lt;p&gt;This feature is really nice. It makes your GitHub profile look more professional and content-rich. In the future, I expect GitHub to look like LinkedIn for developers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Introducing GitHub Readme — Recent Medium Articles
&lt;/h3&gt;

&lt;p&gt;I’ve seen a lot of plug-ins that people have made, like the GitHub stats card, programming language stats, and even games (e.g. this &lt;a href="https://github.com/timburgan/timburgan"&gt;online chess game&lt;/a&gt; or even &lt;a href="https://github.com/alfari16/alfari16"&gt;tic-tac-toe&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://github.com/alfari16/alfari16"&gt;this readme&lt;/a&gt;, the user has included a list of their recently published Medium articles. But it’s only available on their profile. To achieve the same results, I need to copy their code, which takes time.&lt;/p&gt;

&lt;p&gt;That’s the original idea behind this plug-in. I created a separate repository with a customized function. I then made the function more generic so everyone can add their recently published Medium articles to their GitHub readme.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/hk6MoV-qWW8"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Steps
&lt;/h3&gt;

&lt;p&gt;To use this plug-in, you only need to add this script to your GitHub readme:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/media/60fa24f400869923cd2f970850db9bf1/href"&gt;&lt;/a&gt;&lt;a href="https://medium.com/media/60fa24f400869923cd2f970850db9bf1/href"&gt;https://medium.com/media/60fa24f400869923cd2f970850db9bf1/href&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So the format is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://github-readme-medium-recent-article.vercel.app/medium/&amp;lt;medium-username&amp;gt;/&amp;lt;article-index&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;medium-username: Your medium username/profile&lt;/li&gt;
&lt;li&gt;article-index: Your recent article index (e.g. 0 means your latest article)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The full steps can be seen in &lt;a href="https://github.com/bxcodec/github-readme-medium-recent-article"&gt;my repository&lt;/a&gt;. Also if you’ve found any issues, just open an issue or create a PR directly on that repository.&lt;/p&gt;

&lt;h3&gt;
  
  
  More About This Plug-In
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;I’m using &lt;a href="https://vercel.com/"&gt;Vercel&lt;/a&gt; for the static hosting and the serverless function to retrieve the recent articles. I might be able to add a custom domain, but that’s for later.&lt;/li&gt;
&lt;li&gt;I’m using an RSS feed from Medium. You can get your RSS feed from Medium by entering this URL: &lt;a href="https://medium.com/feed/@imantumorang"&gt;https://medium.com/feed/@&lt;/a&gt;yourMediumUsername.&lt;/li&gt;
&lt;li&gt;Then convert it to JSON using API RSS to JSON:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[https://api.rss2json.com/v1/api.json?rss\_url=https://medium.com/feed/@imantumorang](https://api.rss2json.com/v1/api.json?rss_url=https://medium.com/feed/@imantumorang)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Make it generic so everyone is able to use it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To make it generic so people can directly pass their Medium username to the serverless function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://github-readme-medium-recent-article.vercel.app/medium/@imantumorang
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I need to make the folder of my serverless function as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;└── medium
    └── [user]
        └── [index].ts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The [user] directory is required so I can make the username dynamic. I was stuck on this problem when I was making the plug-in. I could have used a query param like ?username=@imantumorang, but from my experience, and keeping REST in mind, making it a path param is the proper way to indicate that the param is required. Also, I want to keep the experience the same as when you’re visiting your Medium profile (e.g. medium.com/@imantumorang).&lt;/p&gt;

&lt;p&gt;I knew that comments on Medium articles are automatically added to the RSS feed. To display only the articles, I added a filter inside the function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (thumbnail.includes("cdn")) {
        fixItem.push(element)    
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So I only allow articles that have a thumbnail. I’m still looking for a workaround, because if an article doesn’t have a thumbnail, it will be skipped. For now at least, if your article has a thumbnail, it will be displayed when you import it into your GitHub readme.&lt;/p&gt;

&lt;p&gt;It may take some time for a newly published article to be listed, since the RSS-to-JSON API response is cached. Please wait around 1-3 hours after the article has been published on Medium.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Well, I think that’s all for now. If you find any issues, you can directly open an issue on GitHub. I’ll try to help as much as I can.&lt;/p&gt;




</description>
      <category>softwaredevelopment</category>
      <category>opensource</category>
      <category>medium</category>
      <category>github</category>
    </item>
    <item>
      <title>How To Do Pagination in Postgres with Golang in 4 Common Ways</title>
      <dc:creator>Iman Tumorang</dc:creator>
      <pubDate>Tue, 26 May 2020 05:06:05 +0000</pubDate>
      <link>https://dev.to/bxcodec/how-to-do-pagination-in-postgres-with-golang-in-4-common-ways-4kph</link>
      <guid>https://dev.to/bxcodec/how-to-do-pagination-in-postgres-with-golang-in-4-common-ways-4kph</guid>
      <description>&lt;h4&gt;
  
  
  A few examples of pagination on Postgres, with Benchmark, written on Golang
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--H6PnMSH3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AhayUe6QTgRGqgtdJ" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--H6PnMSH3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AhayUe6QTgRGqgtdJ" alt="" width="880" height="586"&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@gitsela?utm_source=medium&amp;amp;utm_medium=referral"&gt;Ergita Sela&lt;/a&gt; on &lt;a href="https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Hi again everyone&lt;/em&gt;, it’s been a long time since I published an article. A lot of things have happened, from the pandemic onward. The pandemic has affected me mentally; this self-quarantine is really exhausting and stressful. I wish this Covid-19 pandemic would end before Christmas this year. 😭&lt;/p&gt;

&lt;p&gt;On this rare occasion, after fighting boredom and laziness, I found the spirit to finish this article. It started when we were building our new application at my current job: I was curious about a few things, and in this part, it’s about pagination. Like how to handle pagination in a better way, based on my understanding LOL. &lt;em&gt;My proposed idea might not be the best, so if you have a better way or a vaster experience than me, put your comments below yaa!!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;TBH, I never cared much about these details in my previous job, because we all shared the same perspective; we only had about 10 engineers, so keeping the same perspective was easy. But now I care, since we have a lot of engineers in my current job, and everyone has a different perspective.&lt;/p&gt;

&lt;p&gt;So, I’m just curious: what’s the better way to build pagination on Postgres at the application level? In my case, the application is written in Golang.&lt;/p&gt;

&lt;p&gt;Actually, there are 2 well-known styles of pagination:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cursor based pagination&lt;/li&gt;
&lt;li&gt;Offset based pagination&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this article I’ll only cover those 2 pagination styles, in 4 common variants that backend engineers usually implement, or at least the ones I’ve known since I learned how to code.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pagination with a page number: pretty common; the user only sends the page number, and we handle it internally. I use an offset at the database level.&lt;/li&gt;
&lt;li&gt;Pagination with offset and limit: pretty common, since &lt;strong&gt;RDBMS&lt;/strong&gt; features support it directly. The user sends the offset number as a query param.&lt;/li&gt;
&lt;li&gt;Pagination with a simple query using an auto-incremental ID as the PK: quite common for databases with auto-incremental IDs. The ID is treated as the cursor.&lt;/li&gt;
&lt;li&gt;Pagination with a UUID PK combined with the created timestamp, also known as seek pagination or keyset pagination. The combined key is encoded into a cursor string.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So what I’m going to do here is create those 4 pagination implementations and run a small benchmark from code, using the Golang benchmark tooling. The goal of this article is just to satisfy my curiosity LOL. I know I can read other people’s articles, but I want to do it in my own version.&lt;/p&gt;

&lt;p&gt;TL;DR&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All the code used here is already pushed to my GitHub repository, &lt;a href="https://github.com/bxcodec/go-postgres-pagination-example"&gt;github.com/bxcodec/go-postgres-pagination-example&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Conclusions can be seen at the bottom of this article&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Pagination On REST API
&lt;/h3&gt;

&lt;p&gt;To give you some context, &lt;em&gt;in case you don’t know what pagination is used for&lt;/em&gt;: pagination is used to paginate your response, LOL. Well, I don’t know how to phrase it better.&lt;/p&gt;

&lt;p&gt;Let me give an example. Say I have this endpoint in a REST API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;GET /payments
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This endpoint fetches all payments from the API. As we know, in a bigger-scale application with tons of data, these payments may span thousands or millions of rows. And as a user, I want to fetch my payments list.&lt;/p&gt;

&lt;p&gt;From a database perspective, querying all the records will take a lot of time. I can imagine how long it would be if we had a million records and fetched all the data at once. So, for that case, people introduced what they call pagination. It works like the pages of a book: each page contains a bunch of words.&lt;/p&gt;

&lt;p&gt;For this endpoint, each page will contain a list of payment details, so we can still fetch payments quickly; the results are just truncated into multiple pages until we have fetched all the payment records.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /payments?page=1 // to fetch payments in page 1
GET /payments?page=2 // to fetch payments in page 2
GET /payments?page=3 // to fetch payments in page 3
... etc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You may have seen this style in any endpoint, or maybe something like this as well.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;GET /payments?limit&lt;span class="o"&gt;=&lt;/span&gt;10 // initial request &lt;span class="k"&gt;for &lt;/span&gt;fetch payment
GET /payments?limit&lt;span class="o"&gt;=&lt;/span&gt;10&amp;amp;cursor&lt;span class="o"&gt;=&lt;/span&gt;randomCursorString // with cursor
GET /payments?limit&lt;span class="o"&gt;=&lt;/span&gt;10&amp;amp;cursor&lt;span class="o"&gt;=&lt;/span&gt;newrandomCursorString // &lt;span class="k"&gt;for &lt;/span&gt;next page
GET /payments?limit&lt;span class="o"&gt;=&lt;/span&gt;10&amp;amp;cursor&lt;span class="o"&gt;=&lt;/span&gt;anotherNewrandomCursorString
... etc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;And many more. This is what we call pagination: we truncate our list of data into segments and send them to the client, so we maintain the performance of the application and the client doesn’t lose track when fetching our data.&lt;/p&gt;
&lt;h4&gt;
  
  
  1. Pagination with Page Number
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /payments?page=1 // to fetch payments in page 1
GET /payments?page=2 // to fetch payments in page 2
GET /payments?page=3 // to fetch payments in page 3
... etc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Have you seen pagination like the above? TBH, I’ve never seen it in a public API, if I remember correctly. But I did build pagination in that style once, around 4 years ago, in my first job test after graduating.&lt;/p&gt;

&lt;p&gt;The logic is somewhat complicated in the backend, but it simplifies the user experience:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First I’ll set the default limit, let’s say 10. &lt;em&gt;Each page has 10 items.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;Then each requested page number, minus one, is multiplied by the default limit.&lt;/li&gt;
&lt;li&gt;I use that result as the offset in the database query.&lt;/li&gt;
&lt;li&gt;This way, the user can fetch items based on the requested page number.&lt;/li&gt;
&lt;/ul&gt;
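&lt;p&gt;The steps above can be sketched in Go like this (a minimal illustration of the page-to-offset math; the function name is mine, not from the repository):&lt;/p&gt;

```go
package main

import "fmt"

// pageToOffset converts a 1-based page number and a per-page limit into
// the OFFSET value handed to the database, so page 1 starts at offset 0.
func pageToOffset(page, limit int) int {
	if page > 0 {
		return (page - 1) * limit
	}
	return 0 // treat page 0 or negative pages as the first page
}

func main() {
	for page := 1; page != 4; page++ {
		fmt.Printf("page %d: LIMIT %d OFFSET %d\n", page, 10, pageToOffset(page, 10))
	}
}
```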

&lt;p&gt;So I tried to build a simple application for this method again. With &lt;strong&gt;100K rows of data&lt;/strong&gt;, I benchmarked it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benchmark Result&lt;/strong&gt;&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The drawback of this pagination method is&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Performance-wise, it’s not recommended. The bigger the data set, the bigger the resource consumption.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But the benefit of this method is that the user feels like they are opening a book; they just pass the page number.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Pagination with Offset and Limit
&lt;/h4&gt;

&lt;p&gt;Pagination with offset and limit is quite common to engineers, because RDBMS features support offset and limit directly in queries.&lt;/p&gt;

&lt;p&gt;At the application level, there’s no extra logic: just pass the offset and limit to the database, and let the database do the pagination.&lt;/p&gt;

&lt;p&gt;Here is how it usually looks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /payments?limit=10 // initial 
GET /payments?limit=10&amp;amp;offset=10 //fetch the next 10 items
GET /payments?limit=10&amp;amp;offset=20 //fetch the next 10 items again
... etc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;From the client-side, they only need to add the offset params, and the API will return the items based on the given offset.&lt;/p&gt;

&lt;p&gt;And at the database (RDBMS) level, it will look like this:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
    &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="n"&gt;payments&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt;  &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;created_time&lt;/span&gt;
&lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;
&lt;span class="k"&gt;OFFSET&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Benchmark Result&lt;/strong&gt;&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The drawback of this pagination method&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Performance-wise, it’s not recommended. The bigger the data set, the bigger the resource consumption.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The benefits of this pagination method&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Very easy to implement, no need to do complex logic things in the server&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  3. Pagination with Auto Incremental PK of the ID
&lt;/h4&gt;

&lt;p&gt;This pagination method is also pretty common. We set our table’s ID to auto-increment and use it as the page identifier/cursor.&lt;/p&gt;

&lt;p&gt;How it’s used in REST&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /payments?limit=10
GET /payments?limit=10&amp;amp;cursor=last_id_from_previous_fetch
GET /payments?limit=10&amp;amp;cursor=last_id_from_previous_fetch
... etc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;What it looks like at the database query level:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
   &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="n"&gt;payments&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt;
  &lt;span class="n"&gt;Id&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;
&lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Or for descending&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
   &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="n"&gt;payments&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt;
  &lt;span class="n"&gt;Id&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;Id&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;
&lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
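&lt;p&gt;At the application level, the query above can be sketched like this in Go. This is an illustration only: the function name is mine, and real code should bind the values with placeholders via database/sql rather than formatting them into the string.&lt;/p&gt;

```go
package main

import "fmt"

// buildKeysetQuery shows how the last ID seen on the previous page
// becomes the cursor for the next page. Illustration only: real code
// should pass lastID and limit as query placeholders, not Sprintf them.
func buildKeysetQuery(lastID int64, limit int) string {
	return fmt.Sprintf(
		"SELECT * FROM payments WHERE id > %d ORDER BY id LIMIT %d",
		lastID, limit)
}

func main() {
	// The client sends cursor=10, i.e. the last ID from the previous fetch.
	fmt.Println(buildKeysetQuery(10, 20))
}
```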


&lt;p&gt;&lt;strong&gt;Benchmark Result&lt;/strong&gt;&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The drawback of this pagination method&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The only drawback of this pagination method is that auto-increment IDs become problematic in the world of microservices and distributed systems.
An ID of &lt;strong&gt;20&lt;/strong&gt; can exist in both Service Payment and Service User; it is only unique within one application’s context. This differs from UUIDs, which are “&lt;em&gt;practically unique&lt;/em&gt;” (meaning there’s a very small possibility of duplicate generated UUIDs). That’s why some people use a UUID as the PK instead. Read more about UUIDs vs. auto-increment keys &lt;a href="https://medium.com/@Mareks_082/auto-increment-keys-vs-uuid-a74d81f7476a"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The benefits of this pagination method&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Easy to implement, no need for complex logic in the server.&lt;/li&gt;
&lt;li&gt;Performance-wise, the best way to do pagination that I know so far, since it uses the auto-increment ID.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  4. Pagination with UUID Combined with Created Timestamp
&lt;/h4&gt;

&lt;p&gt;I’m not sure this one is as common, but I’ve seen a few articles do this kind of pagination. The context: the table doesn’t use an auto-incremental ID; it uses a UUID instead. People then wonder how to do pagination, since adding a new auto-incremental column just for that is a waste of resources. So what I do is use the created timestamp of my rows and combine it with the PK, which is the UUID.&lt;/p&gt;

&lt;p&gt;This is the database schema&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;And for faster queries, I create a composite index over multiple columns, the PK and the created timestamp. As you can see from the schema above, I made an index named idx_payment_pagination.&lt;/p&gt;

&lt;p&gt;So the logic is,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I take the UUID, which is my primary key, and combine it with the created timestamp&lt;/li&gt;
&lt;li&gt;I join those two into a string, then encode it to a base64 string&lt;/li&gt;
&lt;li&gt;And I return that encoded string as the cursor for the next page, so the user can use it to fetch the next page of results.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example of how I made the cursor on application level&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;And this is how it looks in the REST endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /payments?limit=10
GET /payments?limit=10&amp;amp;cursor=base64_string_from_previous_result
GET /payments?limit=10&amp;amp;cursor=base64_string_from_previous_result
... etc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;But in the database, the query will look like this,&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;payments&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;created_time&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="s1"&gt;'2020-05-16 03:15:06'&lt;/span&gt; &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;created&lt;/span&gt; &lt;span class="nb"&gt;timestamp&lt;/span&gt;
&lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="s1"&gt;'2a1aa856-ad26-4760-9bd9-b2fe1c1ca5aa'&lt;/span&gt; &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;this&lt;/span&gt; &lt;span class="k"&gt;is&lt;/span&gt; &lt;span class="n"&gt;UUID&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;created_time&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;
&lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Benchmark Result&lt;/strong&gt;&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The drawbacks of this pagination method&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The performance may not match pagination on an auto-incrementing id, but it stays consistent even when we have millions of rows&lt;/li&gt;
&lt;li&gt;It’s quite tricky and advanced. We need to understand indexing, because without the index this query will be really slow on a big dataset. We also need to be careful when handling timestamps; even I still ran into issues when querying on the timestamp while doing this.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The benefits of this pagination method&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The ID is a UUID, so it’s practically globally unique across the microservices in the organization.&lt;/li&gt;
&lt;li&gt;The performance is consistent from the first page all the way to the last page of the data&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusions
&lt;/h3&gt;

&lt;p&gt;Alright, after doing all the benchmarks, I’ve come up with some conclusions.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Performances: Faster to Slower
&lt;/h4&gt;

&lt;p&gt;From the benchmark results (using the Golang benchmark tool), the fastest one is the autoincrement PK. See the chart below (smaller is faster); it shows the &lt;strong&gt;average time&lt;/strong&gt; needed for each operation in nanoseconds. This chart might not be the best representation, it would be better as 95th, 97th, etc. percentiles, but these are the values I got from the benchmark result, so I assume it’s good enough for illustration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/media/72d7697bf386da3132095d3d5983bbf8/href"&gt;&lt;/a&gt;&lt;a href="https://medium.com/media/72d7697bf386da3132095d3d5983bbf8/href"&gt;https://medium.com/media/72d7697bf386da3132095d3d5983bbf8/href&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pagination with an autoincrement ID is the fastest, followed by UUID/created time, then PageNumber and LimitOffset. And this is with only 100K rows of data; the gap will grow as the data grows. So even at 100K rows, while everything still finishes under 1 second, the difference between autoincrement and limit-offset is already quite large.&lt;/p&gt;
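&lt;p&gt;For readers unfamiliar with the Golang benchmark tool: the average-time-per-operation numbers come from benchmark functions driven by the &lt;strong&gt;testing&lt;/strong&gt; package. A minimal skeleton of how such a benchmark can be driven (fetchPage is a hypothetical stub standing in for one real paginated query against Postgres):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"testing"
)

// fetchPage stands in for one pagination query; the real benchmarks in the
// linked repository hit Postgres instead of returning a constant.
func fetchPage(cursor string, limit int) string {
	return "next-cursor"
}

func main() {
	// testing.Benchmark runs a benchmark function outside of "go test".
	result := testing.Benchmark(func(b *testing.B) {
		cursor := ""
		for i := 0; i < b.N; i++ {
			cursor = fetchPage(cursor, 10)
		}
	})
	// NsPerOp reports the average nanoseconds per operation,
	// which is the metric charted above.
	fmt.Println(result.N > 0, result.NsPerOp() >= 0)
}
```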

&lt;h4&gt;
  
  
  2. Development: Faster to Slower
&lt;/h4&gt;

&lt;p&gt;Implementation difficulties from easy to hard&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Using Offset&lt;/strong&gt;, because we just pass the offset and limit directly to the database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Using PageNumber&lt;/strong&gt;, this is opinionated; some people may have different logic, but for my case I put it in the top two.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Using autoincrement ID&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Using UUID with created time&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Code artifacts
&lt;/h4&gt;

&lt;p&gt;The code is pushed to my GitHub repository and can be found here: &lt;a href="https://github.com/bxcodec/go-postgres-pagination-example"&gt;https://github.com/bxcodec/go-postgres-pagination-example&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  The issues that I face when doing this
&lt;/h4&gt;

&lt;p&gt;While doing all of this, I obviously faced some issues, but I’ve resolved them and learned from them. I’ve written up what I learned in this article: &lt;a href="https://dev.to/bxcodec/til-becareful-on-postgres-query-for-less-than-or-equal-on-timestamp-3gi8-temp-slug-6869327"&gt;TIL: Becareful on Postgres Query, for Less than Or Equal on Timestamp&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Author Suggestion
&lt;/h4&gt;

&lt;p&gt;As a software engineer, and as the author of this article, I recommend using an autoincrement ID when doing pagination. But if your system doesn’t use, or you don’t want to use, an autoincrement ID as the PK, you may consider keyset pagination, in my case using the UUID + the created_time timestamp.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reference
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Tons of Stackoverflow answers, I forgot which ones, but all of them related to &lt;strong&gt;pagination with Postgres&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.jooq.org/2013/10/26/faster-sql-paging-with-jooq-using-the-seek-method/"&gt;Faster SQL Pagination with jOOQ Using the Seek Method&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to/xngwng/rest-api-design-filtering-sorting-and-pagination-5c96"&gt;REST API Design: Filtering, Sorting, and Pagination&lt;/a&gt;
&lt;a href="https://medium.com/media/1f83e4a733ad3206f47e6dd38aa4fc6d/href"&gt;&lt;/a&gt;&lt;a href="https://medium.com/media/1f83e4a733ad3206f47e6dd38aa4fc6d/href"&gt;https://medium.com/media/1f83e4a733ad3206f47e6dd38aa4fc6d/href&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;




</description>
      <category>database</category>
      <category>go</category>
      <category>postgres</category>
      <category>pagination</category>
    </item>
    <item>
      <title>TIL: Becareful on Postgres Query, for Less than Or Equal on Timestamp</title>
      <dc:creator>Iman Tumorang</dc:creator>
      <pubDate>Tue, 26 May 2020 04:21:09 +0000</pubDate>
      <link>https://dev.to/bxcodec/til-becareful-on-postgres-query-for-less-than-or-equal-on-timestamp-4b74</link>
      <guid>https://dev.to/bxcodec/til-becareful-on-postgres-query-for-less-than-or-equal-on-timestamp-4b74</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NHkL56Yj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2A-9RkFzLnxCdtsFwt" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NHkL56Yj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2A-9RkFzLnxCdtsFwt" alt="" width="880" height="586"&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@jonasjacobsson?utm_source=medium&amp;amp;utm_medium=referral"&gt;Jonas Jacobsson&lt;/a&gt; on &lt;a href="https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So a week ago, I was doing some experiments on pagination in Postgres using a combined UUID and timestamp. I ran into an interesting problem when querying on rows that share the same created timestamp.&lt;/p&gt;

&lt;h3&gt;
  
  
  Context
&lt;/h3&gt;

&lt;p&gt;To give some context, I have a database schema like this.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;And out of curiosity, I did some load testing on my application, with up to 100 concurrent inserts. This left my database with a lot of rows sharing the same timestamp. An example can be seen below.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;As you can see, the created_time is the same for all of these records (maybe up to 100 of them).&lt;/p&gt;

&lt;h3&gt;
  
  
  Problems
&lt;/h3&gt;

&lt;p&gt;So, when I queried based on the created time, I got weird behavior; I thought it was a bug or something.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT \* FROM payment\_with\_uuid 
WHERE 
created\_time **&amp;lt;= '2020-05-24 21:27:10'**
ORDER BY created\_time DESC LIMIT 10;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;So, from that query, what I want to achieve is to select all records that have a created_time &lt;strong&gt;less than or equal to (&amp;lt;=)&lt;/strong&gt; the given timestamp.&lt;/p&gt;

&lt;p&gt;But what I got instead were only the rows whose timestamps are strictly less than the given created timestamp. The rows that have exactly the same value were not included.&lt;/p&gt;

&lt;p&gt;So from this example rows below,&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;The result that I got was only &lt;strong&gt;Kane&lt;/strong&gt;, while &lt;strong&gt;Allistair&lt;/strong&gt; and &lt;strong&gt;James&lt;/strong&gt; were not included.&lt;/p&gt;

&lt;p&gt;Another weird thing is that if I reverse the query, using the &lt;strong&gt;greater than or equal to (&amp;gt;=)&lt;/strong&gt; syntax, I can fetch the others like &lt;strong&gt;Allistair&lt;/strong&gt; and &lt;strong&gt;James&lt;/strong&gt;, but &lt;strong&gt;Kane&lt;/strong&gt; will obviously be out of order.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solutions and Things that I just Learned
&lt;/h3&gt;

&lt;h4&gt;
  
  
  The timestamp keeps sub-second precision under the hood
&lt;/h4&gt;

&lt;p&gt;So, after searching the whole internet, and Stackoverflow obviously, and even asking my friend, I got the answer: the timestamp stored in the database keeps sub-second (microsecond) precision under the hood, even when the displayed value stops at the seconds.&lt;/p&gt;

&lt;p&gt;For example, if you see the database records from the above example, the stored timestamp is like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;**2020-05-24 21:27:10**
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Actually, this is not the real value, because Postgres stores more precision for those rows. It may include microseconds, so it doesn’t stop at the seconds.&lt;/p&gt;

&lt;p&gt;It may look like this,&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;**2020-05-24 21:27:10.37091**
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We don’t know, because for the sake of formatting, Postgres rounds the displayed value to the second.&lt;/p&gt;

&lt;p&gt;So, I changed my query to be more specific, including the microseconds, to look like this, and it works.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT \* FROM payment\_with\_uuid 
WHERE 
created\_time **&amp;lt;= '2020-05-24 21:27:10.37091****'**
ORDER BY created\_time DESC LIMIT 10;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;But then the question is how to do this from the application level. For my case, since I’m using Golang, it’s quite easy.&lt;/p&gt;
&lt;h4&gt;
  
  
  Using RFC3339Nano On Golang
&lt;/h4&gt;

&lt;p&gt;Since my application is built on top of Golang, to include the precision I use &lt;strong&gt;time.RFC3339Nano&lt;/strong&gt; from the &lt;strong&gt;time&lt;/strong&gt; package in Golang. Its format looks like this: &lt;strong&gt;"2006-01-02T15:04:05.999999999Z07:00"&lt;/strong&gt;. Or you can see the details &lt;a href="https://golang.org/pkg/time/#pkg-constants"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ckns092F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AgSd1st48nYVav5TKkkWaxA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ckns092F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AgSd1st48nYVav5TKkkWaxA.png" alt="" width="880" height="457"&gt;&lt;/a&gt;time format in Golang package&lt;/p&gt;

&lt;p&gt;I don’t know how to do this in other programming languages, but the key is to include the precision in the timestamp query as well.&lt;/p&gt;

&lt;p&gt;So when handling a query that accepts a timestamp from the end user, we need to format it first using RFC3339Nano before sending it to the database, so the query stays valid.&lt;/p&gt;

&lt;p&gt;But why do I need this format at all? Because if we don’t format it carefully, the millisecond and nanosecond precision in the time struct can be lost along the way. To give some context on why I need this, especially in my case:&lt;/p&gt;

&lt;p&gt;I use the &lt;strong&gt;created_time&lt;/strong&gt; as the cursor for fetching the next page: I take that timestamp and convert it to a string, and that string is what the user passes back as the cursor when fetching a new page.&lt;/p&gt;

&lt;p&gt;And in my application, I convert that string back to a timestamp and use it in the query. Because of that, we need to do it carefully: in Golang, when we convert a timestamp to a string, it might not include the precision that we got from Postgres. That’s why I format it with time.RFC3339Nano.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;This way, the precision is maintained even after I convert the timestamp to a string and back to a timestamp again.&lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://stackoverflow.com/questions/39119783/postgres-using-timestamps-for-pagination"&gt;Postgres: using timestamps for pagination&lt;/a&gt; — Stackoverflow (on the comment’s thread)
&lt;a href="https://medium.com/media/1f83e4a733ad3206f47e6dd38aa4fc6d/href"&gt;&lt;/a&gt;&lt;a href="https://medium.com/media/1f83e4a733ad3206f47e6dd38aa4fc6d/href"&gt;https://medium.com/media/1f83e4a733ad3206f47e6dd38aa4fc6d/href&lt;/a&gt;
* * *&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>postgres</category>
      <category>softwareengineering</category>
      <category>todayilearned</category>
      <category>pagination</category>
    </item>
    <item>
      <title>Consuming SQS Message using Golang in EKS (Elastic Kubernetes Service) from AWS</title>
      <dc:creator>Iman Tumorang</dc:creator>
      <pubDate>Mon, 09 Mar 2020 05:55:42 +0000</pubDate>
      <link>https://dev.to/bxcodec/consuming-sqs-message-using-golang-in-eks-elastic-kubernetes-service-from-aws-3p2p</link>
      <guid>https://dev.to/bxcodec/consuming-sqs-message-using-golang-in-eks-elastic-kubernetes-service-from-aws-3p2p</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bLfoWYoT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AIsERlXqHqtY1myaG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bLfoWYoT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AIsERlXqHqtY1myaG" alt="" width="880" height="584"&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@davideragusa?utm_source=medium&amp;amp;utm_medium=referral"&gt;davide ragusa&lt;/a&gt; on &lt;a href="https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A few days ago, I was working with SQS in Golang. It was quite tricky, and stressful enough for me, because I was stuck for 5 days just trying to make my consumer work well in EKS.&lt;/p&gt;

&lt;p&gt;To give some context, &lt;strong&gt;SQS&lt;/strong&gt; stands for Simple Queue Service. It is a message queue service provided by AWS. For more details, see the official page of &lt;a href="https://aws.amazon.com/en/sqs/"&gt;SQS from AWS&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To make it short, assume that I have an application that will need to consume a lot of messages from SQS.&lt;/p&gt;

&lt;p&gt;Simply put, we can use the AWS SDK for Go to consume SQS messages. But the flow is quite different from what I’ve experienced with Google Pubsub. In GCP, they’ve prepared a complete SDK that already has a function for long-running consuming/streaming of messages from Google Pubsub. In AWS, especially SQS, we need to write a long-running loop that makes REST calls to pull the queued messages.&lt;/p&gt;

&lt;p&gt;So what we did is basically just copy what this awesome article does: “&lt;a href="https://medium.com/@questhenkart/sqs-consumer-design-achieving-high-scalability-while-managing-concurrency-in-go-d5a8504ea754"&gt;SQS Consumer Design: Achieving High Scalability while managing concurrency in Go&lt;/a&gt;”.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func (c \*consumer) Consume() {
 for w := 1; w &amp;lt;= c.workerPool; w++ {
  go c.worker(w)
 }
}func (c \*consumer) worker(id int) {
 for {
  output, err := retrieveSQSMessages(c.QueueURL, maxMessages)
  if err != nil {
   continue
  } var wg sync.WaitGroup
  for \_, message := range output.Messages {
   wg.Add(1)
   go func(m \*message) {
     defer wg.Done()
     if err := h(m); err != nil {
       //log error
       continue
     }
     c.delete(m) //MESSAGE CONSUMED
   }(newMessage(m))

   wg.Wait()
  }
 }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Yeap, I just copied his code, because it already looks good, with a worker pattern. So why not just use it, right?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TLDR;&lt;/strong&gt; But now, I had 2 problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chain Credentials Issue&lt;/li&gt;
&lt;li&gt;Timeout Error when Consuming Message from SQS&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Problem Statement
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Chain Credentials
&lt;/h4&gt;

&lt;p&gt;Our problem was not with the worker itself. It began when we followed the SDK introduction.&lt;/p&gt;

&lt;p&gt;So the thing is, the default SDK has a sequential order for authentication. If you go to this page, “&lt;a href="https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html"&gt;Configuring the AWS SDK for Go&lt;/a&gt;”, you will find the chain credentials order.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ngK_Bjn9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AKD0CBURrYmTZOPF6l8zE6g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ngK_Bjn9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AKD0CBURrYmTZOPF6l8zE6g.png" alt="" width="880" height="467"&gt;&lt;/a&gt;Chain Credentials AWS&lt;/p&gt;

&lt;p&gt;From that documentation, we can learn that the SDK will look for ENV keys first. If they don’t exist, it looks at the shared credentials file. And if the credentials file doesn’t exist either, it looks at the IAM Role on EC2, and so on.&lt;/p&gt;

&lt;p&gt;So, what’s the problem here? In our use-case, the default configuration of the SDK loads the ENV variables first; if they don’t exist, it looks at the shared credentials file, for example via &lt;strong&gt;AWS_WEB_IDENTITY_TOKEN_FILE&lt;/strong&gt;. And in a local workspace, it looks up the credentials file in the AWS config, which is located at &lt;strong&gt;~/.aws/credentials&lt;/strong&gt; (Mac/Linux) (&lt;em&gt;for more info, you can read&lt;/em&gt; &lt;a href="https://aws.amazon.com/id/blogs/security/a-new-and-standardized-way-to-manage-credentials-in-the-aws-sdks/"&gt;here&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;This became a problem for us, since the project will be used not only by us but maybe by engineers on other teams. We were afraid that engineers who have a configured AWS config on their local machines, located at &lt;strong&gt;~/.aws/credentials&lt;/strong&gt;, might run the application locally while accidentally using the production-access credentials configured there.&lt;/p&gt;

&lt;p&gt;So, what we really want are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For local development, we use ENV variables&lt;/li&gt;
&lt;li&gt;On Staging and Production, we use the IAM Role.&lt;/li&gt;
&lt;li&gt;Avoid using the Shared Credentials File in the first place, to avoid other engineers accidentally using a production-access credentials file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;First Attempt Solution&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Our first solution: we detect the IAM Role first; if it exists, then OK. If it doesn’t exist, the application looks up the ENV keys &lt;strong&gt;AWS_ACCESS_KEY_ID&lt;/strong&gt; and &lt;strong&gt;AWS_SECRET_ACCESS_KEY&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;So we customized our chain credentials into something like this below&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
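&lt;p&gt;The gist embed above may not render in the feed. Stripped of the actual aws-sdk-go types (which the real code uses), the ordering idea is just “try the providers in a fixed order and take the first that succeeds”, with no shared-credentials-file provider in the list. A simplified stand-in, not the SDK’s API:&lt;/p&gt;

```go
package main

import (
	"errors"
	"fmt"
)

// provider is a simplified stand-in for the SDK's credentials.Provider.
type provider interface {
	Retrieve() (string, error)
}

type iamRoleProvider struct{ available bool }

func (p iamRoleProvider) Retrieve() (string, error) {
	if !p.available {
		return "", errors.New("no instance IAM role")
	}
	return "creds-from-iam-role", nil
}

type envProvider struct{ accessKeyID string }

func (p envProvider) Retrieve() (string, error) {
	if p.accessKeyID == "" {
		return "", errors.New("AWS_ACCESS_KEY_ID not set")
	}
	return "creds-from-env", nil
}

// chain tries each provider in order and returns the first that succeeds.
// Note that no shared-credentials-file provider appears in the chain.
func chain(providers ...provider) (string, error) {
	for _, p := range providers {
		if creds, err := p.Retrieve(); err == nil {
			return creds, nil
		}
	}
	return "", errors.New("no valid providers in chain")
}

func main() {
	// On an engineer's laptop there is no instance role, so the chain
	// falls through to the ENV provider and never touches ~/.aws/credentials.
	creds, err := chain(iamRoleProvider{available: false}, envProvider{accessKeyID: "local-key"})
	fmt.Println(creds, err)
}
```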



&lt;p&gt;If you look at the functions above, especially &lt;strong&gt;credProviders&lt;/strong&gt;, we specify the order for the chained credentials providers: first it looks up the instance IAM Role, and then the ENV provider. So we basically removed authentication via the shared credentials file. Whenever engineers have configured AWS credential keys on their machines in &lt;strong&gt;~/.aws/credentials&lt;/strong&gt;, it’s still safe, since the SDK will look only for the IAM Role and the ENV keys.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Issue&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
After making the custom chain credentials, we found another problem. It ran well on an EC2 instance, but not in our EKS. In short, our chain credentials only worked at the EC2 instance level; for the EKS case, it would only work if we allowed the IAM Role at the node level.&lt;/p&gt;

&lt;p&gt;Security-wise, for EKS we want to use the IAM role at the service-account or pod level, not at the node level, as in the example here: “&lt;a href="https://aws.amazon.com/id/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/"&gt;Introducing fine-grained IAM roles for service accounts&lt;/a&gt;”.&lt;/p&gt;

&lt;p&gt;So it’s obvious our custom chain credentials did not work well for EKS, and we needed to change the chain method.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Solutions&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
To fix it, we sat down with our infra engineers to discuss and debug the application. It took a whole day and many attempts, but we finally decided to customize the solution logically in our application.&lt;/p&gt;

&lt;p&gt;It turns out that to make it work in EKS, we have to enable the shared credentials file, because in EKS we will use &lt;strong&gt;AWS_WEB_IDENTITY_TOKEN_FILE&lt;/strong&gt;, based on this &lt;a href="https://aws.amazon.com/id/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/"&gt;article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So to use the web identity token file, we had to enable credentials from a file, which is what we avoided in the first place. But for the sake of working well in EKS, we decided to enable chain auth with the shared credentials file.&lt;/p&gt;

&lt;p&gt;But to avoid an engineer’s local shared credentials file &lt;strong&gt;~/.aws/credentials&lt;/strong&gt; being used accidentally, we decided to handle it logically in the application (statically written in code).&lt;/p&gt;

&lt;p&gt;We use an ENV variable &lt;strong&gt;APP_ENV&lt;/strong&gt; to check if the environment is local or not.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// IsLocal will return true if the APP\_ENV is not listed in those
// three condition

func IsLocal() bool {
  envLevel := MustHaveEnv("APP\_ENV")
  return envLevel != "production" &amp;amp;&amp;amp; 
         envLevel != "staging" &amp;amp;&amp;amp;   
         envLevel != "integration"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;So no matter what &lt;strong&gt;APP_ENV&lt;/strong&gt;’s value is, if it’s not &lt;strong&gt;production&lt;/strong&gt;, &lt;strong&gt;staging&lt;/strong&gt;, or &lt;strong&gt;integration&lt;/strong&gt;, we assume it’s local.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;As you can see, if the environment is not local (&lt;strong&gt;if !IsLocal()&lt;/strong&gt;), we use the shared config file. And if it’s local, we use the ENV keys, and they’re required, so we can avoid accidentally using production-access credentials from an engineer’s local &lt;strong&gt;~/.aws/credentials&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;With this, we finally solved our credentials problem for the SDK safely. We enforce that engineers use ENV keys locally (together with &lt;a href="https://github.com/localstack/localstack"&gt;localstack&lt;/a&gt;), and we use the shared credentials file in staging and production.&lt;/p&gt;

&lt;h4&gt;
  
  
  Using Customized HTTP Client
&lt;/h4&gt;

&lt;p&gt;The other problem appeared when we tried to consume SQS messages. But before getting into the details, to give some context: in the AWS SDK documentation we found this article, “&lt;a href="https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/custom-http.html"&gt;Creating a Custom HTTP Client&lt;/a&gt;”. On that page, we found out how to customize our HTTP client.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ri8heuly--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AcYb5jx4Rn-n4BmYbiy4x6Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ri8heuly--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AcYb5jx4Rn-n4BmYbiy4x6Q.png" alt="" width="880" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hIqLIFqh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AGBmavELtV-yL2iLWr-YtHQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hIqLIFqh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AGBmavELtV-yL2iLWr-YtHQ.png" alt="" width="880" height="798"&gt;&lt;/a&gt;Custom HTTP Client on AWS SDK&lt;/p&gt;

&lt;p&gt;So we just followed those steps. We customized our HTTP client: we set the timeout, the maximum idle connections, etc. Then we ran it locally (thanks to &lt;a href="https://github.com/localstack/localstack"&gt;localstack&lt;/a&gt;), tested it, and everything was fine. It worked perfectly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Issue&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
But then, when we deployed it to our EKS, we got a lot of errors.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;time="2020-02-06T07:23:02Z" level=error msg="there was an error reading messages from SQS RequestError: send request failed\ncaused by: Post [https://sqs.ap-southeast-2.amazonaws.com/](https://sqs.ap-southeast-2.amazonaws.com/): net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
time="2020-02-06T07:23:02Z" level=error msg="there was an error reading messages from SQS RequestError: send request failed\ncaused by: Post [https://sqs.ap-southeast-2.amazonaws.com/](https://sqs.ap-southeast-2.amazonaws.com/): net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Actually this is not really a big problem, but it is annoying and increases our log storage, because it happens continuously due to our long-running message pulling from SQS over HTTP calls. Solving this took me 2 days!!! My whole 2 days ruined because of this. WTF!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Solutions&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
I was frustrated; no one on my team understood why this issue was happening. So I decided to ask in the Gophers Slack group.&lt;/p&gt;

&lt;p&gt;Thanks to &lt;a href="https://app.slack.com/team/UJMFVPR8F"&gt;&lt;strong&gt;André Eriksson&lt;/strong&gt;&lt;/a&gt;, &lt;a href="https://app.slack.com/team/U46RP390W"&gt;&lt;strong&gt;Zach Easey&lt;/strong&gt;&lt;/a&gt;, and others in the Gophers Slack (&lt;a href="https://gophers.slack.com/archives/C029RQSEE/p1580979593499800"&gt;https://gophers.slack.com/archives/C029RQSEE/p1580979593499800&lt;/a&gt;) for helping me solve my problem.&lt;/p&gt;

&lt;p&gt;So to summarize: this happened because we had long polling enabled, but with long polling enabled we had also customized our HTTP client.&lt;/p&gt;

&lt;p&gt;The solution: when consuming messages from SQS with long polling, we shouldn’t customize our HTTP client; we should use the default one. If we customize the HTTP client (the timeout, etc.), it competes with the long-polling connection time, which is what caused the &lt;strong&gt;request canceled (Client.Timeout exceeded while awaiting headers)&lt;/strong&gt; errors to happen so often.&lt;/p&gt;

&lt;p&gt;This is quite tricky, because as far as I know, to avoid bad timeouts and a bad user experience, I usually customize my HTTP client. But in this case, for SQS with the long-polling call enabled, the right solution is to use the default HTTP client, which doesn’t have the timeout settings.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusions
&lt;/h3&gt;

&lt;p&gt;The takeaways from this whole project of integrating SQS with Go in EKS are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We manually handle the custom chain credentials for the AWS SDK to avoid engineers accidentally using the production-access credentials file in their local workspace, &lt;strong&gt;~/.aws/credentials&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;To enable the IAM role at the pod level, we need to allow the SDK to use the shared credentials file, which is quite tricky since we disable it in the local workspace.&lt;/li&gt;
&lt;li&gt;We handle the chain credentials logically in code, using an extra ENV key like &lt;strong&gt;APP_ENV&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;If you enable the long-polling connection to consume SQS messages, use the default HTTP client; don’t customize its timeout, or you will face a lot of timeout errors.&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>sqs</category>
      <category>kubernetes</category>
      <category>softwaredevelopment</category>
      <category>go</category>
    </item>
    <item>
      <title>HTTP — PATCH Method! I’ve Thought the Wrong Way!!!</title>
      <dc:creator>Iman Tumorang</dc:creator>
      <pubDate>Tue, 18 Feb 2020 09:40:29 +0000</pubDate>
      <link>https://dev.to/bxcodec/http-patch-method-ive-thought-the-wrong-way-hep</link>
      <guid>https://dev.to/bxcodec/http-patch-method-ive-thought-the-wrong-way-hep</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rz9obEVr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AnIYKJjAv-ufoRMNW" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rz9obEVr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AnIYKJjAv-ufoRMNW" alt="" width="880" height="587"&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@freetousesoundscom?utm_source=medium&amp;amp;utm_medium=referral"&gt;Free To Use Sounds&lt;/a&gt; on &lt;a href="https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’ve been working in software engineering for more than 3 years now. And surprisingly, in all those years, I had never faced a PATCH endpoint, until now.&lt;/p&gt;

&lt;p&gt;Because of that, I had never really used the PATCH method before, nor developed one. I remember, 3 years ago, asking my senior about the difference between PUT and PATCH: PUT replaces the entire item, while PATCH changes only the specified fields.&lt;/p&gt;

&lt;p&gt;That’s it! But in reality, throughout my career, we never used the PATCH method; we simply didn’t need it at the time. So I lived with this belief, and whenever I talked to others, they usually understood what I meant.&lt;/p&gt;

&lt;h4&gt;
  
  
  My Original Thought about PATCH
&lt;/h4&gt;

&lt;p&gt;So, since learning about HTTP methods and REST, I had only used these methods: GET, DELETE, POST, and PUT. I never had a chance (yet) to use PATCH.&lt;/p&gt;

&lt;p&gt;We know clearly how PUT works, and I’ve been using it for a long time now. For example, say I have this JSON and I perform a PUT operation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /user/bxcodec
{
  "name": "Iman",
  "username": "bxcodec"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Applying HTTP PUT&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PUT /user/bxcodec
{
  "name": "Iman Tumorang",
  "username": "bxcodec"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Final Results&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /user/bxcodec
{
  "name": "Iman Tumorang",
  "username": "bxcodec"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then, derived from PUT, comes PATCH which, from what people say, “ &lt;strong&gt;does not replace the entire item, but only the specified fields&lt;/strong&gt; ”.&lt;/p&gt;

&lt;p&gt;So, from that statement, this is how I assumed PATCH worked by default. For example, I have this JSON.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /user/bxcodec
{
  "name": "Iman",
  "username": "bxcodec"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Applying HTTP Patch&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PATCH /user/bxcodec
{
  "name": "Iman Tumorang"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Final Results&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /user/bxcodec
{
  "name": "Iman Tumorang",
  "username": "bxcodec"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Right? This is how I thought HTTP PATCH should work. And I lived with this belief for more than 3 years.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Plot Twists
&lt;/h4&gt;

&lt;p&gt;But just now, while writing this, I found this in the Golang docs: &lt;a href="https://golang.org/pkg/net/http/"&gt;https://golang.org/pkg/net/http/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xT4wSQFr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A_wiYDs1fk9SHUijw7h2xnA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xT4wSQFr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A_wiYDs1fk9SHUijw7h2xnA.png" alt="" width="880" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Out of curiosity, I noticed something different there: a comment referencing RFC 5789. Why the heck does PATCH have a different RFC, unlike the others? Why is it different?&lt;/p&gt;

&lt;p&gt;After looking at RFC 5789, I found something odd. From the description:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pH3nOlO9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AeSO7WCG1sXltplNrbpyP3Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pH3nOlO9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AeSO7WCG1sXltplNrbpyP3Q.png" alt="" width="880" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1PSrh8tU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/782/1%2AUR7nnaz43MfNnLdqjFPW9A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1PSrh8tU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/782/1%2AUR7nnaz43MfNnLdqjFPW9A.png" alt="" width="782" height="304"&gt;&lt;/a&gt;RFC 5789&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;With PATCH, however, the enclosed entity &lt;strong&gt;contains a set of instructions&lt;/strong&gt; describing how a resource currently residing on the origin server should be modified to produce a new version.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So the RFC says that, in the request body, I have to specify a set of actions or instructions describing how to modify the resource at the request URI. But how do I express those actions in the request body? How should I format them?&lt;/p&gt;

&lt;h4&gt;
  
  
  Introducing JSON PATCH
&lt;/h4&gt;

&lt;p&gt;To answer the previous question, how should I format the action list in my request body? I found an interesting thing: another RFC called JSON Patch (RFC 6902), which explains how to express operations on JSON documents.&lt;/p&gt;

&lt;p&gt;A short example of how JSON Patch works: say we have this JSON.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "name": "Iman",
  "username": "bxcodec"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using the JSON-PATCH.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
  { "op": "replace", "path": "/name", "value": "Iman Tumorang" },
  { "op": "add", "path": "/likes", "value": ["go", "blogging"] }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Patched results.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "name": "Iman Tumorang",
  "username": "bxcodec",
  "likes": [
    "go",
    "blogging"
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So in JSON Patch, I define the operation, the path, and the value for modifying the resource. All the available operations can be seen in &lt;a href="https://tools.ietf.org/html/rfc6902"&gt;RFC 6902&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And here’s the plot twist: surprisingly, HTTP PATCH and JSON Patch are related. When I opened RFC 6902, I found this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P_jj8PWQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Atq1Mcg789vW9MUZMAIfOJw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P_jj8PWQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Atq1Mcg789vW9MUZMAIfOJw.png" alt="" width="880" height="208"&gt;&lt;/a&gt;RFC 6902&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;JSON Patch is a format (identified by the media type “application/ json-patch+json”) for expressing a sequence of operations to apply to a target JSON document; &lt;strong&gt;it is suitable for use with the HTTP PATCH method&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Gotcha!&lt;/p&gt;

&lt;p&gt;So, simply put, for HTTP PATCH we can use JSON Patch as the request body: the action list is defined using the JSON Patch format.&lt;/p&gt;

&lt;p&gt;Combining HTTP PATCH and JSON Patch in an example looks like this below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /user/bxcodec
{
  "name": "Iman",
  "username": "bxcodec"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Applying HTTP Patch&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PATCH /user/bxcodec
[
  { "op": "replace", "path": "/name", "value": "Iman Tumorang" }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Final Results.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /user/bxcodec
{
  "name": "Iman Tumorang",
  "username": "bxcodec"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Conclusions
&lt;/h4&gt;

&lt;p&gt;Today, after living with a wrong belief, I learned something new. HTTP PATCH is meant to be implemented differently: we can use JSON Patch as the request body of a PATCH request.&lt;/p&gt;

&lt;p&gt;To be honest, though, I still feel this approach is not really convenient, and it adds an extra layer to our implementation compared to what I used to think. Luckily, there are already a lot of JSON Patch libraries out there, like &lt;a href="https://github.com/evanphx/json-patch"&gt;https://github.com/evanphx/json-patch&lt;/a&gt; for Golang, &lt;a href="https://www.npmjs.com/package/fast-json-patch"&gt;https://www.npmjs.com/package/fast-json-patch&lt;/a&gt; for TypeScript, and many more. So I don’t think it will be hard to implement in my next projects. But yeah, let’s see haha.&lt;/p&gt;

&lt;p&gt;But yeah, even though it feels more convenient to change the value directly (without using JSON Patch, and not following RFC 5789) LOL, I will try to follow the standard in my next projects.&lt;/p&gt;





</description>
      <category>programming</category>
      <category>rfc5789</category>
      <category>httprequest</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>One of the Important Skills that Every Engineer Should Have: Write A good README</title>
      <dc:creator>Iman Tumorang</dc:creator>
      <pubDate>Tue, 29 Oct 2019 06:13:42 +0000</pubDate>
      <link>https://dev.to/bxcodec/one-of-the-important-skills-that-every-engineer-should-have-write-a-good-readme-1bcl</link>
      <guid>https://dev.to/bxcodec/one-of-the-important-skills-that-every-engineer-should-have-write-a-good-readme-1bcl</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1pi_Fp2y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2A7ZGJReGlAHXlOwzk" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1pi_Fp2y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2A7ZGJReGlAHXlOwzk" alt="" width="880" height="586"&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@nourwageh?utm_source=medium&amp;amp;utm_medium=referral"&gt;Nour Wageh&lt;/a&gt; on &lt;a href="https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Nowadays, software is really shaping the whole world. There are so many companies and products built on technology that help people in their daily activities. When I want to discuss something with someone, I just open WhatsApp or Telegram and ask my friends there.&lt;/p&gt;

&lt;p&gt;When I want to book a hotel or just a small room, there’s Airbnb. A flight ticket? There are Traveloka, Tiket, etc. Transportation? We can use Gojek or Grab. Hungry? Go-Food or Grab-Food. (&lt;em&gt;I’m talking about tech products in SEA, especially in Indonesia&lt;/em&gt;).&lt;/p&gt;

&lt;p&gt;All of those products were created with technology, and no wonder software engineer has become one of the hottest jobs nowadays. A great product needs great software, and great software needs great engineers behind it.&lt;/p&gt;

&lt;p&gt;Not only commercial products but also open-source projects are becoming popular now, like Kubernetes, Docker, Golang, NodeJS, etc.&lt;/p&gt;

&lt;p&gt;And not only the big open-source projects: there are a lot of small ones too, like libraries for specific functions, written in Golang, NodeJS, Java, etc.&lt;/p&gt;

&lt;p&gt;But the problem is, there are a lot of projects that might be useful to society but don’t have a good, descriptive README. That makes the library or open-source project effectively useless, or as my uncle said: “not classy”.&lt;/p&gt;

&lt;p&gt;If a library doesn’t have a README, we can just switch to another library with a similar function and a better README explaining how to use it, how to run it, and how to install its dependencies.&lt;/p&gt;

&lt;p&gt;But what if this happens to an internal company project? Can we switch to another service with a similar function? LOL! 🔥&lt;/p&gt;

&lt;h3&gt;
  
  
  What is README?
&lt;/h3&gt;

&lt;p&gt;Anyway, before moving further, let’s find out what a README is. Why is it important? How do we create one?&lt;/p&gt;

&lt;p&gt;So, if we look at Wikipedia,&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A README file contains information about other files in a directory or archive of computer software. A form of documentation, it is usually a simple plain text file called READ.ME, README.TXT,[1] README.md (for a text file using markdown markup), README.1ST — or simply README ~ &lt;a href="https://en.wikipedia.org/wiki/README"&gt;https://en.wikipedia.org/wiki/README&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In general, a README is just simple, general information about a software project that lives in the same directory/repository as the project. It explains the project’s details in a short way: What is the project used for? What are the dependencies? How do you install it, compile it, or run it locally? Where do you go when there’s an issue? What is the current status of the project: experimental, production, or still in development?&lt;/p&gt;

&lt;p&gt;We put all of that information in the README, so when another engineer reads the project, they can understand in just 5 minutes what it is used for, its functions or endpoints (for REST API projects), etc., without having to look at the source code.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Why is it important?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Yeah, why is this so important?&lt;/p&gt;

&lt;p&gt;Before saying this is important, I want to clarify: everything I write here is purely my opinion. You may agree or not, but I’m describing what I think is right based on my experience. If you disagree, let’s discuss in the comments. Maybe I’m wrong, maybe I’ve experienced the wrong practice; I also want to learn what the right one is.&lt;/p&gt;

&lt;p&gt;So why is this important?&lt;/p&gt;

&lt;p&gt;First, let me ask a question: what is good software? How do we decide that software is good enough?&lt;/p&gt;

&lt;p&gt;In my opinion, one of the criteria for calling software “good” is that it is useful to others: used not only by us who developed it but by other people too, whether engineers, users, or other systems.&lt;/p&gt;

&lt;p&gt;And building software costs us a lot of time, and maybe money, so that it can be used by others. Also, when building that software, we may be working in a team, writing the same product together to make useful software.&lt;/p&gt;

&lt;p&gt;And that’s when we have to think about the README. I have a few reasons why we need a README in our projects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Engineer Movement&lt;/li&gt;
&lt;li&gt;Visibility and Development Pace&lt;/li&gt;
&lt;li&gt;We’re not the only one!&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Engineers Movement
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Jvp63m7b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2ACmG9ocYdcTKPL_Ly" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Jvp63m7b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2ACmG9ocYdcTKPL_Ly" alt="" width="880" height="584"&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@tormius?utm_source=medium&amp;amp;utm_medium=referral"&gt;Adri Tormo&lt;/a&gt; on &lt;a href="https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When working at a company, we won’t work on the same project forever. There will be a new, interesting project, or maybe we will move to another team, or be promoted to a higher level and stop coding altogether.&lt;/p&gt;

&lt;p&gt;If that happens, the new engineer responsible for our sh*t will have a hard time with the transition. It’s okay if it’s only one repository, but what happens if we leave behind many repositories we maintained? The engineer replacing us will have a hard, stressful time.&lt;/p&gt;

&lt;p&gt;Or maybe we work in a team, and every team member already understands all the projects very clearly, including the other repositories, even without a README. Then we start to think a README is not really important, because everyone already understands every repository our team maintains.&lt;/p&gt;

&lt;p&gt;But the thing is, if our company grows and scales, eventually our team will grow too, and we will hire new engineers to help manage the services. &lt;em&gt;Unless the team is made of super-engineers, or so-called&lt;/em&gt; &lt;a href="http://10x.engineer/"&gt;&lt;em&gt;10X engineers&lt;/em&gt;&lt;/a&gt;&lt;em&gt;, who can maintain everything with just a few people without hiring anyone new; if that’s the case, my apologies for underestimating the team’s skill. 🙇‍♂️&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So, no matter how boring writing the README is, please keep in mind that we’re not the only ones who will work on the project. We also need to think about the future engineers who will handle all of these projects.&lt;/p&gt;

&lt;p&gt;Let me tell you a dark story. I’ve experienced working on a project that didn’t have a README. It took me 3 weeks just to make it run locally. There was no guidance; I had to dig through the source code for the configuration, such as the database URI, PubSub env variables, etc. It was really frustrating!&lt;/p&gt;

&lt;h4&gt;
  
  
  Visibility and Development Pace
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ubl0dyMv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AoqMf5cfil_-xEpcf" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ubl0dyMv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AoqMf5cfil_-xEpcf" alt="" width="880" height="495"&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@marcojodoin?utm_source=medium&amp;amp;utm_medium=referral"&gt;Marc-Olivier Jodoin&lt;/a&gt; on &lt;a href="https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Just by reading the README, we can understand what the repository is used for, without spending time looking at the source code.&lt;/p&gt;

&lt;p&gt;Let me give you a comparison. I have 2 repositories here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/bxcodec/saint"&gt;https://github.com/bxcodec/saint&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/bxcodec/gomodmultiplies"&gt;https://github.com/bxcodec/gomodmultiplies&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In 5 seconds, can you understand how to use each of them in your project? Both repositories are libraries for multiplication operations. Which one is more understandable?&lt;/p&gt;

&lt;p&gt;So that’s one of the reasons we need to put a README in our projects. It doesn’t have to be fancy, with colors and everything. At a minimum, it should let us use the project without asking the person who built it: importing it into our projects (if it’s a library), or running it locally (if it’s a service or REST API project).&lt;/p&gt;

&lt;p&gt;And the faster engineers understand the repositories, the faster they will work, and the development pace will stay the same from start to finish.&lt;/p&gt;

&lt;h4&gt;
  
  
  We’re not the only one!
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kP1pOdfM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2ACCxHNBeMCt03JyUs" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kP1pOdfM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2ACCxHNBeMCt03JyUs" alt="" width="880" height="659"&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@austindistel?utm_source=medium&amp;amp;utm_medium=referral"&gt;Austin Distel&lt;/a&gt; on &lt;a href="https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then there’s a question: “What if I built software for myself? It’s useful for me, but maybe not to others.” Well, if it’s just for personal consumption, it’s up to you whether to create a README. But when you solve a problem for yourself, it may be other people’s problem too. So putting up a good README may help others use it.&lt;/p&gt;

&lt;p&gt;Let me tell another story from a few months ago, when I was developing a feature in one of my side projects. I found a blog post by an engineer who said he created a library as a side project just for fun. I thought what he worked on was amazing: an Angular plugin for a text editor with the same functionality as Medium’s.&lt;/p&gt;

&lt;p&gt;But when I looked at the repository on GitHub (I don’t want to post it here, because I don’t want to make it look bad since he built it for fun), I read the README and it was outdated. I couldn’t use the library. WTF. So I just passed on it and looked for other libraries with similar functions, a better README, and guidance on how to use them.&lt;/p&gt;

&lt;p&gt;So what I’m saying is: even if it’s just a fun project, not used in production or at our company, putting up an updated, descriptive README may help millions of people out there. We never know whether what we build as a side project will be useful to someone on the other side of the earth.&lt;/p&gt;

&lt;h3&gt;
  
  
  Takeaways
&lt;/h3&gt;

&lt;p&gt;So the takeaway from this story is that we have to create a README in every project we work on.&lt;/p&gt;

&lt;p&gt;Even I am still learning and forcing myself to create a README in all of my projects. I remember, at my previous company, sometimes I was also too lazy to write one. At the time, I didn’t think about what would happen if the team scaled, and I didn’t have many repositories and projects to maintain, so I never considered the README important.&lt;/p&gt;

&lt;p&gt;So to be honest, I also only learned the importance of a README after finally experiencing the hard time of learning an application that doesn’t have one. I was totally confused and stressed. And it’s not fun! NOT FUN!&lt;/p&gt;





</description>
      <category>softwareengineering</category>
      <category>coding</category>
      <category>softwaredevelopment</category>
      <category>readme</category>
    </item>
    <item>
      <title>Understanding about RFC 3339 for Datetime Formatting in Software Engineering</title>
      <dc:creator>Iman Tumorang</dc:creator>
      <pubDate>Mon, 14 Oct 2019 13:34:26 +0000</pubDate>
      <link>https://dev.to/bxcodec/understanding-about-rfc-3339-for-datetime-formatting-in-software-engineering-4jo7</link>
      <guid>https://dev.to/bxcodec/understanding-about-rfc-3339-for-datetime-formatting-in-software-engineering-4jo7</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;A small and simple thing that not every engineer knows, but a very important one that every engineer should understand.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hx-DJCbS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AeKULaIgRlJNEPbHH" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hx-DJCbS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AeKULaIgRlJNEPbHH" alt="" width="880" height="586"&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@aronvisuals?utm_source=medium&amp;amp;utm_medium=referral"&gt;Aron Visuals&lt;/a&gt; on &lt;a href="https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A few days ago, when building &lt;a href="https://mabar.id"&gt;Mabar&lt;/a&gt;, we had a great debate about datetime formatting. There was an issue with our timezone not being consistent. So, after a long fight and discussion, we came to an agreement to use a standard for our timezone.&lt;/p&gt;

&lt;p&gt;We finally decided to use RFC 3339 as the standard date-time format. This means both backend and frontend will use this format to communicate datetimes. We also agreed to use UTC+0 as the default timezone, both when creating an event and when receiving event details from the server, and to let the frontend and backend each convert it to their own timezone.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;So what is RFC 3339?&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction RFC 3339
&lt;/h3&gt;

&lt;p&gt;RFC stands for Request For Comments. An RFC is a formal document from the Internet Engineering Task Force (&lt;a href="https://www.ietf.org/"&gt;IETF&lt;/a&gt;) that is the result of committee drafting and subsequent review by interested parties.&lt;/p&gt;

&lt;p&gt;There are already many RFC documents released by this committee, and they have become standards across the industry. One of them is RFC 3339, a document on datetime formatting. &lt;em&gt;The RFC 3339 document can be accessed here:&lt;/em&gt; &lt;a href="https://www.ietf.org/rfc/rfc3339.txt"&gt;&lt;em&gt;https://www.ietf.org/rfc/rfc3339.txt&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Generally, the RFC document mainly discusses datetime formatting; to summarize, the proposed datetime format looks like the example below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2019-10-12T07:20:50.52Z
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yap, just like that.&lt;/p&gt;

&lt;p&gt;But there’s a question. Datetime formatting is already standardized by the ISO, in the ISO 8601 standard. And for people already familiar with ISO 8601, RFC 3339 looks pretty similar. What’s different?&lt;/p&gt;

&lt;p&gt;So, if you look again at the RFC document, RFC 3339 follows an ISO 8601 profile for Internet datetimes, as clearly stated in section 5.6:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1r6CiIUl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A1PiiebUwve3R3WiCzUa6LQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1r6CiIUl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A1PiiebUwve3R3WiCzUa6LQ.png" alt="" width="880" height="539"&gt;&lt;/a&gt;RFC 3339 Chapter 5.6&lt;/p&gt;

&lt;p&gt;So actually, there are no big differences from ISO 8601. The one small thing that makes the two different is the “T” separator between date and time: ISO 8601 uses the “T” character to separate the date and time, while in RFC 3339 you can replace the “T” character with a space.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# This is acceptable in ISO 8601 and RFC 3339 (with T)&lt;/span&gt;

2019-10-12T07:20:50.52Z

&lt;span class="c"&gt;# This is only accepted in RFC 3339 (without T)&lt;/span&gt;
2019-10-12 07:20:50.52Z

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it. Overall, it’s still essentially the same as ISO 8601.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding about Time Zone
&lt;/h3&gt;

&lt;p&gt;I was a bit shocked when I asked my close friends about RFC 3339: none of them understood it. Even for ISO 8601, only a few of them knew the details.&lt;/p&gt;

&lt;p&gt;Especially when reading the timezone part of the format. Out of 10 of my friends, I can say only 1 understood it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2019-10-12T07:20:50.52Z
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Take a look at the example above. When I asked someone about it, they said it is 2019-10-12 07:20:50.52 in Jakarta time, because we live in Jakarta, so they assumed it was Jakarta time. But that’s wrong: if we convert that time to Jakarta time, it should be 2019-10-12 14:20:50.52.&lt;/p&gt;

&lt;p&gt;How is that possible? How can I say that the Jakarta time of that example should be 2019-10-12 14:20:50.52?&lt;/p&gt;

&lt;p&gt;In RFC 3339, we can also read the timezone from the format. It is indicated by the &lt;strong&gt;“Z”&lt;/strong&gt; suffix, which means UTC+0. “Z” stands for the Zulu timezone, which is the same as GMT or UTC (&lt;a href="https://stackoverflow.com/a/9706777/4075313"&gt;https://stackoverflow.com/a/9706777/4075313&lt;/a&gt;). So if we put Z on the datetime, its timezone is UTC+0.&lt;/p&gt;

&lt;p&gt;More detailed example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;2019-10-12T07:20:50.52Z  &lt;span class="c"&gt;#(UTC+0)&lt;/span&gt;
2019-10-12T07:20:50.52+00:00 &lt;span class="c"&gt;#(UTC+0)&lt;/span&gt;
2019-10-12T14:20:50.52+07:00 &lt;span class="c"&gt;#(UTC+7)&lt;/span&gt;
2019-10-12T03:20:50.52-04:00 &lt;span class="c"&gt;#(UTC-4)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Look at the comments; they show how the timezone offset is written in the datetime format.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Is It Important?
&lt;/h3&gt;

&lt;p&gt;This really matters for handling requests that come from many time zones. Even if your application only accepts requests from your own time zone (for now), the format may still not be treated carefully by engineers.&lt;/p&gt;

&lt;p&gt;And why is it important for engineers, even though a few programming languages like Golang already handle it? Because an engineer who does not understand it well can easily introduce new issues later.&lt;/p&gt;

&lt;p&gt;Let me tell you a story.&lt;/p&gt;

&lt;p&gt;So, I will give an example of why this is really important. It happened recently to me and a friend (a friend from my office, not from Mabar). We had an issue with pending transactions that we needed to look into and verify.&lt;/p&gt;

&lt;p&gt;In our application we use UTC+0, and he (my friend) knows this. So to fix the issue, I just needed to get the details and check whether there were other pending transactions around the same time as the pending one.&lt;/p&gt;

&lt;p&gt;The problem came when I asked him for the transaction time (the created time) of the row in the database. He automatically added the hours for our time zone, UTC+7, but did not change the time zone suffix. That cost me extra time verifying all the transactions around that time, because I was querying the wrong DateTime range.&lt;/p&gt;

&lt;p&gt;For example,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#This is our DateTime stored in DB&lt;/span&gt;
2019-10-12T07:20:50.52Z
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I asked him to tell me the transaction time through chat. And he gave me this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# He adds the hour with +7 but not change the timezone&lt;/span&gt;

2019-10-12T14:20:50.52Z
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So what’s wrong here? Look at the “hour” value. He added 7 to it and sent it to me, but the time zone is still “Z” (UTC+0). Since I know how to read RFC 3339, I assumed it was UTC+0, because he wrote “Z”.&lt;/p&gt;

&lt;p&gt;With that date-time, I ran some queries against the DB, but what I got was not what I wanted. When I queried by ID, the result was different from what he had given me. I was afraid some update query to the DB had made it different.&lt;/p&gt;

&lt;p&gt;That confused me. To verify it, I asked him again whether it was a UTC+0 value or not, and he said he had already added +7 to the hour. And I felt like, arghhh…. 🤯&lt;/p&gt;

&lt;p&gt;So I told him: when you add the hours, please change the time zone too, so the date-time does not confuse whoever reads it.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Takeaways&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;It is really important for engineers to understand this format, to avoid inconsistent data. Luckily, in my story above, it was still a human reading the given date-time. If an application had read that DateTime, it would have produced a lot of inconsistent data, because the hours were added manually with the wrong time zone.&lt;/p&gt;

&lt;p&gt;So here are the takeaways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“Z” stands for the Zulu time zone (UTC+0), which is equal to +00:00 in RFC 3339.&lt;/li&gt;
&lt;li&gt;RFC 3339 follows the ISO 8601 DateTime format. One notable difference is that RFC 3339 allows replacing the “T” separator with a space.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  References:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.ietf.org/rfc/rfc3339.txt"&gt;https://www.ietf.org/rfc/rfc3339.txt&lt;/a&gt;
&lt;a href="https://medium.com/media/1f83e4a733ad3206f47e6dd38aa4fc6d/href"&gt;&lt;/a&gt;&lt;a href="https://medium.com/media/1f83e4a733ad3206f47e6dd38aa4fc6d/href"&gt;https://medium.com/media/1f83e4a733ad3206f47e6dd38aa4fc6d/href&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>programming</category>
      <category>softwaredevelopment</category>
      <category>softwareengineering</category>
      <category>rfc3339</category>
    </item>
    <item>
      <title>Today I Learned — Fix: go get private repository return error reading sum.golang.org/lookup</title>
      <dc:creator>Iman Tumorang</dc:creator>
      <pubDate>Mon, 23 Sep 2019 04:02:25 +0000</pubDate>
      <link>https://dev.to/bxcodec/today-i-learned-fix-go-get-private-repository-return-error-reading-sumgolangorglookup-4kjh</link>
      <guid>https://dev.to/bxcodec/today-i-learned-fix-go-get-private-repository-return-error-reading-sumgolangorglookup-4kjh</guid>
      <description>&lt;h4&gt;
  
  
  Fixing the error: go get on a private Go module fails with the message pattern verifying git-host… reading sum.golang.org/lookup … 410 Gone
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9DZK7gzi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AcTBtBcPBD7gbyWy5RKGWLw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9DZK7gzi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AcTBtBcPBD7gbyWy5RKGWLw.png" alt="" width="880" height="209"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today was a bright day, a Sunday Funday, but it was ruined by something that made me furious for an hour. Actually, I had been stuck on this issue since the night before (Saturday night), but since I was really tired, I decided to stop, rest, and continue on Sunday.&lt;/p&gt;

&lt;p&gt;So, I’ve been working on Mabar (&lt;a href="https://mabar.id"&gt;https://mabar.id&lt;/a&gt;) as one of my side projects. It’s still in beta and still lacks many features; you can try it on the Android Play Store &lt;a href="http://bit.ly/2mcWvv0"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have high expectations for Mabar: it should be a platform available on many interfaces (mobile, desktop, web). For the tech stack, I use Golang for the backend, Kubernetes for the infrastructure, and DigitalOcean behind them as the server.&lt;/p&gt;

&lt;h4&gt;
  
  
  Problems
&lt;/h4&gt;

&lt;p&gt;So the issue is, I have a private Go module, a simple library that is imported by my backend API. But somehow, I couldn’t fetch the module; the go get command always failed.&lt;/p&gt;

&lt;p&gt;Let’s say the package name is lucifer. It kept throwing an error in the terminal, which really made me mad.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ go get -v bitbucket.org/compay/lucifer
go: finding bitbucket.org/compay/lucifer latest
go: downloading bitbucket.org/compay/lucifer v0.0.0-20190921175342-61a76c096369
**verifying bitbucket.org/compay/lucifer@v0.0.0-20190921175342-61a76c096369: bitbucket.org/compay/lucifer@v0.0.0-20190921175342-61a76c096369: reading** [**https://sum.golang.org/lookup/bitbucket.org/**](https://sum.golang.org/lookup/bitbucket.org/gokar/lucifer@v0.0.0-20190921175342-61a76c096369) **compay** [**/lucifer@v0.0.0-20190921175342-61a76c096369**](https://sum.golang.org/lookup/bitbucket.org/gokar/lucifer@v0.0.0-20190921175342-61a76c096369) **: 410 Gone**
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you look at the message, it says the package is gone, i.e. not available on sum.golang.org.&lt;/p&gt;

&lt;p&gt;At first, I thought this happened because I forgot to enforce SSH on Bitbucket, as I once wrote about here: &lt;a href="https://medium.com/easyread/today-i-learned-fix-go-get-private-repository-return-error-terminal-prompts-disabled-8c5549d89045"&gt;https://medium.com/easyread/today-i-learned-fix-go-get-private-repository-return-error-terminal-prompts-disabled-8c5549d89045&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But it still didn’t work. go get kept returning the error, even after I forced it to use SSH only.&lt;/p&gt;

&lt;h4&gt;
  
  
  Root Causes
&lt;/h4&gt;

&lt;p&gt;So, after searching for this issue on the internet, I found the root cause. It only happens on Golang from version 1.13 onward. I verified this after reading the release notes: &lt;a href="https://golang.org/doc/go1.13#modules"&gt;https://golang.org/doc/go1.13#modules&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It happened because this Go version enables the module proxy and the checksum database (sum.golang.org) by default, and a private module is naturally not listed there.&lt;/p&gt;

&lt;h4&gt;
  
  
  Solutions
&lt;/h4&gt;

&lt;p&gt;Actually, there are a few solutions we can choose from.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Using GOPRIVATE&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As stated in the release doc of Go 1.13,&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The new GOPRIVATE environment variable indicates module paths that are not publicly available. It serves as the default value for the lower-level GONOPROXY and GONOSUMDB variables, which provide finer-grained control over which modules are fetched via proxy and verified using the checksum database.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This means that to solve the issue above, we can simply set the GOPRIVATE variable on our system. Add this command to your ~/.bashrc. &lt;em&gt;*Change the export value based on your company/org name.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;**export**  **GOPRIVATE** ="gitlab.com/ **idmabar** ,bitbucket.org/ **idmabar** ,github.com/ **idmabar**"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To verify that this worked, you can run the go env command. The output should look something like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/imantumorang/Library/Caches/go-build"
GOENV="/Users/imantumorang/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"  
GOOS="darwin"
GOPATH="/Users/imantumorang/go"
GOPRIVATE="gitlab.com/idmabar,bitbucket.org/idmabar,github.com/idmabar"
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/Cellar/go/1.13/libexec"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.13/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And now I can run go get for my private repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ go get bitbucket.org/company/lucifer
go: finding bitbucket.org/company/lucifer latest
go: downloading bitbucket.org/company/lucifer v0.0.0-20190921175342-61a76c096369
go: extracting bitbucket.org/company/lucifer v0.0.0-20190921175342-61a76c096369
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This environment variable tells the go get command to skip the public proxy and checksum database and fetch the package directly from the private host.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Using GONOSUMDB&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Another solution might be to use the GONOSUMDB variable. I haven’t tried this yet, but it seems to work, based on this proposal: &lt;a href="https://go.googlesource.com/proposal/+/master/design/25530-sumdb.md"&gt;https://go.googlesource.com/proposal/+/master/design/25530-sumdb.md&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So you can set this in your environment variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;**export**  **GONOSUMDB** ="gitlab.com/idmabar,bitbucket.org/idmabar,github.com/idmabar"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, this issue only happens on Golang 1.13 and later. So before updating your Golang version, make sure to set this environment variable.&lt;/p&gt;

&lt;p&gt;Here are a few links related to this issue. Thanks to &lt;a href="https://stackoverflow.com/users/12052086/noveaustack"&gt;noveaustack&lt;/a&gt; for finding this and posting it on Stack Overflow; I’m reposting it here because it was new to me and something I just learned.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;References&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Stackoverflow answer: &lt;a href="https://stackoverflow.com/a/57887036/4075313"&gt;https://stackoverflow.com/a/57887036/4075313&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Proposal about Go Sum DB for go module: &lt;a href="https://go.googlesource.com/proposal/+/master/design/25530-sumdb.md"&gt;https://go.googlesource.com/proposal/+/master/design/25530-sumdb.md&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Proxying Checksum DB: &lt;a href="https://docs.gomods.io/configuration/sumdb/"&gt;https://docs.gomods.io/configuration/sumdb/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Github Issues related to this issue: &lt;a href="https://github.com/golang/go/issues/33985"&gt;#33985&lt;/a&gt; and &lt;a href="https://github.com/golang/go/issues/32291"&gt;#32291&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>go</category>
      <category>api</category>
      <category>programming</category>
      <category>softwaredevelopment</category>
    </item>
  </channel>
</rss>
