JingIsCoding

How we build and maintain a Golang API service

Building a scalable, secure, highly available API service is not an easy task, especially when development and deployment happen rapidly and constantly. It requires a clean, test-covered codebase, robust infrastructure, real-time visibility and monitoring into system behavior and performance, and an error-reporting and incident-alerting system. More importantly, people need to foster an engineering culture that digs out the root cause instead of putting out patches to fix the immediate problem, stays aware of tech debt, and understands the trade-off between short-term deliverables and long-term maintainability.

Here at Chowbus, we have been building microservices in Golang and Ruby for a few years now, and we are still learning and experimenting in various areas. Meanwhile, there are some working practices and paradigms I’d like to discuss with the community. In this article we will share a boilerplate that is a simplified version extracted from our production codebase, and we will talk about some basic ideas that should help maintain the project's quality and extend its lifespan.

Please refer to https://github.com/JingIsCoding/api-server-boilerplate for a quick setup. We will not dive into the details of the boilerplate in this article, as the README file provides a much deeper introduction to the different aspects of the codebase.

Hierarchy and data-flow


One general rule when building an application is to keep modules and layers loosely coupled from each other. In the context of Golang, you may want to always declare interfaces between modules and layers; that makes tasks like writing unit tests easier, since you can mock out dependencies to cover every logic path.
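
For example, a handler can depend on a small interface instead of a concrete service, and a unit test can swap in a hand-rolled stub. A minimal sketch; ProductService, Product, and stubProductService are illustrative names, not taken from the boilerplate:

// ProductService is the interface the handler depends on; the concrete
// implementation lives in another module.
type ProductService interface {
	GetProducts() []Product
}

type Product struct {
	ID   string
	Name string
}

// stubProductService is a hand-rolled mock for unit tests: it lets a test
// feed in any product list without touching a real database.
type stubProductService struct {
	products []Product
}

func (s *stubProductService) GetProducts() []Product {
	return s.products
}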

Another aspect is to be careful about the objects you pass down the call stack. Think about what happens if you pass a request object from the client all the way to the data layer: it implicitly binds all of the components together and makes it hard to reuse the underlying logic if you ever need to add another endpoint.
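
One way to avoid that is to translate the transport-level request into a plain domain struct at the handler boundary. A sketch, with hypothetical CreateOrderRequest and CreateOrderParams types:

// CreateOrderRequest mirrors the HTTP payload and stays in the handler layer.
type CreateOrderRequest struct {
	ProductID string `json:"product_id"`
	Quantity  int    `json:"quantity"`
}

// CreateOrderParams is what the service layer actually needs; it carries no
// JSON tags and no knowledge of HTTP, so another endpoint (or a background
// job) can reuse the same service call.
type CreateOrderParams struct {
	ProductID string
	Quantity  int
}

func (r CreateOrderRequest) ToParams() CreateOrderParams {
	return CreateOrderParams{ProductID: r.ProductID, Quantity: r.Quantity}
}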

Once the application is up and running in production and clients or other services start to consume the APIs, the real challenge arrives: we need to alter existing behavior and provide more functionality to meet new business needs, but without breaking existing clients that might be running on different versions. In the following sections we will talk about some general ideas for keeping your application healthy.


Clean code

If you have not already, I would strongly recommend taking the time to read this book: https://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882.

There are many good things in this book, but one thing that cannot be emphasized enough is paying attention to the naming of variables, functions, and modules, and revisiting them whenever you can think of something better. To mention a few cases here:

Be as specific as possible.

Instead of using a variable name like item or data:

for _, item := range productService.GetProducts() {
   ...
}

it is better to just name the thing what it is:

for _, product := range productService.GetProducts() {
   ...
}

That way, you know right away what you are dealing with, and you can refer to it as product inside the block.

Single (limited) responsibility

As the complexity of the business logic grows and functionality expands, we unavoidably find source files and functions getting longer and messier, to the point where a file might be 1,000 lines long and a function might take 10 arguments with nested if-else blocks. Such a function eventually becomes untestable, because the number of test cases grows exponentially with the number of arguments. To manage this better, always ask yourself:

  • Does the function do what its name implies? For example, there might be a function called createUser that not only creates the user but also sends out a welcome email. If that is the case, we should move the email sending into another function and test the two separately. If they have to be called in one transaction, we can wrap them in a caller function named createUserAndSendEmail so other people will know the side effects of the function at this abstraction level (see the sketch after this list).

  • Should we break up one file into smaller (more specific) files? Assuming we have an order.go file that deals with all order-related functions, when it starts to get out of hand we can break it into files that concern different aspects of the domain. For instance, we could create an order_creator.go that specifically deals with order-creation procedures, an order_history.go that keeps history for users to query at any time, and maybe an order_report.go that exists for statistics purposes.
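
To make the first point concrete, here is a minimal sketch; userService, userRepo, and mailer are hypothetical dependencies, not code from the boilerplate:

// createUser does exactly what its name implies and nothing more.
func (s *userService) createUser(user User) error {
	return s.userRepo.Save(user)
}

// sendWelcomeEmail is tested separately from user creation.
func (s *userService) sendWelcomeEmail(user User) error {
	return s.mailer.Send(user.Email, "Welcome!")
}

// createUserAndSendEmail makes the side effect visible at this abstraction
// level instead of hiding it inside createUser.
func (s *userService) createUserAndSendEmail(user User) error {
	if err := s.createUser(user); err != nil {
		return err
	}
	return s.sendWelcomeEmail(user)
}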


Protect the data models

Models are at the core of your application. Remember that you will not be able to change your data model easily, because you always have to carry historical data for compatibility and traceability reasons, and migrating old data takes longer and longer as the application accumulates more of it. It is also risky to edit or remove data while some part of the code you may not be aware of still consumes it. So be extremely careful whenever you need to change the core data structures.

Think before adding another field.

Sometimes it feels much easier to simply add another field to the data model when a new requirement comes in. For instance, if we want to know whether a certain model has been modified since it was last saved, it is easy to add a boolean flag to the model to reflect that. The potential problem is that three months later, people might no longer remember what this field was for, and it becomes tech debt when refactoring. Instead, consider whether you can derive the state from the existing updated_at field before saving to the database.
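
As a minimal sketch, assuming the model already carries the usual timestamp columns, the flag can be computed instead of stored:

import "time"

type Order struct {
	CreatedAt time.Time
	UpdatedAt time.Time
}

// IsModified derives the state from fields we already persist, instead of
// storing a separate boolean that can drift out of sync.
func (o Order) IsModified() bool {
	return o.UpdatedAt.After(o.CreatedAt)
}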

Keep your core business models separate from other models

Making a distinction between the core data models and the so-called data transfer objects (DTOs) that are used to move data between layers and modules is a good practice, since the DTOs will likely change for different purposes as function arguments and return types are refactored. The core data models, however, should remain the same as long as the business domains we are dealing with remain the same.
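
A small sketch of the separation; the names are illustrative:

import "time"

// User is the core business model; it changes only when the domain does.
type User struct {
	ID        uint
	Email     string
	CreatedAt time.Time
}

// UserResponse is a DTO shaped for one particular endpoint; it is free to
// change as that endpoint evolves, without touching the core model.
type UserResponse struct {
	ID    uint   `json:"id"`
	Email string `json:"email"`
}

func NewUserResponse(u User) UserResponse {
	return UserResponse{ID: u.ID, Email: u.Email}
}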

Maintain a consistent yet flexible interface

Another very challenging task most APIs have to overcome is supporting multiple client versions at the same time. An iOS application might take days or even weeks to pass Apple's review and release a new version, and the adoption rate climbs slowly unless you kill the older version completely. So in most cases we have to support older API schemas while delivering new functionality with new schemas at the same time. The approach we took is to always version our APIs, like /api/v1/user and /api/v2/user, and to define request and response objects separately (some people define serializers, which is also fine), so we can easily adjust the client schema without affecting how the application works internally.
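
A minimal sketch of that approach using the standard library; the handler names and response fields are illustrative:

import (
	"encoding/json"
	"net/http"
)

// Each version owns its response schema, so v1 clients keep working while
// v2 evolves independently of the internal user model.
type userResponseV1 struct {
	Name string `json:"name"`
}

type userResponseV2 struct {
	FirstName string `json:"first_name"`
	LastName  string `json:"last_name"`
}

func getUserV1(w http.ResponseWriter, r *http.Request) {
	json.NewEncoder(w).Encode(userResponseV1{Name: "Ada Lovelace"})
}

func getUserV2(w http.ResponseWriter, r *http.Request) {
	json.NewEncoder(w).Encode(userResponseV2{FirstName: "Ada", LastName: "Lovelace"})
}

func registerRoutes(mux *http.ServeMux) {
	mux.HandleFunc("/api/v1/user", getUserV1)
	mux.HandleFunc("/api/v2/user", getUserV2)
}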


Testing

At the current stage, we have full coverage with unit tests and integration (API) tests, plus a few UI tests, as we still rely on QA to run regressions on every release. If, like us, you have limited engineering resources and building and maintaining full-coverage E2E tests is not feasible, I would recommend focusing on the quality of the unit tests. Writing unit tests is not only about the correctness of the program; more importantly, they serve as documentation, where each test case explains a situation the function should cover. This matters because as the application gets larger and more complicated, there will be more edge cases that are easily forgotten and neglected, which is often the obstacle to refactoring: people fear that any change could break existing behavior.
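
Table-driven tests fit this "tests as documentation" idea well, because each case name spells out a situation. A sketch that reuses the Order model from the data-model section above:

import (
	"testing"
	"time"
)

func TestIsModified(t *testing.T) {
	now := time.Now()
	cases := []struct {
		name  string
		order Order
		want  bool
	}{
		// Each case name documents one situation the function must cover.
		{"untouched since creation", Order{CreatedAt: now, UpdatedAt: now}, false},
		{"updated after creation", Order{CreatedAt: now, UpdatedAt: now.Add(time.Minute)}, true},
	}
	for _, c := range cases {
		t.Run(c.name, func(t *testing.T) {
			if got := c.order.IsModified(); got != c.want {
				t.Errorf("IsModified() = %v, want %v", got, c.want)
			}
		})
	}
}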

Refactor constantly

No matter how carefully we build the application, it will always get messier and harder to manage, not only because business logic accumulates but also because the team keeps learning and discovering new ways to solve certain problems. It is always necessary to spend time revisiting the existing code.

One thing we find quite fun is that we always set aside a few hours on Friday afternoon for engineers to demo their PRs or proposals in a live session to all the engineers and explain the rationale behind them. This encourages people to communicate, and everyone gets a chance to either learn from others or give some input.


Conclusion

Building solid software requires an engineering culture rather than a few practices. We should always pay attention to the small things, keep rethinking our approach, and make changes even when they do not have a direct or immediate benefit to the product.
