L5/Senior developer, what's next?
Mesfix is a company that connects buyers with companies: the companies sell their accounts receivable, trading an invoice for quick payment at the cost of a small percentage, so they can reinvest and speed up their operation; the buyer of the invoice puts their money to work and earns a percentage when the payer settles the bill. In Colombia this is an excellent idea, since companies can take between 15 and 20 days to pay for products they have already received. Mesfix is currently expanding its product range.
In my experience at this company, the best thing about the development team is its culture; Manuel, its CTO, makes every member feel like part of his family.
One of the things that impressed me most about the Mesfix team and its technology was the intuitive, organic way in which they implemented microservices. In a simple way they arrived at this idea: a backend-for-frontend queries an orchestrator, which may or may not be related to a microservice. The orchestrator is responsible for unifying the information without performing any business logic, and the microservices are responsible for the logical queries requested by the orchestrator and for operating on the information, either storing or reorganizing it. Just great.
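A minimal sketch of that layering, with invented names (this is not Mesfix's real code):

```javascript
// A microservice owns the business logic and its data.
const invoiceService = {
  invoices: [{ id: 1, amount: 500, status: 'open' }], // pretend data store
  findOpenInvoices() {
    // the business rule lives here, in the microservice
    return this.invoices.filter((inv) => inv.status === 'open');
  },
};

// The orchestrator only gathers and unifies information, no business rules.
const orchestrator = {
  invoiceSummary() {
    const open = invoiceService.findOpenInvoices();
    return { count: open.length, invoices: open };
  },
};

// The backend-for-frontend exposes exactly what one screen needs.
function bffInvoiceWidget() {
  const summary = orchestrator.invoiceSummary();
  return { label: `${summary.count} open invoice(s)`, items: summary.invoices };
}

console.log(bffInvoiceWidget().label); // → "1 open invoice(s)"
```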
When I arrived, they had already gone through a process in which much of the monolithic architecture had been separated into a microservice architecture, and my task was to help the team implement good development practices that were missing at the time.
- [GitFlow](#gitFlow)
- [Docker](#docker)
- [Unittest](#unittest)
- [API RESTful](#api-restfull)
- [Documentation](#documentation)
- [Jenkins and continuous automation](#jenkins-and-continuous-automation)
- [Micro data service and Django admin](#micro-data-service-and-django-admin)
## GitFlow

The first thing was to organize how functionality was developed in the repository, so we implemented the GitFlow methodology with these conventions: `feature/name-of-functionality` would be the branch name for creating a new functionality, `hotfix/fix-name` would be the branch name for fixing errors in production, and the `master` branch would be blocked from direct merges, so code integrations happened only through pull requests. I know this is not the full GitFlow standard, but for a team that had not worked with branches and pull requests it was an excellent start.
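A quick sketch of those conventions in practice (the branch names here are invented):

```shell
# Create a scratch repository so the commands run standalone
git init -q gitflow-demo && cd gitflow-demo

# New functionality → feature/name-of-functionality
git checkout -q -b feature/invoice-filters

# Production fix → hotfix/fix-name
git checkout -q -b hotfix/fix-null-invoice

git branch --show-current
```

With `master` protected, each of these branches would then be merged only through a pull request.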
## Docker

From installing on each machine to installing with Docker: when I arrived at Mesfix, every developer installed the platform on their local machine, some on Linux, others on Mac. This was quite complicated, since there were varied problems when installing or doing maintenance, with no way of knowing why something worked on some machines and not on others, and the same problems also happened in production. So the Docker environment was developed first for development mode, and then for production mode.
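A hedged sketch of what such a development setup can look like with Docker Compose; the services and images here are invented, not Mesfix's actual stack:

```yaml
# docker-compose.yml (illustrative only)
version: "3.8"
services:
  api:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app          # mount the code so edits are picked up while developing
    environment:
      - NODE_ENV=development
  db:
    image: mysql:8
    environment:
      - MYSQL_ROOT_PASSWORD=dev-only-password
```

With a file like this committed, `docker compose up` gives every developer the same environment, regardless of what machine they use.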
What I remember most fondly from this experience was finishing the development mode so the team could work faster. At that time the company supplied a Mac to each developer, and we could try it on those machines; the result was a success, and we went on to work more calmly, without depending directly on our own machines to start the platform.
## Unittest

Unit tests for the endpoints were implemented using ava.js. So that developers could use them easily, we matched the test file structure to the services architecture and added a command to a Makefile so the tests could be run in three different ways: a) run all the tests, b) run all the tests of one service, and c) run all the tests of one functionality.
This is the organization of the test files:
```
root-test-files
  service_1
    functionality_1.js
    functionality_2.js
  service_2
    functionality_1.js
    functionality_2.js
```
```shell
make start-testing                             # a) all tests
make start-testing service_1                   # b) all tests of one service
make start-testing service_2 functionality_2   # c) tests of one functionality
```
## API RESTful

An important part of the software development that we wanted to improve was adhering to industry standards and no longer developing by instinct, so the standard we prioritized was the RESTful API. The general idea was to optimize loading time and improve search performance, so we chose the slowest endpoints, studied them, and rethought them following the standard. This development initially optimized the performance of the commercial area and later of the clients, giving the company a great boost.
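For example, a verb-style endpoint can be rethought as a resource. The routes below are invented to illustrate the shape of the change, not Mesfix's real API:

```javascript
// Before: GET /getInvoicesForCompany?id=42   (verb in the URL, ad-hoc query)
// After:  GET /companies/42/invoices         (resource-oriented, predictable)

const routes = [
  {
    method: 'GET',
    pattern: /^\/companies\/(\d+)\/invoices$/,
    handler: (companyId) => `invoices of company ${companyId}`,
  },
];

// Tiny dispatcher to show how a resource-style path is matched.
function dispatch(method, path) {
  for (const route of routes) {
    const match = method === route.method && path.match(route.pattern);
    if (match) return route.handler(match[1]);
  }
  return '404 Not Found';
}

console.log(dispatch('GET', '/companies/42/invoices')); // → "invoices of company 42"
```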
## Documentation

We must always look at documentation when we do not want a system to depend on the people who develop it. This part is always the most complicated: there are many standards, documentation is not maintained, development moves very fast and there is no time to write it; many things can go wrong in the process. One problem we wanted to attack was not knowing where to keep the documentation. We noticed that most of our resources were REST and we were already building new RESTful versions, so we decided to attach the documentation to the endpoints themselves, using a parameter in the request: the endpoint first checked for the presence of the parameter, and if it was there, the endpoint's documentation was delivered to the client. The documentation lived in a Markdown file, which the endpoint read and transformed into HTML for the client making the request. As an extra point, these Markdown files could also be viewed through the GitHub interface, so when a developer needs the documentation, they do not need to invoke the endpoint; we just give them the link to the file.
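A minimal sketch of the idea; the parameter name (`doc`) and the markdown-to-HTML step are assumptions, since the post does not specify them:

```javascript
// Tiny markdown-to-HTML step, just enough for the sketch.
const mdToHtml = (md) => md.replace(/^# (.+)$/m, '<h1>$1</h1>');

// In the real service this markdown would be read from a file in the repo,
// which is also browsable through the GitHub interface.
const invoicesDoc = '# GET /invoices\nReturns the invoices of the company.';

function handleInvoices(query) {
  if ('doc' in query) {
    // The parameter is present: answer with the documentation, not the data.
    return mdToHtml(invoicesDoc);
  }
  return JSON.stringify([{ id: 1, amount: 500 }]);
}

console.log(handleInvoices({ doc: true })); // HTML documentation
console.log(handleInvoices({}));            // normal JSON response
```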
## Jenkins and continuous automation
I was not leading this part of the development, but I did have the opportunity to guide a co-worker who was learning. That might not sound like a good omen for the result, but the guidance was the key to making this functionality happen. In general, we worked on several key points that could be the starting point for the future scalability of the project:
- pipeline: a deployment flow able to identify potential problems before, during, and after deployment
- environments: deployments with different goals in different areas of development, with feedback included.
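The points above could be sketched as a declarative Jenkinsfile like the one below; the stage names and make targets are illustrative, not the team's actual pipeline:

```groovy
// Jenkinsfile (hypothetical sketch)
pipeline {
  agent any
  stages {
    stage('Build') { steps { sh 'make build' } }
    stage('Test')  { steps { sh 'make start-testing' } }
    stage('Deploy to staging') {
      steps { sh 'make deploy ENV=staging' }   // invented target
    }
  }
  post {
    failure { echo 'Notify the team: the feedback loop' }
  }
}
```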
## Micro data service and Django admin
Storage, centralization through databases, and administration by the operations area are an essential part of studying our clients. We built an MVP with the goal of growing fast, independently of the other areas and without direct interaction with the design area. We decided to use the Django admin since it technically had these characteristics: when you program the Django admin it reacts visually to the lines of code, it ended up being programmed at a quite advanced level of Python, and the project has growth potential. I have a separate post specifically about this experience: What is the Django admin for?
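To give an idea of why the Django admin fits that goal, a few declarative lines are enough to get a full CRUD interface for the operations area. The model and field names below are invented for the sketch:

```python
# admin.py (hypothetical sketch; model and fields are invented)
from django.contrib import admin

from .models import ClientStudy


@admin.register(ClientStudy)
class ClientStudyAdmin(admin.ModelAdmin):
    # Each line here immediately changes what operations sees in the UI.
    list_display = ("client", "score", "created_at")
    search_fields = ("client__name",)
    list_filter = ("score",)
```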
Thank you and see you soon