DEV Community

DigitalCrafting


Let's create microservices system from scratch - part 3

In the last article we decided that the most important thing while developing a microservices system is to have a good way of managing and running the services.

After that, the first feature I wanted to implement was a proxy and service discovery mechanism. How would that work?

  1. The first to start should be the discovery-service, which will hold the list of available services and their locations,
  2. After discovery has started, all other services should start and register themselves with it using a REST API,
  3. Requests to the services will go through the api-gateway, which will query discovery for the location of the target service and then proxy the request.
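The registration and lookup described above can be sketched in a few lines of Python. This is a hypothetical, in-memory model: the function names and the `REGISTRY` dict are illustrative, not the actual project API, and a real discovery-service would expose these operations over REST.

```python
# Hypothetical, minimal model of what discovery-service holds.
REGISTRY = {}  # service name -> (host, port)

def register_service(name, host, port):
    """What each service would do on startup: announce its location."""
    REGISTRY[name] = (host, port)

def lookup_service(name):
    """What the api-gateway would ask discovery before proxying a request."""
    return REGISTRY.get(name)

register_service("user", "localhost", 8083)
print(lookup_service("user"))  # ('localhost', 8083)
```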

Sounds simple, and the actual implementation is, but there were a few things that needed to be done before I could finish it:

  1. Create a REST API standard to follow,
  2. Control the startup order,
  3. Tell each service where it is,
  4. Connect the services to each other,
  5. Be able to test it, which is the reason we need #6,
  6. Refactor the CLI.

Let's walk through them.

REST API standard

For our proxy to work, it needs to be able to somehow map a request to a specific service. We should also consider the versioning of our services. What I came up with for the root URI is this:
GET /api/<service-name>/<version>
For example:
GET /api/user/v1

Which is pretty straightforward: the gateway can just split the request URI by / and use the second segment to determine the target service.
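That split can be sketched as follows (the function name is mine, not the project's, and the actual gateway may handle errors differently):

```python
def parse_api_path(path):
    """Split a request URI like /api/user/v1 into (service, version)."""
    parts = path.strip("/").split("/")
    if len(parts) < 3 or parts[0] != "api":
        raise ValueError("not a versioned API path: " + path)
    return parts[1], parts[2]

print(parse_api_path("/api/user/v1"))  # ('user', 'v1')
```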

As for versioning, there are actually a few methods to do this:

  1. through URI path (that's what I did)
  2. through header of a request
  3. through query parameter
  4. through content negotiation

While #2 is probably more correct from a RESTful point of view, I feel that #1 is more natural and easier to implement. As for the others, I didn't even consider them, as they seem weird to me.
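For comparison, option #2 would read the version from a request header instead of the path. A minimal sketch, assuming a header named Accept-Version (that header name is my assumption, not something this project uses):

```python
def version_from_headers(headers, default="v1"):
    """Pick the API version from a request header, falling back to a default."""
    # "Accept-Version" is an illustrative header name, not a standard one
    return headers.get("Accept-Version", default)

print(version_from_headers({"Accept-Version": "v2"}))  # v2
print(version_from_headers({}))                        # v1
```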

This is by no means a complete standard, just something to let me finish the current feature.

Controlling startup order

While this feature is available in the docker-compose.yml configuration via depends_on, it doesn't work as expected, because it waits for the container to start, not for the service inside it to be ready.
A pretty neat solution is to use a bash script and curl to determine whether the discovery-service is up. If it is, we can start the other services.

The discovery-service URL will be different on localhost and in docker-compose, so, for now, I created 2 different scripts with different URLs (yeah, I know, I could use one and pass parameters, but the problems were piling up, so I just used what works for now and will refactor it later). They are copied to the correct place by the CLI when building the project using:
theonecli (local|docker) build

The script itself is rather simple:

#!/bin/bash

serviceUp=false

# Poll the discovery-service up to 10 times, 2 seconds apart
for (( i=0; i<10; i++ ))
do
  if curl --output /dev/null --silent --head --fail http://the-one-discovery:8081/discovery/v1/
  then
    serviceUp=true
    break
  else
    echo "Service not up - try $i"
  fi
  sleep 2
done

if [ "$serviceUp" = true ]
then
  echo "Service is up - starting command"
  # Replace this script with the command passed as arguments
  exec "$@"
else
  echo "Service is unavailable"
fi

Example usage can be looked up in the user-service Dockerfile.

Services location

This is actually something that would be done by administrators. Each service should have its own IP and PORT, which should be passed to it via a config file. Each service should also know the location of the discovery-service, which will be one of the few services with a statically assigned IP rather than a dynamic one. I did that by extending the config files from the previous article.

Interservice communication

This part is actually the easiest one. On localhost the services are available, well, on localhost, so the only thing we need to do is assign different PORT numbers.

When using docker-compose.yml it's almost as easy - we just need to create an internal network in the config file:

networks:
  the-one-network:
    driver: bridge

and specify this network as the one each service belongs to:

the-one-user-service:
    build: ./python/user-service
    container_name: the-one-user-service
    ports:
      - 8083:8083
    networks:
      - the-one-network

Testing the proxy and discovery mechanism

Now, that's where all hell broke loose when I started developing this feature. As it turns out, testing microservices is the second biggest obstacle, right after managing them, and there are some solutions to consider, but I'll leave that topic for a whole other article. Suffice it to say, there is no easy way to do this, and the most complete approach is to start everything you need and test against that. In my case, I need to start 4 services:

  • discovery-service
  • api-gateway
  • user-service
  • outpost-service

And check if user and outpost are correctly registered within discovery, and if the api-gateway correctly proxies the requests. That's when I realized that my CLI tool was not good enough to actually do this and I had to refactor it. After that, I simply ran:
curl localhost:8080/api/user/v1
and checked if the response was what I expected. This is by no means a sufficient testing strategy, and I will work on that.
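The proxying that this check exercises boils down to combining the discovery lookup with the path convention from earlier. A rough sketch (the registry contents and function name are illustrative; the real gateway's internals may differ):

```python
def build_target_url(registry, path):
    """Resolve a gateway request path to the service URL to forward it to."""
    service = path.strip("/").split("/")[1]  # /api/<service>/<version>
    host, port = registry[service]
    return "http://{}:{}{}".format(host, port, path)

# Hypothetical registry state after user-service registers with discovery
registry = {"user": ("the-one-user-service", 8083)}
print(build_target_url(registry, "/api/user/v1"))
# http://the-one-user-service:8083/api/user/v1
```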

Refactoring the CLI

First of all, I decided to move the docker-compose functionality to the docker command in the CLI, meaning that when typing:
theonecli docker start
we will actually start the docker-compose.yml configuration. After killing it, we have to call
theonecli docker stop
and there is no option to run individual docker images right now. After that, I also had to:

  • fix the build process, so that the wait-for-service.sh script is copied into the correct directory of each service,
  • change the Dockerfiles so that the services start in the correct order,
  • fix starting multiple services on localhost by spawning a new gnome-terminal for each service.

After all that, we end up with 2 levels of configuration files:

  • project level
  • service level

On the project level we have the wait-for-service.sh scripts for each environment, and on the service level we have service-specific configurations, again for each of our environments. Right now we only have 2 environments, and that will probably not change, since this is a personal project and not a commercial one, but we can already see that the amount of configuration needed to run this project is going to balloon, especially when we add databases and a UI application.

Summary

All of that took me waaay too much time to finish, but I'm learning about these challenges right now 😉

This article was more of a log of what happened; the next one will be more theoretical, probably about testing options for microservices.

I hope this article will be of some help and will let you avoid some problems with your own projects. The current version is available on my GitHub, feel free to download it and try it out.
