Microservices are an increasingly popular way of developing software in which different teams build small, independent pieces of functionality. Teams can work together to deliver an application with a wide range of features.
Microservices aim to speed up the process of delivering software and make it easier to modify and update.
This post will look at how to get your microservices dev environment up and running with Docker.
To understand microservices architecture, let's first look at its predecessor: the legacy monolith. In a monolithic architecture, all the services and modules are part of a single unit.
Every service is developed, deployed, and scaled together. Engineers must write the whole application in a single language with a single runtime to maintain and scale the monolith, even when a particular service, say, a WebSocket server, would perform much more efficiently if written in Golang, for instance.
Microservices architecture splits this one colossal application into many small services, allowing engineering teams to build quicker, innovate faster, and scale only what's required. With this approach, you leave one-size-fits-all architecture requirements behind.
Now, engineers and teams (like DevOps) can scale up only the services that need it rather than everything at once. Scaling everything together is no longer a constraint but a choice, and one that teams rarely make.
Microservices are all the rage in the world of web development. They're so popular that they've managed to infiltrate the world of enterprise software development. But why exactly are they getting so much attention? Let's look at a few benefits of microservices architecture.
- It helps you cut costs rather smartly and invest in scaling up only those services that require some additional juice. If you're in the e-commerce industry, you could scale up your recommendations engine during the holidays to cater to that enormous demand.
- Selecting the tech stack of your services is in your hands. For instance, you can write your API gateway in Node.js, your web-scraping engine in Python, and your real-time chat server in Golang.
- Because each service is deployed individually, developers can quickly fix issues in one service without affecting all of the others.
Basically, containerization is the process of packaging all the source code, dependencies, frameworks, and libraries required to run a particular service or application into a single bucket: the container. Docker has to be mentioned when talking about containerization, as it's the easiest to set up and doesn't demand much configuration, so developers can dockerize their application or service quickly.
All you need is a simple Dockerfile in which you specify the base image your application runs on. For example, if your app requires a Node runtime, Node offers official Docker base images to get you started. Want to go pro? Build your app on a Linux base image with fine control over network and port configuration, and Docker has your back.
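As a minimal sketch, a Dockerfile for a Node.js service might look like the following. The working directory, port, and start command are assumptions for illustration, not prescriptions:

```dockerfile
# Start from an official Node.js base image (the alpine variant keeps the image small)
FROM node:16-alpine

# Create the app directory inside the container
WORKDIR /usr/src/app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install

# Copy the rest of the application source
COPY . .

# The port the service listens on (3000 here is an assumption)
EXPOSE 3000

# Start the service
CMD ["node", "index.js"]
```

You'd then build and tag the image with something like docker build -t example/server-app:latest . before referencing it elsewhere.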
Once you have all the services containerized, or, for the sake of this post, dockerized, what do you do with these buckets? How do you actually realize those advantages of auto-scaling and all the other bells and whistles? Enter orchestration, where Kubernetes is the industry standard for open-source container orchestration.
In simpler words, Kubernetes takes care of everything from automated deployment and scaling to managing those buckets.
"Kubernetes can scale without increasing your ops team." —Kubernetes
Orchestration also lets you automate one of the most common incident response methods, "restarting the server will work for now," while the developers fix the underlying issue.
In Kubernetes terms, this is called self-healing. With appropriate monitoring and health checks in place, you can configure Kubernetes to restart containers automatically in the event of fatal failures.
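Self-healing is typically configured with a liveness probe. The sketch below is illustrative; the pod name, image, and health endpoint are assumptions. If the probe fails repeatedly, Kubernetes restarts the container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: chat-service
spec:
  containers:
    - name: chat-service
      image: example/chat-service:latest
      ports:
        - containerPort: 3001
      # Kubernetes polls this endpoint; repeated failures trigger a container restart
      livenessProbe:
        httpGet:
          path: /healthz
          port: 3001
        initialDelaySeconds: 10
        periodSeconds: 15
```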
By this point, you should know the basic definitions and how these tools fit together in a production environment.
Let's move on to learn how to set up a microservices dev environment.
Note: A microservices environment, like any other, can be set up locally or on the cloud. In this post, we're going to cover how to set up a microservices dev environment locally or on cloud platforms that behave like a local machine (like Nimbus).
As developers, we have many tools and languages at our disposal to build applications and services. But when it comes to managing and deploying those services, things can get tricky. We may have to deal with technologies we don't know about; use new tools; or spend a lot of time on tedious, repetitive tasks.
Compose is here to make things easier. Docker's own documentation defines the tool very concisely:
"Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration." —Docker
Follow the official installation guide and continue reading.
Here's an example of setting up a dev environment with Docker Compose. To start, here are some of the example services we need to orchestrate:
- client app—the web app built on React your end users use to browse inventory, talk to support, and place orders
- server app—an Express app written in Node.js
- real-time chat service—a WebSocket server written in Golang
- database service—a Postgres instance as the RDBMS
- database admin service—an instance of Adminer for a visual approach to DBA
- cache service—a Redis instance for caching commonly requested data that isn't updated often
These examples and references come from various sources and don't exactly fit together, but for the sake of this post, let's assume all these services belong to one product and proceed with writing the docker-compose config.
./docker-compose.yaml

```yaml
version: "3"
services:
  client-app:
    image: example/client-app:latest
    restart: always
    environment:
      - NODE_ENV=production
    ports:
      - 5000:5000
  server-app:
    image: example/server-app:latest
    restart: always
    environment:
      - NODE_ENV=production
    depends_on:
      - database-service
    ports:
      - 3000:3000
  chat-service:
    image: example/chat-service:latest
    restart: always
    environment:
      - NODE_ENV=production
    ports:
      - 3001:3001
  database-service:
    image: postgres:13-alpine
    restart: always
    ports:
      - 5432:5432
    volumes:
      - ./.example-data/postgres-store:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: example_password
      POSTGRES_DB: example_db
  database-admin-service:
    image: adminer:4-standalone
    restart: always
    ports:
      - 8080:8080
    environment:
      ADMINER_DEFAULT_SERVER: database-service
  cache-service:
    image: redis:3-alpine
    ports:
      - 6379:6379
    volumes:
      - ./.example-data/redis-store:/data
```
With that, the docker-compose config is ready. Now what? Let's use the CLI commands docker-compose offers to spin up our project.
Use docker-compose up to create and start the containers. In the command below, the -d flag runs docker-compose in detached mode.
docker-compose -f docker-compose.yaml up -d
Use docker-compose down to stop and remove containers.
docker-compose -f docker-compose.yaml down
Use docker-compose logs to view output from the containers.
docker-compose -f docker-compose.yaml logs
That's it. That's all you need to get a microservices dev environment up and running.
Though microservices have been around since the 2000s, they are still a new way of building software for many engineers, and it can be challenging to wrap one's head around the concept.
We hope this post has helped you understand what microservices are, how they work, and how you can use them to help your business. If you have any other questions or concerns, don't hesitate to contact us.
Thank you for reading!
Originally posted on www.usenimbus.com - the makers of fast and secure cloud environments that work like your laptop.
This post was written by Keshav Malik. Keshav is a full-time developer who loves to build and break stuff. He is constantly on the lookout for new and interesting technologies and enjoys working with a diverse set of technologies in his spare time. He loves music and plays badminton whenever the opportunity presents itself.