1. Introduction
According to Ultraboard Games, the mancala game has existed for about 7000 years. This means that mankind has been playing it since the middle Neolithic. The first time I came across this game was when my grandmother had a set she called The Stones Game. Later in life, when some colleagues invited me to play board games at ISEL, I realized that this "Stones Game" was actually called Mancala. Back in the day, I made a version of this game in C. Unfortunately, that version is long gone. Too much time has passed, and at the speed technology moves, I became more interested in other languages like Scala, Kotlin, and of course Java. 💻
Since 2005 I had never found that game again. Back in 2015, I got the motivation to recreate Mancala using Java, OOP design patterns, ACID principles, and the rules popularized by Robert Martin (a.k.a. Uncle Bob) for writing Clean Code, often summarized as the SOLID principles. This motivation came about because I was looking for an opportunity to work on an amazing project with different technologies. In the end, I met some amazing colleagues from Eastern Europe who brought the Mancala subject up over coffee. I took a trip down memory lane to my university years and thought: "Why don't I just make Mancala all over again?". It turns out that a little while later I had the first version of "The Stones Game" complete and MancalaJE was a reality. 🎉
In this article, we are going to set up a deployment environment using minikube. To be very direct, minikube generates a virtual machine that contains a complete Kubernetes setup. With minikube, we can spin up this virtualized cluster and get it up and running with all our services. Just like with full Kubernetes, we can create services, load balancers, deployments, pods and containers. Minikube is more limited, and you can look at it from a simple perspective: it is basically Kubernetes with more constraints, which for testing purposes is exactly what we need to exercise the complete pipeline setup. 🙌
2. Requirements
For this article, we are expected to know what a container, an image, Docker and virtualization are. We are also expected to work on a machine with virtualization capabilities. If you are using your work computer, chances are that what you are working on isn't really a bare operating system. Instead, you might be working on a virtual machine image without even knowing it. If that is the case, just work your 8 hours a day and try the lessons of this article at home on your own machine. A Mac Pro is all the rage at current times, but any machine with at least 16 GB of RAM, 8 cores, and 250 GB of disk space will do. You can also run it with less, but you might face difficulties: dual cores can be really painful when running virtualization environments. Further, we need virtualization software, which can be VirtualBox or VMware. It's entirely up to you. Just remember that in order to be successful with this article you'll need a complete installation; trial versions will probably be difficult to work with. For details about VMware, please have a look at the project's repo. For simplicity, we will be using VirtualBox in this article.
3. Implementation Design in a nutshell
In this article we only need to understand a few basic things about the implementation. All we have here is a front-end service, a back-end service and a database service. These three could work together in a single container, but that's not what we want. We are interested in setting them up in different environments and enabling them to communicate with each other.
For this we have minikube. Minikube will start Kubernetes and make it run on a virtual machine. For a basic understanding of what we are going to build, and to get a picture of it, let's have a look at the following diagram:
This is just a high-level overview of what we want to achieve. At this point our interest is just to simplify the actual implementation of the game as much as possible. What's important is that we have to guarantee the interactions between the client, the front end, the back end and the database.
Now let’s have a look at the Kubernetes dashboard and see what we are going to try to achieve:
What we are looking at is three pods that have been created. Let's keep this in mind: we are going to get pods running. For now, we are just going to go briefly through a couple of important aspects of Kubernetes in general and greatly simplify what they are:
- Pods — a pod is a collection of running containers.
- ReplicaSet — a replica set is a set of identical pods running at the same time.
- Deployments — a deployment provides declarative updates for pods and replica sets.
- Services — a service is an abstract way to expose an application running on a set of pods as a network service.
- Cluster — the unit that contains all of the separate running elements described above.

We are going to dive into these concepts and see if we can make a successful deployment using basic Kubernetes syntax with minikube (a quick kubectl sketch follows below). Here is a good visualization of what we are going to achieve:
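To make these concepts concrete, here is a small hedged sketch of how each one surfaces on a live cluster through kubectl. These are standard kubectl subcommands; we will only have a cluster to run them against from section 4 onwards:

kubectl get pods          # pods: the running container groups
kubectl get replicasets   # replica sets: identical pods kept running together
kubectl get deployments   # deployments: declarative managers of pods and replica sets
kubectl get services      # services: network exposure for sets of pods
kubectl cluster-info      # the cluster that ties all of the above together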
4. Configuration
Let's first check out the project from GitHub and go to its root folder. What we are going to do here is create our minikube.
4.1. Starting minikube
First, we configure our VM driver to use VirtualBox:
minikube config set vm-driver virtualbox
Then we build our minikube virtual machine and mention that we want to use another range of ports. This is important because we want to use port 80 as our exchange port for our cluster:
minikube start --vm-driver=virtualbox --extra-config=apiserver.service-node-port-range=1-30000
This should be the result:
😄 minikube v1.7.3 on Darwin 10.15.3
✨ Using the virtualbox driver based on user configuration
⌛ Reconfiguring existing host ...
🏃 Using the running virtualbox "minikube" VM ...
🐳 Preparing Kubernetes v1.17.3 on Docker 19.03.6 ...
▪ apiserver.service-node-port-range=1-30000
🚀 Launching Kubernetes ...
🌟 Enabling addons: dashboard, default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube"
Now let’s make kubectl understand that it’s working with minikube:
kubectl config use-context minikube
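To double-check that kubectl is really pointing at minikube, a quick sanity check with standard kubectl subcommands:

kubectl config current-context   # should print: minikube
kubectl cluster-info             # should point at the minikube VM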
Once we are done with this, we should be able to create our images. In order to do this, we create our images inside the newly created virtual machine. That virtual machine is called minikube, and we can see that perfectly if we go to VirtualBox and check the virtual machine statuses:
We have now finalized our main minikube setup. Now let's get our images ready and make sure they are available for deployment.
4.2. Creating the images
At this moment we know that we need to create 3 images. In our repo we have named these as the following:
- mancalaje-postgresql — This is the image that runs the database.
- mancalaje — This is our back end service which serves the business logic.
- mancalaje-fe — This is the image that runs the front end code.
Let’s check the details of our docker files.
4.2.1. Mancala JE Postgresql
We can find this docker file in docker-psql/Dockerfile:
FROM library/postgres:12
This is a very simple docker file, which basically creates a new image from the already existing image library/postgres:12. Since our new image adds nothing on top of it, we don't strictly need a dedicated Dockerfile. This is just to keep the tutorial simple and the focus on what we are trying to achieve.
In the same folder we can find the .env file:
POSTGRES_USER=postgres
POSTGRES_PASSWORD=admin
POSTGRES_DB=mancalajedb
Using this environment file we can easily start a container from our newly created image with the username/password postgres/admin and the database mancalajedb.
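Before involving Kubernetes at all, we can smoke-test this image locally. This is a hedged sketch and not part of the repo's own flow; the container name and port mapping are my own choices:

docker build --file=Dockerfile --tag=mancalaje-postgresql:latest --rm=true .
docker run -d --name mancalaje-postgresql --env-file .env -p 5432:5432 mancalaje-postgresql:latest
# Verify the database is up and reachable with the credentials from .env:
docker exec -it mancalaje-postgresql psql -U postgres -d mancalajedb -c '\conninfo'

The last command should report a connection to mancalajedb as user postgres.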
4.2.2. Mancala JE
We can find this docker file in mancalaje-service/Dockerfile:
FROM jesperancinha/je-all-build:0.0.1
ENV runningFolder /usr/local/bin/
WORKDIR ${runningFolder}
RUN apt-get update
RUN ["/bin/bash", "-c", "debconf-set-selections <<< \"postfix postfix/mailname string test\""]
RUN ["/bin/bash", "-c", "debconf-set-selections <<< \"postfix postfix/main_mailer_type string 'No configuration'\""]
RUN apt-get install -y --assume-yes postfix
RUN touch /etc/postfix/main.cf
COPY target/mancalaje-service*.jar ${runningFolder}
COPY entrypoint.sh ${runningFolder}
ENTRYPOINT ["entrypoint.sh"]
This image is created on top of one of my pre-built images, jesperancinha/je-all-build:0.0.1. It's only important to note that in this image I'm just making sure of three things:
- Postfix installation.
- The placement of the running Spring Boot jar file.
- The call to the ENTRYPOINT. This is the script that will run once the container starts.
Let’s have a look at the startup entrypoint script:
#!/usr/bin/env bash
postfix start
java -jar -Dspring.profiles.active=prod mancalaje-service-1.1.1-SNAPSHOT.jar
Here we are starting postfix and starting the application. Notice that the profile is prod.
Since the profile is prod and this is a Spring Boot application, let’s further take some time to analyze the application-prod.yml of our application:
spring:
  session:
    jdbc:
      table-name: spring_session
      initialize-schema: always
    store-type: jdbc
  jpa:
    properties:
      hibernate:
        dialect: org.hibernate.dialect.PostgreSQL9Dialect
        jdbc:
          lob:
            non_contextual_creation: true
    hibernate:
      ddl-auto: update
  datasource:
    driver-class-name: org.postgresql.Driver
    url: jdbc:postgresql://mancalaje-postgresql:5432/mancalajedb
    username: postgres
    password: admin
As we can see, we have configured our application to run against a database server named mancalaje-postgresql. This is very important to keep in mind, because this is where we see our first indication that we are working in a virtual environment. We will see further on how this will work in minikube with Kubernetes.
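That hostname only resolves because Kubernetes gives every service a DNS name inside the cluster. Once everything from section 4.3 onwards is deployed, a hedged way to verify the resolution from inside the back-end pod (assuming getent is available in the image, which it usually is on Debian-based images):

POD=$(kubectl get pods -l app=mancalaje -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it "$POD" -- getent hosts mancalaje-postgresql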
4.2.3. Mancala JE Front End
We finally reach the creation of the front end image. The docker file to this image is located in mancalaje-fe/docker-files/Dockerfile:
FROM jesperancinha/je-all-build:0.0.1
ENV runningFolder /usr/local/bin/
WORKDIR ${runningFolder}
RUN apt-get update
COPY default.conf /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/nginx.conf
COPY build /usr/share/nginx/html
CMD nginx -t && nginx && tail -f /dev/null
In this docker file, we are just making sure that our NGINX is properly configured and that the build produced with react-scripts build is copied to the deployment location.
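Note that the COPY build ... instruction assumes the React production build already exists in the Docker build context. A hedged sequence to produce it, assuming the standard react-scripts setup this project uses (where exactly the build folder must land relative to docker-files is repo-specific):

cd mancalaje-fe
npm install
npm run build   # runs react-scripts build and produces the build/ folder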
Let's have a quick look at the NGINX setup. First, let's look at the main nginx.conf file, located in the same folder:
user root;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    add_header 'Referrer-Policy' 'unsafe-url';

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
Essentially this is just the default configuration file that comes with NGINX. However, let's zoom in on the last line: include /etc/nginx/conf.d/*.conf;. Here we specify where our server blocks are configured. For our case we only need the default.conf file:
server {
    listen 80;
    listen [::]:80;

    root /usr/share/nginx/html;
    server_name _;

    location /api/ {
        proxy_pass http://mancalaje:8087/api/;
    }
}
It's important to understand this file in order to realize how we connect the front end to the back end. Let's look at the line with http://mancalaje:8087/api/;. This is where we proxy all calls to our API through to the back end. Our back-end service will run in a container reachable under the name mancalaje.
Finally, it is important to notice the last line of this image creation file: CMD nginx -t && nginx && tail -f /dev/null. An important lesson comes with this command. Unlike RUN, CMD is executed when the container starts, not while the image is being built. The reason none of these commands run during image creation is that starting NGINX makes it read the configuration files and try to resolve the server mancalaje. At build time, NGINX would of course not be able to find the server mancalaje, and the build would fail. Let's continue and we'll see how this works out.
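Once both services are running inside the cluster (section 5), we can verify that this name resolution really works at runtime by calling the back end from inside the front-end pod. This is a hedged sketch; it assumes curl is available in the image:

POD=$(kubectl get pods -l app=mancalaje-fe -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it "$POD" -- curl -s -o /dev/null -w '%{http_code}\n' http://mancalaje:8087/api/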
4.2.4. Setting it all up
At this stage, we have created all of our necessary configuration and Docker files. Let's create the images. We first mount the project into the minikube virtual machine, log in to it, and then create our images inside. This is the command list we need to follow:
minikube mount .:/mancalaje   # run in a separate terminal; the mount must stay alive
minikube ssh
cd /mancalaje/docker-psql
docker build --file=Dockerfile --tag=mancalaje-postgresql:latest --rm=true .
cd /mancalaje/mancalaje-service
docker build --file=Dockerfile --tag=mancalaje:latest --rm=true .
cd /mancalaje/mancalaje-fe/docker-files
docker build --file=Dockerfile --tag=mancalaje-fe:latest --rm=true .
Once we are done with all of these commands we should be able to see a list of all images:
docker images
This is the output:
REPOSITORY TAG IMAGE ID CREATED SIZE
mancalaje-fe latest 4d2ba941a166 19 hours ago 1.24GB
mancalaje latest 65f681e03990 20 hours ago 1.33GB
mancalaje-postgresql latest 73119b8892f9 4 days ago 314MB
At this point we are ready to start creating our pods, services, replication sets, load balancers and whatever suits our needs in a Kubernetes fashion.
4.3. Creating the deployments
4.3.1. Mancala JE Postgresql
Let’s start by looking at the deployment declaration located in the deployment file. For this case, it is located at docker-psql/postgres-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mancalaje-postgresql
spec:
  selector:
    matchLabels:
      app: mancalaje-postgresql
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: mancalaje-postgresql
        tier: backend
    spec:
      containers:
        - name: mancalaje-postgresql
          image: mancalaje-postgresql:latest
          env:
            - name: POSTGRES_USER
              value: "postgres"
            - name: POSTGRES_PASSWORD
              value: "admin"
            - name: POSTGRES_DB
              value: "mancalajedb"
          imagePullPolicy: Never
          ports:
            - containerPort: 5432
Let's analyze this file quickly. First, for such a simple application, naming details in the metadata and labels don't really matter and aren't that relevant to our deployment. However, our container name is: this is the name used to identify it and make connections through the network. Notice also that I have placed the PostgreSQL configuration parameters in the yaml file itself. For this article it's best to keep everything in the same file, so the file is easier to follow. Kubernetes files have a parameter called kind. This is how we define what sort of resource we are configuring. In this case we are configuring a Deployment. We can also see the property replicas. In our example we are only going to run a single replica. The imagePullPolicy: Never setting tells Kubernetes to use the image we just built inside minikube instead of trying to pull it from a registry. Finally, we make containerPort 5432 available.
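The selector labels are what tie this deployment to its pods. Once the deployment is created in section 5, a small hedged check using the labels declared above:

kubectl get pods -l app=mancalaje-postgresql,tier=backend
kubectl describe deployment mancalaje-postgresql   # shows the same selector at work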
4.3.2. Mancala JE
For the Spring Boot service, the deployment file is located at: mancalaje-service/mancalaje-deployment.yaml. This is the configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mancalaje
spec:
  selector:
    matchLabels:
      app: mancalaje
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: mancalaje
        tier: backend
    spec:
      containers:
        - name: mancalaje
          image: mancalaje:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8087
It's important to notice that port 8087 is being exposed. Since we didn't expose this port in the Dockerfile, we need to specify here that it will be opened and used within our network.
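Even though this service will stay cluster-internal (see section 4.4), we can still probe the back end during development with a port-forward. A hedged sketch; whether the application answers on the bare /api/ path depends on its routes:

kubectl port-forward deployment/mancalaje 8087:8087
# in another terminal:
curl -i http://localhost:8087/api/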
4.3.3. Mancala JE Front End
We finally reach our final deployment file. This is our front end deployment file. It is located in mancalaje-fe/mancalaje-fe-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mancalaje-fe
spec:
  selector:
    matchLabels:
      app: mancalaje-fe
      tier: frontend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: mancalaje-fe
        tier: frontend
        track: stable
    spec:
      containers:
        - name: mancalaje-fe
          image: mancalaje-fe:latest
          imagePullPolicy: Never
          ports:
            - name: http
              containerPort: 80
We can clearly see that this deployment file doesn't differ much from the previous one. In fact, the only functional difference is that it uses port 80 to expose the container.
4.4. Creating the services
Before looking at the setup of the different services, let's think about how we want to expose them. We have a database that we need in order to save user data and game state, for logged-in and logged-out users alike. We are 100% sure we do not want to expose a running container with our database in it; this database must stay isolated from the outside. In the same way, we only want users to perform authenticated operations. Although the back-end services should be reachable, we clearly want some control over how requests are made and who accesses the REST methods. We need to allow some access to them, but only via a proxy. This way we keep our back end isolated from the outside as well: the running container with our application in it is also supposed to be isolated. Finally, we know that interactions with our application happen via the user and what the user does in the front end. If anything should be open to the outside, it is the container running the front end; that is what a front end is for. Once authenticated, our user can interact with the front end and play the game, create a new game, register, re-register, win a game, lose a game, etc. What the user still won't (or shouldn't :)) be able to do is hack into the system. Note that systems can always be hacked; all we can do is take measures to make it more difficult.
4.4.1. Mancala JE Postgresql
Let’s have a look at the service declaration for our database image in the deployment file found in docker-psql/postgres-deployment.yaml.
kind: Service
apiVersion: v1
metadata:
  name: mancalaje-postgresql
spec:
  selector:
    app: mancalaje-postgresql
    tier: backend
  ports:
    - protocol: TCP
      port: 5432
  type: ClusterIP
All of the fields are pretty self-explanatory. The one we should give the most attention to is type. As mentioned in the introduction to this segment, we want our database service to run isolated, and that is what type ClusterIP is for. It means that our service will be available within the cluster, but not reachable from the outside. The container exposes port 5432, and the service makes it addressable by declaring 5432 in the ports field.
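To convince ourselves that the database is reachable from inside the cluster, and only from there, we can run psql from a throwaway pod. A hedged sketch: the probe pod's name and image are my own choices, and the credentials come from the .env file we saw earlier:

kubectl run psql-probe --rm -it --restart=Never --image=postgres:12 \
  --env="PGPASSWORD=admin" -- psql -h mancalaje-postgresql -U postgres -d mancalajedb -c 'SELECT 1;'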
4.4.2. Mancala JE
Let’s now see how we have our main Spring Boot service configured. It’s configured in the deployment file located at: mancalaje-service/mancalaje-deployment.yaml:
kind: Service
apiVersion: v1
metadata:
  name: mancalaje
spec:
  selector:
    app: mancalaje
    tier: backend
  ports:
    - protocol: TCP
      port: 8087
  type: ClusterIP
The configuration is the same. The only difference is the port used, in this case 8087. For the same reasons, it is of type ClusterIP.
4.4.3. Mancala JE Front End
Finally, let's look at the Front End service. This one can be found in the deployment file located at mancalaje-fe/mancalaje-fe-deployment.yaml:
kind: Service
apiVersion: v1
metadata:
  name: mancalaje-fe
spec:
  type: NodePort
  selector:
    app: mancalaje-fe
    tier: frontend
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 80
There are some differences in this declaration. The front-end service needs to be open to the outside world in some way, and we have already declared in our NGINX configuration how the proxy should work. To make the service reachable we are using NodePort in this example. A NodePort service involves three port settings, which is why we see so many 80s: nodePort is the port opened on the node itself, so to access our service from outside we need nodePort to be 80; port is the port the service listens on inside the cluster, also 80; and targetPort is the container port the traffic is finally forwarded to, 80 again. They happen to share the same value here, but each serves a different purpose. Note that nodePort: 80 is also the reason we started minikube with the extended service-node-port-range: by default, NodePort values are restricted to the 30000-32767 range.
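Once section 5 brings everything up, a hedged way to confirm the NodePort exposure from the host machine (the IP will differ per setup):

minikube ip                     # e.g. 192.168.99.118
curl -I http://$(minikube ip)/  # nodePort is 80, so no explicit port is needed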
5. Firing up the environment
We finally reached the fun part of this setup. To test our environment at this stage, let's just try a few commands. Let's first look at the kubectl get nodes command:
kubectl get nodes
This is the result:
NAME STATUS ROLES AGE VERSION
minikube Ready master 4m4s v1.17.3
The result of this command lets us know that our node is named minikube and that it is ready. In later articles I will take a deep dive into the usage of ROLES. We also have the age of our node, which in this case is approximately 4 minutes.
Let's try kubectl get pods. We should get this return message: No resources found in default namespace. This is logical, because we haven't created anything yet. Let's do that now, starting by effectively creating the deployments defined in our deployment files. In the root folder, let's run these commands:
kubectl create -f docker-psql/postgres-deployment.yaml
kubectl create -f mancalaje-service/mancalaje-deployment.yaml
kubectl create -f mancalaje-fe/mancalaje-fe-deployment.yaml
These three commands will use the images created in minikube to fire up deployments and services. Let's run kubectl get pods again:
kubectl get pods
This is the output:
NAME READY STATUS RESTARTS AGE
mancalaje-5ffdbf7ddf-5mj8l 1/1 Running 0 7s
mancalaje-fe-6455c45cfb-twrv2 1/1 Running 0 4s
mancalaje-postgresql-7f85dd455-kc4kk 1/1 Running 0 7s
We now see that our pods are running. Let's check whether we have deployments ready and the matching services. First, the deployments. Let's run kubectl get deployments:
kubectl get deployments
And this is the result:
NAME READY UP-TO-DATE AVAILABLE AGE
mancalaje 1/1 1 1 3m20s
mancalaje-fe 1/1 1 1 3m17s
mancalaje-postgresql 1/1 1 1 3m20s
So indeed, we have three deployments ready. Let's see if our create commands have also created services. This can be checked by running kubectl get services:
kubectl get services
And we should see something like this in the console:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 12m
mancalaje ClusterIP 10.100.182.171 <none> 8087/TCP 4m38s
mancalaje-fe NodePort 10.107.130.151 <none> 80:80/TCP 4m35s
mancalaje-postgresql ClusterIP 10.107.82.239 <none> 5432/TCP 4m38s
Great 👍! We've got our services running. However, we cannot access them yet, because they have not been exposed. All we need to do is run the command minikube service mancalaje-fe.
minikube service mancalaje-fe
In the console we should see a table like this one:
| NAMESPACE | NAME | TARGET PORT | URL |
|---|---|---|---|
| default | mancalaje-fe | http | http://192.168.99.118:80 |
And this at the end:
🎉 Opening service default/mancalaje-fe in default browser…
This command exposes the only service we want to expose outside the cluster: our front-end service. As we can see, there are quite a few steps involved in getting a service exposed outside the cluster.
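As a side note, if we only want the URL without a browser being opened, minikube supports a flag for exactly that:

minikube service mancalaje-fe --url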
Finally, we will see the opening screen of my MancalaJE game:
We did it! We just launched a complete minikube environment in a virtual machine using VirtualBox!
Let’s just try to create a user. I will create myself. If we click REGISTER, we should see the following screen:
I have already filled in the details for my test. The litmus test here is that we must be able to register a user. If we can do that, it means that we have reached the database, registered a user, and returned to the initial screen with a positive 😊 message.
We should see this now:
If my calculations are correct, there should be no email sent over the wire. We will discuss this in later articles. Great! We did it. Launch successful! 🚀!
6. Dashboard
For the final chapter of this article, let's have a look at the dashboard. Minikube comes packed with a dashboard visualization mode. To see it, let's run minikube dashboard:
minikube dashboard
🔌 Enabling dashboard …
🤔 Verifying dashboard health …
🚀 Launching proxy …
🤔 Verifying proxy health …
🎉 Opening http://127.0.0.1:57913/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser…
This is the main dashboard page. Here we can monitor the secrets, the node, the services, the replica sets, the pods and finally the deployments. This concludes our session about the basic functionalities of minikube applied to my mancala version: MancalaJE.
7. Cheat sheet
Let's see if we can remember how to use all the commands. I've placed them all here. The idea is to enjoy running them from the root directory of the project and check the results ▶️:
minikube delete # Just in case 😉
minikube config set vm-driver virtualbox
minikube start --vm-driver=virtualbox --extra-config=apiserver.service-node-port-range=1-30000
kubectl config use-context minikube
minikube mount .:/mancalaje
minikube ssh
cd /mancalaje/docker-psql
docker build --file=Dockerfile --tag=mancalaje-postgresql:latest --rm=true .
cd /mancalaje/mancalaje-service
docker build --file=Dockerfile --tag=mancalaje:latest --rm=true .
cd /mancalaje/mancalaje-fe/docker-files
docker build --file=Dockerfile --tag=mancalaje-fe:latest --rm=true .
exit
kubectl create -f docker-psql/postgres-deployment.yaml
kubectl create -f mancalaje-service/mancalaje-deployment.yaml
kubectl create -f mancalaje-fe/mancalaje-fe-deployment.yaml
kubectl delete service mancalaje-postgresql
kubectl delete deployment mancalaje-postgresql
kubectl delete service mancalaje
kubectl delete deployment mancalaje
kubectl delete service mancalaje-fe
kubectl delete deployment mancalaje-fe
minikube service mancalaje-postgresql
minikube service mancalaje
minikube service mancalaje-fe
kubectl get deployments
kubectl get services
kubectl get pods
kubectl get nodes
8. Conclusion
In this article we have seen how to fire up a minikube application which depends on a typical MVC setup. We have a database, a service and a front end. This application has been fully developed with Spring Boot, ReactJS and PostgreSQL.
I hope that with this setup I was able to explain how I set up my own minikube environment on my machine, and with that to have given some guidance on how to set it up yourself. Minikube isn't difficult to set up, and that is precisely where its potential lies.
With minikube we can perform a lot of useful operations:
* Images are created in an isolated form. If we remove our VM (virtual machine), we remove everything at once.
* It's a nice and compact way of visualizing our environment and running tests.
* We can always make different configurations.
Minikube runs a single node, though. In case we want more nodes, we are probably better off going full Kubernetes. And with this article we still haven't seen in practice how the different types of configuration can help us.
After exercising and going through this article, we can remove everything with one simple command:
$ minikube delete
And it’s gone… 🗑. All images, containers, pods… all gone! 👍
I have placed all the source code of this application on GitHub.
I hope that you have enjoyed this article as much as I enjoyed writing it.
Thanks in advance for your help, and thank you for reading!