Kubernetes has become the de-facto tool for container orchestration and has a solid community behind it. The whole cloud-native era began with the evolution of Kubernetes and is still growing. As a result, Kubernetes is not just popular but has become a standard way of deploying applications so that they are highly available and scalable. The developer community is focused on this tool, and every day many companies use Kubernetes to safely deploy their applications to production. Since it has become the talk of the cloud-native town, we thought we would show you how easily you can use Kubernetes to deploy a simple Python application.
Prerequisites:
- Download and install Python 3 from the official website
- Install FastAPI with the command
pip install fastapi
- You will also need an ASGI server for production, such as Uvicorn; install it with the command
pip install "uvicorn[standard]"
- Sign up for the Harness platform [the CD module]
- Have access to a Kubernetes cluster to deploy the application. You can also use Minikube or Kind (see the commands below).
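If you do not have a managed cluster handy, a local cluster is enough for this tutorial. As a quick sketch (assuming Minikube or Kind is already installed, and with the cluster name chosen purely as an example), either of these commands spins up a single-node cluster:
# Option 1: Minikube
minikube start
# Option 2: Kind
kind create cluster --name simple-app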
Tutorial:
Assuming you know the basic concepts of Kubernetes, we will go straight to writing a simple Python application. First, create a very basic Python app with FastAPI. What is FastAPI? According to its own website, ‘FastAPI is a modern, fast (high-performance), web framework for building APIs with Python 3.6+ based on standard Python type hints.’
Copy the code below into the main.py file,
from fastapi import FastAPI

app = FastAPI()


@app.get("/")
async def root():
    return {"message": "Hello World"}
You can run the server with the following command,
uvicorn main:app --reload
You should see the following response when you visit http://127.0.0.1:8000/,
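{"message": "Hello World"}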
Create a Dockerfile for this app to run as a container.
FROM python:3.8.10
COPY requirements.txt /
RUN pip3 install -r /requirements.txt
COPY . /app
WORKDIR /app
ENTRYPOINT ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8500", "--reload"]
Create a requirements.txt file and include the two libraries (FastAPI and Uvicorn) as dependencies, for example:
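The file only needs the two packages from the prerequisites; versions are left unpinned here for simplicity:
fastapi
uvicorn[standard]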
Build the application as an image locally first with the following command,
docker build -t simple_app .
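Optionally, you can sanity-check the image locally before pushing it. A quick local run (assuming port 8500 from the Dockerfile is free on your machine):
docker run --rm -p 8500:8500 simple_app
curl http://127.0.0.1:8500/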
Tag the image and push the image to Docker Hub with the following commands,
docker tag simple_app:latest [dockerhub username]/simple_app:latest
docker push [dockerhub username]/simple_app:latest
Create Kubernetes manifest files to deploy and expose the application as a service.
Create a deployment.yaml file at the root of the application, and add the following manifest specification,
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-deployment
  labels:
    app: simple-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simple-app
  template:
    metadata:
      labels:
        app: simple-app
    spec:
      containers:
        - name: fastapi
          image: [dockerhub username]/simple_app:latest
          ports:
            - containerPort: 8500
Create a service.yaml file and add the following manifest specification,
apiVersion: v1
kind: Service
metadata:
  name: simple-service
  labels:
    app: simple-app
spec:
  selector:
    app: simple-app
  type: LoadBalancer
  ports:
    - port: 8500
      targetPort: 8500
Next, apply the deployment and service manifests with kubectl.
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
Then, check that the pods are running healthily and as expected with the command kubectl get pods.
Now, it's time to see our application exposed to the external world.
Use the command kubectl get svc
You will see the external IP of the service, which we can use to access the running application,
curl 192.168.0.107:8500
{"Hello":"World"}
Add a simple test to verify the application behaves as expected.
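As a minimal sketch using FastAPI's built-in TestClient (you may need to install pytest, plus httpx or requests depending on your FastAPI version), a test_main.py like this exercises the root endpoint:
# test_main.py
from fastapi.testclient import TestClient

from main import app

client = TestClient(app)


def test_root():
    # The root endpoint should return the greeting defined in main.py
    response = client.get("/")
    assert response.status_code == 200
    assert response.json() == {"message": "Hello World"}
Run it with pytest from the project root.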
You can easily use Harness to continuously integrate and deploy any application with simple steps and configurations.
For the sake of simplicity and as per today’s trend, we have considered Kubernetes deployment as an example. We have used the GCP Kubernetes cluster and that is where our deployment is going to happen. You can choose your favourite cloud provider to create a Kubernetes cluster to deploy the app.
Harness has a pretty sleek UI and can help developers do CI/CD effortlessly. Once you sign up at Harness, select the Try NextGen tab and you will be presented with the new CI/CD experience and capabilities. Start with the module you like; in our case we will select Continuous Integration and Continuous Delivery. First, select the Continuous Integration module, add the required steps, and run it before doing Continuous Delivery.
Just make sure to have all the required connectors up and running. Also, make sure you have the Delegate installed on your target cluster.
You might ask - what is a Delegate and why is it required? Well, the Harness Delegate is a service you need to install and run on the target cluster [the Kubernetes cluster in our case] to connect your artifact, infrastructure, collaboration, verification and other providers with the Harness Manager. When you set up Harness for the first time, you install a Harness Delegate.
We will not dig deeper into the Delegate in this article, as it could be a separate blog in itself. For now, just know that the Delegate performs all deployment operations for you. If you want to know more about the Delegate, you can read here.
We just showed you how to deploy a simple Python application to Kubernetes using kubectl and Harness. We have a well-documented MERN Stack application repository that you can fork to start understanding the complete CI/CD pipeline. The code for the application is in the harnessapps/MERN-Stack-Example repository, and the Kubernetes configuration is in the harnessapps/MERN-Stack-Example-DevOps repository.
Ready to get your hands on Harness CI/CD?
The below-mentioned links will walk you through both Continuous Integration and Delivery.
Happy DevOpsing!