<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Valon Januzaj</title>
    <description>The latest articles on DEV Community by Valon Januzaj (@vjanz).</description>
    <link>https://dev.to/vjanz</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F594032%2F7a9cb272-9857-4c9a-8c4c-cda1121ed115.jpeg</url>
      <title>DEV Community: Valon Januzaj</title>
      <link>https://dev.to/vjanz</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vjanz"/>
    <language>en</language>
    <item>
      <title>Hosting Django Static Files in AWS using S3 and CloudFront: A Comprehensive Guide</title>
      <dc:creator>Valon Januzaj</dc:creator>
      <pubDate>Tue, 17 Oct 2023 07:48:50 +0000</pubDate>
      <link>https://dev.to/vjanz/hosting-django-static-files-in-aws-using-s3-and-cloudfront-a-comprehensive-guide-42o5</link>
      <guid>https://dev.to/vjanz/hosting-django-static-files-in-aws-using-s3-and-cloudfront-a-comprehensive-guide-42o5</guid>
      <description>&lt;p&gt;In today’s digital landscape, having a reliable and scalable infrastructure for hosting static files is crucial for web applications. Django, a popular Python web framework, offers seamless integration with Amazon Web Services (AWS) to accomplish this task. In this guide, we will explore how to host Django static files in AWS using CloudFront, a powerful content delivery network (CDN) service, to ensure high availability and fast content delivery.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The article is a step-by-step guide on how to achieve the goal, but nevertheless, I assume that the reader has basic knowledge of Django and AWS.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F8064%2F0%2AGVZKEfjN8MRp7JCN" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F8064%2F0%2AGVZKEfjN8MRp7JCN" alt="Photo by [Faisal](https://unsplash.com/@faisaldada?utm_source=medium&amp;amp;utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral)"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Django Static Files
&lt;/h2&gt;

&lt;p&gt;Before diving into the technical aspects, it’s important to understand Django static files. These files include CSS stylesheets, JavaScript scripts, images, and other assets that are served directly by the web server without any processing. Managing static files efficiently enhances the overall performance and user experience of your Django application.&lt;/p&gt;

&lt;p&gt;Normally, when you deploy a Django service to production, especially with the settings variable DEBUG=False, Django expects you to provide a way to handle these static files, such as a cloud service or CDN. What happens in this flow is that whenever the service needs those files, they are fetched from the CDN or cloud service instead of living in the same codebase as the web service itself.&lt;/p&gt;

&lt;p&gt;For this article, we are going to use AWS Services called &lt;strong&gt;S3&lt;/strong&gt; and &lt;strong&gt;CloudFront&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon S3&lt;/strong&gt; is a highly scalable and durable cloud storage service provided by Amazon Web Services (AWS). It allows you to store and retrieve large amounts of data, such as images, videos, documents, and backups, in a secure and reliable manner. S3 provides an object-based storage model, where each object is stored in a bucket and accessed using a unique key.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon CloudFront&lt;/strong&gt; is a content delivery network (&lt;strong&gt;CDN&lt;/strong&gt;) also provided by Amazon Web Services. It helps deliver content, such as web pages, images, videos, and other static or dynamic files, to users with low latency and high transfer speeds. CloudFront caches the content at edge locations, which are distributed globally, closer to the users, reducing the distance and network latency. This improves the performance of delivering content to users across different regions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;



&lt;h3&gt;
  
  
  Setting up IAM User
&lt;/h3&gt;

&lt;p&gt;Log in with your root user and then create a new user that you will use during this exercise. The user needs access to &lt;br&gt;
S3 and CloudFront:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Navigate to &lt;strong&gt;IAM &amp;gt; Users &amp;gt; Add New User&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter the username you want for your user and in the next step, select &lt;strong&gt;Attach policies directly&lt;/strong&gt; tab.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Attach these two policies: &lt;a href="https://us-east-1.console.aws.amazon.com/iamv2/home?region=us-east-1#/policies/details/arn%3Aaws%3Aiam%3A%3Aaws%3Apolicy%2FCloudFrontFullAccess" rel="noopener noreferrer"&gt;CloudFrontFullAccess&lt;/a&gt; and &lt;a href="https://us-east-1.console.aws.amazon.com/iamv2/home?region=us-east-1#/policies/details/arn%3Aaws%3Aiam%3A%3Aaws%3Apolicy%2FAmazonS3FullAccess" rel="noopener noreferrer"&gt;AmazonS3FullAccess&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Or, if you want to skip this part for simplicity, just use the root user &lt;strong&gt;(not recommended, for security reasons)&lt;/strong&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup S3 and CloudFront
&lt;/h3&gt;

&lt;p&gt;First, we need the S3 bucket on which we are going to upload our static files. Navigate to the &lt;strong&gt;&lt;a href="https://s3.console.aws.amazon.com/" rel="noopener noreferrer"&gt;S3 console&lt;/a&gt;&lt;/strong&gt; and click &amp;gt; &lt;strong&gt;Create Bucket&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Enter a name for the bucket and select the region&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Keep ACL-s disabled&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tick &lt;strong&gt;Block all public access&lt;/strong&gt; and leave everything else as default&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’re good to go!&lt;/p&gt;

&lt;p&gt;Now let’s set up CloudFront for the newly created Bucket.&lt;br&gt;
Since we kept everything private in our bucket, we need to let CloudFront access some of the folders that exist in our bucket.&lt;/p&gt;

&lt;p&gt;Navigate to the CloudFront console and click &lt;strong&gt;Create a CloudFront distribution&lt;/strong&gt;. After that, you might need to fill up some information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Origin Domain: Choose the S3 Bucket that you created&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Origin Access: Choose &lt;strong&gt;Legacy access identities&lt;/strong&gt; and then click Create new OAI&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the bucket policy select: &lt;strong&gt;Yes, update the bucket policy&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Allowed HTTP methods: &lt;strong&gt;GET, HEAD, OPTIONS&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Web Application Firewall (WAF): Do not enable security protections&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Response headers policy &lt;strong&gt;SimpleCORS&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s it, keep everything else as default. A new CloudFront distribution will be created and will return a &lt;strong&gt;Distribution domain name&lt;/strong&gt;, which in my case looks something like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://d29u7vv9xp7q8y.cloudfront.net
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Great, now let’s make the necessary changes in our Django application.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting up our Django application — Uploading static files
&lt;/h3&gt;

&lt;p&gt;In order to upload our static files to the AWS S3 bucket, we need to turn off &lt;strong&gt;DEBUG&lt;/strong&gt; mode and update some settings and configuration in our Django application. To start off, install the django-storages library, which provides custom storage backends for Django:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pip install django-storages
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With that done, create a new Python module named storage_backends.py and add the following code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from django.conf import settings
from storages.backends.s3boto3 import S3Boto3Storage


class StaticStorage(S3Boto3Storage):
    location = 'static'
    custom_domain = settings.CLOUDFRONT_DOMAIN
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This tells django-storages to upload static files to a folder named static, and that the custom domain used to serve the static files is &lt;strong&gt;CLOUDFRONT_DOMAIN&lt;/strong&gt;, which is set in settings.py&lt;/p&gt;
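&lt;p&gt;&lt;em&gt;To make this concrete, here is a minimal illustrative sketch (not part of the project code) of how the custom domain and the location combine into a final asset URL; the domain is the example distribution from earlier:&lt;/em&gt;&lt;/p&gt;

```python
# Illustrative sketch only: how the storage pieces combine into a URL.
# The domain is the example CloudFront distribution from above.
CLOUDFRONT_DOMAIN = "d29u7vv9xp7q8y.cloudfront.net"
location = "static"           # StaticStorage.location
asset = "admin/css/base.css"  # one of the files collectstatic uploads

url = f"https://{CLOUDFRONT_DOMAIN}/{location}/{asset}"
print(url)
```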

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE: do not hard-code the credentials as shown below; use a mechanism to manage environment variables instead. A simple one is os.getenv().&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
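&lt;p&gt;&lt;em&gt;As a sketch of that note, the AWS settings could read their values from the environment with os.getenv(); the names mirror the settings that follow, and the fallback values are placeholders:&lt;/em&gt;&lt;/p&gt;

```python
import os

# Read secrets from the environment instead of hard-coding them.
# The second argument is a fallback used only when the variable is unset.
AWS_ACCESS_KEY_ID = os.getenv("AWS_ACCESS_KEY_ID", "")
AWS_SECRET_ACCESS_KEY = os.getenv("AWS_SECRET_ACCESS_KEY", "")
AWS_STORAGE_BUCKET_NAME = os.getenv("AWS_STORAGE_BUCKET_NAME", "NAME_OF_BUCKET")
CLOUDFRONT_DOMAIN = os.getenv("CLOUDFRONT_DOMAIN", "YOUR_CLOUD_FRONT.cloudfront.net")
```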

&lt;p&gt;Modify the settings and add the following environment variables:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DEBUG = False
AWS_ACCESS_KEY_ID = 'YOUR_AWS_ACCESS_KEY_ID'
AWS_SECRET_ACCESS_KEY = 'YOUR_AWS_SECRET_ACCESS_KEY'
AWS_STORAGE_BUCKET_NAME = 'NAME_OF_BUCKET'
AWS_S3_CUSTOM_DOMAIN = f'{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com'
AWS_S3_REGION_NAME = "YOUR_LOCATION_IN_AWS"
AWS_S3_SIGNATURE_VERSION = "s3v4"
AWS_QUERYSTRING_EXPIRE = 604800
CLOUDFRONT_DOMAIN = 'YOUR_CLOUD_FRONT.cloudfront.net'

STATIC_LOCATION = "static"
STATIC_URL = f'{CLOUDFRONT_DOMAIN}/static/'
# Add your path in the STATICFILES_STORAGE
STATICFILES_STORAGE = 'django_static.storage_backends.StaticStorage'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This should be enough. Now before you start your Django application, open a new console and collect the static files:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python manage.py collectstatic

You have requested to collect static files at the destination
location as specified in your settings.

This will overwrite existing files!
Are you sure you want to do this?

Type 'yes' to continue, or 'no' to cancel: yes

125 static files copied.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now navigate to your AWS account, go to your bucket, and check whether the files were copied to a folder named static. If they were, everything worked as expected.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing CloudFront Integration:
&lt;/h3&gt;

&lt;p&gt;Now it’s time to test whether CloudFront is serving your static files correctly. Start your Django development server and access your application. Inspect the network requests using your browser’s developer tools to verify that the static files are being fetched from the CloudFront URL you specified.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python manage.py runserver    
Performing system checks...

System check identified no issues (0 silenced).
July 30, 2023 - 16:58:13
Django version 4.2.3, using settings 'django_static.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now visit localhost:8000/admin or 127.0.0.1:8000/admin and open developer tools (in Google Chrome: CTRL+SHIFT+I), then go to the Network tab. Click one of the .css files and open Headers to check whether the requested URL points to the CloudFront URL you defined in &lt;strong&gt;settings.py&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3532%2F1%2AN_JSdwapiulhhoHex040SA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3532%2F1%2AN_JSdwapiulhhoHex040SA.png" alt="Inspecting with Developer Tools"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Enabling CloudFront Cache Invalidation:
&lt;/h3&gt;

&lt;p&gt;To ensure that your users receive the latest versions of your static files, configure CloudFront cache invalidation. There are two common approaches to achieving this. You can manually invalidate the CloudFront cache whenever you update your static files by using the AWS Management Console or AWS CLI. Alternatively, you can implement cache invalidation techniques in your Django application, such as versioning static files or appending a query string parameter.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
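&lt;p&gt;&lt;em&gt;A hedged sketch of the query-string approach: append a short content hash to each asset URL, so a changed file gets a new URL and CloudFront fetches the fresh copy instead of serving a stale cached one. The helper below is hypothetical, not part of django-storages:&lt;/em&gt;&lt;/p&gt;

```python
import hashlib

def versioned_url(base_url: str, path: str, content: bytes) -> str:
    # A short content hash makes the URL change whenever the file changes,
    # so stale copies cached at the edge are bypassed automatically.
    digest = hashlib.md5(content).hexdigest()[:8]
    return f"{base_url}/{path}?v={digest}"

base = "https://d29u7vv9xp7q8y.cloudfront.net/static"
old = versioned_url(base, "css/site.css", b"body { color: black; }")
new = versioned_url(base, "css/site.css", b"body { color: blue; }")
```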

&lt;h3&gt;
  
  
  Wrapping up
&lt;/h3&gt;

&lt;p&gt;In this article, we set up a Django application and deployed and served its static files from S3 through CloudFront.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You can find the full source code of the article on the GitHub repository, with the instructions.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="http://www.github.com/vjanz/django-static-files-s3-cloudfront" rel="noopener noreferrer"&gt;www.github.com/vjanz/django-static-files-s3-cloudfront&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you found it helpful, please don’t forget to clap &amp;amp; share it on your social network or with your friends.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you want to support my work, you can buy me a coffee by clicking the image below 😄&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/valonjanuzaj" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltkzluzpxchzvl208u7q.gif" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have any questions, feel free to reach out to me.&lt;/p&gt;

&lt;p&gt;Connect with me on &lt;a href="https://www.linkedin.com/in/valon-januzaj-b02692187/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="http://www.github.com/vjanz" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>tutorial</category>
      <category>python</category>
    </item>
    <item>
      <title>Deploy a dockerized FastAPI application to AWS</title>
      <dc:creator>Valon Januzaj</dc:creator>
      <pubDate>Fri, 03 Feb 2023 09:29:59 +0000</pubDate>
      <link>https://dev.to/vjanz/deploy-a-dockerized-fastapi-application-to-aws-94n</link>
      <guid>https://dev.to/vjanz/deploy-a-dockerized-fastapi-application-to-aws-94n</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frhqsp4bp7l246l28kybk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frhqsp4bp7l246l28kybk.png" width="800" height="288"&gt;&lt;/a&gt;&lt;br&gt;
You’ve created your FastAPI application and now you want to make it public by deploying it? No worries, we’ve got that covered.&lt;br&gt;
In this article, I am going to explain step-by-step from creating a simple application with FastAPI, dockerizing it, and deploying to AWS EC2.&lt;/p&gt;
&lt;h2&gt;
  
  
  What is FastAPI?
&lt;/h2&gt;

&lt;p&gt;From the &lt;a href="https://fastapi.tiangolo.com/" rel="noopener noreferrer"&gt;official docs&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;FastAPI is a modern, fast (high-performance), web framework for building APIs with Python 3.6+ based on standard Python type hints.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  Setting up the project:
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mkdir fastapi-demo
$ cd fastapi-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;After creating the directory for your project, dive in and create a virtual environment. You can skip this step, but it’s always good to have your dependencies isolated from the outside world. I want to keep it simple here, so I am using the virtualenv tool to create Python virtual environments.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ virtualenv &amp;lt;name_of_environment&amp;gt;
$ source venv/Scripts/activate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now install FastAPI and uvicorn:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install fastapi uvicorn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now let’s set the project structure:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$mkdir src &amp;amp;&amp;amp; cd $_ 
$ touch __init__.py main.py 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now in the main.py let’s just create an instance of the FastAPI application, add a route and test it out:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from fastapi import FastAPI

app = FastAPI*()


*@app.get("/")*
*def root(*)*:
    return *{*"Hello": "World"*}*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;So now we test whether the application is working by running it with &lt;a href="https://www.uvicorn.org/" rel="noopener noreferrer"&gt;uvicorn&lt;/a&gt;. The command has this format: &lt;br&gt;
uvicorn &amp;lt;module_path&amp;gt;:&amp;lt;app_instance&amp;gt;&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;uvicorn src.main:app -- reload # for live-reloading
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Visit &lt;a href="http://localhost:8000/" rel="noopener noreferrer"&gt;http://localhost:8000/&lt;/a&gt; to see the application in action 🔥, or &lt;a href="http://localhost:8000/docs" rel="noopener noreferrer"&gt;http://localhost:8000/docs&lt;/a&gt; to see the API documentation, which is generated automatically. Isn’t it awesome? 🙌&lt;/p&gt;

&lt;p&gt;I am not going to dive deeper into the project structure or how we could organize things better, as the goal here is just to get started with FastAPI and learn how to dockerize and deploy it to AWS. For other tips and tutorials about FastAPI, I highly recommend the &lt;a href="https://fastapi.tiangolo.com/tutorial/" rel="noopener noreferrer"&gt;official docs&lt;/a&gt;, which are well organized and dive deep into all the concepts of the framework.&lt;/p&gt;

&lt;p&gt;We set up a simple application with FastAPI, and now you want to deploy it so you can access it or share it, or whatever is your intention. There are several approaches, like deploying manually by grabbing the source code and running manually as we did locally, but we want to be smarter here, so we’re going to dockerize the application and run it on AWS EC2 instances. Let’s go!&lt;/p&gt;
&lt;h2&gt;
  
  
  Dockerizing the Application
&lt;/h2&gt;

&lt;p&gt;So we created the application and tested it locally; all good. We could have skipped creating a virtual environment and running the code locally by defining the requirements up front, but it’s always good practice to run and test the application locally before going further.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note: I am assuming that you already have Docker installed on your machine; if you don’t, get it from &lt;a href="https://www.docker.com/get-started" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Why would we use docker?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Get rid of the popular saying “&lt;em&gt;it worked on my machine&lt;/em&gt;”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Isolate the dependencies&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run in an isolated environment which will serve only to run your application and its dependencies&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Easier to deploy, etc.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the root directory of our project create a Dockerfile&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ touch Dockerfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.&lt;br&gt;
Before we write our configuration for the Dockerfile, let’s generate our application requirements:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pip freeze &amp;gt; requirements.txt 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Open the Dockerfile with your favorite editor, and write the following:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.8.1-slim # Image from dockerhub

ENV *PYTHONUNBUFFERED *1 
EXPOSE 8000 # Expose the port 8000 in which our application runs
WORKDIR /app # Make /app as a working directory in the container

# Copy requirements from host, to docker container in /app 
COPY ./requirements.txt .

# Copy everything from ./src directory to /app in the container
COPY ./src . 

RUN pip install -r requirements.txt # Install the dependencies

# Run the application in the port 8000
CMD *[*"uvicorn", "--host", "0.0.0.0", "--port", "8000", "src.main:app"*]*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Great, now let’s build our image by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker build -t fastapi-demo .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we are in the same directory, we don’t need to specify the path to the Dockerfile, since the file name follows the convention. Now wait a couple of minutes while Docker downloads the base image from Docker Hub and builds your project on top of it.&lt;/p&gt;

&lt;p&gt;To make sure that nothing failed, run docker images and you should see fastapi-demo in the list of images on your machine. Great, now let’s run it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker run -dp &amp;lt;host_port:docker_port&amp;gt; &amp;lt;name_of_image&amp;gt;

# -d - Detached mode, runs in the background
# -p - to map the port on where do you want to access the #application in my case localhost:8000/
We have exposed port 8000 in our Dockerfile so we're good to go.

So putting it all together:

$ docker run -dp 8000:8000 fastapi-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To make sure the image is running (once an image runs, it becomes a container), check with docker ps and you should see a container based on our image running on port 8000. Now visit localhost:8000 and you should see the application running just as it did locally. Done...&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying the application to AWS
&lt;/h2&gt;

&lt;p&gt;So we have created our FastAPI application and dockerized it into an isolated environment. That’s great locally, but we want to expose the application to the public, so let’s deploy it to AWS, the most popular cloud provider.&lt;/p&gt;

&lt;p&gt;Everything we use in this article should come at no additional charge, so you can use the AWS free tier to follow along; in general, do a clean-up after you have played around with it.&lt;/p&gt;

&lt;p&gt;Note: I am assuming that you already have an AWS Account and you can use free-tier services. I am going to explain everything step-by-step so let’s continue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create an EC2 instance&lt;/strong&gt;&lt;br&gt;
For the sake of demonstration, I will do everything from my AWS root account, but this is not recommended, so make sure you always create a separate user and grant it the necessary permissions to accomplish the tasks.&lt;/p&gt;

&lt;p&gt;Navigate to EC2 service, and click Launch Instance&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvw3htuat7ieq3fr2xhku.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvw3htuat7ieq3fr2xhku.png" width="592" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the Amazon Linux 2 AMI (HVM), as it’s free-tier eligible. Keep the default settings and click Review and Launch. You will then be asked to generate a new key pair, which you will use when you need to ssh into the instance; create one, name it whatever you want, then download it and keep it on your machine, as you will need it right away. Click Launch instance and your instance will be ready in a bit.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhz64iw7u5qpn9cf5pg5a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhz64iw7u5qpn9cf5pg5a.png" width="702" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the instance has been created you might want to ssh into it, and install docker, so let’s do it.&lt;/p&gt;

&lt;p&gt;In your EC2 Dashboard, after the instance state has changed to running, right-click over it, and choose Connect . That is enough to give you information on how to ssh into your instance. Leave that open, and open a new bash terminal.&lt;br&gt;
Navigate to the downloaded .pem key, and do the following:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ chmod 400 fastapi-deploy-demo.pem # Substitute with your key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;From your EC2 Dashboard, grab the latest command which tells you how to ssh to the instance and paste that into your terminal, (make sure you’re in the same directory as the .pem file is ):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Same directory where .pem key is located

ssh -i "fastapi-deploy-demo.pem" ec2-user@&amp;lt;your-details-here&amp;gt;.compute-1.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And you should be inside your instance now, so let’s install docker. Just follow along with the commands and you should be fine:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Update the installed packages and package cache on your instance.

sudo yum update -y

# Install the most recent Docker Community Edition package.sudo 

amazon-linux-extras install docker

# Start the Docker service.

sudo service docker start

# Add the ec2-user to the docker group so you can execute Docker #commands without using sudo.

sudo usermod -a -G docker ec2-user
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now reboot your EC2 instance, because in most cases the group change won’t take effect otherwise; do it from the EC2 Dashboard and follow the steps above to connect again.&lt;/p&gt;

&lt;p&gt;Now run docker info to check whether you can use Docker without sudo; if you can, you’re good to go.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a repository, and push the image to it&lt;/strong&gt;&lt;br&gt;
As we are deploying to AWS, I prefer to keep everything here, so instead of Docker Hub I am going to use Amazon ECR.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Amazon Elastic Container Registry (ECR) is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Navigate to Amazon Container Services (ECR) and create a new repository. Name it whatever you want (fastapi-deploy-demo).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fygmotxbslhd97edtfm8q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fygmotxbslhd97edtfm8q.png" width="800" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now it’s time to push our local image that we built earlier to this remote repository. You first need to authenticate with your AWS credentials from your AWS CLI (locally), so if you don’t have it, go ahead and install it from &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I am assuming that you’re already authenticated using your credentials&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now go to your ECR, and select your created repository, and click VIEW PUSH COMMANDS . Authenticate your docker client with your registry using the first command:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpzo1wdznl63ftlsu80f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpzo1wdznl63ftlsu80f.png" width="800" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that build the image again using the repository name, tag and push the image to the repository:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Navigate to the project directory first and type the following:

docker build -t fastapi-deploy-demo .

# Tag the image
docker tag fastapi-deploy-demo:latest &amp;lt;YOUR ID&amp;gt;.dkr.ecr.us-east-1.amazonaws.com/fastapi-deploy-demo:latest

# PUSH
docker push &amp;lt;YOUR ID&amp;gt;.dkr.ecr.us-east-1.amazonaws.com/fastapi-deploy-demo:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Done...&lt;br&gt;
Now you’ve got everything set up, we need to ssh to the instance and pull the image, then run it with docker.&lt;/p&gt;

&lt;p&gt;Just as we did locally, we need to authenticate on the instance. The AWS CLI already ships with it, so we just need to run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws configure

# This will ask you for:
AWS Access Key ID: []
AWS Secret Access Key: []
Default region name: []
Default output format: []

# Make sure you use the same credentials as you used when you authenticated locally with the AWS CLI
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
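&lt;p&gt;For context, aws configure simply persists these answers to an INI file at ~/.aws/credentials (the region and output format go to ~/.aws/config). Here is a small Python sketch of that file format, with placeholder values instead of real keys:&lt;/p&gt;

```python
import configparser

# `aws configure` writes an INI file like this to ~/.aws/credentials
# (the values below are placeholders, not real keys).
sample = """\
[default]
aws_access_key_id = AKIAEXAMPLEKEY
aws_secret_access_key = exampleSecretKey
"""

creds = configparser.ConfigParser()
creds.read_string(sample)
access_key = creds["default"]["aws_access_key_id"]
secret_key = creds["default"]["aws_secret_access_key"]
```

&lt;p&gt;Any tool on the instance (the AWS CLI, Docker credential helpers, SDKs) reads the same file, which is why authenticating once is enough.&lt;/p&gt;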

&lt;p&gt;Then authenticate the Docker client with the registry; see the &lt;a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/Registries.html#registry_auth" rel="noopener noreferrer"&gt;docs here&lt;/a&gt;.&lt;br&gt;
Once you are authenticated you can pull the image from ECR, so go ahead and navigate to ECR once again, click on the repository, and grab the Image URI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ddwg9y9q044ab3yt75e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ddwg9y9q044ab3yt75e.png" width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Get back to your EC2 instance and type the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker pull &amp;lt;IMAGE_URI&amp;gt; # that you grab from ecr repository
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After you’ve pulled the image, go ahead and run it as we did locally.&lt;/p&gt;

&lt;p&gt;Grab the name of the image first by running docker images and then write:&lt;br&gt;
docker run -dp 80:8000 &amp;lt;IMAGE_NAME&amp;gt;. Now we have mapped port 80 on our instance to port 8000 in the container.&lt;br&gt;
Great, verify that the container is running with the command: docker ps&lt;br&gt;
If so, we have one last thing to configure, the traffic into our instance... 🚥&lt;/p&gt;
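&lt;p&gt;Once the container is running, you can smoke-test the mapped port from any machine. Here is a minimal Python sketch (the hostname below is a placeholder; use your instance’s public IP):&lt;/p&gt;

```python
import urllib.request

def smoke_test(url, timeout=5):
    """Return True if the deployed app answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # connection refused, DNS failure, timeout, or HTTP error
        return False

# e.g. smoke_test("http://EC2_PUBLIC_IP/")  # placeholder host
```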

&lt;p&gt;Go to your instance in the EC2 dashboard, click on it, go to the Security tab, click on your security group (mine is sg-0881f2399bc659e6b), and then click Edit inbound rules.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9mvuwd7pxshenus0cxs6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9mvuwd7pxshenus0cxs6.png" width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click to add a new rule and select all traffic. This is not good for security, but it will do for the sake of demonstration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2xakomr337e1xju5642k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2xakomr337e1xju5642k.png" width="800" height="103"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, navigate to your EC2 instance, grab its public IPv4 address, and test it in your browser:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lzbvh051bp536o32iw0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lzbvh051bp536o32iw0.png" width="800" height="215"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have successfully deployed our FastAPI application on an AWS EC2 instance using Docker 🚀 🔥&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Play around with it, and when you’re done, remove the instance and the repository so you don’t get any additional charges from AWS.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;In this article, I wanted to show you step by step how to dockerize your application and deploy it on AWS. &lt;br&gt;
From my research, this is a nice and straightforward approach to deploying an application. If your intention is to run applications at scale, though, you can use other services like ECS, a fully managed container orchestration service, which lets you easily scale applications using just the images of your application, as we did.&lt;/p&gt;

&lt;p&gt;If you would like to support my work, you can buy me a coffee by clicking the link 😄:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/valonjanuzaj" rel="noopener noreferrer"&gt;BuyMeCoffe&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Feel free to reach out to me if you have any questions.&lt;br&gt;
&lt;em&gt;Connect with me on 👉 &lt;a href="https://www.linkedin.com/in/valon-januzaj-b02692187/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://github.com/vjanz" rel="noopener noreferrer"&gt;Github&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>crypto</category>
      <category>web3</category>
      <category>blockchain</category>
    </item>
    <item>
      <title>From local development to Kubernetes — Cluster, Helm, HTTPS, CI/CD, GitOps, Kustomize, ArgoCD — Part[2]</title>
      <dc:creator>Valon Januzaj</dc:creator>
      <pubDate>Fri, 03 Feb 2023 00:51:21 +0000</pubDate>
      <link>https://dev.to/vjanz/from-local-development-to-kubernetes-cluster-helm-https-cicd-gitops-kustomize-argocd-part2-4noh</link>
      <guid>https://dev.to/vjanz/from-local-development-to-kubernetes-cluster-helm-https-cicd-gitops-kustomize-argocd-part2-4noh</guid>
      <description>&lt;h2&gt;
  
  
  From local development to Kubernetes — Cluster, Helm, HTTPS, CI/CD, GitOps, Kustomize, ArgoCD — Part[2]
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8o8x62odzffmg76araas.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8o8x62odzffmg76araas.png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the second part of the series: From local development to Kubernetes — Cluster, Helm, HTTPS, CI/CD, GitOps, Kustomize, ArgoCD. &lt;a href="https://dev.to/vjanz/from-local-development-to-kubernetes-cluster-helm-https-cicd-gitops-kustomize-argocd-part1-2e64"&gt;You can find the [PART 1] here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This part includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Introduction to GitOps — ArgoCD installation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Using Kustomize to write Kubernetes manifest&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Secret management in Kubernetes with SealedSecrets&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add a basic Continuous integration pipeline with GitHub actions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Creating and running our services in ArgoCD&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Intro to GitOps — Continuously integrating and deploying applications with ArgoCD, Kustomize, and GitHub Actions
&lt;/h2&gt;

&lt;p&gt;If we go to the &lt;a href="https://github.com/vjanz/kubernetes-demo-app" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;, you can find all the manifests that we have worked on so far: Deployment, Service, ClusterIssuer, Ingress, and Secret. This is okay up to a point… but look at that secret.yaml file, which really shouldn’t be there, especially since it’s only base64-encoded, which we know is trivial to decode. Aside from that, it’s hard to separate environments: production, staging, development, etc. There can also be lots of inconsistencies: I can have some manifest locally and apply it while another version of that manifest exists in the repo, so it’s very hard to reproduce the same environment if we delete everything.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ga1q5t6vx80cks9chr9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ga1q5t6vx80cks9chr9.png" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ideally, we would love to have a reproducible environment where everyone that works on infra knows the state of an application and can easily make changes to the manifest while everyone sees the changes, so let’s aim for this!&lt;/p&gt;

&lt;h3&gt;
  
  
  Intro to GitOps — What is GitOps
&lt;/h3&gt;

&lt;p&gt;GitOps is a way to manage infrastructure and applications using Git as a single source of truth. It’s a method to use Git as a centralized source of truth for declarative infrastructure and applications. The idea is to use Git to store the desired state of the infrastructure and applications, and then use automation tools to ensure that the actual state matches the desired state. This approach helps to ensure that the infrastructure and applications are always in a known, good state, and it makes it easy to roll back changes if something goes wrong.&lt;/p&gt;

&lt;p&gt;GitOps solves several problems in software development and deployment, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Version control: By using Git as the central source of truth, GitOps allows teams to easily track changes and roll back to previous versions if necessary.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Collaboration: GitOps allows multiple people to work on the same codebase and infrastructure, making it easier to collaborate and share knowledge.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automation: GitOps uses automation tools to ensure that the desired state of the infrastructure and applications is always in sync with the actual state, reducing human error and increasing efficiency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Auditability: With GitOps, every change is tracked and auditable, making it easier to understand how and why changes were made.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Speed: GitOps allows for faster deployment and rollback, as well as faster iteration and experimentation, as teams can quickly and easily test new features and changes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scalability: GitOps allows teams to scale their infrastructure and applications easily and efficiently, with the ability to easily add and remove resources as needed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr39ckod06c4uernf52qc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr39ckod06c4uernf52qc.png" alt="Gitops — source: Braindose" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Gitops is a methodology that uses Git as the single source of truth for infrastructure and application deployments. It is not a specific tool, but rather a way of organizing and managing the deployment process. ArgoCD is a tool that implements the GitOps methodology by automating the deployment and management of applications and infrastructure in a Git-based workflow.&lt;/p&gt;
&lt;/blockquote&gt;
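&lt;p&gt;To make the reconciliation idea concrete, here is a toy Python sketch of the core loop; this is only an illustration of the concept, not how ArgoCD is actually implemented:&lt;/p&gt;

```python
def reconcile(desired, actual):
    """Compare the desired state (from Git) with the actual cluster
    state and return the corrective actions a GitOps operator would take."""
    actions = []
    for name, spec in desired.items():
        if actual.get(name) != spec:
            # resource is missing or has drifted: (re)apply it
            actions.append(("apply", name))
    for name in actual:
        if name not in desired:
            # resource exists in the cluster but not in Git: prune it
            actions.append(("delete", name))
    return actions

desired = {"deployment": {"replicas": 3}, "service": {"port": 80}}
actual = {"deployment": {"replicas": 1}, "old-job": {"done": True}}
actions = reconcile(desired, actual)
```

&lt;p&gt;Running this yields apply actions for the drifted deployment and the missing service, and a delete action for the resource that is no longer in Git, which is exactly the behavior described above.&lt;/p&gt;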

&lt;h3&gt;
  
  
  What is ArgoCD — Installing ArgoCD in the cluster — Implementing GitOps methodology
&lt;/h3&gt;

&lt;p&gt;ArgoCD is an open-source GitOps tool that automates the deployment of applications to Kubernetes clusters. It uses Git as the source of truth and continuously monitors the state of the cluster to ensure that it is in sync with the Git repository. ArgoCD also provides a web-based UI that makes it easy to view and manage deployments.&lt;/p&gt;

&lt;p&gt;To install ArgoCD we can use the same methodology as before: install it from the Helm chart, or directly apply the manifests that ArgoCD has published.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create namespace argocd
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;After a couple of minutes, ArgoCD should be installed and we can port-forward the service and access the ArgoCD UI locally, so let’s do that:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl port-forward --namespace argocd svc/argocd-server 3000:443

# Get password - Use this password when logging in from the UI
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now visit localhost:3000 and log in:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;username: admin
password: the password that got output when you executed the command above
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Normally when working with GitOps, there is a well-known pattern of keeping a separate Git repository with all the manifest/infrastructure-related resources in a declarative way. For this, I am going to create a new repository named kubernetes-demo-gitops.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  ~ gh repo create kubernetes-demo-gitops --private
✓ Created repository vjanz/kubernetes-demo-gitops on GitHub
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;I will create an Argo AppProject so we can separate the applications instead of leaving them in the default project. Think of an AppProject as being like a namespace in Kubernetes, just a layer to isolate resources.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;
I will push argo-project.yaml to my new repo as projects/fastapi-app.yaml so the repo is not empty. Now my GitOps repo will look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
└── projects       
    └── fastapi-app.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now let’s connect the GitHub repo that we just created to ArgoCD. In the ArgoCD UI, navigate to Settings &amp;gt; Repositories &amp;gt; Connect Repo&lt;/p&gt;

&lt;p&gt;We need to generate an SSH key pair: the public key is added to the repository’s deploy keys, and the private key is added to ArgoCD:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  .ssh ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/pc/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/pc/.ssh/id_rsa
Your public key has been saved in /home/pc/.ssh/id_rsa.pub
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now we need to do two things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Add the public key that we generated to GitHub deploy keys&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add the private key on ArgoCD&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Copy the public key (the file ending in .pub) and add it as a deploy key in the repository on GitHub:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fty47273nghkldiqjccjd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fty47273nghkldiqjccjd.png" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now navigate to ArgoCD and add the private key of the same pair that you generated.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat ~/.ssh/id_rsa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjqhkf331elmtk0ensot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjqhkf331elmtk0ensot.png" alt="Adding a private key and connecting the repo to GitOps" width="800" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now the repo is connected and we can continue to write the manifests. Argo CD supports a wide variety of templates, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Kubernetes manifests in YAML or JSON format&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Helm charts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kustomize bases and overlays (We are using this one)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;JSONnet and Jsonnet templates&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ksonnet and ksonnet-lib&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Templates in the Open Policy Agent (OPA) Rego language&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Argo CD’s own JSONnet library (argocd-lib)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Install Kustomize from &lt;a href="https://kubectl.docs.kubernetes.io/installation/kustomize/" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  What is Kustomize — Convert manifests in Kustomization way — Start using SealedSecrets to manage secrets
&lt;/h3&gt;

&lt;p&gt;Kustomize is a tool used to customize Kubernetes manifests, which are files that define the desired state of a Kubernetes cluster. It allows you to modify and extend existing manifests, or create new ones, without having to write everything from scratch.&lt;/p&gt;

&lt;p&gt;For example, you may have a base manifest that defines the deployment of a certain application, and you want to use that same manifest in multiple environments, but with some slight variations. With Kustomize, you can create a separate “overlay” for each environment, that specifies the specific changes you want to make to the base manifest, and then apply those overlays to the base manifest to generate a final, customized version that you can use to deploy the application.&lt;/p&gt;
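&lt;p&gt;As a mental model, here is a toy Python sketch of what “base plus overlay” means; real Kustomize performs full strategic-merge patching, while this only shows resource aggregation and namespace injection:&lt;/p&gt;

```python
import copy

def render(base_resources, overlay):
    """Toy model of a Kustomize overlay: combine the base resources with
    overlay-only resources, then stamp the overlay's namespace on all of them."""
    rendered = copy.deepcopy(base_resources) + copy.deepcopy(overlay.get("resources", []))
    for resource in rendered:
        resource.setdefault("metadata", {})["namespace"] = overlay["namespace"]
    return rendered

base = [{"kind": "Deployment", "metadata": {"name": "kubernetes-demo"}},
        {"kind": "Service", "metadata": {"name": "kubernetes-demo"}}]
development = {"namespace": "kubernetes-demo-dev",
               "resources": [{"kind": "Ingress", "metadata": {"name": "dev-ingress"}}]}
rendered = render(base, development)
```

&lt;p&gt;The base stays untouched, and each environment only declares its own namespace and extra resources, which is exactly the layout we build next.&lt;/p&gt;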

&lt;p&gt;Now we’re going to take the plain manifests that we wrote at the beginning (deployment, ingress, service, etc.) and convert them into the Kustomize format.&lt;/p&gt;

&lt;p&gt;In our &lt;a href="https://github.com/vjanz/kubernetes-demo-gitops" rel="noopener noreferrer"&gt;GitOps repo&lt;/a&gt; create a new directory named apps. This is the directory where we will list all the apps that we want to manage with ArgoCD. Remember, according to GitOps your Git repo can (and should) manage all the projects and infrastructure for your organization. Copy the deployment and the service that we created in previous parts to apps/fastapi-service/base without making any modifications for now. The repository structure should look like this:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── apps
│   └── fastapi-service
│       ├── base
│       │   ├── deployment.yaml
│       │   ├── kustomization.yaml
│       │   └── service.yaml
│       └── overlays
│           ├── development
│           └── production
├── argocd
└── projects
    └── fastapi-app.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;In base, I added resources that will be part of every environment, while in overlays &amp;gt; development and production I will add things that are related only to those environments. Simply put, add everything that is common to all environments to base, and anything environment-specific to the overlays/ directory.&lt;/p&gt;

&lt;p&gt;Let’s build the base first. In the base directory, I am going to paste the deployment and service as they are, and then we’ll make some modifications. You can also see that I’ve included a new file named kustomization.yaml, where I define which resources I want to include.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;Normally when you create a new programming feature and push it to the GitHub repository we want to deploy that change. What happens is we create a new docker image and we push it to the registry. After that, we update the Kubernetes deployment to use the new image. So image has to be changed dynamically for example vjanz/kubernetes-demo:v1 can be one version and another one can be vjanz/kuberentes-demo:v2-my-feature&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is where the beauty of Kustomize comes in, as we can put some placeholders in our manifest and then update them as we want. Edit base/deployment.yaml and update:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;containers:
  - name: kubernetes-demo
    image: valonjanuzaj/kubernetes-demo:latest

to:

containers:
  - name: kubernetes-demo
    image: my-image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;I updated the image name to my-image, which doesn’t make much sense on its own because it isn’t even a valid image name, but now we can use Kustomize to update my-image to something else in an indirect way.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# change directory to base
$ cd apps/fastapi-service/base
$ kustomize edit set image my-image=valonjanuzaj/kubernetes-demo:something
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;This command won’t change deployment.yaml directly, but it will update the kustomization.yaml file:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml

# It will add the following part 
images:
- name: my-image
  newName: valonjanuzaj/kubernetes-demo
  newTag: something
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;The next time we use Kustomize to build a specific environment, the tool will look at the kustomization.yaml file and update the values accordingly. Let’s verify this by building the base manifests with Kustomize:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Assume we are in the base directory
$ kustomize build apps/fastapi-service/base
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;If you look carefully at the output, you can see that the image is no longer my-image, but whatever we set it to before:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;....  
 - envFrom:
        - secretRef:
            name: demo-secrets
        image: valonjanuzaj/kubernetes-demo:something # Updated by Kustomize
        name: kubernetes-demo
        ports:
        - containerPort: 8000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
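&lt;p&gt;To see what just happened, here is a simplified Python sketch of Kustomize’s images transformer; the real transformer handles more fields (digests, multiple resources), but the substitution logic is the same idea:&lt;/p&gt;

```python
def apply_images(containers, images):
    """Rewrite container image references according to a kustomization's
    `images:` list -- a simplified model of the Kustomize transformer."""
    overrides = {img["name"]: img for img in images}
    for container in containers:
        name = container["image"].split(":")[0]  # strip any existing tag
        if name in overrides:
            override = overrides[name]
            container["image"] = override["newName"] + ":" + override["newTag"]
    return containers

containers = [{"name": "kubernetes-demo", "image": "my-image"}]
images = [{"name": "my-image",
           "newName": "valonjanuzaj/kubernetes-demo",
           "newTag": "something"}]
updated = apply_images(containers, images)
```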
&lt;p&gt;Awesome, isn’t it? Now let’s imagine a scenario where the CI (Continuous Integration) pipeline builds a new Docker image and tags it with some hash, for example valonjanuzaj/reponame:0447995; then we can easily update the manifests with Kustomize to use the newly generated image (we’ll do exactly this when we implement the CI/CD pipeline).&lt;/p&gt;

&lt;p&gt;Now remove the images key from the kustomization.yaml in the base directory, because we will start creating our development environment in the GitOps repo. In overlays/development we create a new file named kustomization.yaml, which holds a reference to the base directory plus any additional resources you want to add:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resources:
  # references everything in base directory, as we want to include them
  - ../../base 
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kubernetes-demo-dev # we put the namespace here
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now we have imported the deployment and service, which are common to all environments, and we added a namespace to separate resources; next we will add resources that only make sense for a specific environment. Let’s start by adding an ingress for development.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;
Now we need to create an A record for this subdomain, but I will not go through how to do it as it’s explained above. Let’s add this to the kustomization.yaml in the development directory.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resources:
  - ../../base
  # added ingress to kustomization on /overlays/development as
  # I want to use this ingress only for development
  - ingress.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kubernetes-demo-dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;In the first part, when we deployed the application in the traditional way, we saw that secret management becomes hard: a base64 value is easy to decode, so it wasn’t safe at all to push the secrets to the repository. In the GitOps methodology the secrets should also be part of the repository, as we want to build a system that is easily reproducible. So let’s find a way to make the secrets safe even when they’re in the repository.&lt;/p&gt;
&lt;h3&gt;
  
  
  Secrets management in GitOps — SealedSecrets
&lt;/h3&gt;

&lt;p&gt;SealedSecrets is a Kubernetes-native solution for managing secrets using GitOps. It allows you to encrypt sensitive information like passwords, API keys, and certificates and store them in your Git repository while keeping them securely encrypted at rest and in transit.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1q0hu76v7nnqlbciduwr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1q0hu76v7nnqlbciduwr.png" alt="Source: Bitnami" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To start working with SealedSecrets we need two things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Install SealedSecrets in the cluster&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install kubeseal locally (CLI tool) to encrypt the secrets&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To install SealedSecrets in the cluster, we can once again use a Helm chart:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  ~ helm search repo sealed                  
NAME                          CHART VERSION APP VERSION DESCRIPTION                                       
my-repo/sealed-secrets        1.2.1         0.19.3      Sealed Secrets are "one-way" encrypted K8s Secr...
sealed-secrets/sealed-secrets 2.7.3         v0.19.4     Helm chart for the sealed-secrets controller.

# Installation
$ helm install sealed-secrets my-repo/sealed-secrets --namespace kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;The command will install a controller in the cluster, in the kube-system namespace, and it will also create a certificate that will be used to encrypt the secrets. This is great because even though we commit the secrets to the repo, they are encrypted with a certificate whose private key exists only in our cluster, so they cannot be decrypted anywhere else.&lt;/p&gt;

&lt;p&gt;To install the kubeseal tool locally, follow the &lt;a href="https://github.com/bitnami-labs/sealed-secrets#installation-from-source" rel="noopener noreferrer"&gt;instructions here&lt;/a&gt;. After you install kubeseal, we can easily create secrets that are safe to push to the repository. Let’s grab the Postgres secret that we created earlier and convert it to a SealedSecret.&lt;/p&gt;

&lt;p&gt;Our old secret.yaml looks like this:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: demo-secrets
type: Opaque
data:
  POSTGRES_USER: cG9zdGdyZXM=
  POSTGRES_PASSWORD: TDVYT3lacTViUg==
  POSTGRES_PORT: NTQzMg==
  POSTGRES_DB: a3ViZXJuZXRlcy1kZW1v
  POSTGRES_SERVER: cG9zdGdyZXMtcG9zdGdyZXNxbC5wb3N0Z3Jlcw==
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
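&lt;p&gt;To see why this is unsafe, remember that base64 is an encoding, not encryption: anyone with the manifest can recover every value instantly. For example, in Python:&lt;/p&gt;

```python
import base64

# base64-encoded values taken from the secret.yaml above
encoded = {
    "POSTGRES_USER": "cG9zdGdyZXM=",
    "POSTGRES_DB": "a3ViZXJuZXRlcy1kZW1v",
    "POSTGRES_PORT": "NTQzMg==",
}

# decoding requires no key at all
decoded = {key: base64.b64decode(value).decode() for key, value in encoded.items()}
```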
&lt;p&gt;Now let’s create a version of this with the kubeseal tool, which uses the certificate that exists in our cluster to encrypt the data:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubeseal \
  --controller-name=sealed-secrets \ # name of the controller
  --controller-namespace=kube-system \ # namespace where controller is 
  --scope cluster-wide \ # To allow decryption from all 
  --format yaml &amp;lt; secret.yaml &amp;gt; sealed-secret.yaml # Output file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now the generated sealed-secret.yaml can be added to the GitOps repo as it’s encrypted using the certificate that is inside the cluster, and the file would look something like this in my case:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  annotations:
    sealedsecrets.bitnami.com/cluster-wide: "true"
  creationTimestamp: null
  name: demo-secrets
spec:
  encryptedData:
    POSTGRES_DB: AgBLjgBM7CIlaEnxTWVedkmhd5HgO+Ep9HUNfwGNLe4K7tFS540xoVvvwV9g7UZJM547dcn3F5thfKal4ilah4UixQ1Y5w9ZG38jf4zo1AwiXaV+1YdvXjC7NLRAQhh3Ya8bwJT7QOJRS0vGioJRWkB9BY5JUHlgTJHvVcuMdwoD0vR34M3Z5XswrMr+uBBLyasrDSKtrhhIOxGTsMHtYWzfWm2UiJRp/s6hnsZG4N5IAFDB8HYcMCWGkTtZ3DGsX6XD30JrK6txpGmRb4PjYIFiKtFgp3uKWHS4XN2rgiK0VdpvgdZbgLVclX24NK+o/P+75cwHyVP6aGY9DIWFmovj3afaEfVYcnO91EC0l0V716HE6q3lKB204hpsZ3ioTPV+9MSzW6YixafX2t+J2wQiUd8q996v5lNWomRPdyjw0P2lXzJUbjkQjHeMK2UBu7xLz/ODb7QDhQpOnKGm/Wz1Tj/brd6vpDWVdntY8+9KDW1n3e6E4po8P2P09ihP4OtkFbD2jKC/56FUvV5y3wlP5XJxn9jqoZJBcq4PGS3cpngjUSimOfc9WpvG3wLhpXSFLlVXrWxoPeklBcX++sy4blFQ34JHXiD+48qk5GY7ceW45PcJOQv6REUUo2OCdWt9KUqrWiC6WgM1STJ7sScWNHL0ito9N3eNm+T8nwaI/jNwSLNCQNHkIUId8t86w9NqclSUsda1xQuGUqj/JU8=
    POSTGRES_PASSWORD: AgCBdPMRPxax/teLQw9fzLiYnXcDOZY+6Ly+eqem2qIePzg65guoTkqnMaQnCi3veUIi4RlimiMBoLxhMe6sHBX2LrEESfbtf2nRtZrs69QNG1lvOKMylXNpNHYISEKh6D3GWApcdYM/phXr0QbMZY6+CP0dMAn3tPTXBj1HZ3MJgwZYMKnKdAQbY49FprHwO0N28te96IvqagdlEIWKkYXBtazHG7lAIJDKfleHDyWa1FLjbtrjb+oXbx3eBd/scKagYdZc/I7EkelbMKNuzGgMRnKjaN/fez1dvwnzPWqRKgiAMQP05jfO15bjOWGqlwU2UFd1RQuB1gzrJViDo3tWI7vYXpegIWbBPes1jCC3y5hybxprGoWMkiMRXmj+anVbLRl1ZH+SRcZldCUOTzhUFI9J/vc2rb6kOj+aetR0eJKrhZ6/SkR5Sa9kHzakUDROmdC9cIzSVfZ3RA1aBSs56JCX7gLvDndPGpT/BFfMMt41DiA6O7TtM3CEM/qB+YDs9XFJVPsDlHkdMziv0bAR5jRNQTa5xCTSMt6VU/ef0+415pv1iJqau85TK5hSptSq/3Fn6ARhTtcw1RpvY3USd8PDVHMbQkdLW5SnEAFp37WUrjjqi7VcrGcVNGwQZbAzzyg4ns3EQ3p3TU4uXzbcTHeLHfzA/NKDRAzqMV1d3PWPlMuDfVqfQaXBPv8LKeFMWNt75URrUov4
    POSTGRES_PORT: AgCLN36HvCVsCc+7MT8IwP9bpcTJcoqh80USdqsUmhwEFYhzAo8Kux3/gwxnghDDvEya9WCQSAEbuAD6hX4Yo7T+sbSD5+oxDRZAU+YFPdYjAJs0tOhMAM2AwAmj1cJLoGFRUqqCFI2uFRExB1nJkr1e0QOgingnnLWPPvUIP0v/Gj3Bh/+FC925LppZcjJxuJ9xyRYuj1bqLoqAmw9YtXPYOArlAYn1t1+xDseSxvAYo7UU1+QCx82zBZVyXnEpyPaGjKsqIE9O4MaV4g62W7VbBNtRbK7lCinggjFzQLv/T8s0IVgmqGMtou4oPamtlZN8OThUZF2W5B+PBBBHsKXIiOAoWVCF27x3mEC7OLlpRwwVpic4y9nDHkLLg2V0Wpcnu8m41voyjQywT8fDP2ogl3sDHeUpouG2UduumWz4PZpDyNBriJ9cZUwa+de00mLftA170scDBqqw3hTkmvnbwoy/+L6mYjJn1/yl1lUUMd4ezYm1Ki7dwRzrXfvy/zIpHHQt2T1Av2JHpCEhlW8DoBPAP5C0nobUUSRxNqvBgeN81GRORohsfKjs76wwSJyyOXuU4Y9eLD0JDZuJ9aei6T/jAd8nubcef7jw2pwIuAQ5RrCGot8mHWQwXjd52/XWqyVUzcksqnMcsuK7u93/SKcKZb9tr2wMzLw75PjYRXYrTblyH5DAmOAhNVjLzculWGDZ
    POSTGRES_SERVER: AgBTTiwCjdNsETS/9aoJzvSVtPsUWfY5ZSnHmsDxQxgaPR1TWbZx1iNuZjOIw0XZm6T3OBnGoVKq09kQdMS0DOvOtZ+XNoP7S+88Ee82lyymMZyCBDlMcAUyHxR6Xa/RpqE1IFtZB5m/aubN23A9vevZkH73cwWTwl/CTVmsb9x0dY6W3NExOG5FQ7HaOsTrnyirDZSyLRYGnYNCeqzY1OFPiPQLcyYJoFwDfATQ7e7x0O3S6vhnj/KeUCxunsMpSsIavjdo/t8DgFtkUhNaWfCr3LWB4WdL2uIeCs8gebyzO7xaxR+/XKCHSrH9WeLHkQknwwfVdWFidGMtUXLvUvOM26EsrmIcAnioD6rxpRtIszWuDYNAl7qdk+s4WsXJFWuiUzALNioWuwUGDlICb6ViWGdlbTXI2W8PYQFHiuCTByGk93hc46T+jdsiM+gxzik5FdhFMAnsqZLzkvfqJfeBT5Sr/+AGfjke/SH5ses/KB+61NtCRiBwaL10S73KwKmzk6wC/zBv1sEICWJhf08Z+VU2q76HcJJXu9Ll66uvo/YWViNPR1W7Rt881QGzof1/MEf3Rc1xy0Ni65Z87mQEMs68wzjLb2eLpPk5x3AAPgjGVQw1CVgnutoOlwZevwayCP/5kNIE5Bzhm4pgx41sjeBItqZxvkXqgNpBIfcxKPs6rECgLRws1tRNId0xLhkkvfma5ckpC0M4UlagplpnfuriNzOnZv3MZ+q2
    POSTGRES_USER: AgCTJA1LSrYSdLn/3IrGyxjbI3LXIu5sN+Swt84RsLDgOvqAuqJ9aL8YHotWwumUBpxxdm3MdCUoTcWqnTgUSst9hRvgrO72YQr9ej2YBe5CBbPXbO6Y7PQNbm0rz73AKo7HAI5GOP77Kd/o2ovos9f1dWLayI7+6HDvl4FjBCHTQ8B+e2kJnBHSB8/P6PdGAOks5qMBK8hCMu9gUjpxygGgZDAiW+ITInbzKABh+6AbMgXSl7WRGZVgwGZ1Mlezh13rDOmMhy/68j2s5HaOlgnvGmJYqyS3k5NegO7nhTwvlqzQCI/zuca8e+PpoedIa6XG3c6Y91psrIwZn2e7KT2z5sXUSRq/IX+PnH+Qx+SJIJP/0UL2cuVal+DXyr1jA2lZSizFBW5ZpLteDVRMFcjwVj0gaz5EUkpeGJPRJa/yQVi/KnB/KHRx8VxxE6k2fDY3NKb9hsx6E0Mjwsc9fdHC5W18pbwc/QGiN7bO9WSCusoibletolLS9eS7YBEORG+4LiYyhzI9KAzp3a9FZu15R4CZW6QNvxjo7sOAMkH71U8JNpz1h77bpKelEgPpVPtS9WCFQ8acGsJn+kVpxV28TdOPX4CKBdiuv/pSKvfyZ54nxZToJZuQ8hwjjUf2LXcqYIqvIR7Oe5wO7JFobKIXHUVcPndhY5SPGjShH9+HhaTvAL1h6DkkeYsPMtoJ47vLgMDDgtnqdA==
  template:
    metadata:
      annotations:
        sealedsecrets.bitnami.com/cluster-wide: "true"
      creationTimestamp: null
      name: demo-secrets
    type: Opaque
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now I will add this file to the GitOps repo under overlays/development with the name sealed-secret.yaml, and I will update the kustomization file to include this resource. My kustomization file will look like this:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resources:
  - ../../base
  - ingress.yaml
  - sealed-secret.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kubernetes-demo-dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Let’s explore another feature of kustomize by doing something you may actually need. The base deployment has 2 replicas of the pod, but what if we want to scale up the number of replicas for a specific environment only (say, development)? We can create a patch and apply it only for development. So let’s create a new file named replica-count.yaml in the development overlay, as follows:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-demo
spec:
  replicas: 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;and in kustomization, modify as:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resources:
  ...
# you add this key
patchesStrategicMerge:
- replica-count.yaml
...
namespace: kubernetes-demo-dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now if we check with kustomize:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kustomize build apps/fastapi-service/overlays/dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;we will see that the number of replicas for the development environment has changed to 3. This is the power of kustomize when it comes to separating environments: you can extend what you need and override anything you want, per environment.&lt;/p&gt;
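&lt;p&gt;Under the hood, a strategic-merge patch behaves much like a recursive dictionary merge: fields present in the patch override the base, and everything else is kept. A minimal Python sketch of the idea (a simplification, not the real kustomize implementation):&lt;/p&gt;

```python
def strategic_merge(base: dict, patch: dict) -> dict:
    """Recursively overlay `patch` on top of `base` (simplified sketch)."""
    merged = dict(base)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = strategic_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Base deployment (2 replicas) overlaid with the development patch (3 replicas)
base = {"metadata": {"name": "kubernetes-demo"}, "spec": {"replicas": 2}}
patch = {"metadata": {"name": "kubernetes-demo"}, "spec": {"replicas": 3}}
print(strategic_merge(base, patch))  # replicas becomes 3, everything else is kept
```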

&lt;p&gt;At this point, we have created all the resources necessary to replicate a development environment using ArgoCD and Kustomize. We won’t apply these manifests through kubectl or in any other manual way, because we want to keep everything consistent and reliable; instead, we are going to let ArgoCD apply these manifests and create the resources accordingly. Next, we need a Continuous Integration pipeline for our sample app that builds the Docker image, pushes it to the registry, and updates the GitOps repo with the new image tag, so that ArgoCD picks up the change and deploys the new version of the application.&lt;/p&gt;
&lt;h2&gt;
  
  
  Continuous Integration with GitHub Actions — Continuous Delivery with ArgoCD
&lt;/h2&gt;

&lt;p&gt;In the GitOps world, the CD part is normally handled by a tool that implements GitOps, like ArgoCD or Flux. We are using ArgoCD, which is connected to the repo that holds the manifests — the one I am referring to as the GitOps repo — and when there is a change, ArgoCD synchronizes it automatically into the cluster.&lt;/p&gt;

&lt;p&gt;Let’s go to our codebase and create a CI pipeline. GitHub expects the workflows to live under .github/workflows, so let’s create the directories and the respective files, and also a branch for development:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  kubernetes-demo-app git:(main) ✗ mkdir -p .github/workflows           
➜  kubernetes-demo-app git:(main) ✗ touch .github/workflows/workflow.yaml
➜  kubernetes-demo-app git:(main) ✗ git add .
➜  kubernetes-demo-app git:(main) ✗ git commit -m "Add workflow files"                                     
 create mode 100644 .github/workflows/workflow.yaml
➜  kubernetes-demo-app git:(main) git push 
# I created a new branch as I want to associate development with the
# overlay that I created on GitOps repo
➜  kubernetes-demo-app git:(main) git checkout -b development       
Switched to a new branch 'development'
➜  kubernetes-demo-app git:(development)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;We are going to use GitHub Actions to build our pipeline. GitHub Actions is a powerful tool for automating software development workflows. It allows you to trigger actions based on events in your GitHub repository, such as commits, pull requests, and releases. In this case, we will use GitHub Actions to trigger a deployment to our Kubernetes cluster every time a change is pushed to a tracked branch.&lt;/p&gt;
&lt;p&gt;Our CI pipeline has two goals:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Build the docker image and push it to the registry&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Update the repo that we use to store manifests with the latest image that is pushed&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;

&lt;p&gt;Let’s explain the workflow a bit. There are three jobs:&lt;/p&gt;
&lt;/li&gt;

&lt;li&gt;&lt;p&gt;build: This job runs on an ubuntu-latest environment and checks out the code from the repository. It then sets up Python 3.9 and installs any dependencies specified in the requirements.txt file.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;build-and-push: This job also runs on an ubuntu-latest environment and sets up QEMU and Docker Buildx. It then logs in to Docker Hub using the username and token stored as secrets, and builds and pushes a Docker image to Docker Hub with the tag valonjanuzaj/kubernetes-demo:$github.sha, where github.sha is the commit SHA.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;update-manifest: This job checks out a separate repository named vjanz/kubernetes-demo-gitops, which contains the Kubernetes manifests and updates the development manifests in the $K8S_YAML_DIR/overlays/development directory with the new image version, which is built and pushed in the build-and-push job. The changes are then committed and pushed back to the repository.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;
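&lt;p&gt;In case the embedded gist does not render here, below is a hedged sketch of what such a three-job workflow could look like. Action versions, secret names (DOCKERHUB_USERNAME, DOCKERHUB_TOKEN, GH_PAT), and the manifest-update step are illustrative assumptions, not the exact file from the repo:&lt;/p&gt;

```yaml
name: CI
on:
  push:
    branches: [development]
  pull_request:
    branches: [development]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.9"
      - run: pip install -r requirements.txt

  build-and-push:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - uses: actions/checkout@v3
      - uses: docker/setup-qemu-action@v2
      - uses: docker/setup-buildx-action@v2
      - uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v4
        with:
          push: true
          tags: valonjanuzaj/kubernetes-demo:${{ github.sha }}

  update-manifest:
    runs-on: ubuntu-latest
    needs: build-and-push
    steps:
      - uses: actions/checkout@v3
        with:
          repository: vjanz/kubernetes-demo-gitops
          token: ${{ secrets.GH_PAT }}
      - run: |
          # Point the development overlay at the freshly pushed image tag
          cd apps/fastapi-service/overlays/development
          kustomize edit set image valonjanuzaj/kubernetes-demo:${{ github.sha }}
          git config user.name "ci-bot"
          git config user.email "ci-bot@users.noreply.github.com"
          git commit -am "Update image to ${{ github.sha }}"
          git push
```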

&lt;p&gt;Basically, what happens is that each time we create a pull request or push to the development branch, a new image is built and pushed to the registry. Aside from this, the pipeline will check out our &lt;a href="https://github.com/vjanz/kubernetes-demo-gitops" rel="noopener noreferrer"&gt;GitOps&lt;/a&gt; repo and update the image tag to the latest one, based on the environment where the pull request or push happened.&lt;/p&gt;

&lt;p&gt;There are some secrets used in the workflow. To set up GitHub secrets, &lt;a href="https://github.com/Azure/actions-workflow-samples/blob/master/assets/create-secrets-for-GitHub-workflows.md" rel="noopener noreferrer"&gt;see the instructions here&lt;/a&gt;, and to set up a GitHub personal access token (PAT), see the &lt;a href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token" rel="noopener noreferrer"&gt;instructions here&lt;/a&gt;. The secrets correspond to the Docker Hub login credentials and the GitHub personal access token needed to access the other repository.&lt;/p&gt;

&lt;p&gt;The updated manifests in the GitOps repository are then pulled by ArgoCD, which ensures that the deployed application in the cluster is in sync with the desired state defined in the GitOps repository. This way ArgoCD ensures that the application version deployed in the cluster is always up-to-date and aligned with the version in the GitOps repository.&lt;/p&gt;

&lt;p&gt;That’s everything, we have set up our CI pipeline which will build, push the image and then update the repository which holds the manifests. After the update is done, ArgoCD will see the changes and it will update the cluster accordingly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Check if the pipeline is working as expected
&lt;/h3&gt;

&lt;p&gt;Now let’s check if the pipeline that we built is working as expected. Normally all the jobs should pass, the image should be built, and there should be a commit in the &lt;a href="https://github.com/vjanz/kubernetes-demo-gitops" rel="noopener noreferrer"&gt;GitOps repository&lt;/a&gt; with the new image tag.&lt;/p&gt;

&lt;p&gt;Let’s make a change in one of the routes and push the code!&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@app.get("/health")
def health():
    return {"status": "App is running!!"}

$ git add .
$ git commit -m "Update /health endpoint"
$ git push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;If everything is set up correctly (keep an eye on the GitHub secrets), the workflow should complete without any errors. If we check the GitOps repo, we should see an update on overlays/development, since we pushed to the development branch. We have configured a change on the development branch to deploy to the development environment on Kubernetes, and we can likewise configure a push to main to update the production environment (this is up to your preferences).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8gdsytyp024cplyow0r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8gdsytyp024cplyow0r.png" alt="The pipeline has pushed the new tag to the registry" width="800" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3446q62amr5xexqdoutq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3446q62amr5xexqdoutq.png" alt="GitHub actions updating the tag on the GitOps repo" width="800" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Perfect, so now let’s just add our application to be monitored by ArgoCD, as this is the only step missing in the picture.&lt;/p&gt;
&lt;h3&gt;
  
  
  Setting up the application on ArgoCD — Automating the whole workflow!
&lt;/h3&gt;

&lt;p&gt;Now that the continuous pipeline is set up and the application is continuously being integrated, all we need to do is to set up the ArgoCD application, which will listen for changes in a specific directory, and if there is any change it will automatically deploy. In our case, we have added our manifest at: &lt;a href="https://github.com/vjanz/kubernetes-demo-gitops/tree/main/apps/fastapi-service/overlays/development" rel="noopener noreferrer"&gt;apps/fastapi-service/overlays/development&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;There are at least two ways to add an application in ArgoCD:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Configuring all the options from the ArgoCD UI.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Adding the application in a declarative way (I recommend this one)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Creating an Argo application in a declarative way is generally considered to be better than creating it from the UI for a few reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Reproducibility: Declarative manifests allow you to version control your application configurations, making it easy to roll back to a previous version if something goes wrong.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automation: Declarative manifests can be easily automated, allowing for repeatable and consistent deployments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Auditability: Declarative manifests provide a clear and concise representation of the desired state of the application, making it easier to understand and audit the configuration of the application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Portability: Declarative manifests can be easily ported across different environments, allowing for simpler migration and disaster recovery.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Easier to scale: Declarative manifests can be easily scaled up or down with minimal changes, making it easier to manage the application as it grows.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below you can find the Argo Application for our service in declarative form:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;

&lt;p&gt;Let’s explain what those configurations mean:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;name — the name of the application that is installed on ArgoCD (can be anything)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;project — name of the project we want this application to be associated with&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;repoURL — the repo that we have linked with our ArgoCD (where we keep the manifest, GitOps repo)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;path — Where is the application manifest located&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;destination, server — With ArgoCD you can manage multiple clusters, so in this case I am telling ArgoCD to install on the same cluster where ArgoCD is installed&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;namespace — Which namespace do we want the application to be deployed at&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;syncPolicy, automated — means that any change will be automatically synchronized without any manual interventions&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
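&lt;p&gt;If the embedded gist does not render, a declarative Application covering the options listed above could look roughly like this — the names, repoURL, revision, and namespaces are illustrative and should be adapted to your own setup:&lt;/p&gt;

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: fastapi-service-development
  namespace: argocd
spec:
  project: fastapi-app
  source:
    repoURL: https://github.com/vjanz/kubernetes-demo-gitops
    targetRevision: main
    path: apps/fastapi-service/overlays/development
  destination:
    server: https://kubernetes.default.svc   # same cluster ArgoCD runs in
    namespace: kubernetes-demo-dev
  syncPolicy:
    automated: {}            # sync changes without manual intervention
    syncOptions:
      - CreateNamespace=true
```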

&lt;p&gt;The other configuration options are quite self-explanatory, so I will not go over each of them.&lt;/p&gt;

&lt;p&gt;Let’s create the application:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f fastapi-service-development.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now the structure on the repo looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── apps
│   ├── argocd
│   │   └── fastapi-service-development.yaml
│   └── fastapi-service
│       ├── base
│       │   ├── deployment.yaml
│       │   ├── kustomization.yaml
│       │   └── service.yaml
│       └── overlays
│           ├── development
│           │   ├── ingress.yaml
│           │   ├── kustomization.yaml
│           │   ├── replica-count.yaml
│           │   └── sealed-secret.yaml
│           └── production
└── projects
    └── fastapi-app.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is the application that we are deploying for a development environment. Note that if you want to separate another one for production, you should create another manifest for it too, giving its respective configurations.&lt;/p&gt;

&lt;p&gt;Now if we head back to the ArgoCD UI, we can see that the application should be up and running!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30f3j61jwnfvtaedw573.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30f3j61jwnfvtaedw573.png" alt="ArgoCD UI" width="800" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Awesome isn’t it? — Now note that you cannot make any changes from outside the cluster, all the changes should happen from the Git Repository. For example, if we delete a pod from outside the cluster with kubectl or any other tool, ArgoCD will look at the state of the cluster and the state defined in the GitOps repository, and where there is a change between states, ArgoCD will automatically choose the one that is on GitOps repository and this is really good as you have only one source of truth when it comes to writing and managing these files.&lt;/p&gt;
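&lt;p&gt;Conceptually, this self-healing behavior is a reconciliation loop: ArgoCD compares the desired state from Git with the live state in the cluster and corrects any drift in favor of Git. A toy Python sketch of the idea (not ArgoCD’s actual logic):&lt;/p&gt;

```python
def reconcile(desired: dict, live: dict) -> dict:
    """Return the fields that must change so `live` matches `desired` (toy model)."""
    return {key: value for key, value in desired.items() if live.get(key) != value}

# Git says 1 replica; someone scaled the live deployment to 3 with kubectl
desired = {"replicas": 1, "image": "valonjanuzaj/kubernetes-demo:abc123"}
live = {"replicas": 3, "image": "valonjanuzaj/kubernetes-demo:abc123"}

drift = reconcile(desired, live)
print(drift)  # only the drifted field is corrected; Git wins
```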

&lt;p&gt;Now let’s see if the changes are applied when we change something, like the number of replicas. I am going to update &lt;a href="https://github.com/vjanz/kubernetes-demo-gitops/blob/main/apps/fastapi-service/overlays/development/replica-count.yaml" rel="noopener noreferrer"&gt;this&lt;/a&gt; file to make the deployment have only one pod (1 replica).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-demo
spec:
  replicas: 1 # this was 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now let’s just push the changes to the GitOps repository:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git add .
$ git commit -m "Change replica count"
$ git push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;and we can see that the two extra pods get immediately killed by ArgoCD, as the state in the GitOps repo has changed compared to the one in the cluster. Keep in mind that these two should always be in sync!&lt;/p&gt;

&lt;p&gt;Since we tried that, let’s try to change the number of replicas with kubectl and see what happens:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get deployment -n kubernetes-demo-dev
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-demo   1/1     1            1           30m

$ kubectl scale --replicas=3 deployment/kubernetes-demo -n kubernetes-demo-dev
deployment.apps/kubernetes-demo scaled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;At first, you may think that you changed the number of replicas and everything is going to be fine, but as soon as ArgoCD detects the change, it will roll the cluster back to the state defined in the repository (1 replica) and kill the two newly created pods.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;So we have completed the full workflow, from creating the application and running it locally, all the way to deploying it on Kubernetes using best practices. I hope you find it helpful, and I am sure that you learned a lot from this article, as I dedicated a lot of time to writing it, drawing on my knowledge of the topic and the problems related to it.&lt;/p&gt;

&lt;p&gt;If you want to support my work, you can buy me a coffee by clicking the image below 😄&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/valonjanuzaj" rel="noopener noreferrer"&gt;BuyMeCoffe&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have any questions, feel free to reach out to me.&lt;/p&gt;

&lt;p&gt;Connect with me on &lt;a href="https://www.linkedin.com/in/valon-januzaj-b02692187/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="http://www.github.com/vjanz" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Links and resources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dev.to/vjanz/from-local-development-to-kubernetes-cluster-helm-https-cicd-gitops-kustomize-argocd-part1-2e64"&gt;Part 1&lt;/a&gt;, &lt;a href="https://dev.to/vjanz/from-local-development-to-kubernetes-cluster-helm-https-cicd-gitops-kustomize-argocd-part2-4noh"&gt;Part 2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Github repo for this part:&lt;br&gt;
&lt;a href="https://github.com/vjanz/kubernetes-demo-app" rel="noopener noreferrer"&gt;https://github.com/vjanz/kubernetes-demo-app&lt;/a&gt;&lt;/p&gt;

</description>
      <category>documentation</category>
      <category>productivity</category>
      <category>career</category>
      <category>discuss</category>
    </item>
    <item>
      <title>From local development to Kubernetes — Cluster, Helm, HTTPS, CI/CD, GitOps, Kustomize, ArgoCD — Part[1]</title>
      <dc:creator>Valon Januzaj</dc:creator>
      <pubDate>Fri, 03 Feb 2023 00:48:53 +0000</pubDate>
      <link>https://dev.to/vjanz/from-local-development-to-kubernetes-cluster-helm-https-cicd-gitops-kustomize-argocd-part1-2e64</link>
      <guid>https://dev.to/vjanz/from-local-development-to-kubernetes-cluster-helm-https-cicd-gitops-kustomize-argocd-part1-2e64</guid>
      <description>&lt;h2&gt;
  
  
  From local development to Kubernetes — Cluster, Helm, HTTPS, CI/CD, GitOps, Kustomize, ArgoCD — Part[1]
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3840%2F1%2ABbTH793LvpTCkEijdFXDzA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3840%2F1%2ABbTH793LvpTCkEijdFXDzA.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Nowadays you may hear a lot about Kubernetes… You open the browser, search for it, and all you get are resources that are really advanced; you can’t understand what is happening, or you don’t see where your problem fits into the frame. At least, this was my experience when I started digging into this topic.&lt;/p&gt;

&lt;p&gt;I wrote this article to share my knowledge and to make your life easier when it comes to solving problems with Kubernetes using best practices.&lt;/p&gt;

&lt;p&gt;I will go in an ordered way from explaining what Kubernetes is to preparing local development, deployment, and more, step by step, explaining in detail what each component is and why you need it in the bigger picture. Are you ready? — Let's go!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Since the article is going to cover a lot of concepts, it’s separated into two parts to make it easier for you to read and follow along. Please find below the link for the second part of this article. This article assumes that the reader has basic knowledge of programming and Kubernetes.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Tutorial steps
&lt;/h2&gt;

&lt;p&gt;The tutorial series progresses with increased complexity, where:&lt;/p&gt;

&lt;p&gt;Part 1(This one):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;What is Kubernetes and its most important components&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build a basic Python application using FastAPI&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dockerize and run it locally with docker-compose&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Setting up a Kubernetes Cluster in the cloud&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy our app to Kubernetes in the traditional way&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploying a PostgreSQL database in our cluster using Helm&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure domain, Ingress Controller, HTTPS&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Part 2 includes &lt;a href="https://dev.to/vjanz/from-local-development-to-kubernetes-cluster-helm-https-cicd-gitops-kustomize-argocd-part2-4noh"&gt;(click to go to Part 2):&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Introduction to GitOps — ArgoCD installation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Using Kustomize to write Kubernetes manifest&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Secret management in Kubernetes with SealedSecrets&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add a basic Continuous integration pipeline with GitHub actions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Creating and running our services in ArgoCD&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;If you know Kubernetes architecture, you can skip the following part (What is Kubernetes), even though a refresh is highly recommended&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is Kubernetes
&lt;/h2&gt;

&lt;p&gt;Kubernetes is a system for managing and orchestrating containerized applications. In simple terms, it allows you to define how your applications should run, and then it manages the running of those applications for you. It can run on a variety of different platforms, including on your own infrastructure or in the cloud. Kubernetes is especially useful when you have multiple applications or microservices that need to run together, as it can help you to automate the deployment and scaling of those applications, and it makes it easier to manage the entire system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2048%2F0%2Afih6ENJhhWnepI1X.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2048%2F0%2Afih6ENJhhWnepI1X.png" alt="Kubernetes Architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;API Server — The Kubernetes API server is the central way to interact with a Kubernetes cluster. It exposes the Kubernetes API, which is a set of RESTful APIs that you can use to create, update, delete, and list the objects in a cluster&lt;/p&gt;

&lt;p&gt;Scheduler — The Kubernetes scheduler is a component of the Kubernetes system that is responsible for assigning pods (the smallest deployable units in Kubernetes) to nodes in the cluster. When a pod is created, the scheduler selects the node where the pod should run based on the resource requirements of the pod, the resource availability on each node, and other constraints that may be specified by the user.&lt;/p&gt;

&lt;p&gt;Controller Manager — The Kubernetes controller manager is a daemon that runs on the Kubernetes master node and is responsible for the management and execution of various controllers&lt;/p&gt;

&lt;p&gt;etcd — is a distributed key-value store that is used by Kubernetes to store data that needs to be shared across all of the nodes in the cluster. This data includes information about the state of the cluster, such as the configuration of the various applications and services that are running on the cluster.&lt;/p&gt;

&lt;p&gt;Pod — a group of one or more containers that are deployed together on a host. Pods are the smallest deployable units in Kubernetes and provide a higher level of abstraction than individual containers.&lt;/p&gt;

&lt;p&gt;Node — may be a virtual machine or a physical machine, depending on the cluster. Each node has a Kubernetes agent (also called a “kubelet”) that communicates with the control plane components and runs containers on the node. Docker is also installed on each node as it’s used to manage the containers that run inside a pod.&lt;/p&gt;

&lt;p&gt;Now that we know a bit about the architecture of a Kubernetes cluster, let’s dive into code and build a simple Python web application that defines some API endpoints, connects to a Postgres database, has some environment variables and also has Dockerfile and docker-compose to simulate the application locally.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building a basic Python application locally — Run with Docker &amp;amp; docker-compose
&lt;/h2&gt;

&lt;p&gt;To save time here, I have prepared a basic Python application developed with &lt;a href="https://fastapi.tiangolo.com/" rel="noopener noreferrer"&gt;FastAPI&lt;/a&gt;, and if you want to follow along you can get the code from the &lt;a href="https://github.com/vjanz/kubernetes-demo-app" rel="noopener noreferrer"&gt;GitHub repository here&lt;/a&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── app                   # Base directory
│   ├── crud.py           # Crud Logic 
│   ├── database.py       # Database engine
│   ├── main.py           # API endpoints and FastAPI initialization
│   ├── models.py         # Database models
│   └── schemas.py        # Schemas used to exchange data in API
├── docker-compose.yaml   # docker-compose with web and postgres
├── Dockerfile            # Dockerfile for FastAPI application
└── requirements.txt      # Requirements for project
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;I will focus on the points that are important and that you should keep in mind while developing an application for Kubernetes or other platforms that are related to containerized applications like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Developing a service in containerized methodology&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Giving environment variables in a dynamic way&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Running a service locally with docker-compose&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I am stressing these points because they are the foundation of the whole process: Kubernetes itself is based on containers (it runs containerized applications).&lt;/p&gt;

&lt;p&gt;The sample app in the &lt;a href="https://github.com/vjanz/kubernetes-demo-app" rel="noopener noreferrer"&gt;repo&lt;/a&gt; is a basic FastAPI application that I extracted from the official docs, with a PostgreSQL database added to make the tutorial a bit more challenging. The application is very simple: it exposes some endpoints to create users and items.&lt;/p&gt;

&lt;p&gt;To start the application locally, simply copy .env_example to .env and run the application with Docker:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cp .env_example .env
$ docker-compose up -d --build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
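&lt;p&gt;For reference, the docker-compose setup is roughly along these lines (a sketch under the assumption of a two-service web + Postgres layout; the actual file in the repo may differ):&lt;/p&gt;

```yaml
# Sketch of docker-compose.yaml: web app + PostgreSQL
# (illustrative; names and versions are assumptions)
services:
  app:
    build: .
    ports:
      - "8000:8000"
    env_file: .env       # POSTGRES_* variables consumed by the app
    depends_on:
      - db
  db:
    image: postgres:15
    env_file: .env       # POSTGRES_USER / POSTGRES_PASSWORD / POSTGRES_DB
    volumes:
      - pgdata:/var/lib/postgresql/data   # persist data across restarts
volumes:
  pgdata:
```

Note that inside the compose network the app reaches the database by the service name (here `db`), which is the same idea we will revisit later with Kubernetes service DNS names.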
&lt;p&gt;The application should be up and running, and you should be able to see the API definitions at &lt;a href="http://localhost:8000/api/docs" rel="noopener noreferrer"&gt;http://localhost:8000/api/docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s make sure the service is working as expected by making some API requests:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a user

curl -X 'POST' \
  'http://localhost:8000/users/' \
  -H 'Content-Type: application/json' \
  -d '{
  "email": "test@demo.com",
  "password": "somepassword"
}'
------------------------------------------------------------
{"email":"test@demo.com","id":1,"is_active":true,"items":[]}

# Create an item for the user with id 1

curl -X 'POST' \
  'http://0.0.0.0:8000/users/1/items/' \
  -H 'Content-Type: application/json' \
  -d '{
  "title": "Item1",
  "description": "Some Description"
}'

Expected response:
{"title":"Item1","description":"Some Description","id":1,"owner_id":1}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;So as expected, the users and their respective items are being created and stored in PostgreSQL, which is running in Docker. I added PostgreSQL on purpose, to also explain how to connect a service to a database running on Kubernetes.&lt;/p&gt;
&lt;h2&gt;
  
  
  Setting up Kubernetes Cluster
&lt;/h2&gt;

&lt;p&gt;To set up a Kubernetes cluster, you can either create one locally with &lt;a href="https://minikube.sigs.k8s.io/docs/start/" rel="noopener noreferrer"&gt;minikube&lt;/a&gt; or &lt;a href="https://kind.sigs.k8s.io/" rel="noopener noreferrer"&gt;kind&lt;/a&gt;, or create one with a cloud provider like AWS, GCP, or DigitalOcean through their respective managed Kubernetes platforms (EKS, GKE, DOKS).&lt;/p&gt;

&lt;p&gt;For this demo, I want to be as realistic as possible and also set up a domain, HTTPS, a reverse proxy, etc., so I am going to use a cloud-based cluster on DigitalOcean (similar steps apply to other cloud providers, and you can still follow along using minikube or kind).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Steps to set up a Kubernetes cluster may differ in other cloud providers, but the concepts are almost the same, so follow along.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here are the general steps for setting up a Kubernetes cluster on DigitalOcean:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Sign up for a DigitalOcean account &lt;a href="https://cloud.digitalocean.com/registrations/new" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to Kubernetes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click “Create a Kubernetes Cluster”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fill up the necessary information, and click “Create Cluster”&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F8176%2F1%2A0zGKBVQ7_nlB2tVhTlv7gA.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F8176%2F1%2A0zGKBVQ7_nlB2tVhTlv7gA.jpeg" alt="Setting up a Kubernetes cluster in Digital Ocean"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We set up a Kubernetes cluster located in Germany with 2 nodes; we named the cluster my-cluster and the node pool my-node-pool. It is always recommended to have at least two nodes: Kubernetes is designed around scheduling services across nodes, so if one node goes down, the pods can be recreated on the remaining nodes. In other words, to keep your services up and running, at least two nodes are highly recommended.&lt;/p&gt;

&lt;p&gt;After creating the Kubernetes Cluster, you need to configure tools to connect to the cluster. The CLI tool to interact with the Kubernetes cluster is kubectl which you can easily download from &lt;a href="https://kubernetes.io/docs/reference/kubectl/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;kubectl requires an &lt;a href="https://loft.sh/blog/kubectl-get-context-its-uses-and-how-to-get-started/#:~:text=A%20Kubernetes%20context%20is,to%20switch%20between%20them." rel="noopener noreferrer"&gt;active context&lt;/a&gt; in order to know which cluster to send requests to. After the cluster has been provisioned, you can download the kubeconfig and then make requests against the cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2ASKv3lFdZGv8vVAZBtK5uuw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2ASKv3lFdZGv8vVAZBtK5uuw.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For now, navigate to the directory into which you downloaded the config and export a variable named KUBECONFIG. Remember, if you configure it this way, you need to export this variable again in every new terminal session.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ export KUBECONFIG=name_of_downloaded_file.yaml 

# on windows you can use setx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;To verify that we are successfully authenticated to the cluster, run a simple command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get nodes

------------------------------------------------------
NAME                 STATUS   ROLES    AGE     VERSION
my-node-pool-mhag8   Ready    &amp;lt;none&amp;gt;   7m4s    v1.25.4
my-node-pool-mhagu   Ready    &amp;lt;none&amp;gt;   6m54s   v1.25.4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Deploying the application to Kubernetes — Traditional Way
&lt;/h2&gt;

&lt;p&gt;To deploy our services to Kubernetes, we need to write the manifests in a declarative way. Specifically, we need to create:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noopener noreferrer"&gt;Deployment&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noopener noreferrer"&gt;Service&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noopener noreferrer"&gt;Ingress&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="noopener noreferrer"&gt;Config&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noopener noreferrer"&gt;Secrets&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Please read each one from the official docs on kubernetes.io&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Before we do so, let’s push our Docker image to a &lt;a href="https://www.aquasec.com/cloud-native-academy/docker-container/docker-registry/#:~:text=Summary-,What%20is%20a%20Docker%20Registry%3F,-A%C2%A0Docker" rel="noopener noreferrer"&gt;registry&lt;/a&gt;. (You can skip this step and use my image instead, as it is public, but feel free to create your own repository if you want to customize things for your needs.)&lt;/p&gt;

&lt;p&gt;I am using the Docker Hub registry, on which I have created a public repository (&lt;a href="https://www.ionos.com/digitalguide/server/know-how/setting-up-a-docker-repository/#:~:text=Creating%20a%20Docker%20Hub%20Repository" rel="noopener noreferrer"&gt;how to create a Docker Hub repository&lt;/a&gt;). After creating the repository, I need to tag the image that I built with docker-compose with the name of the repo that I created on Docker Hub:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker login # login to your dockerhub account
$ docker tag fastapi-app_app valonjanuzaj/kubernetes-demo:latest
$ docker push valonjanuzaj/kubernetes-demo:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now that we have the application image on Docker Hub, we can start writing the manifests for our application. Let’s start by creating a Deployment and a Service. To simplify, I am using the same file and separating the manifests with ---, which is valid YAML syntax.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
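&lt;p&gt;The embedded gist does not render here, so here is a minimal sketch of what deployment-service.yaml contains, reconstructed from the names used later in the article (the replica count, labels, and namespace handling are assumptions):&lt;/p&gt;

```yaml
# Sketch of deployment-service.yaml (names match the rest of the article;
# replica count and labels are assumptions)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-demo
  namespace: kubernetes-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kubernetes-demo
  template:
    metadata:
      labels:
        app: kubernetes-demo
    spec:
      containers:
        - name: kubernetes-demo
          image: valonjanuzaj/kubernetes-demo:latest
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-demo
  namespace: kubernetes-demo
spec:
  selector:
    app: kubernetes-demo
  ports:
    - port: 80          # port exposed by the Service
      targetPort: 8000  # port the FastAPI container listens on
```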

&lt;p&gt;To deploy the application to Kubernetes, execute:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f deployment-service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;The application will get deployed, but it won’t work yet, as it tries to access a database that we still don’t have. Remember, in the local setup we defined some environment variables for the Postgres database, and here we haven’t added any environment variables to the pod yet.&lt;/p&gt;
&lt;h3&gt;
  
  
  Deploying a PostgreSQL Database
&lt;/h3&gt;

&lt;p&gt;There are several ways to deploy a database in the cluster (you could also use a remote database hosted in the cloud, but we want to make this exercise more challenging so we can touch more Kubernetes concepts). You can manually write all the manifests (deployment, service, secrets, volumes, etc.) or you can use &lt;a href="https://helm.sh/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt; to deploy the database and many other services. I highly recommend using (verified) Helm charts to deploy this kind of service instead of writing the manifests yourself.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Helm is a package manager for Kubernetes. It is used to automate the installation, upgrade, and management of Kubernetes applications. A Helm package is called a “chart”. Charts are made up of a collection of files that describe a related set of Kubernetes resources. For example, a chart might contain a deployment, a service, and a cluster role.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;After installing Helm locally, you need to find the chart that you want to install. I usually go with the &lt;a href="https://github.com/bitnami/charts" rel="noopener noreferrer"&gt;Bitnami&lt;/a&gt; charts. To install any chart with Helm, we first add the repo from which we want to pull charts, and then install the chart that we want:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm repo add my-repo https://charts.bitnami.com/bitnami
$ helm install postgres my-repo/postgresql --namespace postgres --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;em&gt;After you install the chart, all the instructions to connect to the database will be shown. Basically, you would need to port-forward the service of PostgreSQL and connect to it locally to create a new database or just manipulate it.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The command above will install PostgreSQL in the cluster, attach the necessary resources, and make it ready to be used by applications. What is even more interesting is that when the chart gets installed, it will provision persistent volumes for the data based on the cloud provider that you’re using, in this case do-block-storage, so even if we delete this chart, we can still re-attach a database to the volume without losing the data.&lt;/p&gt;

&lt;p&gt;Let's connect to the database and create a database named kubernetes-demo for our FastAPI service.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
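&lt;p&gt;The embedded gist does not render here; a hedged sketch of the usual steps is below. The service name, secret name, and password key follow the Bitnami chart’s defaults and may differ in your install (the chart’s post-install notes print the exact names):&lt;/p&gt;

```shell
# Port-forward the PostgreSQL service to localhost:5432
kubectl port-forward --namespace postgres svc/postgres-postgresql 5432:5432 &

# Fetch the auto-generated postgres password from the chart's secret
export PGPASSWORD=$(kubectl get secret --namespace postgres postgres-postgresql \
  -o jsonpath="{.data.postgres-password}" | base64 -d)

# Create the database used by the FastAPI service
createdb -h 127.0.0.1 -U postgres kubernetes-demo
```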



&lt;h3&gt;
  
  
  Connect the application to the database
&lt;/h3&gt;

&lt;p&gt;Now that the database is created and running in our cluster, we can set up the environment variables to connect our FastAPI service to this database. To do that, we need to provide some environment variables, just like we did locally on the application with docker-compose, but now to the container that is running on the pod that we deployed, and those are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;POSTGRES_USER&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;POSTGRES_PASSWORD&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;POSTGRES_DB&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;POSTGRES_SERVER&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;POSTGRES_PORT&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Well, these are considered secrets, and you need to be careful about the way you store them. We are going to create a Secret and then make some changes in deployment-service.yaml to reference it, because currently we haven’t set any environment variables for the container.&lt;/p&gt;

&lt;p&gt;Creating the secret: to create the secret, you need to encode the values with base64, which can easily be done in the terminal:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ echo -n "value_to_encrypt" | base64
# Example
$ echo -n "SomeSecret" | base64
Output: U29tZVNlY3JldA==
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Similarly, you encode all the values and then add them to the secret.yaml:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
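&lt;p&gt;The embedded gist does not render here, so here is a sketch of secret.yaml with example (not real) values; the key names match the variables listed above, and the POSTGRES_SERVER value is the base64 encoding of postgres-postgresql.postgres:&lt;/p&gt;

```yaml
# Sketch of secret.yaml with example values (replace with your own)
apiVersion: v1
kind: Secret
metadata:
  name: demo-secrets
  namespace: kubernetes-demo
type: Opaque
data:
  POSTGRES_USER: cG9zdGdyZXM=           # base64 of "postgres"
  POSTGRES_PASSWORD: U29tZVNlY3JldA==   # base64 of "SomeSecret"
  POSTGRES_DB: a3ViZXJuZXRlcy1kZW1v     # base64 of "kubernetes-demo"
  POSTGRES_SERVER: cG9zdGdyZXMtcG9zdGdyZXNxbC5wb3N0Z3Jlcw==  # "postgres-postgresql.postgres"
  POSTGRES_PORT: NTQzMg==               # base64 of "5432"
```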

&lt;p&gt;I’ve encoded the real values of the Postgres database that we have created. Now create the secret with kubectl:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f secret.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Be very careful when you add the POSTGRES_SERVER; since Postgres is deployed in a separate namespace named “postgres” and the app is running in the “kubernetes-demo” namespace, you need to reference the server as &amp;lt;service-name&amp;gt;.&amp;lt;namespace&amp;gt;, which in our case is postgres-postgresql.postgres&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;These kinds of manifests are not safe to push to a repository, since base64 can be trivially decoded back to plain text. We are leaving it like this for now, as the purpose of this article is to show you the traditional way first; later we’ll move on to more advanced ways of managing these kinds of resources.&lt;/p&gt;
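&lt;p&gt;To see why base64 offers no protection, here is a quick Python check using the example value from above:&lt;/p&gt;

```python
import base64

# base64 is an encoding, not encryption: anyone can reverse it
encoded = base64.b64encode(b"SomeSecret").decode()
print(encoded)   # U29tZVNlY3JldA==

# ...and decoding it back is just as easy
decoded = base64.b64decode(encoded).decode()
print(decoded)   # SomeSecret
```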

&lt;p&gt;Now let’s update deployment-service.yaml to reference the created secrets:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    spec:
      containers:
        - name: kubernetes-demo
         ...
         #######################################
         #Add the following part to the yaml file
          envFrom:
            - secretRef:
                name: demo-secrets
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Apply the manifest again with:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f deployment-service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;And the application should be up and running. To access the app we will eventually need to expose it, but for now let’s just port-forward and access it locally to see if everything is working fine:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl port-forward --namespace kubernetes-demo svc/kubernetes-demo 8000:80

# The application should be up and running at http://localhost:8000/api/docs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;We port-forwarded the service of the application named kubernetes-demo which has port 80 to local port 8000. Now we can access the application at &lt;a href="http://localhost:8000/api/docs" rel="noopener noreferrer"&gt;http://localhost:8000/api/docs&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Setting up the domain—Installing Ingress Controller — Securing the traffic with HTTPS
&lt;/h2&gt;

&lt;p&gt;Now let’s expose our application for the whole world to see. To do that we need a domain (you can use the load-balancer IP if you don’t have a domain and don’t want to set one up, so bear with me). If you want to get a domain, check out &lt;a href="https://www.namecheap.com/" rel="noopener noreferrer"&gt;Namecheap&lt;/a&gt;, where you can find cheap domains to play around with or use for personal purposes.&lt;/p&gt;

&lt;p&gt;I already have a domain purchased from &lt;a href="https://get.tech/" rel="noopener noreferrer"&gt;get.tech&lt;/a&gt;, so I am going to use that one. To manage the domain from DigitalOcean (or any cloud provider), you need to add the domain to it and then point the nameservers to that provider from your domain registrar. To add the domain in DigitalOcean, navigate to Networking &amp;gt; Domains and add your custom domain.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2214%2F1%2ALFe3UlaEqG7aX4eRxEnJ9A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2214%2F1%2ALFe3UlaEqG7aX4eRxEnJ9A.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After you add the domain you need to point the nameservers to DigitalOcean &lt;a href="https://docs.digitalocean.com/tutorials/dns-registrars/" rel="noopener noreferrer"&gt;(see here how)&lt;/a&gt;, which are:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ns1.digitalocean.com.
ns2.digitalocean.com.
ns3.digitalocean.com.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;After a bit, your domain should be pointing to the DO nameservers and you can manage your records from the DigitalOcean dashboard.&lt;/p&gt;
&lt;h3&gt;
  
  
  Setting up Kubernetes ingress controller
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Note that the same steps are applicable to other cloud providers&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;An ingress controller acts as a reverse proxy and load balancer and implements a Kubernetes Ingress. It adds a layer of abstraction to traffic routing, accepting traffic from outside the Kubernetes platform and load balancing it to Pods running inside the platform through internal services. The whole flow is as follows:&lt;/p&gt;

&lt;p&gt;A user makes a request to the URL that we have exposed. The load balancer accepts the request and, based on the mappings defined in the Ingress, forwards it to a specific service. The service then forwards the request to one of the pods that are available to handle it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AgBqjSpmsw1E2FsN1.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AgBqjSpmsw1E2FsN1.jpeg" alt="Kubernetes cluster with ingress controller and load balancer, source [Kubernetes Advocate](https://medium.com/@kubernetes-advocate?source=post_page-----8eb14f737f7b--------------------------------)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we understand a bit about what an ingress controller is, we need to install one in our cluster.&lt;/p&gt;

&lt;p&gt;You could do these steps manually, but implementing an ingress controller yourself would take a lot of work, so I highly recommend installing one of the existing implementations using Helm. We can install it from the Bitnami repo that we added earlier when installing PostgreSQL, so let’s go ahead and install the nginx-ingress-controller implementation:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  ~ helm search repo nginx           
NAME                             CHART VERSION APP VERSION DESCRIPTION                                       
my-repo/nginx                    13.2.21       1.23.3      NGINX Open Source is a web server that can be a...
my-repo/nginx-ingress-controller 9.3.24        1.6.0       NGINX Ingress Controller is an Ingress controll...
my-repo/nginx-intel              2.1.13        0.4.9       NGINX Open Source for Intel is a lightweight se...


$ helm install nginx-controller my-repo/nginx-ingress-controller --namespace nginx-controller --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;The controller will be installed in the nginx-controller namespace in a bit. When we install the controller on a managed Kubernetes service through the Helm chart, a LoadBalancer automatically gets created for us, and we can use it to send traffic into the cluster (see the picture above; we are implementing the same idea). If we navigate to Networking &amp;gt; Load Balancers, we see that the LoadBalancer has been created and is ready to be used. A load balancer would similarly be created on other cloud providers such as AWS, GCP, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2214%2F1%2AM9bq80d0NLYK8UT5-RYWGA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2214%2F1%2AM9bq80d0NLYK8UT5-RYWGA.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that the domain is added and the LoadBalancer is ready, we can go ahead and create a new subdomain for our application. To do so, we navigate again to:&lt;/p&gt;

&lt;p&gt;Networking &amp;gt; Domains &amp;gt; choose our domain, and we create an A record.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;An A record, also known as an “address record,” is a type of DNS record that maps a hostname to a specific IPv4 address.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2494%2F1%2ArGUICr3DorUEiCYtUg01uQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2494%2F1%2ArGUICr3DorUEiCYtUg01uQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Just be careful to point the record to the IP of the LoadBalancer that we have just created, so that the Ingress can take control of the request and route it as it should. Now we are ready, so let’s create an &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noopener noreferrer"&gt;Ingress&lt;/a&gt; and map the subdomain to our FastAPI service.&lt;/p&gt;
&lt;h3&gt;
  
  
  Creating a Kubernetes Ingress
&lt;/h3&gt;

&lt;p&gt;Create a new file named ingress.yaml and paste the following content.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
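&lt;p&gt;The gist embed does not render here; reconstructed from the kubectl describe output shown further down, ingress.yaml looks roughly like this (replace the host with your own subdomain):&lt;/p&gt;

```yaml
# Sketch of ingress.yaml, reconstructed from the describe output below
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-demo-ingress
  namespace: kubernetes-demo
spec:
  ingressClassName: nginx               # must match the installed controller
  rules:
    - host: demo.vjanztutorials.tech    # replace with your own (sub)domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-demo
                port:
                  number: 80
```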

&lt;p&gt;It’s pretty clear, right? We are mapping the host demo.*.tech to the service named kubernetes-demo (which exists in the same namespace) on port 80 when the path / is hit. The ingressClassName should match the ingress controller that we installed.&lt;/p&gt;

&lt;p&gt;Create the ingress with kubectl:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f ingress.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;To check that the ingress is created and wired to the right service, we execute the command below to get more details about the Kubernetes resource:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe ingress -n kubernetes-demo kubernetes-demo-ingress
-------------------------------------------------------------------------

Name:             kubernetes-demo-ingress
Labels:           &amp;lt;none&amp;gt;
Namespace:        kubernetes-demo
Address:          164.92.182.82
Ingress Class:    nginx
Default backend:  &amp;lt;default&amp;gt;
Rules:
  Host                      Path  Backends
  ----                      ----  --------
  demo.vjanztutorials.tech  
                            /   kubernetes-demo:80 (10.244.0.123:8000,10.244.0.251:8000)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;We can see that the ingress was created successfully and is redirecting traffic to the kubernetes-demo service, which in turn forwards it to the two pods that we created for that Deployment. Perfect, now let’s see if we can access the application from the exposed host:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X 'GET' \
  'http://demo.vjanztutorials.tech/health' \
  -H 'accept: application/json'
--------------------------------------------
{
  "status": "Running!"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Or just visit the service URL at yourdomain.com/health, in my case demo.vjanztutorials.tech/health.&lt;/p&gt;

&lt;p&gt;Everything is working as expected, but there is one more thing we can do better, and that is securing our traffic with HTTPS, so let’s do that! If you have doubts about what HTTPS is and why we need it, please take a refresher &lt;a href="https://howhttps.works/" rel="noopener noreferrer"&gt;here&lt;/a&gt;, and then let's move on.&lt;/p&gt;
&lt;h3&gt;
  
  
  Securing the traffic with HTTPS
&lt;/h3&gt;

&lt;p&gt;HTTPS is a way to make sure that when you visit a website, the information you share with that website is private and can’t be seen by anyone else. It’s like a secret code between your computer and the website that makes sure that only you and the website can read what’s being sent.&lt;/p&gt;

&lt;p&gt;To do so, we need an SSL/TLS certificate for our application. To obtain one, we need to communicate with a certificate authority (CA), which verifies the authenticity of the domain and then issues a certificate that can be used to secure the communication between the application and the user.&lt;/p&gt;

&lt;p&gt;We will install Cert-Manager to manage our SSL/TLS certificates in an automated way. Cert-manager is a tool that helps manage and automate the process of obtaining and renewing SSL/TLS certificates for your websites and applications. It works by communicating with certificate authorities (CA) to request and renew certificates and then automatically updating your web server or application to use the new certificate. This eliminates the need for manual intervention and ensures that your certificates are always up-to-date and secure.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# search
$ helm search repo cert-manager
NAME                 CHART VERSION APP VERSION DESCRIPTION                                       
my-repo/cert-manager 0.8.10        1.10.1      cert-manager is a Kubernetes add-on to automate...

# install
$ helm install cert-manager my-repo/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now that we have cert-manager installed, we need to create an &lt;a href="https://cert-manager.io/docs/concepts/issuer/" rel="noopener noreferrer"&gt;Issuer&lt;/a&gt; in our cluster, which can be a ClusterIssuer or an Issuer.&lt;br&gt;
The difference between them is that an Issuer is a namespaced resource, so it cannot issue certificates for a different namespace, while a ClusterIssuer is almost identical but non-namespaced, so it can issue certificates across all namespaces.&lt;/p&gt;

&lt;p&gt;I am going to use ClusterIssuer and &lt;a href="https://letsencrypt.org/" rel="noopener noreferrer"&gt;Let’s Encrypt as CA&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
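&lt;p&gt;The gist does not render here; a typical ClusterIssuer for Let’s Encrypt with an HTTP-01 solver looks like the following sketch (the issuer name letsencrypt-prod and the email are placeholders, not necessarily the names used in the original gist):&lt;/p&gt;

```yaml
# Sketch of cluster-issuer.yaml (names are placeholders)
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: you@example.com              # substitute your email
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod            # secret storing the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx                # solve challenges via our controller
```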

&lt;p&gt;Substitute the email with your own, and that should be it. Now create it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f cluster-issuer.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;To issue a TLS certificate for our domain, we’ll annotate ingress.yaml with the ClusterIssuer that we created, so let’s modify it as:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
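&lt;p&gt;The gist does not render here; the relevant changes to ingress.yaml are the cert-manager annotation and a tls section, roughly as follows (the issuer name is an assumption; the secret name matches the one that appears in the certificate output further down):&lt;/p&gt;

```yaml
# Sketch of the additions to ingress.yaml for TLS
metadata:
  name: kubernetes-demo-ingress
  namespace: kubernetes-demo
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # assumed issuer name
spec:
  tls:
    - hosts:
        - demo.vjanztutorials.tech
      secretName: kubernetes-demo-tls   # cert-manager stores the cert here
  # ... rules stay unchanged ...
```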

&lt;p&gt;Apply the changes with kubectl:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f ingress.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And check if the certificate is created:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get certificate -n kubernetes-demo
--------------------------------------------
NAME                  READY   SECRET                AGE
kubernetes-demo-tls   True    kubernetes-demo-tls   1m


$ kubectl describe certificate kubernetes-demo-tls -n kubernetes-demo
---------------------------------------------------------------------
Normal  Issuing    2m   cert-manager-certificates-issuing          The certificate has been successfully issued
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We can also see that a secret named kubernetes-demo-tls has been created in the same namespace; it contains the tls.crt and tls.key that the ingress controller uses to terminate HTTPS for our domain.&lt;/p&gt;

&lt;p&gt;Perfect, now let’s visit our application through the browser to see if the connection is served over HTTPS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AMBsShkbPSgsnUZWssqf-6g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AMBsShkbPSgsnUZWssqf-6g.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Perfect, our application is now running on HTTPS, which provides a secure and encrypted connection for users. This means that any information shared on our website, such as login credentials or personal information, is protected from potential hacking or data breaches. Additionally, HTTPS also improves website performance and is a ranking factor for search engines.&lt;/p&gt;

&lt;p&gt;Everything is working smoothly, but let’s start thinking about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;How we are going to work on this project as a team&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How we are going to continuously integrate and deploy our app&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How we are going to build a deployment strategy that is secure and reliable&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this chapter, we learned what Kubernetes is and what its components are, created and ran a simple application locally, installed PostgreSQL in the cluster using Helm, and connected our running application to it. After that, we exposed the application to the world with Ingress and secured the connections with HTTPS.&lt;/p&gt;

&lt;p&gt;In the next part, I am going to discuss some of the downsides of the way we organized the project and manifests, and what we could do better using the most modern Kubernetes tooling and deployment strategies out there to make the setup developer-friendly, reliable, and secure, so stay tuned!&lt;/p&gt;

&lt;p&gt;If you have any questions or any problem following along, feel free to reach out to me.&lt;/p&gt;

&lt;p&gt;Connect with me on &lt;a href="https://www.linkedin.com/in/valon-januzaj-b02692187/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="http://www.github.com/vjanz" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Links and resources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dev.to/vjanz/from-local-development-to-kubernetes-cluster-helm-https-cicd-gitops-kustomize-argocd-part1-2e64"&gt;Part 1&lt;/a&gt;, &lt;a href="https://dev.to/vjanz/from-local-development-to-kubernetes-cluster-helm-https-cicd-gitops-kustomize-argocd-part2-4noh"&gt;Part 2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Github repo for this part:&lt;br&gt;
&lt;a href="https://github.com/vjanz/kubernetes-demo-app" rel="noopener noreferrer"&gt;https://github.com/vjanz/kubernetes-demo-app&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Implement API Caching with Redis, Flask, and Docker [Step-By-Step]</title>
      <dc:creator>Valon Januzaj</dc:creator>
      <pubDate>Wed, 09 Jun 2021 07:14:03 +0000</pubDate>
      <link>https://dev.to/vjanz/implement-api-caching-with-redis-flask-and-docker-step-by-step-5h01</link>
      <guid>https://dev.to/vjanz/implement-api-caching-with-redis-flask-and-docker-step-by-step-5h01</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uJhuZPvM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3088/1%2AkweAChrF5o0hO83KK2fQHQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uJhuZPvM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3088/1%2AkweAChrF5o0hO83KK2fQHQ.png" alt="" width="800" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Do you want your API to be faster, more consistent, and to reduce the number of requests that reach the server? That’s where caching comes into play. In this article, I will show you how to implement API caching with Redis on Flask. I am taking Flask as an example here, but the concepts behind caching are the same regardless of the technology.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What’s caching?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before we move into the practical part of implementing caching with Redis and Flask, let’s first define caching and understand it as a concept, so that you know what its use cases are.&lt;/p&gt;

&lt;p&gt;Caching is the ability to store copies of frequently accessed data in several places along the request-response path. When a consumer requests a resource representation, the request goes through a cache or a series of caches (local cache, proxy cache, or reverse proxy) toward the service hosting the resource. If any of the caches along the request path has a fresh copy of the requested representation, it uses that copy to satisfy the request. If none of the caches can satisfy the request, the request travels to the service (or origin server as it is formally known). This is well defined with two terminologies, which are cache miss and cache hit.&lt;/p&gt;

&lt;p&gt;Cache hit — A cache hit is a state in which data requested for processing by a component or application is found in the &lt;strong&gt;cache&lt;/strong&gt; memory. It is a faster means of delivering data to the processor, as the &lt;strong&gt;cache&lt;/strong&gt; already contains the requested data.&lt;/p&gt;

&lt;p&gt;Cache miss — Cache miss is a state where the data requested for processing by a component or application is not found in the cache memory. It causes execution delays by requiring the program or application to fetch the data from other cache levels or the main memory.&lt;/p&gt;

&lt;p&gt;As mentioned above, there are several ways to implement caching: on the client side through web caching, on the server side through data caching (relational databases, Redis, etc.), or through application caching with plugins installed on the application (e.g., plugins on WordPress). For this tutorial we’re going to use Redis to save the responses from the API, and then serve those saved responses instead of making requests to the server to fetch the data again.&lt;/p&gt;

&lt;h3&gt;
  
  
  Flask and Redis — Implementation
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Docker &amp;amp; Docker-compose&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Flask&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Python 3.x&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We are going to use Docker to isolate our services, and then docker-compose to orchestrate the services together (putting them on the same network, handling communication between them, environment variables, etc.). If you don’t know about Docker, I suggest you refer to the official docs &lt;a href="https://docs.docker.com/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MflWnlWv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/4350/1%2AgEpkD_3NMTxK-w96c_QBBA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MflWnlWv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/4350/1%2AgEpkD_3NMTxK-w96c_QBBA.png" alt="**General workflow**" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Project setup&lt;/strong&gt;:
&lt;/h3&gt;

&lt;p&gt;Create python virtualenv and install Flask, redis, flask-caching and requests:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ python -m venv venv
$ source venv/Scripts/activate
$ (venv) pip install Flask redis flask_caching requests
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Our application will look something like this:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/root
├── app.py                 - Application entrypoint
├── config.py              - Config file for Flask
├── docker-compose.yml     - Docker compose for app and redis
├── Dockerfile             - Dockerfile for Flask API
├── .env                   - Environment variables
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;So let’s go ahead and create the files that are necessary for this setup:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ touch Dockerfile docker-compose.yml .env
$ pip freeze &amp;gt; requirements.txt
$ touch config.py app.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;What are we going to implement?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We are just going to build a simple endpoint which fetches university data from the &lt;strong&gt;Hipolabs universities API&lt;/strong&gt; and, based on the country that we send as a query parameter, returns a list of universities for the specified country.&lt;/p&gt;

&lt;p&gt;Let’s go ahead and in app.py create an instance of Flask, and use that to create an endpoint that fetches universities data.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
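&lt;p&gt;The embedded gist may not render in the feed, so here is a minimal sketch of what app.py could look like at this stage. The route and helper names are illustrative assumptions, not taken from the original gist:&lt;/p&gt;

```python
# app.py - a minimal sketch (route and helper names are illustrative)
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

BASE_URL = "http://universities.hipolabs.com"


@app.route("/universities")
def get_universities():
    # Read the country from the query string, e.g. /universities?country=Germany
    country = request.args.get("country", "")
    # Forward the request to the external Hipolabs API and return its JSON
    response = requests.get(f"{BASE_URL}/search", params={"country": country})
    return jsonify(response.json())
```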

&lt;p&gt;&lt;br&gt;&lt;br&gt;
So basically, based on the query parameter country, it makes a request to the external API and gets back the data in JSON format. Let’s go ahead and try it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ export FLASK_APP=app.py      # To tell where your flask app lives
$ export FLASK_ENV=development # Set debug mode on
$ flask run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;I will be using &lt;a href="https://www.postman.com/"&gt;Postman&lt;/a&gt; to make the request because I also want to see the time that my request takes to process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cFeIXNRd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2898/1%2A9o8_94EDMwGO-BWWKnshBA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cFeIXNRd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2898/1%2A9o8_94EDMwGO-BWWKnshBA.png" alt="**Testing the endpoint with Postman; fetch all the universities for Germany**" width="800" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Okay, now we see that we have the results, and everything is working fine as expected. In red, you can see the time it took to get the data from that endpoint. We can make the same request several times, and the performance won’t change, because we’re always making a new request to the server. Our goal is to minimize this and, as explained at the beginning, to make fewer requests to the server. So let’s go ahead and do that.&lt;/p&gt;
&lt;h3&gt;
  
  
  Add redis and dockerize the application
&lt;/h3&gt;

&lt;p&gt;We saw that it worked fine locally, but now we want to implement caching, and for that we’re going to need Redis. There are several approaches you can take here, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Installing Redis (Officially compatible in Linux, not in Windows, see &lt;a href="https://redis.io/topics/introduction"&gt;here&lt;/a&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Host a Redis instance and use that one (ex: Redis instance on &lt;a href="https://devcenter.heroku.com/articles/heroku-redis"&gt;Heroku&lt;/a&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Start the Redis instance with Docker (We are doing this)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We are going to dockerize the application and add Redis as a service so we can easily communicate from our application. Let’s go ahead and write the Dockerfile for the Flask application:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
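&lt;p&gt;The gist doesn’t render here, so this is roughly what the Dockerfile could look like. The base image tag and paths are assumptions:&lt;/p&gt;

```dockerfile
# Dockerfile - a sketch; the Python version tag is an assumption
FROM python:3.9-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# No CMD on purpose: docker-compose supplies the run command
```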

&lt;p&gt;&lt;br&gt;&lt;br&gt;
We don’t have &lt;a href="https://docs.docker.com/engine/reference/builder/#cmd"&gt;command&lt;/a&gt; here to run the image, as I will use docker-compose to run the containers. Let’s configure docker-compose to run our application and Redis:&lt;br&gt;&lt;/p&gt;

&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
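&lt;p&gt;Since the gist may not be visible here, a docker-compose.yml along these lines would match the setup described below. The service names are assumptions, except that the Redis service must be named redis to match CACHE_REDIS_HOST in the .env file:&lt;/p&gt;

```yaml
# docker-compose.yml - a sketch; service names are assumptions,
# but the "redis" service name must match CACHE_REDIS_HOST in .env
version: "3.8"

services:
  app:
    build: .
    command: flask run --host=0.0.0.0
    environment:
      - FLASK_APP=app.py
    env_file: .env
    ports:
      - "5000:5000"   # host:container
    depends_on:
      - redis

  redis:
    image: redis:6-alpine
    ports:
      - "6379:6379"
```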

&lt;p&gt;&lt;br&gt;&lt;br&gt;
So we simply add two services, which are our application and Redis. For the application, we expose the port 5000 in and out, and for Redis, we expose 6379. Now let’s start the services with docker-compose.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker-compose up -d --build 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Our services should be up and running, and if we make the same request as we did above, when we were running the application without Docker, we will get the same output. To check that the services are running, enter the following command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now let’s configure our application to connect to the Redis instance, and implement caching in our endpoint. We could set the variables directly in the code, but here I am trying to show you some good practices for developing with Flask and Docker. In the docker-compose file above, you can see that for the environment variables I refer to the .env file, and then I use config.py to map these variables onto the Flask application. For the flask-caching library to work, we need to set some environment variables for the Redis connection and the caching type. You can read more about the &lt;a href="https://flask-caching.readthedocs.io/en/latest/#configuring-flask-caching"&gt;configuration in the documentation&lt;/a&gt; of the library, based on the caching type that you want to implement.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .e
CACHE_TYPE=redis
CACHE_REDIS_HOST=redis
CACHE_REDIS_PORT=6379
CACHE_REDIS_DB=0
CACHE_REDIS_URL=redis://redis:6379/0
CACHE_DEFAULT_TIMEOUT=500
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;In the .env we set variables like the caching type, host, db, etc. Since docker-compose mounts these variables inside our container, we can now read them using the os module. Let’s read them in config.py; we’ll use them later to map the values onto our Flask application.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
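&lt;p&gt;Since the gist isn’t visible in the feed, here is a minimal sketch of what config.py could look like, reading the variables defined in .env from the environment with matching fallbacks. The class name is an assumption:&lt;/p&gt;

```python
# config.py - a sketch; reads the variables defined in .env via the os module
import os


class Config:
    CACHE_TYPE = os.environ.get("CACHE_TYPE", "redis")
    CACHE_REDIS_HOST = os.environ.get("CACHE_REDIS_HOST", "redis")
    CACHE_REDIS_PORT = int(os.environ.get("CACHE_REDIS_PORT", 6379))
    CACHE_REDIS_DB = int(os.environ.get("CACHE_REDIS_DB", 0))
    CACHE_REDIS_URL = os.environ.get("CACHE_REDIS_URL", "redis://redis:6379/0")
    CACHE_DEFAULT_TIMEOUT = int(os.environ.get("CACHE_DEFAULT_TIMEOUT", 500))
```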

&lt;p&gt;&lt;br&gt;&lt;br&gt;
From the configuration side of things, we’re good. Now let’s initialize the cache on top of Flask and integrate that with our application.&lt;br&gt;&lt;/p&gt;

&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;
We have added a new decorator, @cache.cached, in which we specify a timeout: the time this response will be cached in Redis. So after the first request, the response is stored for 30 seconds; after that, a fresh request will update the cache again. The second parameter, query_string=True, makes sense in this case because we want to store responses keyed by the query string rather than by the static path.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;query_string&lt;/strong&gt; — Default False. When True, the cache key used will be the result of hashing the ordered query string parameters. This avoids creating different caches for the same query just because the parameters were passed in a different order.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And we’re done, let’s build the containers again and test this out in action with caching in place.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose up -d --build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now let’s go to Postman again, and do the same request on the universities endpoint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mHDsWyzq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2922/1%2Au4a8OUBQu2gBzvc6iL5nsw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mHDsWyzq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2922/1%2Au4a8OUBQu2gBzvc6iL5nsw.png" alt="Response time after implementing caching with Redis" width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first time, we’ll see approximately the same time as when we weren’t using caching, but if we make the same request again, we’ll see a significant improvement, all thanks to Redis. What we’re doing is saving the response in an in-memory database; while the data is still stored there, it is returned from the cache instead of a new request being made to the server.&lt;/p&gt;

&lt;p&gt;Want to dive deeper? Let’s see that in action by using a GUI tool to query our Redis store. I am using &lt;a href="https://tableplus.com/"&gt;TablePlus&lt;/a&gt; for the sake of visualization, but you can also use the Redis CLI to query the data. To connect to our Redis instance we specify localhost as the host, and for the port we enter 6379, just as we exposed it in docker-compose.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eW0fK6rN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2AKbeJEpo_id_pG6c76qyp1g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eW0fK6rN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2AKbeJEpo_id_pG6c76qyp1g.png" alt=".Connection with Redis with Tableplus" width="456" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that, we can see the data being stored in our Redis instance. When a response is saved, you can see it in db0, and if we look closer, we’ll see our cached response including its &lt;strong&gt;key, value, type, and ttl&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g8rC2lN4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2076/1%2A55ynzdVZ2TDPrwnGIgAtdw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g8rC2lN4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2076/1%2A55ynzdVZ2TDPrwnGIgAtdw.png" alt="" width="800" height="212"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can clearly see that the cached response is /universities?* and that it is available for the time shown in the ttl column. This section was a bit outside the scope, but it’s good to know what is happening in the background.&lt;/p&gt;

&lt;p&gt;So with that, we have implemented API caching with Redis and Flask. For more options, please refer to the &lt;a href="https://flask-caching.readthedocs.io/en/latest/"&gt;documentation&lt;/a&gt; of the flask-caching library, which is a wrapper that implements caching around different backends.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusions
&lt;/h3&gt;

&lt;p&gt;So we implemented API caching using Redis. This is a simple example, but it covers many of the details around this topic. Caching is really important when you write applications, as it helps a lot with performance. You should implement it when possible, but make sure that you’re targeting the right use case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You can find the full source code of the article on the GitHub repository, with the instructions.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://github.com/vjanz/flask-cache-redis"&gt;&lt;strong&gt;vjanz/flask-cache-redis&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you found it helpful, please don’t forget to clap &amp;amp; share it on your social networks or with your friends.&lt;/p&gt;

&lt;p&gt;If you have any questions, feel free to reach out to me.&lt;/p&gt;

&lt;p&gt;Connect with me on: &lt;a href="https://www.linkedin.com/in/valon-januzaj-b02692187/"&gt;LinkedIn&lt;/a&gt;, &lt;a href="http://www.github.com/vjanz"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  References:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.cloudflare.com/learning/cdn/what-is-caching/"&gt;https://www.cloudflare.com/learning/cdn/what-is-caching/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://redislabs.com"&gt;https://redislabs.com&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.docker.com/"&gt;https://docs.docker.com/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://flask-caching.readthedocs.io"&gt;https://flask-caching.readthedocs.io&lt;/a&gt;&lt;br&gt;
&lt;a href="http://universities.hipolabs.com/"&gt;http://universities.hipolabs.com/&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Asynchronous tasks in Python with Celery + RabbitMQ + Redis</title>
      <dc:creator>Valon Januzaj</dc:creator>
      <pubDate>Sat, 24 Apr 2021 21:06:44 +0000</pubDate>
      <link>https://dev.to/vjanz/asynchronous-tasks-in-python-with-celery-rabbitmq-redis-4goo</link>
      <guid>https://dev.to/vjanz/asynchronous-tasks-in-python-with-celery-rabbitmq-redis-4goo</guid>
      <description>&lt;h2&gt;
  
  
  Asynchronous tasks in Python with Celery + RabbitMQ + Redis
&lt;/h2&gt;

&lt;p&gt;In this article, we are going to use Celery, RabbitMQ, and Redis to build a distributed Task queue.&lt;br&gt;
But what is a distributed task queue, and why would you build one?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A &lt;strong&gt;distributed task queue&lt;/strong&gt; allows you to offload work to another process, to be handled asynchronously (once you push the work onto the &lt;strong&gt;queue&lt;/strong&gt;, you don’t wait) and in parallel (you can use other cores to process the work).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So it basically gives you the ability to execute tasks in the background while the application continues to handle other work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F5100%2F1%2AFRkffS6BCCU36LBHglfF9A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F5100%2F1%2AFRkffS6BCCU36LBHglfF9A.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Cases of Task Queues
&lt;/h2&gt;

&lt;p&gt;The most basic and understandable example would be sending an email after a user registers. In this case, you don’t know how long sending the email will take; it might be 1 ms, it might be longer, or sometimes the email is not sent at all, because another provider is doing that work for you and you are not directly aware of whether the task succeeded.&lt;br&gt;
Now that you have a basic idea of how you can benefit from task queues, identifying such tasks is as simple as checking whether they belong to one of the following categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Third-party tasks&lt;/strong&gt; — The web app must serve users quickly without waiting for other actions to complete while the page loads, e.g., sending an email or notification or propagating updates to internal tools (such as gathering data for A/B testing or system logging).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Long-running jobs&lt;/strong&gt; — Jobs that are expensive in resources, where users need to wait while they compute their results, e.g., complex workflow execution (DAG workflows), graph generation, Map-Reduce like tasks, and serving of media content (video, audio).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Periodic tasks&lt;/strong&gt; — Jobs that you will schedule to run at a specific time or after an interval, e.g., monthly report generation or a web scraper that runs twice a day.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setting up the dependencies for Celery
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Celery&lt;/em&gt; requires a message transport to send and receive messages. Some candidates that you can use as a message broker are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.rabbitmq.com/" rel="noopener noreferrer"&gt;RabbitMQ&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://redis.io/" rel="noopener noreferrer"&gt;Redis&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/sqs/" rel="noopener noreferrer"&gt;Amazon SQS&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For this tutorial we are going to use &lt;em&gt;RabbitMQ&lt;/em&gt;, but you can use any other message broker that you want (e.g., Redis).&lt;/p&gt;

&lt;p&gt;It’s also worth mentioning what we are going to use &lt;code&gt;Redis&lt;/code&gt; for, since &lt;code&gt;RabbitMQ&lt;/code&gt; is already our message transport.&lt;br&gt;
When tasks are sent to the broker and then executed by the Celery worker, we want to save their state and also see which tasks have been executed before. For that, you need some kind of data store, and here we are going to use Redis.&lt;/p&gt;

&lt;p&gt;For the result stores we also have many candidates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AMQP, Redis&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Memcached&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SQLAlchemy, Django ORM&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Apache Cassandra, Elasticsearch, Riak, etc&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To set up these services we are going to use Docker, as it’s easy to set up, provides an isolated environment, and lets you easily reproduce the same environment from a configuration file (Dockerfile or docker-compose).&lt;/p&gt;

&lt;h3&gt;
  
  
  Project setup
&lt;/h3&gt;

&lt;p&gt;Let’s start a new Python project from scratch. First, let’s create a new directory, create all the files necessary for the project, and then initialize the virtual environment.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mkdir celery-python &amp;amp;&amp;amp; cd $_
$ touch __init__.py
$ touch tasks.py
$ touch docker-compose.yaml
$ touch requirements.txt

# create &amp;amp; activate the virtualenv

$ python -m venv env
$ source env/Scripts/activate  # on Linux/macOS: source env/bin/activate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now let’s install the project &lt;code&gt;requirements&lt;/code&gt;. For this project, we are just going to need celery and Redis.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install celery redis 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now it’s time to configure docker-compose to run RabbitMQ and Redis. In the docker-compose.yaml paste the following YAML configuration.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
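&lt;p&gt;The gist embed may not be visible here; a docker-compose.yaml along these lines would do the job. Image tags and credentials are assumptions; check each image’s documentation for the variables it supports:&lt;/p&gt;

```yaml
# docker-compose.yaml - a sketch; image tags and credentials are assumptions
version: "3.8"

services:
  rabbitmq:
    image: rabbitmq:3-management
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
    ports:
      - "5672:5672"     # AMQP
      - "15672:15672"   # management UI

  redis:
    image: redis:6-alpine
    ports:
      - "6379:6379"
```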

&lt;p&gt;&lt;br&gt;&lt;br&gt;
Here we simply start up two services by defining the image key to point to an image on &lt;a href="https://hub.docker.com/" rel="noopener noreferrer"&gt;Docker Hub&lt;/a&gt;, mapping the ports (host:container), and adding environment variables. To see what environment variables you can use with an image, go to the corresponding image page on Docker Hub and read its documentation. For example, you can check how to use the RabbitMQ image &lt;a href="https://hub.docker.com/_/rabbitmq" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now, let’s initialize the celery app to use RabbitMQ as a message transporter and Redis as a result store.&lt;br&gt;
In the tasks.py, let’s go ahead and paste the following code:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
I tried to keep the code as minimal as possible, so you can understand the purpose of this tutorial.&lt;br&gt;
As you can see, we define the URLs for RabbitMQ and Redis, and then simply initialize the Celery app using those configurations. The first parameter, tasks, is the name of the current module.

&lt;p&gt;Then we decorate the function say_hello with @app.task, which marks the function as a task so that it can later be called using .delay(), as we will see in a bit.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Normally we would have a module celery_app.py that only initializes the Celery application instance, and then a separate module tasks.py in which we would define the tasks that we want Celery to run.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Build and run services with docker
&lt;/h3&gt;

&lt;p&gt;Now we only need to run the services (RabbitMQ and Redis) with docker. To run the images inside a container we simply run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker-compose up -d 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This will take a while if you don’t have these images pulled locally. Then, to verify that the containers are up and running, we run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should see the two services running, with additional information for each one; if not, check the logs for possible errors.&lt;br&gt;
Now let’s start the Celery worker, and then try to run some tasks from the Python interactive shell.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Starting the Celery worker

$ celery -A tasks worker -l info --pool=solo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This will run the Celery worker, and the logs should show that it has successfully connected to the broker.&lt;/p&gt;

&lt;p&gt;Now let’s run a task.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Running celery tasks

$ python
---------------------------------
Type "help", "copyright", "credits" or "license" for more information.
&amp;gt;&amp;gt;&amp;gt; from tasks import say_hello
&amp;gt;&amp;gt;&amp;gt; say_hello.delay("Valon")
&amp;lt;AsyncResult: 55ad96a9-f7ea-44f4-9a47-e15b90d6d8a2&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We can see that we called the function using .delay(), passing the name argument. This method is actually a star-argument shortcut to another method called &lt;code&gt;apply_async()&lt;/code&gt;. We get an &lt;code&gt;AsyncResult&lt;/code&gt; back, which represents the task that was passed to the broker and will be consumed and finished in the background by Celery.&lt;/p&gt;

&lt;p&gt;If you look at your worker now, you will see in the logs that it received a task, and after 5 seconds it will tell you that the task finished successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2ADoPjdWMffrdv5rvmWUcT9g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2ADoPjdWMffrdv5rvmWUcT9g.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let’s run the same task, but with the result store in the game this time. In the Python shell, let’s store the result in a variable and then inspect its properties.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3190%2F1%2AuDjJvpd0DSv6JLnLQZGSrA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3190%2F1%2AuDjJvpd0DSv6JLnLQZGSrA.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If we didn’t have the backend (Redis) configured in Celery, we couldn’t access these properties or functions, because by default no state would be stored; but since we have it, we can see and retrieve information about our tasks. If you want to dig deeper, you can access your Redis database with a tool like TablePlus, or you can set up Flower to monitor your Celery tasks and workers.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3834%2F1%2Ak5dDVOMdAa0N6xW1xCcqPQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3834%2F1%2Ak5dDVOMdAa0N6xW1xCcqPQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see in the image above, all the tasks are stored in Redis.&lt;/p&gt;

&lt;h3&gt;
  
  
  Wrapping up
&lt;/h3&gt;

&lt;p&gt;In this article we set up a Python application with Celery, RabbitMQ, and Redis from scratch. The purpose of the article was to show you what a task queue is, how you can benefit from one, and how to implement it.&lt;br&gt;
The example tasks are just for demonstration, but you can use the same configuration as in this one, adding tasks in the tasks module and the configuration in celery_app.py. See the docs &lt;a href="https://docs.celeryproject.org/en/stable/getting-started/next-steps.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I highly encourage you to use Celery in your applications, as it is quite useful when you have work that takes a long time, need to schedule tasks, and so on.&lt;/p&gt;

&lt;p&gt;If you read the article and found it useful, don’t forget to clap.&lt;br&gt;
If you have any questions, feel free to reach out to me.&lt;br&gt;
Connect with me on 👉 &lt;a href="https://www.linkedin.com/in/valon-januzaj-b02692187/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://github.com/vjanz" rel="noopener noreferrer"&gt;Github&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
