<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Utibe</title>
    <description>The latest articles on DEV Community by Utibe (@yutee_okon).</description>
    <link>https://dev.to/yutee_okon</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F236068%2F57ed3836-856f-4eac-a5c3-5e507858bdfe.jpg</url>
      <title>DEV Community: Utibe</title>
      <link>https://dev.to/yutee_okon</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yutee_okon"/>
    <language>en</language>
    <item>
      <title>Ingress Is Fading Away: But Where There Is A Will, There Is A Gateway API</title>
      <dc:creator>Utibe</dc:creator>
      <pubDate>Thu, 04 Dec 2025 09:10:41 +0000</pubDate>
      <link>https://dev.to/yutee_okon/ingress-is-fading-away-but-where-there-is-a-will-there-is-a-gateway-api-5935</link>
      <guid>https://dev.to/yutee_okon/ingress-is-fading-away-but-where-there-is-a-will-there-is-a-gateway-api-5935</guid>
      <description>&lt;p&gt;Following the recent announcement of the retirement of the popular Kubernetes Ingress controller, Ingress-Nginx, it is advisable that administrators explore alternatives. While other controllers like Traefik, nginx-ingress, are still good options at the moment, it is clear the community is moving away from Ingress as Kubernetes have made it known that the API itself is no longer actively developed.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Will
&lt;/h2&gt;

&lt;p&gt;Kubernetes is a popular cloud-native platform for deploying, scaling, and managing containerised applications. By default, applications deployed in a Kubernetes cluster are not exposed to the internet, so you need a way to make them accessible and a central way to route traffic to the specific services running in the cluster. This is the solution that Ingress provides.&lt;/p&gt;

&lt;p&gt;For years, Ingress has handled routing external traffic to services in a Kubernetes cluster well enough, but it has always had certain limitations that become hard to ignore as your cluster grows and administrative needs expand.&lt;/p&gt;

&lt;p&gt;For instance, Ingress natively supports only HTTP(S) routing and TLS termination. Without vendor-specific extensions, it cannot handle many advanced traffic-management needs: other protocols such as gRPC at Layer 7 or TCP and UDP at Layer 4, content-based routing, session affinity, and so on.&lt;br&gt;
To implement any of these, you end up drowning in a sea of non-standard annotations that vary between controllers.&lt;/p&gt;
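&lt;p&gt;As a hypothetical illustration of that annotation sprawl, an Ingress that needs a rewrite, cookie-based session affinity, and a longer read timeout on the ingress-nginx controller ends up looking something like this (names are illustrative):&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop
  annotations:
    # None of these are part of the Ingress spec; they only work on one controller.
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
spec:
  rules:
  - host: shop.books.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: books
            port:
              number: 80
```

Swap the controller and every annotation above has to be relearned or replaced.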

&lt;p&gt;Additionally, while you can restrict access to Ingress via RBAC, there is no native way for, say, an admin to manage TLS while a developer manages routing in the same Ingress. The developer would need access to the whole Ingress spec, which could lead to misconfigurations that affect others sharing the controller in a multi-tenant environment.&lt;/p&gt;

&lt;p&gt;Not being able to standardise configuration across clusters has become a pain point, and administrators need a way to ensure teams have only the necessary access to perform their jobs, while relying less on other teams.&lt;/p&gt;

&lt;p&gt;This is why development of the Ingress API is being abandoned in favour of the Gateway API (an idea first floated as Ingress 2.0, before that name, too, was dropped).&lt;/p&gt;


&lt;h2&gt;
  
  
  The Gateway API
&lt;/h2&gt;

&lt;p&gt;In October 2023, the Gateway API reached general availability (v1.0). It is meant to replace Ingress in the long run, not to be an upgrade or new version of the Ingress API.&lt;/p&gt;

&lt;p&gt;The Gateway API offers a more structured and extensible traffic management API with clear separation of concerns and standardization across providers of the underlying controllers.  &lt;/p&gt;

&lt;p&gt;Gateway API breaks traffic management into distinct Kubernetes objects, something Ingress never properly achieved. Instead of one object carrying both infra-level settings and app-level routing with a sea of annotations, Gateway API separates responsibilities cleanly.&lt;/p&gt;

&lt;p&gt;The three core objects are:&lt;br&gt;
&lt;strong&gt;GatewayClass&lt;/strong&gt; – defines the implementation (controller)&lt;br&gt;
&lt;strong&gt;Gateway&lt;/strong&gt; – defines the entry point&lt;br&gt;
&lt;strong&gt;Route resources&lt;/strong&gt; – define how traffic should be forwarded (e.g., &lt;code&gt;HTTPRoute&lt;/code&gt;, &lt;code&gt;GRPCRoute&lt;/code&gt;, &lt;code&gt;UDPRoute&lt;/code&gt;)&lt;/p&gt;

&lt;p&gt;This structure gives clearer boundaries between infra teams and application teams, and removes the annotation-driven configuration style that made Ingress brittle and inconsistent across controllers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GatewayClass&lt;/strong&gt;&lt;br&gt;
Ingress binds an object to a controller via an annotation (and later the &lt;code&gt;ingressClassName&lt;/code&gt; field). The Gateway API formalises this into a dedicated object.&lt;/p&gt;

&lt;p&gt;A GatewayClass declares which controller will implement a Gateway. It’s purely infrastructure-level and usually immutable once defined.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway.networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GatewayClass&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;controllerName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;k8s.nginx.org/nginx&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Gateway&lt;/strong&gt;&lt;br&gt;
A Gateway object represents the actual data-plane listener — the equivalent of the load balancer or proxy entrypoint. This is separate from routing logic, so infra teams control where traffic enters and how it is exposed (ports, protocols, hostnames).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway.networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Gateway&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;shop&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;gatewayClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;listeners&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HTTP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In Ingress, this corresponds to the Ingress controller plus service load-balancer abstraction, but modelled explicitly rather than implicitly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HTTPRoute&lt;/strong&gt;&lt;br&gt;
Routes define how traffic is matched and forwarded. &lt;code&gt;HTTPRoute&lt;/code&gt; replaces the routing rules you’d normally place under &lt;code&gt;spec.rules&lt;/code&gt; in an Ingress. The difference is that the Gateway API supports richer matching and isn’t tied to the limitations of annotations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway.networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HTTPRoute&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;books&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;parentRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;shop&lt;/span&gt;
  &lt;span class="na"&gt;hostnames&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;shop.books.com"&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;matches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PathPrefix&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
    &lt;span class="na"&gt;backendRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;books&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;80&lt;/span&gt;   
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After years of Ingress, you realise it puts the controller, entry point, and routing logic into a single object, with most controller-specific behaviour hidden behind annotations.&lt;/p&gt;

&lt;p&gt;In contrast, Gateway API exposes these responsibilities as objects, resulting in consistent behaviour across controllers, better separation of duties, and routing rules that are easier to reason about.&lt;/p&gt;
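&lt;p&gt;That separation of duties even extends across namespaces: a Route in one team's namespace can only reference a Service owned by another team if that team explicitly permits it with a &lt;code&gt;ReferenceGrant&lt;/code&gt;. A sketch, with illustrative names:&lt;/p&gt;

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-shop-routes
  namespace: backend          # the namespace that owns the Services
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: shop           # HTTPRoutes here may reference Services below
  to:
  - group: ""
    kind: Service
```

With Ingress, there is no equivalent; cross-boundary access is whatever the shared controller happens to allow.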

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Ingress is not going away soon, but the ecosystem is already adjusting to a reality where the Gateway API becomes the standard way to manage traffic in Kubernetes. Its clearer design, stronger separation of roles, and consistent cross-implementation behaviour make it a more practical long-term choice. Migrating to the Gateway API can be demanding, so do it gracefully: analyse your setup, understand your routes, and check how mature your controller's Gateway API support is.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cloudnative</category>
      <category>networking</category>
      <category>devops</category>
    </item>
    <item>
      <title>Building and Deploying An API With Python &amp; Azure</title>
      <dc:creator>Utibe</dc:creator>
      <pubDate>Mon, 24 Mar 2025 11:35:02 +0000</pubDate>
      <link>https://dev.to/yutee_okon/building-and-deploying-an-api-with-python-azure-293d</link>
      <guid>https://dev.to/yutee_okon/building-and-deploying-an-api-with-python-azure-293d</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;As a cloud engineer, working with APIs and serverless services will almost always come into play at some point in your journey. I recently completed the LearnToCloud Phase 2 guide and took on the capstone project. In this article, I'll walk you through a version of the project where I built an API with FastAPI, leveraging Azure Blob Storage for storage, Cosmos DB for the database, and Terraform for Infrastructure as Code (IaC), all deployed using Azure Functions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Overview
&lt;/h2&gt;

&lt;p&gt;The API serves data on clubs that participate in the Nigerian Professional Football League, including their short name, nickname, home stadium, a link to their logo image, and the number of titles they have won. It also has a fun feature that generates a fun fact about a club on request, using the OpenAI API.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tools and services used:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform is used to provision resources on Azure.&lt;/li&gt;
&lt;li&gt;Azure Blob Storage is used to store the club logo images.&lt;/li&gt;
&lt;li&gt;Azure Cosmos DB serves as the database from which FastAPI retrieves and serves the data.&lt;/li&gt;
&lt;li&gt;The OpenAI API is used to generate the fun facts.&lt;/li&gt;
&lt;li&gt;Azure Functions hosts the API as a serverless function.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Architecture diagram:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5sqo9nxxaojqxoue7y9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5sqo9nxxaojqxoue7y9.png" alt="project diagram" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up The Project
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Azure CLI&lt;/li&gt;
&lt;li&gt;Python Installed&lt;/li&gt;
&lt;li&gt;Knowledge of Terraform&lt;/li&gt;
&lt;li&gt;Basic knowledge of Python and FastAPI&lt;/li&gt;
&lt;li&gt;Familiarity with Azure CosmosDB, Blob Storage, Azure Functions, and Azure Core Tools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can either run the API locally or deploy it with Azure Functions. The former is just the usual process of running a FastAPI application with &lt;code&gt;uvicorn&lt;/code&gt; and does not leverage the full potential of the setup, so I will walk you through the full deployment. First, clone the repository from GitHub &lt;a href="https://github.com/yutee/npfl-serverless-api" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terraform&lt;/strong&gt;&lt;br&gt;
Once you are in the project folder, &lt;code&gt;cd&lt;/code&gt; into the &lt;code&gt;infra&lt;/code&gt; directory and update the necessary variables and state information, then run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform apply &lt;span class="nt"&gt;-auto-approve&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once done, Terraform will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a storage account and blob container&lt;/li&gt;
&lt;li&gt;Upload the logo images to blob storage&lt;/li&gt;
&lt;li&gt;Create an Azure Cosmos DB account&lt;/li&gt;
&lt;li&gt;Create an Azure Functions app&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cosmos DB&lt;/strong&gt;&lt;br&gt;
Next, access the &lt;code&gt;data.json&lt;/code&gt; file and populate Azure Cosmos DB with the JSON data. If you want to learn a bit more about the Cosmos DB NoSQL API, you can use this &lt;a href="https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/quickstart-portal" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
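&lt;p&gt;If you prefer scripting this step instead of using the portal, a small helper like the following can load &lt;code&gt;data.json&lt;/code&gt; and upsert each record. This is only a sketch: the endpoint, key, and database/container names in the commented wiring are assumptions, not values from the project.&lt;/p&gt;

```python
import json

def load_clubs(path):
    """Read the list of club records from a data.json-style file."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def upsert_all(container, clubs):
    """Upsert every club document into a Cosmos DB container.

    'container' is anything exposing upsert_item(), e.g. an
    azure.cosmos ContainerProxy. Returns the number of documents written.
    """
    for club in clubs:
        container.upsert_item(club)
    return len(clubs)

# Sketch of the live wiring (requires azure-cosmos and real credentials):
#   from azure.cosmos import CosmosClient
#   client = CosmosClient(endpoint, key)
#   container = client.get_database_client(db_name).get_container_client(container_name)
#   upsert_all(container, load_clubs("data.json"))
```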

&lt;p&gt;&lt;strong&gt;FastAPI&lt;/strong&gt;&lt;br&gt;
Before we proceed, I would like to highlight some parts of the main API code. The API has four main endpoints, all serving data stored in Azure Cosmos DB.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GET /clubs: Get all clubs.&lt;/li&gt;
&lt;li&gt;GET /clubs/{club_name}: Get a particular club's details by name.&lt;/li&gt;
&lt;li&gt;GET /clubs/by-titles?min_titles=5: Get clubs with at least 5 titles.&lt;/li&gt;
&lt;li&gt;GET /clubs/fun-fact?club_name=Enyimba: Get a fun fact about a club.&lt;/li&gt;
&lt;/ul&gt;
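&lt;p&gt;To illustrate the filtering behind &lt;code&gt;/clubs/by-titles&lt;/code&gt;, here is a minimal, self-contained sketch of the same logic. The sample records are hypothetical; the real endpoint reads from Cosmos DB:&lt;/p&gt;

```python
# Hypothetical sample records shaped like the club documents in Cosmos DB.
SAMPLE_CLUBS = [
    {"club": "Enyimba", "titles_won": "9"},
    {"club": "Heartland", "titles_won": "5"},
    {"club": "Remo Stars", "titles_won": "0"},
]

def clubs_with_min_titles(clubs, min_titles):
    """Mirror of the endpoint's filter: keep clubs with at least min_titles titles."""
    return [club for club in clubs if int(club["titles_won"]) >= min_titles]

print([c["club"] for c in clubs_with_min_titles(SAMPLE_CLUBS, 5)])
# prints ['Enyimba', 'Heartland']
```

Note that &lt;code&gt;titles_won&lt;/code&gt; is stored as a string, which is why the filter casts with &lt;code&gt;int()&lt;/code&gt; before comparing.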

&lt;p&gt;The part of the code that interacts with CosmosDB looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;azure.cosmos&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;CosmosClient&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;.config&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;settings&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CosmosDB&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;class to handle cosmosdb connection and queries&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;CosmosClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;settings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cosmos_endpoint&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;settings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cosmos_key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;database&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_database_client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;settings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cosmos_database&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;container&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;database&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_container_client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;settings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cosmos_container&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_clubs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
        SELECT 
            c.id, 
            c.club, 
            c.full_name, 
            c.short_name, 
            c.nickname, 
            c.stadium, 
            c.titles_won, 
            c.logo
        FROM c
        &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;items&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;list&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query_items&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;enable_cross_partition_query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;items&lt;/span&gt;
        &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Internal Server Error&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, a basic FastAPI endpoint looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@app.get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/clubs/by-titles-won&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_clubs_by_titles&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;min_titles&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;get clubs with titles won greater than or equal to min&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;clubs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_clubs&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;filtered_clubs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;club&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;club&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;clubs&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;club&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;titles_won&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="n"&gt;min_titles&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;filtered_clubs&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;HTTPException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;detail&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Internal Server Error&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tests are also written to test the endpoint to ensure it does what it should:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;test_get_club_by_name&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/clubs/heartland&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;assert&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;
    &lt;span class="k"&gt;assert&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;club&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Heartland&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can go through more parts of the API code at &lt;code&gt;.api/WrapperFunction&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Azure Functions&lt;/strong&gt;&lt;br&gt;
The configuration for Azure Functions is already set up. If you want to learn how to set up Azure Functions using the Core Tools, you can use this &lt;a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-run-local?tabs=macos%2Cisolated-process%2Cnode-v4%2Cpython-v2%2Chttp-trigger%2Ccontainer-apps&amp;amp;pivots=programming-language-python#start" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For specifically using Azure functions with FastAPI, this &lt;a href="https://learn.microsoft.com/en-us/azure/azure-functions/create-first-function-cli-python?tabs=macos%2Cbash%2Cazure-cli%2Cbrowser" rel="noopener noreferrer"&gt;link&lt;/a&gt; will serve as a better guide.&lt;/p&gt;

&lt;p&gt;Now, there are two ways to host your functions: locally (usually for testing your setup) or by publishing to an Azure Functions app. To run locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;func start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To deploy to an Azure Functions app, keep a few considerations in mind: ensure the Python version you use locally is compatible with the one you chose when creating the Functions app, that the folder structure is properly set up, and that the environment variables are updated. Once that is done, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;func azure functionapp publish &amp;lt;functionapp_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Upon successful deployment, Azure Functions will provide a link where you can reach your API and its various endpoints.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;- GET /&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffj0gnxrjxq8k56qn1oaq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffj0gnxrjxq8k56qn1oaq.png" alt="testing endpoint in browser" width="800" height="158"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;- GET /clubs/by-titles?min_titles=5&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft97lra1yks8y5l649g5l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft97lra1yks8y5l649g5l.png" alt="testing endpoint in browser" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;br&gt;
In this project, I was able to build and deploy a FastAPI-based serverless API using Azure Functions, CosmosDB, Blob Storage, and Terraform. The API provides information on Nigerian football clubs and even generates fun facts using OpenAI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Try extending the API by adding more endpoints or integrating authentication.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Explore other serverless compute options like AWS Lambda or Google Cloud Functions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Call to Action&lt;/strong&gt;&lt;br&gt;
Feel free to check out the GitHub repo, experiment with the codebase, and drop some suggestions. If you found this helpful, don’t forget to share!&lt;/p&gt;

&lt;p&gt;Happy Hacking!&lt;/p&gt;

</description>
      <category>api</category>
      <category>azure</category>
      <category>serverless</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Azure DevOps Explained: Services, Tools And Use Cases</title>
      <dc:creator>Utibe</dc:creator>
      <pubDate>Fri, 31 Jan 2025 07:56:23 +0000</pubDate>
      <link>https://dev.to/yutee_okon/azure-devops-explained-services-tools-and-use-cases-2mg0</link>
      <guid>https://dev.to/yutee_okon/azure-devops-explained-services-tools-and-use-cases-2mg0</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;I recently set up a deployment workflow for a microservices-based application. Faced with a client requirement to stay within the Azure ecosystem, I explored Azure DevOps to integrate modern DevOps practices seamlessly into the deployment workflow.&lt;/p&gt;

&lt;p&gt;Azure DevOps is a set of cloud tools and services from Microsoft that helps streamline software development, collaboration, and deployment. While it is abstracted from the Azure platform itself, it is tightly integrated with Azure services and works seamlessly with them; it also has great support for third-party DevOps tools, services, and cloud platforms.&lt;/p&gt;

&lt;p&gt;Over the next few paragraphs, I will be talking about the tools and services Azure DevOps offers, how I used some of these tools, and how they work with other Azure services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Azure DevOps Core Services
&lt;/h2&gt;

&lt;p&gt;Azure DevOps takes a modern approach to software development by emphasizing collaboration, automation, Infrastructure as Code (IaC), project management, product testing, Continuous Integration (CI), and Continuous Deployment (CD). This methodology streamlines the building, testing, and delivery of software, enabling teams to work more efficiently.&lt;/p&gt;

&lt;p&gt;Additionally, Azure DevOps supports both cloud-hosted and on-premises solutions through Azure DevOps Services (cloud-based) and Azure DevOps Server (self-hosted).&lt;/p&gt;

&lt;p&gt;These capabilities are made possible through a suite of integrated services and tools designed to enhance the entire development lifecycle.&lt;/p&gt;

&lt;h3&gt;
  
  
  Core Services
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Azure Repos:&lt;/strong&gt; This is one of its key services. Azure Repos provides a code repository for storing application code and configurations and offers source control with Git and TFVC.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Azure Pipelines:&lt;/strong&gt; Modern DevOps does not come without CI/CD (Continuous Integration and Continuous Deployment) automation for faster, seamless deployments, and Azure Pipelines is the Azure service that makes this possible. Like GitHub Actions, it uses a YAML-file approach and has presets that can help you write pipeline jobs quickly and efficiently.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Azure Boards:&lt;/strong&gt; This enables agile project tracking for development teams. With Azure boards, you can plan, track, and discuss jobs and tasks across teams.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Azure Test Plans:&lt;/strong&gt; This is a test management service that comes with exploratory testing toolkits to help your teams ship products with confidence. It also has advanced testing features that are available on a paid plan.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Azure Artifacts:&lt;/strong&gt; Package management to create, store, and share dependencies, packages, and builds. It integrates easily with Azure Pipelines and other CI/CD pipelines and lets them use the stored artifacts.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
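&lt;p&gt;To give a feel for the YAML approach Azure Pipelines uses, here is a minimal sketch of an &lt;code&gt;azure-pipelines.yml&lt;/code&gt; file (the pool name and the echo steps are placeholders, not taken from a real project):&lt;/p&gt;

```yaml
# Minimal azure-pipelines.yml sketch; "Default" is a placeholder agent pool name
trigger:
  branches:
    include:
      - main

pool:
  name: Default

steps:
  - script: echo "Building the application..."
    displayName: Build
  - script: echo "Running tests..."
    displayName: Test
```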

&lt;h2&gt;
  
  
  Azure DevOps in Action
&lt;/h2&gt;

&lt;p&gt;Having explored the services available on the Azure DevOps platform, let us now see how these services come together in a real-world deployment scenario.&lt;/p&gt;

&lt;p&gt;I was tasked with implementing an end-to-end deployment pipeline for a microservices-based application on Kubernetes, leveraging Azure services. The application comprised services written in Python (FastAPI) and JavaScript (Node.js), with Redis and PostgreSQL as the databases.&lt;/p&gt;

&lt;p&gt;The codebase was initially hosted on GitHub, so my first step was migrating it to Azure Repos. Following that, I designed a CI/CD workflow using Azure Pipelines to automate the build process, scan container images for vulnerabilities, and push them to Azure Container Registry (ACR). Unlike GitHub Actions, Azure Pipelines often requires self-hosted agents to execute workflows, which I set up accordingly. &lt;em&gt;(For more details on self-hosted agents, see my article &lt;a href="https://dev.to/yutee_okon/self-hosted-runners-in-azure-pipelines-the-why-and-the-how-akl"&gt;here&lt;/a&gt;.)&lt;/em&gt;&lt;/p&gt;
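&lt;p&gt;As a rough sketch of what such a pipeline can look like, here are build, scan, and push steps using the built-in &lt;code&gt;Docker@2&lt;/code&gt; task and a plain script step for Trivy (the registry, service connection, and image names below are placeholders, not the actual project values):&lt;/p&gt;

```yaml
# Sketch of the CI steps: build the image, scan it with Trivy, push to ACR.
# myacr, my-acr-connection, and auth-service are placeholder names.
steps:
  - task: Docker@2
    displayName: Build image
    inputs:
      command: build
      containerRegistry: my-acr-connection
      repository: auth-service
      tags: $(Build.BuildId)
  - script: |
      trivy image --exit-code 1 --severity HIGH,CRITICAL \
        myacr.azurecr.io/auth-service:$(Build.BuildId)
    displayName: Scan image
  - task: Docker@2
    displayName: Push image
    inputs:
      command: push
      containerRegistry: my-acr-connection
      repository: auth-service
      tags: $(Build.BuildId)
```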

&lt;p&gt;Next, I provisioned a test Kubernetes cluster using Azure Kubernetes Service (AKS) and deployed ArgoCD, a GitOps tool for Kubernetes. I then configured an ArgoCD application to continuously monitor a specific directory in Azure Repos containing Kubernetes manifests, ensuring that the cluster state always reflects the desired configuration.&lt;/p&gt;

&lt;p&gt;This setup streamlined the deployment process by combining Azure DevOps, Kubernetes, and GitOps principles, resulting in a fully automated and scalable deployment pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Azure DevOps?
&lt;/h2&gt;

&lt;p&gt;While there are more popular DevOps platforms and services, there are some specific reasons to choose Azure DevOps for your next project. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Integration with other Azure Services:&lt;/strong&gt; Azure DevOps integrates tightly with other Azure services, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Azure Kubernetes Service (AKS) – Deploy containerized applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Azure Virtual Machines (VMs) – Host applications and automate deployments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Azure Functions – Serverless automation in CI/CD workflows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Azure Monitor &amp;amp; Log Analytics – Track and analyze CI/CD performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Azure Key Vault – Securely store secrets for pipelines.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
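&lt;p&gt;For example, pulling secrets from Azure Key Vault into a pipeline is a single built-in task (the subscription connection and vault names below are placeholders):&lt;/p&gt;

```yaml
# Fetch secrets from Key Vault and expose them as pipeline variables.
# my-subscription-connection and my-keyvault are placeholder names.
steps:
  - task: AzureKeyVault@2
    inputs:
      azureSubscription: my-subscription-connection
      KeyVaultName: my-keyvault
      SecretsFilter: '*'
      RunAsPreJob: false
```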

&lt;p&gt;&lt;strong&gt;2. Multi-cloud support:&lt;/strong&gt; Azure DevOps provides robust multi-cloud support, making it a flexible choice for enterprises operating across Azure, AWS, and Google Cloud. It offers native integrations with major cloud providers, enabling seamless deployment to VMs, Kubernetes clusters (AKS, EKS, GKE), and serverless environments. With built-in Azure Pipelines, teams can use agent pools, Terraform, Ansible, and Bicep to automate infrastructure provisioning across multiple clouds while maintaining centralized governance.&lt;/p&gt;

&lt;p&gt;Unlike other DevOps platforms that may be tightly coupled to a single cloud, Azure DevOps provides a vendor-agnostic CI/CD solution. It supports multi-cloud authentication, cross-cloud artifact storage, and hybrid deployments, allowing organizations to avoid vendor lock-in and optimize workloads based on cost, performance, and compliance needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Enterprise security:&lt;/strong&gt; Azure DevOps offers enterprise-grade security with built-in compliance, advanced identity management, and secure pipelines, making it a strong choice over other DevOps platforms. It integrates natively with Azure AD for SSO, MFA, and Conditional Access, enforces RBAC for granular permissions, and ensures end-to-end security with Azure Key Vault for secrets management, private networking (Private Link), and threat detection via Microsoft Defender and Sentinel. Additionally, its compliance with ISO 27001, SOC 2, and HIPAA simplifies governance for regulated industries.&lt;/p&gt;

&lt;p&gt;Unlike other platforms that rely on third-party security add-ons, Azure DevOps provides seamless, built-in security features for code, artifacts, and CI/CD pipelines. With audit logging, policy enforcement, and real-time security scanning, it helps organizations proactively mitigate risks while maintaining high availability and compliance, making it ideal for enterprises with strict security requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;While Azure DevOps may not be the first choice for every DevOps engineer, its deep integration with Azure and strong security features make it a compelling solution for teams using Microsoft’s ecosystem.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
      <category>kubernetes</category>
      <category>microsoft</category>
    </item>
    <item>
      <title>Self-Hosted Runners in Azure Pipelines: The Why And The How</title>
      <dc:creator>Utibe</dc:creator>
      <pubDate>Mon, 27 Jan 2025 22:38:41 +0000</pubDate>
      <link>https://dev.to/yutee_okon/self-hosted-runners-in-azure-pipelines-the-why-and-the-how-akl</link>
      <guid>https://dev.to/yutee_okon/self-hosted-runners-in-azure-pipelines-the-why-and-the-how-akl</guid>
      <description>&lt;p&gt;While Azure DevOps have a generous free offering, running CI/CD pipelines with Azure Pipelines will require a paid plan to use Azure hosted runners, a cost effective solution would be to provision and configure your own runner.&lt;/p&gt;

&lt;p&gt;If you're used to platforms like GitHub Actions or haven't worked with Jenkins, managing and configuring runners may feel unfamiliar. This rather brief piece aims to simplify the process and walk you through provisioning your own self-hosted agent for Azure Pipelines.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Runner?
&lt;/h2&gt;

&lt;p&gt;Although natively referred to as "agents" in the Azure ecosystem, the more common name for them is "runners". &lt;br&gt;
When your pipeline runs, the system begins one or more jobs. A runner is the computing infrastructure that executes the jobs defined in your continuous integration (CI) or continuous delivery (CD) pipelines. It’s the environment where tasks like building, testing, and deploying your application are carried out.&lt;/p&gt;

&lt;p&gt;Azure DevOps offers several options for these agents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Microsoft-hosted agents&lt;/li&gt;
&lt;li&gt;Self-hosted agents&lt;/li&gt;
&lt;li&gt;Azure Virtual Machine Scale Set agents&lt;/li&gt;
&lt;li&gt;Managed DevOps Pools agents&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will focus on self-hosted agents, as the other options can come at a pricey rate. You can learn more about the other options &lt;a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops&amp;amp;tabs=yaml%2Cbrowser" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why Use Self-Hosted Runners?
&lt;/h2&gt;

&lt;p&gt;As you can probably figure out by now, cost efficiency is just one of the many reasons to use self-hosted runners. Nearly all continuous integration platforms provide the option to utilize your own runners, and the benefits extend beyond saving money.&lt;/p&gt;

&lt;p&gt;Here are some key advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data Security and Compliance:&lt;/strong&gt; Self-hosted runners allow you to maintain tighter control over sensitive data and meet specific compliance requirements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Faster Builds and Better Resource Management:&lt;/strong&gt; With self-hosted runners, you can allocate resources according to your needs, leading to faster build times and greater flexibility in managing workloads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Support for Special Scenarios and Legacy Systems:&lt;/strong&gt; They are particularly useful for handling unique setups or supporting older systems that may not be compatible with shared runners.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you have stuck around this long, you are probably ready to dive into configuring your own self-hosted agent for Azure Pipelines, so let's begin.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step-by-Step Guide to Setting Up a Self-Hosted Agent
&lt;/h2&gt;

&lt;p&gt;I will be using an Azure Virtual Machine (conveniently sticking to the ecosystem) as the infrastructure for our agent, but you can use any virtual machine, or even your local system. Note that if you are not running Linux, you may have to select a few different options along the way.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1 - Create a Virtual Machine&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provision a virtual machine (VM) in Azure and ensure SSH access is enabled.&lt;/li&gt;
&lt;/ul&gt;
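&lt;p&gt;If you are using the Azure CLI, the provisioning could look something like this (the resource group, VM name, size, and image below are placeholder choices; adjust them to your needs):&lt;/p&gt;

```shell
# Create a small Ubuntu VM for the agent, with SSH access enabled.
# my-rg and agent-vm are placeholder names.
az group create --name my-rg --location westeurope
az vm create \
  --resource-group my-rg \
  --name agent-vm \
  --image Ubuntu2204 \
  --size Standard_B2s \
  --admin-username azureuser \
  --generate-ssh-keys
```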

&lt;p&gt;&lt;strong&gt;2 - Configure an Agent Pool in Azure DevOps&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to Organization Settings &amp;gt; Agent Pools.&lt;/li&gt;
&lt;li&gt;Click Add Pool, provide details, and select Grant access permission to all pipelines.&lt;/li&gt;
&lt;li&gt;Open the newly created pool, go to the Agents tab, and click New Agent.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3 - Install and Configure the Agent on the VM&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SSH into your virtual machine.&lt;/li&gt;
&lt;li&gt;Run the following commands to set up the agent:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# create a new directory, downloads the agent and unpacks the files&lt;/span&gt;
&lt;span class="nb"&gt;mkdir &lt;/span&gt;myagent &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;myagent

wget https://vstsagentpackage.azureedge.net/agent/4.248.0/vsts-agent-linux-x64-4.248.0.tar.gz

&lt;span class="nb"&gt;tar &lt;/span&gt;zxvf vsts-agent-linux-x64-4.248.0.tar.gz

&lt;span class="c"&gt;# configure the agent&lt;/span&gt;
./config.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;During configuration, you’ll be prompted for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Server URL: Use &lt;a href="https://dev.azure.com/%7Byourorganization%7D" rel="noopener noreferrer"&gt;https://dev.azure.com/{yourorganization}&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Personal Access Token (PAT): Generate this from User Settings &amp;gt; Personal Access Tokens in Azure DevOps.&lt;/li&gt;
&lt;li&gt;Agent Pool Name: Enter the name of your pool (if you did not use default).&lt;/li&gt;
&lt;li&gt;Agent Name and Work Folder: Confirm these settings.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At this point, the agent status will appear as "offline" on the Azure DevOps platform. Run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./run.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this, your agent should now be online and ready to accept jobs.&lt;/p&gt;
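&lt;p&gt;Rather than keeping &lt;code&gt;./run.sh&lt;/code&gt; attached to your terminal session, you can optionally install the agent as a systemd service so it starts on boot. On Linux, the agent package generates an &lt;code&gt;svc.sh&lt;/code&gt; helper for this after configuration:&lt;/p&gt;

```shell
# From the agent directory, install and start the agent as a service
sudo ./svc.sh install
sudo ./svc.sh start

# Confirm the service is running
sudo ./svc.sh status
```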

&lt;h2&gt;
  
  
  Final words
&lt;/h2&gt;

&lt;p&gt;With your self-hosted agent set up, remember that you are responsible for maintaining it. This includes applying patches, updates, and installing tools as needed. For example, if your pipeline requires Docker, you’ll need to install and configure Docker on the VM.&lt;/p&gt;

&lt;p&gt;By following these steps, you now have a cost-effective and flexible agent for Azure Pipelines.&lt;br&gt;
Keep building!&lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
      <category>cicd</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Advanced Kubernetes Deployment with GitOps: A Hands-On Guide To Terraform, Ansible, ArgoCD And Observability Tools</title>
      <dc:creator>Utibe</dc:creator>
      <pubDate>Tue, 07 Jan 2025 16:05:53 +0000</pubDate>
      <link>https://dev.to/yutee_okon/advanced-kubernetes-deployment-with-gitops-a-hands-on-guide-to-terraform-ansible-argocd-and-5c07</link>
      <guid>https://dev.to/yutee_okon/advanced-kubernetes-deployment-with-gitops-a-hands-on-guide-to-terraform-ansible-argocd-and-5c07</guid>
      <description>&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
Introduction
&lt;/li&gt;
&lt;li&gt;
Overview
&lt;/li&gt;
&lt;li&gt;
Infrastructure Provisioning
&lt;/li&gt;
&lt;li&gt;
Application Management
&lt;/li&gt;
&lt;li&gt;
Logging and Monitoring
&lt;/li&gt;
&lt;li&gt;
Challenges and Lessons Learned
&lt;/li&gt;
&lt;li&gt;Conclusion and Final Thoughts&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;GitOps is a modern approach to managing software applications and the infrastructure they run on, leveraging Git as the single source of truth for version control. It integrates seamlessly with practices like Continuous Integration/Continuous Deployment (CI/CD) and Infrastructure as Code (IaC) to ensure consistency, automation, and reliability. The core workflow involves developers committing code or configuration changes to a Git repository, which triggers automated pipelines to build, test, and deploy the updates to the target environment.&lt;/p&gt;

&lt;p&gt;GitOps is often used to manage Kubernetes clusters and cloud-native app development. &lt;br&gt;
Kubernetes itself is an orchestration system used to deploy and manage containerized applications.&lt;/p&gt;

&lt;p&gt;I recently completed a project where I followed GitOps practices to deploy a microservices-based application managed with Kubernetes.&lt;br&gt;
By using Git as the single source of truth for both infrastructure and application state, the project highlights the power of version-controlled, automated, and auditable deployments. GitOps, being Kubernetes-native, aligns seamlessly with container orchestration and microservices architecture, showcasing scalability, resilience, and automation in cloud-native systems. Additionally, it embraces the "shift-left" approach by empowering developers to contribute to operations through simple Git workflows, fostering collaboration and reducing deployment errors. With its focus on observability, traceability, and the ability to scale across diverse environments, this project underscores key principles of modern DevOps, making it highly relevant in today's software delivery landscape.&lt;/p&gt;
&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;The aim of this project is to create a fully automated setup for deploying a microservices-based application using Kubernetes (Helm), and to deploy observability tools on the provisioned Kubernetes cluster to both monitor the state of the cluster and retrieve logs.&lt;br&gt;
The following tools and services were used in the project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform&lt;/li&gt;
&lt;li&gt;Ansible&lt;/li&gt;
&lt;li&gt;Azure Kubernetes Service (AKS)&lt;/li&gt;
&lt;li&gt;Docker/Dockerhub&lt;/li&gt;
&lt;li&gt;Trivy&lt;/li&gt;
&lt;li&gt;Helm&lt;/li&gt;
&lt;li&gt;Kube-Prometheus Stack (monitoring)&lt;/li&gt;
&lt;li&gt;EFK Stack (logging)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This diagram shows the full flow of the project:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F48b9exklp894yfpe9j9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F48b9exklp894yfpe9j9w.png" alt="architecture diagram" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A complete DevOps setup typically involves multiple integrated components. This project utilized three Git repositories to manage distinct aspects of the workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Infrastructure Repository:&lt;/strong&gt; Hosts Terraform and Ansible configurations for provisioning and configuring the Kubernetes cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Application Repository:&lt;/strong&gt; Contains the microservices application source code, including the Dockerfile for building container images.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Kubernetes Manifests Repository:&lt;/strong&gt; Stores Kubernetes manifests and Helm charts for application deployment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The project workflow involves running &lt;code&gt;terraform apply&lt;/code&gt; to provision or update an Azure Kubernetes Service (AKS) cluster and generate an Ansible &lt;code&gt;hosts.ini&lt;/code&gt; file with cluster details. Once the cluster is ready, Terraform triggers Ansible to execute additional tasks, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploying the Kube-Prometheus stack for monitoring&lt;/li&gt;
&lt;li&gt;Setting up the EFK stack for centralized logging&lt;/li&gt;
&lt;li&gt;Installing ArgoCD using Helm&lt;/li&gt;
&lt;li&gt;Configuring an ArgoCD application to sync with the Kubernetes Manifests repository, ensuring that the cluster state matches the repository's desired state&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This setup streamlines the process of deploying applications and observability tools. With a single &lt;code&gt;terraform apply&lt;/code&gt; command, the entire infrastructure, monitoring, logging, and GitOps deployment configuration is provisioned and operational.&lt;/p&gt;
&lt;h2&gt;
  
  
  Infrastructure Provisioning
&lt;/h2&gt;

&lt;p&gt;Azure Kubernetes Service (AKS) is a managed Kubernetes service, used here as the main infrastructure for deploying our microservices application. Provisioning of the cluster is managed with Terraform, an Infrastructure as Code tool.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_kubernetes_cluster"&lt;/span&gt; &lt;span class="s2"&gt;"app-cluster"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"app-cluster"&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;default&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;
  &lt;span class="nx"&gt;resource_group_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;default&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;dns_prefix&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"app-k8s"&lt;/span&gt;
  &lt;span class="nx"&gt;kubernetes_version&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1.30.6"&lt;/span&gt;

  &lt;span class="nx"&gt;default_node_pool&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"app"&lt;/span&gt;
    &lt;span class="nx"&gt;node_count&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
    &lt;span class="nx"&gt;vm_size&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Standard_DS2_v2"&lt;/span&gt;
    &lt;span class="nx"&gt;os_disk_size_gb&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a basic code structure that creates an AKS cluster.&lt;/p&gt;
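&lt;p&gt;Once the cluster exists, you can fetch its credentials locally and confirm the nodes are up (the resource group name below is a placeholder for whatever the Terraform data source points at):&lt;/p&gt;

```shell
# Merge the AKS credentials into your kubeconfig and verify the nodes.
# my-rg is a placeholder for the resource group used in the Terraform config.
az aks get-credentials --resource-group my-rg --name app-cluster
kubectl get nodes
```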

&lt;p&gt;After cluster creation, Terraform triggers Ansible to run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;  &lt;span class="nx"&gt;provisioner&lt;/span&gt; &lt;span class="s2"&gt;"local-exec"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt;&lt;span class="no"&gt;EOT&lt;/span&gt;&lt;span class="sh"&gt;
      sleep 60 # wait for cluster to be ready
      KUBECONFIG=../ansible/kubeconfig \
      ansible-playbook \
        -i ../ansible/inventory/hosts.ini \
        ../ansible/playbook.yml \
        -e "kubernetes_cluster_name=${azurerm_kubernetes_cluster.app-cluster.name}" \
        -e "kubernetes_resource_group=${data.azurerm_resource_group.default.name}"
&lt;/span&gt;&lt;span class="no"&gt;    EOT

&lt;/span&gt;    &lt;span class="nx"&gt;working_dir&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"../ansible"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Ansible setup has roles that use Helm to install ArgoCD, the Kube-Prometheus stack, and the EFK stack. It also creates an ArgoCD application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# playbook.yml&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy tools on AKS&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost&lt;/span&gt;
  &lt;span class="na"&gt;gather_facts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;no&lt;/span&gt;
  &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;ansible_python_interpreter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/Library/Frameworks/Python.framework/Versions/3.12/bin/python3&lt;/span&gt;

  &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd_install&lt;/span&gt;
      &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;kubeconfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;kubeconfig_path&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
      &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;not node_status.failed&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;logging_stack&lt;/span&gt;
      &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;kubeconfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;kubeconfig_path&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
      &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;not node_status.failed&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;monitoring_stack&lt;/span&gt;
      &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;kubeconfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;kubeconfig_path&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
      &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;not node_status.failed&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The role configurations also contain the ArgoCD application manifest files. The complete configuration code can be found &lt;a href="https://github.com/yutee/infra-repo" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Application Management
&lt;/h2&gt;

&lt;p&gt;The application to be deployed is designed using a microservices-based architecture. The source code and Dockerfile for each microservice are stored in an application repo. A continuous integration pipeline is also set up to handle code changes to the application code and Dockerfiles. The pipeline has jobs that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Detect the services that have had changes&lt;/li&gt;
&lt;li&gt;Build new Docker images and push them to Docker Hub&lt;/li&gt;
&lt;li&gt;Scan the Docker images for vulnerabilities using Trivy&lt;/li&gt;
&lt;li&gt;Update the manifest files in the manifest repository&lt;/li&gt;
&lt;/ul&gt;
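&lt;p&gt;The change-detection step can be as simple as mapping changed file paths to top-level service directories. Here is a minimal, self-contained sketch; the file paths are hypothetical examples of what &lt;code&gt;git diff --name-only HEAD~1 HEAD&lt;/code&gt; might print in such a repo:&lt;/p&gt;

```shell
# Given the changed file paths from the last commit (hypothetical sample
# output of `git diff --name-only HEAD~1 HEAD`), print the unique services
# that need rebuilding. The second path segment is the service directory.
changed_files="services/auth/app.py
services/auth/Dockerfile
services/cart/index.js"

changed_services=$(echo "$changed_files" | cut -d/ -f2 | sort -u)
echo "$changed_services"
```

In a real pipeline job, the downstream build steps would then loop over this list and rebuild only the affected images.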

&lt;p&gt;The configuration code for the workflow can be found &lt;a href="https://github.com/yutee/application-repo/blob/main/.github/workflows/microservice-ci.yml" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes Manifests
&lt;/h2&gt;

&lt;p&gt;Each service contains Kubernetes manifest files for a Deployment, a Service, and a ServiceAccount. There is also an Ingress configuration file. The manifests are managed by a custom Helm chart in the repository.&lt;/p&gt;

&lt;p&gt;The Helm chart is updated with the current image tags by the application repo workflow. It is the single source of truth used with ArgoCD in the Kubernetes cluster.&lt;/p&gt;
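&lt;p&gt;The tag update itself can be a one-line &lt;code&gt;sed&lt;/code&gt; in the CI job. A self-contained sketch follows; the &lt;code&gt;values.yaml&lt;/code&gt; layout and tag value are hypothetical, created inline so the snippet runs on its own:&lt;/p&gt;

```shell
# Simulate the chart's values.yaml (hypothetical layout), then bump the
# image tag the way the CI job would. NEW_TAG would normally be the commit SHA.
NEW_TAG="a1b2c3d"
printf 'image:\n  repository: docker.io/example/auth\n  tag: old\n' > values.yaml

# Replace the tag line in place (GNU sed)
sed -i "s/^  tag: .*/  tag: ${NEW_TAG}/" values.yaml
cat values.yaml
```

The workflow would then commit and push the updated chart to the manifests repo, where ArgoCD picks up the change.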

&lt;h2&gt;
  
  
  GitOps Workflow
&lt;/h2&gt;

&lt;p&gt;Argo CD is a continuous delivery (CD) tool for Kubernetes that uses GitOps to manage and deliver applications.&lt;br&gt;
It is synced to the manifests repo and ensures that the current state of that repo is the deployed state in the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;ArgoCD is installed in the Kubernetes cluster using Ansible, and an application that handles the manifest files is also created with Ansible.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# infra-repo/ansible/roles/argocd_install/templates/application.yml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Application&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;app_name&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;argocd_namespace&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;app_repo_url&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
    &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;app_target_revision&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;app_path&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
    &lt;span class="na"&gt;helm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;releaseName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;app_name&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kubernetes.default.svc&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;app_namespace&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
  &lt;span class="na"&gt;syncPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;automated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;prune&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;selfHeal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;syncOptions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;CreateNamespace=true&lt;/span&gt;

&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="c1"&gt;# infra-repo/ansible/roles/argocd_install/templates/application.yml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd-server-ingress&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;argocd_namespace&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/rewrite-target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ingressClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;argocd_host&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
        &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
        &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;argocd_release_name&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;&lt;span class="s"&gt;-server&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmf913y9j6zru7anwws8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmf913y9j6zru7anwws8.png" alt="argocd application" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Logging and Monitoring
&lt;/h2&gt;

&lt;p&gt;Metrics, logs, and alerts are the pillars of observability. Deploying tools that simplify logging and the monitoring of performance metrics is even more crucial in a microservices deployment, where running &lt;code&gt;kubectl logs&lt;/code&gt; against multiple pods and services might not suffice when troubleshooting. This is where efficient, centralized metrics and log management stacks like kube-prometheus (Prometheus, Grafana, Alertmanager) and EFK (Elasticsearch, Fluent Bit and Kibana) come in.&lt;/p&gt;
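&lt;p&gt;For instance, rather than tailing each pod individually, a label selector can stream logs from every matching pod at once (the label and namespace below are illustrative, not from this project):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# stream recent logs from all pods carrying an example label
kubectl logs -l app=backend --all-containers=true --tail=50 -n default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;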

&lt;p&gt;In this project setup, after &lt;code&gt;terraform apply&lt;/code&gt; runs successfully, the kube-prometheus and EFK stacks are also deployed into the AKS cluster, in the &lt;code&gt;monitoring&lt;/code&gt; and &lt;code&gt;logging&lt;/code&gt; namespaces respectively, with default configurations.&lt;br&gt;
The configuration files that handle this can be accessed &lt;a href="https://github.com/yutee/infra-repo/tree/main/ansible" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once the services are up, the Kibana and Grafana UIs can be accessed through their services via ingress or port-forwarding.&lt;/p&gt;
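&lt;p&gt;As a quick sketch, port-forwarding the two UIs could look like this (the service names assume the default Helm chart releases and may differ in your cluster):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Grafana from the kube-prometheus stack
kubectl port-forward svc/kube-prometheus-stack-grafana 3000:80 -n monitoring

# Kibana from the EFK stack
kubectl port-forward svc/kibana-kibana 5601:5601 -n logging
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;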

&lt;p&gt;&lt;em&gt;screenshot of deployed applications&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezkaphk16x0uku3hvleh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezkaphk16x0uku3hvleh.png" alt="deployed application" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;screenshot of dashboards from kube-prometheus stack&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qssrqh0tf8fs9kbmeyt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qssrqh0tf8fs9kbmeyt.png" alt="monitoring dashboard options" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;screenshot of dashboard view with data on grafana&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqrtn4ypoujq35155hh53.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqrtn4ypoujq35155hh53.png" alt="monitoring dashboard view" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges and Lessons Learned
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Challenges:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Managing Repository Synchronization:&lt;/strong&gt;&lt;br&gt;
Ensuring consistent updates across the Infra, Application, and K8 Manifests repositories was complex, especially with multiple dependencies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tooling Compatibility:&lt;/strong&gt;&lt;br&gt;
Configuring Terraform and Ansible to work seamlessly required troubleshooting, particularly when passing dynamic values&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ArgoCD Configuration:&lt;/strong&gt;&lt;br&gt;
Setting up ArgoCD applications to handle multiple microservices and ensuring proper synchronization posed a learning curve.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker Image Management:&lt;/strong&gt;&lt;br&gt;
Scanning and pushing Docker images while managing triggers for CI pipelines was challenging to automate effectively.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Debugging Deployment Failures:&lt;/strong&gt;&lt;br&gt;
Diagnosing deployment issues, such as misconfigured Kubernetes manifests or failed CI/CD pipelines, consumed significant time.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Lessons Learned:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Importance of GitOps:&lt;/strong&gt;&lt;br&gt;
GitOps principles simplify complex workflows by centralizing configurations in version-controlled repositories.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Toolchain Proficiency:&lt;/strong&gt;&lt;br&gt;
Hands-on familiarity with tools like Terraform, Ansible, and ArgoCD significantly reduces setup time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Iterative Approach:&lt;/strong&gt;&lt;br&gt;
Breaking down tasks into smaller, testable units (e.g., individual Ansible roles) improves development and debugging efficiency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Documentation and Automation:&lt;/strong&gt;&lt;br&gt;
Comprehensive documentation and automation reduce the risk of misconfigurations during handoffs or scaling.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This project was an in-depth exploration of GitOps principles applied to a microservices-based application deployment. By integrating Terraform, Ansible, and ArgoCD, I was able to streamline Kubernetes deployments while leveraging CI/CD pipelines with GitHub Actions for application management. Setting up logging and monitoring stacks ensured operational visibility, while overcoming challenges deepened my understanding of Kubernetes and GitOps workflows.&lt;/p&gt;

&lt;p&gt;This experience underscores the value of automation and iterative problem-solving in managing modern cloud-native architectures. Focusing on the overall workflow and how the many different parts come together has been the biggest success of the project for me. As a next step, enhancing scalability and security, improving the continuous integration pipelines, and exploring advanced GitOps practices, such as automated rollbacks and canary deployments, could further refine the workflow.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>gitops</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Continuous Integration And Continuous Deployment Of A Full-Stack Docker-Compose Application</title>
      <dc:creator>Utibe</dc:creator>
      <pubDate>Tue, 17 Dec 2024 00:36:32 +0000</pubDate>
      <link>https://dev.to/yutee_okon/continous-integration-and-continous-deployment-of-a-full-stack-docker-compose-application-1f89</link>
      <guid>https://dev.to/yutee_okon/continous-integration-and-continous-deployment-of-a-full-stack-docker-compose-application-1f89</guid>
      <description>&lt;p&gt;Achieving a fully automated setup for infrastructure and application deployment is usually not complete without a proper Continuous Integration and Continuous Deployment (CI/CD) setup. Previously, I had a setup where Docker Compose was leveraged to deploy a full-stack application (Python, React, MySQL) and a corresponding monitoring stack (Prometheus, Grafana, Loki and cAdvisor) for monitoring server and container metrics and logs. Terraform (IaC) and Ansible (configuration management) were used together to quickly spin up the services.&lt;/p&gt;

&lt;p&gt;But to keep track of future updates to the application or the state of the infrastructure running it, it’s crucial to leverage a version-control platform like GitHub. Changes to our code and configuration should also be seamlessly integrated into the current version. This is where GitHub Actions comes in.&lt;/p&gt;

&lt;p&gt;GitHub Actions is a CI/CD platform provided by GitHub that enables you to automate, build, test, and deploy your code directly from your GitHub repository. It allows developers to define custom workflows using YAML files that respond to specific events in a repository, such as pushes, pull requests, or issues.&lt;/p&gt;

&lt;p&gt;Setting up a deployment pipeline does not have a particularly steep learning curve, but, as with many complicated things, it requires good planning and proper design to ensure the many moving parts work together. To learn more about GitHub Actions, you can use this &lt;a href="https://docs.github.com/en/actions/about-github-actions/understanding-github-actions" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
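&lt;p&gt;To give a feel for the structure, here is a minimal, illustrative workflow file (the names and steps are placeholders, not the workflows used in this project):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# .github/workflows/example.yml -- illustrative only
name: example-ci
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "build and test steps go here"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;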

&lt;h2&gt;
  
  
  Project Requirements
&lt;/h2&gt;

&lt;p&gt;For this project, we will be setting up CI/CD pipelines to automate infrastructure and application deployments. The setup will include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud cost optimization with cost estimation tools like InfraCost.&lt;/li&gt;
&lt;li&gt;Terraform and Ansible integration for infrastructure management and monitoring stack setup.&lt;/li&gt;
&lt;li&gt;Git branching strategies for streamlined CI/CD pipelines.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To be able to follow along the implementation, you should have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Knowledge of Docker, Git, Terraform and Ansible&lt;/li&gt;
&lt;li&gt;An Azure subscription&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The application code, configuration and workflow files for this project can be found in this GitHub &lt;a href="https://github.com/yutee/devops-cicd" rel="noopener noreferrer"&gt;repository&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architectural Overview
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Architecture Diagram of the Setup&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hkn6428ftzlkni7mg30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hkn6428ftzlkni7mg30.png" alt="Image description" width="800" height="536"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation Process
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Clone the repository&lt;/strong&gt;&lt;br&gt;
The repository contains code and configurations setup in directories as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;.github/workflows&lt;/strong&gt; - workflow files&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;terraform&lt;/strong&gt; - configurations for provisioning infrastructure resources on Azure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ansible&lt;/strong&gt; - ansible playbooks and roles for configuring the server and bringing up monitoring and application stack&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;docker-compose&lt;/strong&gt; - docker compose configuration files for both application and monitoring stack&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;app&lt;/strong&gt; - contains application code for frontend and backend with their corresponding dockerfiles&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Workflow Configurations &amp;amp; branch setup&lt;/strong&gt;&lt;br&gt;
There are six GitHub Actions workflow files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;terraform-validate.yml&lt;/li&gt;
&lt;li&gt;terraform-plan.yml&lt;/li&gt;
&lt;li&gt;terraform-apply.yml&lt;/li&gt;
&lt;li&gt;ansible-monitoring.yml&lt;/li&gt;
&lt;li&gt;ci-application.yml&lt;/li&gt;
&lt;li&gt;cd-application.yml&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Create 4 branches:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;infra_features&lt;/li&gt;
&lt;li&gt;infra_main&lt;/li&gt;
&lt;li&gt;integration&lt;/li&gt;
&lt;li&gt;deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Infrastructure Workflow&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Push to the &lt;code&gt;infra_features&lt;/code&gt; branch -- the validate workflow runs to confirm the validity of the terraform code&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On pull request to the &lt;code&gt;infra_main&lt;/code&gt; branch, the plan workflow runs and adds an Infracost comment containing the estimated monthly cost of the new infrastructure updates&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
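&lt;p&gt;The two triggers above can be sketched roughly as follows (the branch names are from this project; the rest is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# terraform-validate.yml -- runs on pushes to the feature branch
on:
  push:
    branches: [infra_features]

# terraform-plan.yml -- runs on pull requests targeting infra_main
on:
  pull_request:
    branches: [infra_main]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;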

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1tpoxuajanrxmd9a1gb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1tpoxuajanrxmd9a1gb.png" alt="infracost comment" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;On merge to &lt;code&gt;infra_main&lt;/code&gt; branch, the apply workflow runs, creating or updating resources on Azure (or cloud service provider of choice).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On successful completion of &lt;code&gt;terraform apply&lt;/code&gt;, the Ansible workflow runs, configuring the newly created server and deploying the monitoring stack onto it.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2z6p2gvz77lwau1yka3w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2z6p2gvz77lwau1yka3w.png" alt="successful pipeline runs" width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Deployment Workflow&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For deploying the application stack: when updated application code is pushed to the &lt;code&gt;integration&lt;/code&gt; branch, a workflow is triggered that:&lt;/li&gt;
&lt;li&gt;builds new docker images&lt;/li&gt;
&lt;li&gt;tags and pushes the images to dockerhub&lt;/li&gt;
&lt;li&gt;&lt;p&gt;updates docker compose files with the new tags&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On successful merge from the &lt;code&gt;integration&lt;/code&gt; branch to the &lt;code&gt;deployment&lt;/code&gt; branch, Ansible is once again used to deploy the updated docker compose files to the server.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
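&lt;p&gt;The tag-update step can be approximated with a simple substitution (the image name and file path here are assumptions for illustration; &lt;code&gt;GITHUB_SHA&lt;/code&gt; is an environment variable provided by GitHub Actions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# point the compose file at the freshly pushed image tag
sed -i "s|image: utibeokon/frontend:.*|image: utibeokon/frontend:${GITHUB_SHA}|" docker-compose/docker-compose.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;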

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmx82a4qks4cptkutawn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmx82a4qks4cptkutawn.png" alt="deployed services using github actions" width="800" height="111"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges and Solutions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Managing and Authenticating with Multiple Tools, Platforms and Frameworks&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Testing Pipeline updates&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Results and Learnings
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Streamlined Infrastructure Management: The infrastructure pipeline automated the provisioning of cloud resources and deployed a monitoring stack efficiently.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cost Transparency: Using InfraCost provided visibility into cloud expenses, enabling better decision-making.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Improved CI/CD Efficiency: The application pipeline successfully automated Docker image builds, updates, and deployments, significantly reducing manual efforts and errors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;End-to-End Automation: Combined workflows for infrastructure and application deployments achieved a fully automated system with minimal downtime during updates.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This project demonstrated the power of CI/CD pipelines in automating both infrastructure and application deployments. By integrating multiple tools into a cohesive workflow, I achieved faster deployments, better cost control, and a more reliable system overall. The challenges provided valuable lessons on tool integration, branching strategies, and automation best practices, making future projects more efficient and scalable.&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>githubactions</category>
      <category>ansible</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Automating the Deployment of a Docker-Powered Full-Stack Application with Terraform and Ansible</title>
      <dc:creator>Utibe</dc:creator>
      <pubDate>Sun, 01 Dec 2024 22:53:10 +0000</pubDate>
      <link>https://dev.to/yutee_okon/automating-the-deployment-of-a-docker-powered-full-stack-application-with-terraform-and-ansible-mo6</link>
      <guid>https://dev.to/yutee_okon/automating-the-deployment-of-a-docker-powered-full-stack-application-with-terraform-and-ansible-mo6</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In a previous project, &lt;a href="https://dev.to/yutee_okon/docker-to-the-rescue-deploying-react-and-fastapi-app-with-monitoring-1i79"&gt;"Docker to the Rescue: Deploying React and FastAPI App With Monitoring"&lt;/a&gt;, I explored Docker's transformative power in simplifying application deployment.&lt;/p&gt;

&lt;p&gt;However, as deployment requirements grow, automation becomes essential to ensure consistency, efficiency, and scalability. Manual deployments, particularly during updates, are prone to inefficiencies and inconsistencies, which is a significant challenge in software delivery. Automation addresses this by enabling seamless rollouts of application and infrastructure changes, reducing downtime and human error.&lt;/p&gt;

&lt;p&gt;In this article, I will go through a project where I leveraged automation to deploy a React and FastAPI application by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provisioning Azure resources with Terraform.&lt;/li&gt;
&lt;li&gt;Integrating Ansible with Terraform to configure the infrastructure.&lt;/li&gt;
&lt;li&gt;Orchestrating application deployment using Docker Compose.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Terraform provisions the server infrastructure required to host the application, while Ansible manages the configuration and setup of the provisioned server. The application (complete application and monitoring stack) is containerized and orchestrated using Docker Compose, ensuring seamless deployment and efficient resource management.&lt;br&gt;
By integrating these tools, the setup becomes repeatable, robust, and adaptable to evolving deployment needs.&lt;/p&gt;
&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkx2jb5ei266rbcdkxuq3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkx2jb5ei266rbcdkxuq3.png" alt="Architecture Diagram" width="800" height="435"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;overview of complete architecture&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Implementation Steps
&lt;/h2&gt;

&lt;p&gt;I will walk you through the steps I took to set up this automation process. You can find the application configuration code on my &lt;a href="https://github.com/yutee/devops-tf-ansible" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker and Docker Hub:&lt;/strong&gt;&lt;br&gt;
First, I tagged and pushed my pre-built application Docker images to Docker Hub:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker tag utibeokon/frontend:v2.0 frontend
docker tag utibeokon/backend:latest backend

docker push utibeokon/frontend:v2.0
docker push utibeokon/backend:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The respective images can be found here.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Frontend: &lt;a href="https://hub.docker.com/r/utibeokon/frontend" rel="noopener noreferrer"&gt;https://hub.docker.com/r/utibeokon/frontend&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Backend: &lt;a href="https://hub.docker.com/r/utibeokon/backend" rel="noopener noreferrer"&gt;https://hub.docker.com/r/utibeokon/backend&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Docker-Compose&lt;/strong&gt;&lt;br&gt;
The project folder holds my docker-compose configuration: two docker compose files plus service-specific config files, each named after the service it belongs to. The docker compose files declare the following services:&lt;br&gt;
&lt;code&gt;docker-compose.yml&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Frontend&lt;/li&gt;
&lt;li&gt;Backend&lt;/li&gt;
&lt;li&gt;Postgres&lt;/li&gt;
&lt;li&gt;Adminer&lt;/li&gt;
&lt;li&gt;Traefik&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;monitoring/docker-compose.yml&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prometheus&lt;/li&gt;
&lt;li&gt;Grafana&lt;/li&gt;
&lt;li&gt;Loki&lt;/li&gt;
&lt;li&gt;Promtail&lt;/li&gt;
&lt;li&gt;Cadvisor&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below is an overview of how the services interact within a shared Docker network.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqt8usrqwqpb8t4bfg1a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqt8usrqwqpb8t4bfg1a.png" alt="services-architecture" width="800" height="589"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;service interaction diagram&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Frontend connects with Backend&lt;/li&gt;
&lt;li&gt;Backend connects with Postgres for database interaction&lt;/li&gt;
&lt;li&gt;Adminer serves as a database dashboard for administration&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Traefik (running on ports 80 and 443) acts as a reverse proxy within the network, handling routing, HTTPS redirection and load balancing of requests received by the server&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;cAdvisor gathers container metrics and sends them to Prometheus&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Promtail gathers logs for Loki&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Prometheus and Loki store the metrics and logs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Grafana uses Prometheus and Loki as data sources and displays the data graphically on dashboards.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Be sure to review the routing configuration for Traefik. You can modify it to fit your routing needs.&lt;/p&gt;
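&lt;p&gt;For reference, Traefik routing in a compose setup is typically driven by labels on each service; here is a minimal sketch (the hostname is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# excerpt: illustrative labels on the frontend service
frontend:
  labels:
    - traefik.enable=true
    - traefik.http.routers.frontend.rule=Host(`app.example.com`)
    - traefik.http.routers.frontend.entrypoints=websecure
    - traefik.http.routers.frontend.tls=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;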

&lt;p&gt;&lt;strong&gt;Terraform:&lt;/strong&gt;&lt;br&gt;
In the terraform directory, there are configurations to provision a virtual network, a network security group and a virtual machine on Azure. These configurations are split across two modules, &lt;code&gt;network&lt;/code&gt; and &lt;code&gt;vm&lt;/code&gt;. The setup also uses Azure Blob Storage as the remote Terraform backend.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="c1"&gt;# main.tf file&lt;/span&gt;
&lt;span class="c1"&gt;# provision a virtual network from the network module&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"network"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"./modules/network"&lt;/span&gt;
  &lt;span class="nx"&gt;resource_group_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;resource_group_name&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;
  &lt;span class="nx"&gt;vnet_name&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"server-vnet"&lt;/span&gt;
  &lt;span class="nx"&gt;subnet_name&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"server-subnet"&lt;/span&gt;
  &lt;span class="nx"&gt;nsg_name&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"server-nsg"&lt;/span&gt;
  &lt;span class="nx"&gt;allowed_ports&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"22"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"80"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"443"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# provision a virtual machine instance from vm module&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"vm"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"./modules/vm"&lt;/span&gt;
  &lt;span class="nx"&gt;resource_group_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;resource_group_name&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;
  &lt;span class="nx"&gt;vm_name&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"server"&lt;/span&gt;
  &lt;span class="nx"&gt;vm_size&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Standard_B2s"&lt;/span&gt;
  &lt;span class="nx"&gt;admin_username&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;admin_username&lt;/span&gt;
  &lt;span class="nx"&gt;ssh_public_key&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ssh_key_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;subnet_id&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;network&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;subnet_id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# dynamically create an inventory file for ansible&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"local_file"&lt;/span&gt; &lt;span class="s2"&gt;"inventory"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;content&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOT&lt;/span&gt;&lt;span class="sh"&gt;
[servers]
${module.vm.public_ip} ansible_user=${var.admin_username} ansible_ssh_private_key_file=~/.ssh/id_rsa
&lt;/span&gt;&lt;span class="no"&gt;EOT

&lt;/span&gt;  &lt;span class="nx"&gt;filename&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"../ansible/inventory.ini"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# trigger the vm configuration with ansible&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"null_resource"&lt;/span&gt; &lt;span class="s2"&gt;"ansible_provisioner"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;depends_on&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="nx"&gt;local_file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;inventory&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;network&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;triggers&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;always_run&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;provisioner&lt;/span&gt; &lt;span class="s2"&gt;"local-exec"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt;&lt;span class="no"&gt;EOT&lt;/span&gt;&lt;span class="sh"&gt;
      sleep 60 # allow time for public ip to update
      ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook \
        -i ../ansible/inventory.ini \
        ../ansible/playbook.yml
&lt;/span&gt;&lt;span class="no"&gt;    EOT
&lt;/span&gt;  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# dns record for the server&lt;/span&gt;
&lt;span class="c1"&gt;# dns zone (slready created manually)&lt;/span&gt;
&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_dns_zone"&lt;/span&gt; &lt;span class="s2"&gt;"domain"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;domain_name&lt;/span&gt;
  &lt;span class="nx"&gt;resource_group_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;resource_group_name&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_dns_a_record"&lt;/span&gt; &lt;span class="s2"&gt;"domain"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"@"&lt;/span&gt;
  &lt;span class="nx"&gt;zone_name&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azurerm_dns_zone&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;domain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;resource_group_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;resource_group_name&lt;/span&gt;
  &lt;span class="nx"&gt;ttl&lt;/span&gt;                 &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3600&lt;/span&gt;
  &lt;span class="nx"&gt;records&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_ip&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Upon running &lt;code&gt;terraform apply&lt;/code&gt;, Terraform will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provision the resources on Azure&lt;/li&gt;
&lt;li&gt;Retrieve the public IP and create an &lt;code&gt;inventory.ini&lt;/code&gt; file in the ansible directory with the public IP details&lt;/li&gt;
&lt;li&gt;Trigger Ansible to start configuring the server once it is ready&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After inspecting the configurations, you can &lt;code&gt;cd&lt;/code&gt; into the &lt;code&gt;terraform&lt;/code&gt; directory and run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init
terraform apply &lt;span class="nt"&gt;-auto-approve&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Ansible:&lt;/strong&gt;&lt;br&gt;
The Ansible setup consists of a single playbook and three roles. The roles are responsible for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Preparing the server&lt;/li&gt;
&lt;li&gt;Copying necessary files&lt;/li&gt;
&lt;li&gt;Deploying the application&lt;/li&gt;
&lt;/ul&gt;
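&lt;p&gt;As an illustration, the tasks for the server-preparation role might look something like this (the modules are standard Ansible, but the exact tasks here are an assumption, not the project's actual code):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# roles/server-setup/tasks/main.yml (illustrative sketch)
- name: Update apt cache
  ansible.builtin.apt:
    update_cache: yes

- name: Install Docker
  ansible.builtin.apt:
    name: docker.io
    state: present

- name: Ensure Docker is running and enabled
  ansible.builtin.service:
    name: docker
    state: started
    enabled: yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;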

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fib04udp4ku639s9lf0ke.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fib04udp4ku639s9lf0ke.png" alt="screenshot" width="479" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The playbook itself is simple; it just calls the three roles in order.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# playbook.yml&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy Docker-based application&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;servers&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
  &lt;span class="na"&gt;become_user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;root&lt;/span&gt;
  &lt;span class="na"&gt;gather_facts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
  &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;server-setup&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;copy-files&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;deploy-app&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Deployed Application:&lt;/strong&gt;&lt;br&gt;
On a successful run of &lt;code&gt;terraform apply -auto-approve&lt;/code&gt;, you will get a similar output as this:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgb6rpfje2txazs6pqra.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgb6rpfje2txazs6pqra.png" alt="tf apply output" width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you used the DNS configuration for Azure DNS, you can access the application via the domain name.&lt;br&gt;
Otherwise, copy the IP address and create an A record on your DNS provider's website that maps your domain to the IP address. &lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges Faced
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Ensuring Seamless Integration Between Terraform and Ansible&lt;/li&gt;
&lt;li&gt;Managing sensitive data like SSH keys, credentials, and API tokens securely across both Terraform and Ansible.&lt;/li&gt;
&lt;li&gt;Debugging Ansible Playbooks: Playbook errors due to mismatched dependencies or configurations on the target VMs slowed down the deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Best Practices
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automation Workflow&lt;/strong&gt;&lt;br&gt;
Automate every step of the process, from provisioning to configuration, to reduce human error and improve reliability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;State Management&lt;/strong&gt;&lt;br&gt;
Use remote backends for Terraform state files to enable team collaboration and state consistency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Testing and Validation&lt;/strong&gt;&lt;br&gt;
Validate Terraform configurations with &lt;code&gt;terraform validate&lt;/code&gt; and &lt;code&gt;terraform plan&lt;/code&gt;.&lt;br&gt;
Test Ansible playbooks in isolated environments using Molecule before running them in production.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
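&lt;p&gt;For example, a remote backend for the Terraform state could be configured like this for Azure (the resource group, storage account, and container names below are illustrative placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatestore"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;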

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This project demonstrates the power of combining Terraform and Ansible to achieve seamless automation for infrastructure provisioning and application deployment. By leveraging Infrastructure as Code (IaC) and Configuration Management, it’s possible to create a repeatable, reliable pipeline that significantly reduces manual effort.&lt;/p&gt;

&lt;p&gt;The challenges encountered, such as ensuring integration between tools and managing sensitive data, provided invaluable learning opportunities. Following best practices like modular design, robust state management, and automated testing ensures that the solution is both scalable and secure.&lt;/p&gt;

&lt;p&gt;As a next step, this workflow could be extended by incorporating CI/CD pipelines, adding alerting mechanisms for monitoring, or scaling to multi-cloud environments. Automation isn’t just about efficiency—it’s about building systems that evolve with your needs.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>ansible</category>
      <category>docker</category>
      <category>automation</category>
    </item>
    <item>
      <title>Docker to the Rescue: Deploying React And FastAPI App With Monitoring</title>
      <dc:creator>Utibe</dc:creator>
      <pubDate>Tue, 26 Nov 2024 23:12:52 +0000</pubDate>
      <link>https://dev.to/yutee_okon/docker-to-the-rescue-deploying-react-and-fastapi-app-with-monitoring-1i79</link>
      <guid>https://dev.to/yutee_okon/docker-to-the-rescue-deploying-react-and-fastapi-app-with-monitoring-1i79</guid>
      <description>&lt;p&gt;Mary, a software developer in training and the owner of a small online retail business, found herself in a bind. A viral Instagram post about her products caused a sudden surge in orders, and her manual order tracking system couldn’t keep up.&lt;/p&gt;

&lt;p&gt;Fortunately, she had been building a web app to manage her business, featuring a React frontend and a FastAPI backend. But there was one problem, she did not know how to deploy it. She needed a scalable, robust system, complete with monitoring to handle future traffic spikes and prevent downtime.&lt;/p&gt;

&lt;p&gt;That’s where I came in. Together, we embarked on a journey to transform her project into a fully deployed application with a stack that included Docker Compose, Traefik, Prometheus, Grafana, and Loki. In this article, I’ll walk you through the process we followed to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Containerize and orchestrate a full-stack app.&lt;/li&gt;
&lt;li&gt;Configure a reverse proxy for secure routing.&lt;/li&gt;
&lt;li&gt;Set up real-time monitoring for metrics and logs.&lt;/li&gt;
&lt;li&gt;Deploying the stack to a cloud platform with a custom domain and HTTPS.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s dive in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Overview&lt;/li&gt;
&lt;li&gt;The Process&lt;/li&gt;
&lt;li&gt;Challenges&lt;/li&gt;
&lt;li&gt;Lessons&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;To accomplish this, the following tools and services are employed:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Application Stack Services:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;React Frontend: A dynamic and responsive UI powered by Chakra UI.&lt;/li&gt;
&lt;li&gt;FastAPI Backend: Provides REST APIs and Swagger documentation, using Poetry as the package manager.&lt;/li&gt;
&lt;li&gt;PostgreSQL: A robust database for persistent storage.&lt;/li&gt;
&lt;li&gt;Traefik: A reverse proxy for routing traffic seamlessly to the appropriate services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Monitoring Stack Services:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prometheus: Collects and stores real-time metrics and provides querying abilities.&lt;/li&gt;
&lt;li&gt;Grafana: Visualizes performance and logs using data from Prometheus and Loki.&lt;/li&gt;
&lt;li&gt;Loki &amp;amp; Promtail: Promtail collects logs, and Loki stores them for querying and visualization.&lt;/li&gt;
&lt;li&gt;cAdvisor: Monitors container resource usage and forwards metrics to Prometheus.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Process
&lt;/h2&gt;

&lt;p&gt;The application code to be deployed can be found &lt;a href="https://github.com/The-DevOps-Dojo/cv-challenge01" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;First, I retrieved the application:&lt;br&gt;
&lt;code&gt;git clone https://github.com/The-DevOps-Dojo/cv-challenge01&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The repository is organized as follows:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Frontend:&lt;/strong&gt; Contains the ReactJS application.&lt;br&gt;
&lt;strong&gt;Backend:&lt;/strong&gt; Contains the FastAPI application and PostgreSQL database integration.&lt;/p&gt;

&lt;p&gt;While exploring the codebase, I discovered a few uncommon aspects. First, I was unfamiliar with Poetry as a package manager for Python, so I researched it and how to deploy Poetry-based applications. I tested the steps locally and made sure everything was working before moving to the next step: containerization.&lt;/p&gt;

&lt;p&gt;🖥️ &lt;strong&gt;Containerization&lt;/strong&gt;&lt;br&gt;
I wrote Dockerfiles for both the frontend and backend code.&lt;/p&gt;

&lt;p&gt;Frontend:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:16-alpine AS build

WORKDIR /app

COPY package.json package-lock.json ./

RUN npm install

COPY . .

EXPOSE 3000

CMD ["npm", "run", "dev", "--", "--host", "0.0.0.0"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Dockerfile builds the image for the frontend application code.&lt;/p&gt;

&lt;p&gt;Backend:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build stage
FROM python:3.10-slim as builder

RUN pip install poetry

WORKDIR /app
COPY poetry.lock pyproject.toml /app/

RUN poetry config virtualenvs.create false \
    &amp;amp;&amp;amp; poetry install --no-interaction --no-ansi --no-root --no-dev

# Final stage
FROM python:3.10-slim

WORKDIR /app

RUN apt-get update \
    &amp;amp;&amp;amp; apt-get install -y libpq-dev gcc \
    &amp;amp;&amp;amp; rm -rf /var/lib/apt/lists/*

COPY --from=builder /usr/local/lib/python3.10/site-packages/ /usr/local/lib/python3.10/site-packages/
COPY --from=builder /usr/local/bin/ /usr/local/bin/

COPY . /app

RUN adduser --disabled-password --gecos "" appuser
RUN chown -R appuser:appuser /app
USER appuser

ENV PYTHONPATH=/app
ENV PORT=8000

CMD ["sh", "-c", "bash ./prestart.sh &amp;amp;&amp;amp; uvicorn app.main:app --host 0.0.0.0 --port $PORT"]

EXPOSE 8000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This handles building the image for the FastAPI backend. It looks more complicated because I used a multi-stage build to reduce the image size, given the many layers involved in building an image for an app that uses Poetry for package management. If you are not familiar with multi-stage builds, you can read more &lt;a href="https://docs.docker.com/build/building/multi-stage/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The images for the other services needed for this project are retrieved from Docker Hub.&lt;/p&gt;

&lt;p&gt;🛳️ &lt;strong&gt;Docker Compose&lt;/strong&gt;&lt;br&gt;
Docker Compose is an orchestration feature of Docker for managing several containers running on a system. I will be using Docker Compose to manage my services. To accomplish this, I set up a folder structure and created several YAML files for configuration:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose.yml&lt;/code&gt; - main configuration file&lt;br&gt;
&lt;code&gt;traefik.yml&lt;/code&gt; - Traefik-specific configurations&lt;br&gt;
&lt;code&gt;monitoring/docker-compose.yml&lt;/code&gt; - configuration file for the monitoring stack&lt;br&gt;
&lt;code&gt;prometheus.yml&lt;/code&gt; - Prometheus-specific configs&lt;br&gt;
&lt;code&gt;promtail-config.yml&lt;/code&gt; - Promtail-specific configs&lt;br&gt;
&lt;code&gt;loki-config.yml&lt;/code&gt; - Loki-specific configs&lt;br&gt;
&lt;code&gt;.env&lt;/code&gt; - environment variables for the configurations&lt;/p&gt;
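&lt;p&gt;For instance, a minimal &lt;code&gt;prometheus.yml&lt;/code&gt; defines the scrape targets (the job names and ports below are illustrative, not my exact configuration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;global:
  scrape_interval: 15s

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ["localhost:9090"]
  - job_name: cadvisor
    static_configs:
      - targets: ["cadvisor:8080"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;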

&lt;p&gt;I set up the configuration for all the services mentioned earlier. Here is a sample of my frontend configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.frontend.rule=Host(`${DOMAIN}`)"
      - "traefik.http.routers.frontend.priority=1"
      - "traefik.http.routers.frontend.entrypoints=websecure"
      - "traefik.http.routers.frontend.tls=true"
      - "traefik.http.routers.frontend.tls.certresolver=myresolver"
      - "traefik.http.services.frontend.loadbalancer.server.port=3000"
    environment:
      - VITE_API_URL=https://${DOMAIN}/api
    networks:
      - app-network
    depends_on:
      - backend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Additionally, I ensured the complete setup can be deployed by running &lt;code&gt;docker compose up -d&lt;/code&gt; from the root directory. I did this by leveraging the &lt;code&gt;extends&lt;/code&gt; feature and placing all services on the same bridge network for easy service discovery and communication.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  prometheus:
    extends:
      file: ./monitoring/docker-compose.yml
      service: prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
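&lt;p&gt;The shared bridge network itself only needs to be declared once in the root compose file; each service then lists it under &lt;code&gt;networks&lt;/code&gt;. A minimal sketch, using the &lt;code&gt;app-network&lt;/code&gt; name from my frontend configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;networks:
  app-network:
    driver: bridge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;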



&lt;p&gt;After some back and forth between the documentation for Traefik, Loki, and Promtail, trying to figure out the routing and service discovery configurations, I arrived at the correct setup. After running Docker Compose, all my services were up and running properly.&lt;/p&gt;

&lt;p&gt;💭 &lt;strong&gt;Into the Cloud&lt;/strong&gt;&lt;br&gt;
Up to this point, I had been working on my local system. I had all my services running locally and could estimate the compute power needed to run Docker and the services, so I decided to move the application to a cloud server and complete the setup there.&lt;/p&gt;

&lt;p&gt;First, I provisioned a virtual machine using Microsoft Azure, and then I set up a simple deployment pipeline using GitHub Actions that SSHes into my VM, clones my repository (containing all my working configs), and deploys the application using the Docker Compose command.&lt;/p&gt;

&lt;p&gt;On a successful run of the pipeline, I had all my services up and running on my cloud server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjk93ndds6s6mwfdetjy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjk93ndds6s6mwfdetjy.png" alt="services running" width="800" height="153"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;NB: If you opt to do this manually, follow these steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SSH into the VM&lt;/li&gt;
&lt;li&gt;Install Docker&lt;/li&gt;
&lt;li&gt;Clone the repository&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cd&lt;/code&gt; into the project folder&lt;/li&gt;
&lt;li&gt;run
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch acme.json
chmod 600 acme.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The previous step creates the file that Traefik and Let's Encrypt will use to store TLS certificate details. This is crucial for accessing the application over HTTPS&lt;/li&gt;
&lt;li&gt;run &lt;code&gt;docker-compose up -d --build&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, I accessed my DNS provider's website and added a record mapping my domain name to my VM's IP address. This way, all requests to my domain name are forwarded to my server VM; on hitting the server on port 80 or 443, the request is picked up by Traefik and routed to one of the services running on the VM, depending on the Traefik configuration. At this point, the application is up and running and accessible over the internet via HTTPS.&lt;/p&gt;
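&lt;p&gt;For reference, the ports Traefik listens on and the certificate resolver are declared in its static configuration. A minimal &lt;code&gt;traefik.yml&lt;/code&gt; along these lines (the email is a placeholder, and the exact file may differ from mine) would be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;entryPoints:
  web:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: websecure
          scheme: https
  websecure:
    address: ":443"

certificatesResolvers:
  myresolver:
    acme:
      email: admin@example.com
      storage: acme.json
      httpChallenge:
        entryPoint: web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;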

&lt;p&gt;Traefik routing:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgfy6071oqobrwgo59kh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgfy6071oqobrwgo59kh.png" alt="traefik path routing" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The application:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7hrgw2yvy0r3vs3udq6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7hrgw2yvy0r3vs3udq6.png" alt="application running" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👁️ &lt;strong&gt;Monitoring&lt;/strong&gt;&lt;br&gt;
Now that the application is up and running, we should set up our monitoring stack. &lt;/p&gt;

&lt;p&gt;Access the Prometheus and Grafana UIs using their specified paths.&lt;br&gt;
Explore Prometheus and confirm the service discovery.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyyzdrippf5agcpkuwr81.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyyzdrippf5agcpkuwr81.png" alt="prometheus" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, in the Grafana UI, I added Loki and Prometheus as data sources, then moved on to building dashboards to display metrics and logs. After a fair deal of time, PromQL queries, and tweaks, I arrived at the following dashboards.&lt;/p&gt;

&lt;p&gt;Logs:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufw369qmqe173y284m4j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufw369qmqe173y284m4j.png" alt="logs dashboard" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Metrics:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8bpf9kxhcpcq0oeukrnd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8bpf9kxhcpcq0oeukrnd.png" alt="metrics dashboard" width="800" height="500"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwts7tbx6jl9hgmddq2mm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwts7tbx6jl9hgmddq2mm.png" alt="metrics dashboard2" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges
&lt;/h2&gt;

&lt;p&gt;At this point, I had accumulated a good amount of headaches. I called Mary to remind her that I would not be maintaining or managing the application, and I took time to discuss with her the challenges I faced at different points of the setup, hoping it would help if she runs into a problem in the future.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Dockerizing the Frontend:&lt;/strong&gt; I had successfully built the image and the application was running locally, but it was never accessible and therefore could not be reached by other services in the network. I used &lt;code&gt;docker exec -it -u root &amp;lt;container&amp;gt; sh&lt;/code&gt; to access the container for troubleshooting, and I checked the logs using &lt;code&gt;docker logs&lt;/code&gt;. After some back and forth, I realized the issue was with the open ports: Vite runs on port 5173 by default, but my container's open port was 3000. I had to forward the port internally to ensure the frontend service was accessible.&lt;/p&gt;
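&lt;p&gt;One way to align the two ports, for example, is to pass Vite an explicit &lt;code&gt;--port&lt;/code&gt; flag so the dev server listens on the port the container exposes (a sketch of the fix, not necessarily the exact change made here):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EXPOSE 3000

CMD ["npm", "run", "dev", "--", "--host", "0.0.0.0", "--port", "3000"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;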

&lt;p&gt;&lt;strong&gt;2. Traefik Routing &amp;amp; TLS Configurations&lt;/strong&gt;&lt;br&gt;
This was probably the most tasking and less fun part of the project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Loki Version Issue&lt;/strong&gt;&lt;br&gt;
This was a minor issue, but it took a lot of back and forth with the documentation before I realized I was using a version that did not match the configurations in my Loki config file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. CORS Issue&lt;/strong&gt;&lt;br&gt;
I did not expect to spend much time on this, as it is a common issue. But when I moved from localhost to the cloud, I forgot to update my .env file and ran into it. So if you ever change your DNS name, remember to make these updates.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Managing multi-container applications with a single configuration file.&lt;/li&gt;
&lt;li&gt;Networking containers and defining service dependencies.&lt;/li&gt;
&lt;li&gt;Configuring Traefik for routing traffic, load balancing, and handling TLS certificates.&lt;/li&gt;
&lt;li&gt;Understanding how reverse proxies improve scalability and security.&lt;/li&gt;
&lt;li&gt;Visualizing container performance and debugging issues using dashboards.&lt;/li&gt;
&lt;li&gt;Building Dashboards&lt;/li&gt;
&lt;li&gt;Identifying and fixing container misconfigurations.&lt;/li&gt;
&lt;li&gt;Diagnosing common issues like misaligned ports and CORS issues.&lt;/li&gt;
&lt;li&gt;Leveraging monitoring and alerting to catch performance issues early.&lt;/li&gt;
&lt;li&gt;When troubleshooting, spend ample time with logs and error messages, and always read the documentation!&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Deploying Mary’s application was not just about putting code into production; it was a journey of learning, problem-solving, and implementing best practices to ensure scalability, reliability, and observability. By leveraging Docker Compose, Traefik, and a comprehensive monitoring stack, we transformed a simple project into a robust, cloud-deployed application capable of handling real-world demands.&lt;/p&gt;

&lt;p&gt;This process highlights the importance of containerization, network orchestration, and monitoring in modern application deployment. From navigating configuration challenges to ensuring seamless service communication and building insightful dashboards, every step reinforced the value of preparation, testing, and documentation.&lt;/p&gt;

&lt;p&gt;Now, Mary’s application is not only ready to support her growing business but also serves as a model for deploying scalable, well-monitored web applications in the cloud. Whether you're a developer building for a client or managing your own projects, this guide can help you tackle similar deployment challenges with confidence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What’s next?&lt;/strong&gt; Expand this setup with features like autoscaling, a more robust CI/CD process for automated deployments, or Kubernetes for more advanced container orchestration. The possibilities are endless!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>prometheus</category>
      <category>loki</category>
    </item>
    <item>
      <title>Exploring Microsoft Azure AI Capabilities Using React, Github Actions, Azure Static Apps and Azure AI</title>
      <dc:creator>Utibe</dc:creator>
      <pubDate>Fri, 14 Jun 2024 19:32:05 +0000</pubDate>
      <link>https://dev.to/yutee_okon/exploring-microsoft-azure-ai-capabilities-using-react-github-actions-azure-static-apps-and-azure-ai-4d8g</link>
      <guid>https://dev.to/yutee_okon/exploring-microsoft-azure-ai-capabilities-using-react-github-actions-azure-static-apps-and-azure-ai-4d8g</guid>
      <description>&lt;p&gt;Lately, I've been dedicated to learning Microsoft Azure, and I've been using the Microsoft Learn platform extensively. I recently came across a challenge that requires utilizing Azure AI computer vision capabilities, specifically Azure AI Vision and Azure OpenAI cognitive services, to integrate image analysis and generation features into a product.&lt;/p&gt;

&lt;p&gt;To complete this challenge, Microsoft Learn recommends that one should have experience using JavaScript and React or similar frameworks, experience using GitHub and Visual Studio Code and familiarity with REST APIs.&lt;/p&gt;

&lt;p&gt;While my grasp of JavaScript is quite solid and I know a little about React, I have not had any real development experience with it, so my React code might look funny; bear with me. &lt;/p&gt;

&lt;p&gt;By the way, if this challenge sounds interesting and you want to attempt it but are not confident in your frontend development skills, do not be discouraged; I was not. In fact, my limited knowledge was even more motivation to challenge myself. You can go through the challenge &lt;a href="https://learn.microsoft.com/en-us/training/modules/challenge-project-add-image-analysis-generation-to-app/"&gt;here&lt;/a&gt;. &lt;br&gt;
Like me, you might not get everything right, but do not hesitate to share your progress.&lt;/p&gt;

&lt;p&gt;So, following the challenge requirements, I created a new Azure Static Web App resource and then built a CI/CD pipeline to deploy a React application on Azure using the Azure Static Web Apps service and GitHub Actions. This was my first time automating a deployment pipeline, and of course I struggled with it a bit. Next, I set up a React application and fixed up the GUI, spending a lot of time in the React docs and with GitHub Copilot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjpanxroyw82qqa984u2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjpanxroyw82qqa984u2.png" alt="ui built with react" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Up next, I had to implement the image analysis feature. After going back and forth with the documentation and some code tweaking, I arrived at this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwu95cpcpq991z6tkfgow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwu95cpcpq991z6tkfgow.png" alt="image analysis code using azure computer vision" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The code is quite straightforward, but just a little explanation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lines 7 - 11:&lt;/strong&gt; &lt;br&gt;
Define useful variables. &lt;br&gt;
&lt;strong&gt;Lines 12 - 15:&lt;/strong&gt; &lt;br&gt;
Create an array that holds the query parameters for the analysis feedback required from the Azure AI engine. Here, I need just the caption for the image, but there are many more options that can be requested.&lt;br&gt;
&lt;strong&gt;Lines 17 - 26:&lt;/strong&gt; &lt;br&gt;
Declare an asynchronous function. This function takes the image URL to be analysed as an argument, then makes a request to the Azure AI engine with the image URL, the features, and the content type. Finally, the function returns the result from the API call for processing and display on my user interface.&lt;/p&gt;
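&lt;p&gt;For readers who prefer text to a screenshot, here is a minimal sketch of the same idea. It is a reconstruction rather than my exact code: the endpoint and key values are placeholders for your own Azure AI Vision resource, and the URL follows the Image Analysis 4.0 REST shape, so check the docs for the API version you are on.&lt;/p&gt;

```javascript
// Placeholders for your own Azure AI Vision resource (not real values).
const endpoint = "https://YOUR-RESOURCE.cognitiveservices.azure.com";
const apiKey = "YOUR-VISION-KEY"; // in real code, read this from an environment variable

// The analysis features to request; I only need the image caption.
const features = ["caption"];

// Build the Analyze Image request URL with the features as a query parameter.
function buildAnalyzeUrl(base, visualFeatures) {
  return base +
    "/computervision/imageanalysis:analyze" +
    "?api-version=2023-10-01&features=" + visualFeatures.join(",");
}

// Send the image URL to the service and return the parsed JSON result.
async function analyzeImage(imageUrl) {
  const response = await fetch(buildAnalyzeUrl(endpoint, features), {
    method: "POST",
    headers: {
      "Ocp-Apim-Subscription-Key": apiKey,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ url: imageUrl }),
  });
  return response.json();
}
```

&lt;p&gt;If I am reading the 4.0 response shape correctly, the caption then lives under &lt;code&gt;captionResult.text&lt;/code&gt; in the returned object, which is what gets displayed in the UI.&lt;/p&gt;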

&lt;p&gt;My version of the challenge has a fully functional image analysis capability, but I was unable to complete the image generation functionality because Azure OpenAI is not available in my region. To test the app, clone the repository and, in the project directory, run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm start&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will launch the React app.&lt;/p&gt;

&lt;p&gt;The repository can be found &lt;a href="https://github.com/yutee/cloud-ai"&gt;here&lt;/a&gt;.&lt;br&gt;
If you find it interesting, do not hesitate to leave a star and perhaps contribute to it. There are several other improvements that could be made, including...&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Image generation functionality using OpenAI/Azure OpenAI&lt;/li&gt;
&lt;li&gt;Better security measures within the codebase&lt;/li&gt;
&lt;li&gt;Error handling&lt;/li&gt;
&lt;li&gt;Client-side authentication&lt;/li&gt;
&lt;li&gt;User interface micro-interactions&lt;/li&gt;
&lt;li&gt;Adding possible improvements and features to the README&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Any others you can think of are welcome. Happy hacking!&lt;/p&gt;

</description>
      <category>azure</category>
      <category>cloud</category>
      <category>ai</category>
      <category>azuredevops</category>
    </item>
    <item>
      <title>Efficient GitHub Operations: Simplifying Repository Management using Github CLI</title>
      <dc:creator>Utibe</dc:creator>
      <pubDate>Sun, 12 May 2024 08:12:24 +0000</pubDate>
      <link>https://dev.to/yutee_okon/efficient-github-operations-simplifying-repository-management-using-github-cli-190l</link>
      <guid>https://dev.to/yutee_okon/efficient-github-operations-simplifying-repository-management-using-github-cli-190l</guid>
      <description>&lt;p&gt;Often, when we set up version control for a project's initial files, we tend to log onto the &lt;a href="https://github.com/"&gt;GitHub&lt;/a&gt; website to create our repositories, then clone and push our projects. While this method works, it is not the most efficient. We can manage our GitHub repositories, from creating the repo to maintaining the different versions, right from our terminal. In this article, we will go through the process of achieving this.&lt;/p&gt;

&lt;p&gt;I will be attempting to create a repository for the Next.js practice project I am currently working on and then push the files from my local machine to the repository, doing all of it from my terminal.&lt;/p&gt;

&lt;p&gt;All you need to achieve this is a basic knowledge of the command line interface and, of course, a GitHub account.&lt;/p&gt;


&lt;h3&gt;
  
  
  Step 1 - Install GitHub CLI
&lt;/h3&gt;


&lt;p&gt;&lt;strong&gt;For Mac&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;brew install gh&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For Linux (Ubuntu)&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-key C99B11DEB97541F0
sudo apt-add-repository https://cli.github.com/packages
sudo apt update
sudo apt install gh
&lt;/code&gt;&lt;/pre&gt;


&lt;h3&gt;
  
  
  Step 2 - Authenticate with GitHub
&lt;/h3&gt;


&lt;p&gt;Once installation completes, you will need to authenticate with GitHub. Run the following command.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;gh auth login&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will take you through a series of authentication steps. You can choose to authenticate using a web browser or by pasting an authentication token. Whichever option you choose, just follow the prompts and you will be all set.&lt;br&gt;
I will be using the web browser (for the last time).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwi537yj2k1j0d9amd6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwi537yj2k1j0d9amd6u.png" alt="Image description" width="800" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35n24hfk5ttxz1317xze.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35n24hfk5ttxz1317xze.png" alt="Image description" width="800" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkc2gk0c3hi2bzwgwp0r6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkc2gk0c3hi2bzwgwp0r6.png" alt="Image description" width="800" height="152"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllffm7rnk6bmqd2s77zs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllffm7rnk6bmqd2s77zs.png" alt="Image description" width="800" height="130"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To verify authentication status, run...&lt;/p&gt;

&lt;p&gt;&lt;code&gt;gh auth status&lt;/code&gt;&lt;/p&gt;


&lt;h3&gt;
  
  
  Step 3 - Create Repo
&lt;/h3&gt;


&lt;p&gt;It's time to create the repo for our project. Do this using the command&lt;/p&gt;

&lt;p&gt;&lt;code&gt;gh repo create &amp;lt;repository-name&amp;gt; --public&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Be sure to replace &lt;em&gt;repository-name&lt;/em&gt; with your chosen repository name.&lt;br&gt;
A visibility flag is required; it tells GitHub whether to make your repo public or private. Use &lt;code&gt;--private&lt;/code&gt; if you want otherwise. There are other flags that you can use. Go through the &lt;a href="https://cli.github.com/manual/gh_repo_create"&gt;documentation&lt;/a&gt; to learn more about your options.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl1m0ce9pogdq8koy247w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl1m0ce9pogdq8koy247w.png" alt="Image description" width="800" height="80"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h3&gt;
  
  
  Step 4 - Clone the repository and add files locally
&lt;/h3&gt;


&lt;p&gt;At this point, your repository is up on your GitHub account and can be accessed via the GitHub URL in the command output. We now have to clone the repository and add our project files from our local machine. To do this, change directory to where you want your cloned repo and run the command.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;gh repo clone &amp;lt;repository-url&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;As usual, replace the &lt;em&gt;repository-url&lt;/em&gt; with your repo url.&lt;/p&gt;

&lt;p&gt;After cloning, if you had already started working locally, as in my case, move your project files into the cloned repo; if not, you can start working from your cloned repo.&lt;/p&gt;
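&lt;p&gt;As a side note, if your project already exists locally (as mine did), gh can skip the clone-and-move shuffle entirely: it can create the remote repo from your current directory, wire it up as &lt;code&gt;origin&lt;/code&gt;, and push, all in one command. The repo name below is just an example.&lt;/p&gt;

```shell
# Run from inside an existing local git repo that already has at least one commit.
# Creates the GitHub repo, adds it as the 'origin' remote, and pushes in one step.
gh repo create my-nextjs-practice --public --source=. --remote=origin --push
```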


&lt;h3&gt;
  
  
  Step 5 - Push files to GitHub
&lt;/h3&gt;


&lt;p&gt;Now we will have to push our initial project files to our remote GitHub repository. But before that, let us quickly inspect our repo on GitHub to be sure it was created and that it is empty.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F48wlowsdao9hv6rx9onr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F48wlowsdao9hv6rx9onr.png" alt="Image description" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Great!&lt;/p&gt;

&lt;p&gt;Now run the following commands to get your project files in.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;git add .
git commit -m "&amp;lt;commit message&amp;gt;"
git push
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If everything is okay and you have a successful push, your project files should be up on github and in the repository that you created earlier.&lt;/p&gt;

&lt;p&gt;Access github via the browser to confirm you had a successful push.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjiuarutcx4b21252rl1s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjiuarutcx4b21252rl1s.png" alt="Image description" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;At first, the process of setting up your working environment for seamless repository management using just the command line may seem intimidating. However, once you give it a try, you will realize that it is actually quite easy and straightforward. As you become more comfortable using your terminal, you will begin to appreciate the efficiency of working mostly from the command line and the convenience it brings to your workflow.&lt;/p&gt;

&lt;p&gt;There are many other powerful features and commands that the GitHub CLI offers. These features help us manage our repositories more efficiently. You can check out other features that may fit your repository management needs in the &lt;a href="https://cli.github.com/manual/"&gt;GitHub CLI Documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Happy Commits!&lt;/p&gt;

</description>
      <category>github</category>
      <category>git</category>
      <category>softwaredevelopment</category>
      <category>versioncotrol</category>
    </item>
  </channel>
</rss>
