<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Noble Mutuwa  Mulaudzi</title>
    <description>The latest articles on DEV Community by Noble Mutuwa  Mulaudzi (@mutuwa99).</description>
    <link>https://dev.to/mutuwa99</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1086216%2F0fcbdef0-d631-4023-8747-ff6e14bd027e.jpeg</url>
      <title>DEV Community: Noble Mutuwa  Mulaudzi</title>
      <link>https://dev.to/mutuwa99</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mutuwa99"/>
    <language>en</language>
    <item>
      <title>Implementing Role-Based Access Control (RBAC) in Minikube</title>
      <dc:creator>Noble Mutuwa  Mulaudzi</dc:creator>
      <pubDate>Fri, 21 Mar 2025 06:35:10 +0000</pubDate>
      <link>https://dev.to/mutuwa99/implementing-role-based-access-control-rbac-in-minikube-4d2d</link>
      <guid>https://dev.to/mutuwa99/implementing-role-based-access-control-rbac-in-minikube-4d2d</guid>
      <description>&lt;h3&gt;
  
  
  &lt;u&gt;Article by Noble Mutuwa Mulaudzi: DevOps Engineer &lt;/u&gt;
&lt;/h3&gt;

&lt;p&gt;Role-Based Access Control (RBAC) allows you to define permissions for users in a Kubernetes cluster. This guide walks through setting up RBAC in Minikube using a ServiceAccount.&lt;/p&gt;




&lt;h2&gt;&lt;strong&gt;Understanding Key RBAC Terms&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Before implementing RBAC, it's important to understand the main concepts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Role&lt;/strong&gt;: Defines a set of permissions within a namespace.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ClusterRole&lt;/strong&gt;: Similar to a Role but applies across the entire cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RoleBinding&lt;/strong&gt;: Associates a Role with a user, group, or ServiceAccount within a specific namespace.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ClusterRoleBinding&lt;/strong&gt;: Similar to RoleBinding but applies cluster-wide.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ServiceAccount&lt;/strong&gt;: An account used by applications running inside the cluster to interact with Kubernetes resources.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;&lt;strong&gt;Why Implement RBAC?&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;RBAC is crucial for securing your Kubernetes environment by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Restricting Access&lt;/strong&gt;: Prevents unauthorized users from making changes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Least Privilege Principle&lt;/strong&gt;: Users and services only get the minimum permissions needed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced Security&lt;/strong&gt;: Reduces the risk of accidental or malicious actions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance and Auditability&lt;/strong&gt;: Helps meet security policies and regulatory requirements.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;&lt;strong&gt;Step 1: Create a ServiceAccount&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;First, create a &lt;code&gt;ServiceAccount&lt;/code&gt; named &lt;code&gt;dev-user&lt;/code&gt; in the &lt;code&gt;default&lt;/code&gt; namespace.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceAccount&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev-user&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; sa.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;&lt;strong&gt;Step 2: Create a Role&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Define a &lt;code&gt;Role&lt;/code&gt; that grants limited permissions. The following configuration allows listing and getting pods.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Role&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pod-reader&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pods"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;verbs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;get"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;list"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; role.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  &lt;strong&gt;Step 3: Bind the Role to the ServiceAccount&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Create a &lt;code&gt;RoleBinding&lt;/code&gt; to associate the &lt;code&gt;pod-reader&lt;/code&gt; role with the &lt;code&gt;dev-user&lt;/code&gt; ServiceAccount.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RoleBinding&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pod-reader-binding&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;subjects&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceAccount&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev-user&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;roleRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Role&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pod-reader&lt;/span&gt;
  &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the binding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; rolebinding.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;&lt;strong&gt;Step 4: Get a Token for the ServiceAccount&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Run the following command to generate a token:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create token dev-user &lt;span class="nt"&gt;--namespace&lt;/span&gt; default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This token is required for authentication.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Step 5: Use the Token to Authenticate&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Extract the token:&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl create token dev-user &lt;span class="nt"&gt;--namespace&lt;/span&gt; default&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;2. Set up a new context:&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl config set-credentials dev-user &lt;span class="nt"&gt;--token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$TOKEN&lt;/span&gt;
kubectl config set-context dev-user-context &lt;span class="nt"&gt;--cluster&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;minikube &lt;span class="nt"&gt;--user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dev-user
kubectl config use-context dev-user-context
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;3. Verify access:&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should be able to list pods but not create, delete, or modify them.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Step 6: Test Restricted Access&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Try running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete pod &amp;lt;pod-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It should fail due to insufficient permissions.&lt;/p&gt;
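&lt;p&gt;You can also check permissions explicitly with &lt;code&gt;kubectl auth can-i&lt;/code&gt;. Run this from your admin (minikube) context, since the &lt;code&gt;--as&lt;/code&gt; flag requires impersonation rights:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl auth can-i list pods --as=system:serviceaccount:default:dev-user   # expected: yes
kubectl auth can-i delete pods --as=system:serviceaccount:default:dev-user # expected: no
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;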




&lt;h2&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;This setup ensures that &lt;code&gt;dev-user&lt;/code&gt; has restricted access based on RBAC rules. You can extend this setup to include permissions for additional resources like services and deployments if needed.&lt;/p&gt;
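&lt;p&gt;As an illustrative sketch (the role name here is an example, not part of the setup above), a broadened read-only Role covering services and deployments could look like this. Note that Deployments live in the &lt;code&gt;apps&lt;/code&gt; API group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;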

</description>
    </item>
    <item>
      <title>Building Scalable Microservices with Apache Kafka, Django, Docker, and Keycloak</title>
      <dc:creator>Noble Mutuwa  Mulaudzi</dc:creator>
      <pubDate>Mon, 10 Mar 2025 15:08:59 +0000</pubDate>
      <link>https://dev.to/mutuwa99/building-scalable-microservices-with-apache-kafka-django-docker-and-keycloak-25fd</link>
      <guid>https://dev.to/mutuwa99/building-scalable-microservices-with-apache-kafka-django-docker-and-keycloak-25fd</guid>
      <description>&lt;p&gt;&lt;em&gt;##Architecture diagram:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8m657lqsg1i2uzvnn68.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8m657lqsg1i2uzvnn68.png" alt="Image description" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  By Noble Mutuwa Mulaudzi - DevOps Engineer
&lt;/h4&gt;




&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Microservices architecture has become the go-to approach for building &lt;strong&gt;scalable, resilient, and maintainable&lt;/strong&gt; applications. In this article, I'll walk you through how I designed and implemented a microservices system using:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Keycloak&lt;/strong&gt; for authentication
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Apache Kafka&lt;/strong&gt; as the message bus
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Django&lt;/strong&gt; for the backend services
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;React&lt;/strong&gt; for the frontend
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker&lt;/strong&gt; to containerize everything and run it locally
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The system consists of &lt;strong&gt;four microservices&lt;/strong&gt;:  &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;User Management&lt;/strong&gt; - Handles authentication and user profiles
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Appointment&lt;/strong&gt; - Manages appointment scheduling
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Analytics&lt;/strong&gt; - Collects system usage data
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notification&lt;/strong&gt; - Sends notifications based on events
&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Why Microservices?
&lt;/h2&gt;

&lt;p&gt;Traditional monolithic applications often struggle with scalability and maintainability. Microservices, on the other hand, offer:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt; – Independent services can scale separately.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resilience&lt;/strong&gt; – Failure in one service doesn’t bring down the entire system.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility&lt;/strong&gt; – Services can be developed, deployed, and maintained independently.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Tech Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Django&lt;/strong&gt; – Backend framework for each microservice
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;React&lt;/strong&gt; – Frontend application
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Keycloak&lt;/strong&gt; – Centralized authentication and authorization
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Apache Kafka&lt;/strong&gt; – Message broker for event-driven communication
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL&lt;/strong&gt; – Primary database
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker&lt;/strong&gt; – Every service, including Keycloak, Kafka, and the frontend, is containerized
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Microservices Breakdown
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. User Management Microservice
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Handles user profiles and authentication
&lt;/li&gt;
&lt;li&gt;Uses Keycloak for issuing JWT tokens
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Appointment Microservice
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Manages user appointments
&lt;/li&gt;
&lt;li&gt;Emits &lt;code&gt;appointment-events&lt;/code&gt; when an appointment is created
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Analytics Microservice
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Collects system-wide usage data
&lt;/li&gt;
&lt;li&gt;Consumes &lt;code&gt;appointment-events&lt;/code&gt; from Kafka
&lt;/li&gt;
&lt;li&gt;Stores aggregated data for reporting
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. Notification Microservice
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Listens for &lt;code&gt;appointment-events&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Sends email to users &lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Event-Driven Communication with Kafka
&lt;/h2&gt;

&lt;p&gt;Kafka enables &lt;strong&gt;asynchronous communication&lt;/strong&gt; between services. The architecture follows a &lt;strong&gt;producer-consumer&lt;/strong&gt; pattern:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Producers&lt;/strong&gt; – Each microservice publishes domain events.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consumers&lt;/strong&gt; – Other services listen to these events.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Topics&lt;/strong&gt; – Organize event streams (e.g., &lt;code&gt;appointment-events&lt;/code&gt;).
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Example: Appointment Creation Event Flow
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;A user books an appointment through the &lt;strong&gt;Appointment&lt;/strong&gt; microservice.
&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Appointment&lt;/strong&gt; service emits an &lt;code&gt;appointment.created&lt;/code&gt; event to Kafka.
&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Analytics&lt;/strong&gt; service listens to this event and logs the new appointment for reporting.
&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Notification&lt;/strong&gt; service listens to the same event and sends a confirmation email.
&lt;/li&gt;
&lt;/ol&gt;
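
&lt;p&gt;As a rough sketch of step 2 (assuming the &lt;code&gt;kafka-python&lt;/code&gt; client; the broker address, function name, and event fields are illustrative, not taken from the actual repository):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Illustrative producer sketch (pip install kafka-python); names are assumptions.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="kafka:9092",  # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_appointment_created(appointment_id, user_email, starts_at):
    """Emit an appointment.created event to the appointment-events topic."""
    event = {
        "type": "appointment.created",
        "appointment_id": appointment_id,
        "user_email": user_email,
        "starts_at": starts_at,
    }
    producer.send("appointment-events", event)
    producer.flush()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;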




&lt;h2&gt;
  
  
  Authentication with Keycloak
&lt;/h2&gt;

&lt;p&gt;Keycloak provides:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;User Management&lt;/strong&gt; – Self-registration, social login, password resets
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Role-Based Access Control (RBAC)&lt;/strong&gt; – Services enforce authorization based on user roles
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Token-based Authentication&lt;/strong&gt; – JWT tokens for secure API communication
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each microservice &lt;strong&gt;validates JWT tokens&lt;/strong&gt; before processing incoming requests.  &lt;/p&gt;
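
&lt;p&gt;A minimal sketch of that validation in a Django view, assuming the &lt;code&gt;PyJWT&lt;/code&gt; library and the realm's RSA public key (the key placeholder, audience, and decorator are assumptions for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Illustrative sketch (pip install PyJWT); key and audience are assumptions.
import jwt
from django.http import JsonResponse

KEYCLOAK_PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"

def require_jwt(view):
    """Decorator that rejects requests without a valid Keycloak-issued JWT."""
    def wrapper(request, *args, **kwargs):
        auth = request.headers.get("Authorization", "")
        if not auth.startswith("Bearer "):
            return JsonResponse({"detail": "Missing token"}, status=401)
        try:
            claims = jwt.decode(
                auth.split(" ", 1)[1],
                KEYCLOAK_PUBLIC_KEY,
                algorithms=["RS256"],
                audience="user-management",  # assumed client ID
            )
        except jwt.InvalidTokenError:
            return JsonResponse({"detail": "Invalid token"}, status=401)
        request.jwt_claims = claims
        return view(request, *args, **kwargs)
    return wrapper
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;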




&lt;h2&gt;
  
  
  Running the System Locally
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Clone the Repository
&lt;/h3&gt;

&lt;p&gt;The full project is available on GitHub. Clone it using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/Mutuwa99/microservice-kafka.git
&lt;span class="nb"&gt;cd &lt;/span&gt;microservice-kafka
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Run Keycloak
&lt;/h3&gt;

&lt;p&gt;Navigate to the &lt;code&gt;stack/&lt;/code&gt; directory and start Keycloak using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;stack
docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; keycloak-docker-compose.yml up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Navigate to localhost:7080/.&lt;/li&gt;
&lt;li&gt;Log in with 'admin' as both the username and password.&lt;/li&gt;
&lt;li&gt;Create a realm called 'myrealm' and enable it.&lt;/li&gt;
&lt;li&gt;In 'myrealm', go to Clients and create a new client.&lt;/li&gt;
&lt;li&gt;Set the client ID and name to 'user-management'.&lt;/li&gt;
&lt;li&gt;Turn Client authentication and Authorization on.&lt;/li&gt;
&lt;li&gt;Copy the client ID and secret for later use in the microservices.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Run Apache Kafka
&lt;/h3&gt;

&lt;p&gt;Still in the &lt;code&gt;stack/&lt;/code&gt; directory, start Kafka by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; kafka-docker-compose.yml up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Navigate to localhost:8080/ to open the Kafka UI.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. Run All Microservices
&lt;/h3&gt;

&lt;p&gt;NB: before running this step, ensure that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You navigate to the notification service's settings and replace:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EMAIL_HOST_USER = ''  # Replace with your Gmail address
EMAIL_HOST_PASSWORD = ''  # Replace with the app password you generated
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Navigate to each microservice folder, open its &lt;code&gt;settings.py&lt;/code&gt;, and replace the 'KEYCLOAK_CLIENT_SECRET' with the secret you obtained while setting up your client in Keycloak.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To start all services, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; services-docker-compose.yml up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Our backend Django services are running on these URLs:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;User Management service: &lt;code&gt;localhost:8000/admin/&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Appointment service: &lt;code&gt;localhost:10000/admin/&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Notification service: &lt;code&gt;localhost:9000/admin/&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Analytics service: &lt;code&gt;localhost:8001/admin/&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Our frontend app is running on: &lt;code&gt;localhost:3000/&lt;/code&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Log in with the user you created in Keycloak.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Building microservices with &lt;strong&gt;Django, Keycloak, and Kafka&lt;/strong&gt; provides a scalable and maintainable solution. By using &lt;strong&gt;Docker&lt;/strong&gt; to containerize everything, you can run the entire system locally with ease.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are your thoughts on microservices? Have you used Django, Keycloak, or Kafka in your stack? Let’s discuss in the comments!&lt;/strong&gt; 🚀  &lt;/p&gt;









</description>
    </item>
    <item>
      <title>Deploying a Robust Kubernetes Cluster Monitoring Solution (Node Exporter, Prometheus &amp; Grafana) with Helm</title>
      <dc:creator>Noble Mutuwa  Mulaudzi</dc:creator>
      <pubDate>Mon, 11 Sep 2023 06:26:07 +0000</pubDate>
      <link>https://dev.to/mutuwa99/deploying-a-robust-kubernetes-cluster-monitoring-solutionnode-exporterprometheus-grafana-with-helm-ck</link>
      <guid>https://dev.to/mutuwa99/deploying-a-robust-kubernetes-cluster-monitoring-solutionnode-exporterprometheus-grafana-with-helm-ck</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;Monitoring your AWS Elastic Kubernetes Service (EKS) cluster is crucial for ensuring its health and performance. In this tutorial, I'll walk you through the process of deploying a robust monitoring solution using Helm, Prometheus, Node Exporter, and Grafana.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Article by Noble Mutuwa Mulaudzi&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture Diagram
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8Pj07Oxt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/382zbkv7j8x9wk6kiqqe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8Pj07Oxt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/382zbkv7j8x9wk6kiqqe.png" alt="Architecture Diagram" width="765" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Before you begin, make sure you have the following prerequisites in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS EKS cluster up and running.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kubectl&lt;/code&gt; installed and configured to connect to your EKS cluster.&lt;/li&gt;
&lt;li&gt;Helm installed on your local machine.&lt;/li&gt;
&lt;li&gt;Basic knowledge of Kubernetes concepts.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;eksctl&lt;/code&gt; installed on your local machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Setting Up the Cluster
&lt;/h4&gt;

&lt;p&gt;If you don't have an EKS cluster, follow these steps to configure your cluster:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;source code&lt;/strong&gt; : &lt;a href="https://github.com/Mutuwa99/EFK_monitoring"&gt;Noble Mutuwa K8s monitoring files&lt;/a&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Clone the repo and navigate to the application's folder&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Apply the &lt;code&gt;cluster.yaml&lt;/code&gt;, &lt;code&gt;deployment.yaml&lt;/code&gt;, and &lt;code&gt;service.yaml&lt;/code&gt; files:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;eksctl create cluster &lt;span class="nt"&gt;-f&lt;/span&gt; cluster.yaml
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; deployment.yaml
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Retrieve the service and access your application via the load balancer:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get svc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is our application running on an EKS cluster&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LsFYNpAB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lng3qb3eh4urbmf5xl1f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LsFYNpAB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lng3qb3eh4urbmf5xl1f.png" alt="Image description" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting Up Prometheus, Node Exporter, and Grafana
&lt;/h3&gt;

&lt;p&gt;To set up Prometheus, Grafana, and Node Exporter as a monitoring stack for your Kubernetes cluster, you can use Helm to simplify the deployment process. Here are the steps to install the Prometheus stack on your Kubernetes cluster using Helm:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Install Helm (if not already installed):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you haven't already installed Helm, follow the official installation instructions for your platform (for example, via Chocolatey on Windows).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Add Helm Chart Repositories:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Create a Namespace for Monitoring (Optional):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace monitoring
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Install Prometheus using Helm:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;prometheus-stack prometheus-community/kube-prometheus-stack &lt;span class="nt"&gt;-n&lt;/span&gt; monitoring
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Exposing Prometheus and Grafana Services
&lt;/h3&gt;

&lt;p&gt;To access Prometheus and Grafana, edit the services within the monitoring namespace, and change the service type to LoadBalancer:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ewYoiSib--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r8mr8x5ho0nbiw5xzjmh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ewYoiSib--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r8mr8x5ho0nbiw5xzjmh.png" alt="LoadBalancer Services" width="800" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Accessing Grafana
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Copy the LoadBalancer URL of the Grafana service and paste it into your web browser.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use the following credentials to log in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Username: admin&lt;/li&gt;
&lt;li&gt;Password: prom-operator&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Set Up Data Sources and Dashboards in Grafana
&lt;/h3&gt;

&lt;p&gt;Configure data sources and import dashboards in Grafana to start visualizing your Kubernetes cluster metrics. You can find various Kubernetes-related dashboards in the Grafana dashboard marketplace.&lt;/p&gt;

&lt;p&gt;Here's a dashboard I've configured to monitor my cluster's health:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--453yVVOv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t0eyzy3ml1j0pcaaaauh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--453yVVOv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t0eyzy3ml1j0pcaaaauh.png" alt="Cluster Health Dashboard" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations! You've successfully deployed a robust monitoring solution for your AWS EKS cluster. With Prometheus, Node Exporter, and Grafana in place, you can monitor and visualize metrics, helping you keep your Kubernetes environment healthy and efficient.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Article by Noble Mutuwa Mulaudzi&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thank you!&lt;/strong&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Monitoring Linux Servers with Node Exporter, Prometheus, and Grafana: A Comprehensive Guide</title>
      <dc:creator>Noble Mutuwa  Mulaudzi</dc:creator>
      <pubDate>Thu, 07 Sep 2023 13:44:58 +0000</pubDate>
      <link>https://dev.to/mutuwa99/monitoring-linux-servers-with-node-exporter-prometheus-and-grafana-a-comprehensive-guide-1p35</link>
      <guid>https://dev.to/mutuwa99/monitoring-linux-servers-with-node-exporter-prometheus-and-grafana-a-comprehensive-guide-1p35</guid>
      <description>&lt;p&gt;Server monitoring is a critical aspect of managing a robust infrastructure. In this comprehensive guide, we'll explore how to set up an efficient and versatile monitoring system for Linux servers using Node Exporter, Prometheus, and Grafana. This combination of tools allows you to collect, store, and visualize performance metrics, providing valuable insights into your server's health and performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Article By Noble Mutuwa Mulaudzi&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Architecture diagram &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgxvri04d92paytni945.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgxvri04d92paytni945.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Before we dive into the setup, here are the prerequisites you'll need:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;An AWS EC2 instance running a Linux distribution (e.g., Ubuntu).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SSH access to the instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Basic knowledge of Linux command-line operations.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 1: Install Node Exporter
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Node Exporter is a tool for collecting system-level metrics. Here's how to install it:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download and install Node Exporter:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://github.com/prometheus/node_exporter/releases/download/v1.2.2/node_exporter-1.2.2.linux-amd64.tar.gz
tar xvfz node_exporter-1.2.2.linux-amd64.tar.gz
sudo mv node_exporter-1.2.2.linux-amd64/node_exporter /usr/local/bin/

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create a systemd service unit file for Node Exporter:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/systemd/system/node_exporter.service

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following content to the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=Node Exporter
After=network.target

[Service]
ExecStart=/usr/local/bin/node_exporter
Restart=always

[Install]
WantedBy=multi-user.target

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Reload systemd and start Node Exporter:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl daemon-reload
sudo systemctl start node_exporter
sudo systemctl enable node_exporter

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Install Prometheus
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Prometheus is a monitoring and alerting system. Here's how to install it:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Download and install Prometheus:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://github.com/prometheus/prometheus/releases/download/v2.30.2/prometheus-2.30.2.linux-amd64.tar.gz
tar xvfz prometheus-2.30.2.linux-amd64.tar.gz
sudo mv prometheus-2.30.2.linux-amd64/prometheus /usr/local/bin/
sudo mv prometheus-2.30.2.linux-amd64/promtool /usr/local/bin/

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create a Prometheus configuration file:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/prometheus/prometheus.yml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following basic configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['your-instance-ip:9100']

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
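
&lt;p&gt;If you scrape more than one instance, the targets list simply grows. As a quick sketch, the configuration above can be generated for any set of hosts (the &lt;code&gt;render_prometheus_config&lt;/code&gt; helper is hypothetical, assuming Node Exporter's default port 9100):&lt;/p&gt;

```python
# Hypothetical helper: render a minimal prometheus.yml for a list of
# Node Exporter targets (assumes the default port 9100).
def render_prometheus_config(hosts, scrape_interval="15s", port=9100):
    targets = ", ".join(f"'{h}:{port}'" for h in hosts)
    return (
        "global:\n"
        f"  scrape_interval: {scrape_interval}\n\n"
        "scrape_configs:\n"
        "  - job_name: 'node_exporter'\n"
        "    static_configs:\n"
        f"      - targets: [{targets}]\n"
    )

print(render_prometheus_config(["10.0.0.5", "10.0.0.6"]))
```

&lt;p&gt;Write the output to /etc/prometheus/prometheus.yml and reload Prometheus to pick up the new targets.&lt;/p&gt;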



&lt;ul&gt;
&lt;li&gt;Create a systemd service unit file for Prometheus:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/systemd/system/prometheus.service

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following content to the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=Prometheus
After=network.target

[Service]
ExecStart=/usr/local/bin/prometheus --config.file=/etc/prometheus/prometheus.yml
Restart=always

[Install]
WantedBy=multi-user.target

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Reload systemd and start Prometheus:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl daemon-reload
sudo systemctl start prometheus
sudo systemctl enable prometheus

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Install Grafana
&lt;/h3&gt;

&lt;p&gt;Grafana is a visualization and dashboarding tool for monitoring. Here's how to install it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add the Grafana APT repository and install Grafana:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install -y software-properties-common
sudo add-apt-repository "deb https://packages.grafana.com/oss/deb stable main"
sudo wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -
sudo apt-get update
sudo apt-get install grafana

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt; Start and enable the Grafana service:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl start grafana-server
sudo systemctl enable grafana-server

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Access Grafana:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Grafana's web interface is available at &lt;a href="http://your-instance-ip:3000" rel="noopener noreferrer"&gt;http://your-instance-ip:3000&lt;/a&gt;. You can log in with the default username and password: admin/admin. It's recommended to change the password after the initial login.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can now configure your data source and dashboards to monitor your infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
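
&lt;p&gt;To point Grafana at Prometheus, you add a data source. The payload below mirrors what the "Add data source" form submits; it is a sketch using the field names from Grafana's data source HTTP API, and the Prometheus URL is a placeholder:&lt;/p&gt;

```python
import json

def prometheus_datasource(prometheus_url):
    # Minimal data source definition for Grafana's /api/datasources endpoint.
    return {
        "name": "Prometheus",
        "type": "prometheus",
        "url": prometheus_url,
        "access": "proxy",
        "isDefault": True,
    }

payload = json.dumps(prometheus_datasource("http://localhost:9090"))
print(payload)
```

&lt;p&gt;You could POST this payload to http://your-instance-ip:3000/api/datasources with basic auth, or simply fill in the same values through the UI.&lt;/p&gt;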

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmi0qxx3gny7jt58eymo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmi0qxx3gny7jt58eymo.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Above is the Grafana UI, with a dashboard showing CPU usage for our target machines&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Article By &lt;strong&gt;Noble Mutuwa Isaya Mulaudzi&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Thank you&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Streamlining AWS Linux Servers Configuration with Ansible</title>
      <dc:creator>Noble Mutuwa  Mulaudzi</dc:creator>
      <pubDate>Tue, 25 Jul 2023 10:44:07 +0000</pubDate>
      <link>https://dev.to/mutuwa99/streamlining-aws-linux-servers-configuration-with-ansible-h71</link>
      <guid>https://dev.to/mutuwa99/streamlining-aws-linux-servers-configuration-with-ansible-h71</guid>
      <description>&lt;p&gt;In the ever-evolving landscape of cloud computing, Amazon Web Services (AWS) has emerged as a powerhouse, providing robust infrastructure solutions to businesses worldwide. As organizations migrate their workloads to AWS, efficient server management becomes crucial for optimal performance and cost-effectiveness. Enter Ansible, the open-source automation tool that simplifies the management of AWS Linux servers through its powerful configuration management capabilities.&lt;/p&gt;

&lt;p&gt;In this guide, I will walk you through the process of effectively managing your cloud servers using Ansible. We will use Ansible to automate updates to our web server (Apache).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Tutorial by &lt;strong&gt;Noble Mutuwa Mulaudzi&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;em&gt;Architecture diagram&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dy8Vo8Kw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ivtobntopslzyid0st9i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dy8Vo8Kw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ivtobntopslzyid0st9i.png" alt="Image description" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Before we start let us get familiar with some Ansible terms&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;Ansible playbook&lt;/strong&gt; : a set of organized instructions that define the desired state of remote hosts and the tasks to be executed on them&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;Inventory&lt;/strong&gt; : a configuration file that lists the hosts or groups of hosts that Ansible can manage,&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;Modules&lt;/strong&gt; : pre-built, reusable scripts that Ansible uses to interact with the remote hosts and carry out specific tasks such as installing packages, managing files, or starting services.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;In this tutorial we will be using &lt;strong&gt;package&lt;/strong&gt; and &lt;strong&gt;service&lt;/strong&gt; modules&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;em&gt;Pre-requisites for Windows Users&lt;/em&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Install Windows Subsystem for Linux (WSL)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose a Linux distribution (I will be using Ubuntu, installed from the Microsoft Store)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install Ansible on the Linux distribution&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;One or more cloud servers with Apache installed&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Yay, we have our Linux environment set up with Ansible installed&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nbDTP7m_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7d0qtulg3861anq4532g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nbDTP7m_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7d0qtulg3861anq4532g.png" alt="Image description" width="800" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Remember to have your target servers' key pairs available in the environment where you are running Ansible.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  &lt;em&gt;Step 1: Preparing the Ansible Inventory:&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;Create an inventory file (e.g., hosts) that lists the IP addresses or hostnames of your AWS Linux instances. Group them logically based on their roles, such as web_servers and database_servers. The inventory file might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
[web_servers]
webserver1 ansible_host=server_ip ansible_user=your_user ansible_ssh_private_key_file=/path/to/key/key.pem

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
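
&lt;p&gt;If you manage many servers, generating the inventory can beat hand-editing it. Here is a minimal sketch (the &lt;code&gt;render_inventory&lt;/code&gt; helper is hypothetical, and the user and key path are placeholders):&lt;/p&gt;

```python
# Hypothetical helper: render an Ansible INI inventory for one group of hosts.
def render_inventory(group, hosts, user="ec2-user", key="/path/to/key/key.pem"):
    lines = [f"[{group}]"]
    for name, ip in hosts.items():
        lines.append(
            f"{name} ansible_host={ip} ansible_user={user} "
            f"ansible_ssh_private_key_file={key}"
        )
    return "\n".join(lines) + "\n"

print(render_inventory("web_servers", {"webserver1": "10.0.0.5"}))
```

&lt;p&gt;Save the output as your inventory file (e.g., hosts or inventory.ini) and pass it to ansible-playbook with -i.&lt;/p&gt;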



&lt;blockquote&gt;
&lt;p&gt;You can add as many servers as you want&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  &lt;em&gt;Step 2: Writing the Ansible Playbook:&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;Next, create an Ansible playbook (e.g., your_playbook.yml) that defines the configuration tasks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
- name: Update and Restart Apache on EC2 instances
  hosts: web_servers
  become: yes  # This allows Ansible to become root (sudo) to perform updates

  tasks:
    - name: Update Apache
      package:
        name: httpd
        state: latest
      become: yes

    - name: Restart Apache
      service:
        name: httpd
        state: restarted
      become: yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;em&gt;Step 3: Executing the Playbook:&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;Now, execute the playbook using the following command within the Linux terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook -i /path/to/your_inventory_file/inventory.ini /path/to/your_playbook/your_playbook.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Ansible will connect to the AWS Linux instances through SSH, perform the specified tasks, and ensure that the systems are updated and equipped with the necessary software.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Here is our result&lt;/em&gt;. And just like that, we have successfully updated our web server to the latest Apache through Ansible.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8DT77IX5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cfl57gdfziw53v2722gx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8DT77IX5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cfl57gdfziw53v2722gx.png" alt="Image description" width="800" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note that you can use various Ansible modules to perform other Ansible tasks and automate a wide range of operations on remote hosts. Some commonly used Ansible modules include:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;apt&lt;/strong&gt;: Manages packages on Debian/Ubuntu systems, allowing you to install, update, or remove packages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;yum&lt;/strong&gt;: Similar to apt, but for Red Hat/CentOS systems, managing packages accordingly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;copy&lt;/strong&gt;: Enables you to copy files from the control machine to remote hosts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;file&lt;/strong&gt;: Helps manage files and directories on the remote hosts, allowing you to set permissions, ownership, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;git&lt;/strong&gt;: Allows you to clone Git repositories onto the remote hosts.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Having completed this tutorial, you are now equipped to unleash the full power of Ansible and its modules, enabling you to achieve even greater efficiency and automation in managing AWS instances. With Ansible's flexibility and ease of use, you can confidently tackle more complex tasks and drive innovation within your cloud infrastructure. Embrace the potential of Ansible to streamline your server management and open new possibilities for your AWS environment.&lt;/p&gt;

&lt;h4&gt;
  
  
  Article by Noble Mutuwa Mulaudzi
&lt;/h4&gt;

</description>
    </item>
    <item>
      <title>CI/CD Pipeline with github actions &amp; AWS EC2 instance</title>
      <dc:creator>Noble Mutuwa  Mulaudzi</dc:creator>
      <pubDate>Thu, 06 Jul 2023 11:54:08 +0000</pubDate>
      <link>https://dev.to/mutuwa99/cicd-pipeline-with-github-actions-aws-ec2-instance-1f1h</link>
      <guid>https://dev.to/mutuwa99/cicd-pipeline-with-github-actions-aws-ec2-instance-1f1h</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;In today's fast-paced software development environment, continuous integration and continuous deployment (CI/CD) pipelines are essential for streamlining the software delivery process. GitHub Actions, coupled with the power of AWS EC2 instances, provides a robust and scalable solution for automating CI/CD workflows. In this article, I will guide you through the process of setting up a CI/CD pipeline using GitHub Actions and an AWS EC2 instance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture diagram
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4k3mvwn3autem4dwin5u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4k3mvwn3autem4dwin5u.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;NOTE: You can follow the same steps to deploy to shared hosting platforms like cPanel, Plesk, and more.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Github Account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Git repository with your application's code.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Creating our web server.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Log in to your AWS Management Console and navigate to the EC2 service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Launch an EC2 instance, choosing the appropriate Amazon Machine Image (AMI) based on your requirements.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure security groups and access credentials to allow incoming traffic over SSH, HTTP, and HTTPS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Here is our web server:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fft4tr929ogv0h4o7me20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fft4tr929ogv0h4o7me20.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuring GitHub Environment:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Environments help you identify the different environments needed for your application, such as development, staging, and production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In your GitHub repository, navigate to the "Settings" tab and click on "Environments".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create the desired environments, specifying a name and an optional description for each (I have named the environment QA-mywebsite).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fascm7gyhjbsbnhzf3ho3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fascm7gyhjbsbnhzf3ho3.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuring GitHub Secrets for our QA environment:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;GitHub Actions allows you to securely store sensitive information like access keys or tokens using Secrets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the GitHub repository, go to "Settings" and click on Environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to your environment (QA-mywebsite)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add the secrets required for connecting to your AWS EC2 instance, such as access keys or SSH private keys.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Here is our environment secrets configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7sd8gtqxivqotzngbdil.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7sd8gtqxivqotzngbdil.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuring our workflow file.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the root of your repository, create a folder called .github/workflows and, inside it, a file called deploy.yml&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Here is our deploy.yml file.&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

name: Deploy to EC2

on:
  push:
    branches:
      - master


jobs:
  checkout_mycode:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: check out
        id: check
        run: |
          echo "checking out the code"


  deploy_to_qa:
    needs: checkout_mycode
    if: github.ref == 'refs/heads/master'
    runs-on: ubuntu-latest
    environment: 
      name: QA-mywebsite
      url: http://noble-mutuwa.com/

    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Set up SSH
        uses: webfactory/ssh-agent@v0.5.0
        with:
          ssh-private-key: ${{ secrets.SERVER_SSH_PRIVATE_KEY }}

      - name: Copy files to EC2
        run: scp -r -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null "${{ github.workspace }}" ec2-user@50.17.57.13:~/website/

      - name: SSH into EC2
        run: | 
          ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ec2-user@50.17.57.13 "sudo cp -r ~/website/mywebsite/* /var/www/html/"

  qa_send_notification:
    needs: deploy_to_qa
    if: needs.deploy_to_qa.result == 'success'
    runs-on: ubuntu-latest
    steps:
      - name: Send notification to repository owner
        run: |
          REPO_OWNER=$(jq --raw-output .repository.owner.login "${GITHUB_EVENT_PATH}")
          NOTIFICATION="Deployment qa completed."
          API_URL="https://api.github.com/repos/${GITHUB_REPOSITORY}/notifications"
          RESPONSE=$(curl -sSL -H "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" -H "Content-Type: application/json" -X POST -d "{\"subject\":\"$NOTIFICATION\",\"repository\":\"$REPO_OWNER/$GITHUB_REPOSITORY\"}" "$API_URL")
          echo "Notification sent to repository owner: $REPO_OWNER"         




&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Deploying  to EC2&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Save the deploy.yml file and push the code to your remote repository.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go to your GitHub repository to observe the pipeline under the Actions tab.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Note that after checking out the code, the workflow will prompt you to review the changes and approve them in order to deploy to your EC2 instance (remember that we added reviewers when we created our QA-mywebsite environment).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvq86ucjxdz8gd8n1w0v9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvq86ucjxdz8gd8n1w0v9.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We will review the deployment and approve it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After the deployment is done, we will get the endpoint (URL) for our application that we defined in the deploy.yml file&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxuvl7vkscvwajnamxjs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxuvl7vkscvwajnamxjs.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We can now access our Application using the url&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwq2ih173jlfjn2a7d9qa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwq2ih173jlfjn2a7d9qa.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;There is our Flask (Python) application with a chatbot&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In my next article, I will take you through how to build this chatbot, train it with data, and have it learn from the different responses provided by users&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Article by Noble Mutuwa Mulaudzi&lt;/strong&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building a CRUD Python Serverless API with DynamoDB using the Serverless Framework.</title>
      <dc:creator>Noble Mutuwa  Mulaudzi</dc:creator>
      <pubDate>Sun, 04 Jun 2023 15:29:19 +0000</pubDate>
      <link>https://dev.to/mutuwa99/building-a-crud-python-serverless-api-with-dynamodb-using-the-serverless-framework-38l9</link>
      <guid>https://dev.to/mutuwa99/building-a-crud-python-serverless-api-with-dynamodb-using-the-serverless-framework-38l9</guid>
      <description>&lt;p&gt;Hi,My name is Noble Mutuwa Mulaudzi, AWS DevOps Engineer and Linux enthusiast.&lt;/p&gt;

&lt;p&gt;In this tutorial, we'll learn how to build a serverless API using Python and DynamoDB, leveraging the power of the Serverless Framework. The API will provide Create, Read, Update, and Delete (CRUD) operations for managing items. We'll also demonstrate how to test the API using Postman.&lt;/p&gt;


&lt;h3&gt;
  
  
  Architecture diagram
&lt;/h3&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_9yT_WX7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/46fythcyt7m07258baim.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_9yT_WX7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/46fythcyt7m07258baim.png" alt="Image description" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Source code: (&lt;a href="https://github.com/Mutuwa99/python-crud-api"&gt;https://github.com/Mutuwa99/python-crud-api&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;To follow along with this tutorial, you'll need the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Python 3.x installed on your machine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Node.js and npm installed for installing the Serverless Framework.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An AWS account with appropriate permissions to create resources like Lambda functions and DynamoDB tables.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Setting up the Project
&lt;/h3&gt;

&lt;p&gt;1.Install the Serverless Framework by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g serverless

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2.Create a new directory for your project and navigate into it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir python_rest_api
cd python_rest_api

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3.Initialize a new Serverless service using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;serverless create --template aws-python3 --name python_rest_api

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4.Install the required Python packages by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install boto3

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5.Create separate Python files for each CRUD operation inside the project directory:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;create.py&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;delete.py&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;update.py&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;read_one.py&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;read_all.py&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Configuring DynamoDB
&lt;/h3&gt;

&lt;p&gt;1.Open the serverless.yml file and update it with the following configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service: my-serverless-api

provider:
  name: aws
  runtime: python3.8

functions:
  create:
    handler: create.create
    events:
      - http:
          path: items
          method: post

  readAll:
    handler: read_all.read_all
    events:
      - http:
          path: items
          method: get

  readOne:
    handler: read_one.read_one
    events:
      - http:
          path: items/{id}
          method: get

  update:
    handler: update.update
    events:
      - http:
          path: items/{id}
          method: put

  delete:
    handler: delete.delete
    events:
      - http:
          path: items/{id}
          method: delete

resources:
  Resources:
    ItemsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: Items
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Implementing the CRUD Functions
&lt;/h3&gt;

&lt;p&gt;2.Open the create.py file and update it with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Items')

def create(event, context):
    data = json.loads(event['body'])
    item_id = data['id']
    item_name = data['name']
    item_price = data['price']

    try:
        table.put_item(
            Item={
                'id': item_id,
                'name': item_name,
                'price': item_price
            }
        )
        response = {
            'statusCode': 200,
            'body': json.dumps({'message': 'Item created successfully'})
        }
    except ClientError as e:
        response = {
            'statusCode': 500,
            'body': json.dumps({'error': str(e)})
        }

    return response
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
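&lt;p&gt;Before deploying, you can smoke-test the handler logic locally. The sketch below is a minimal, hypothetical test double: it mirrors the create logic with the table injected, so no AWS credentials are needed (the FakeTable class and the injected-table signature are assumptions for illustration, not part of the deployed handler):&lt;/p&gt;

```python
import json

class FakeTable:
    """Hypothetical in-memory stand-in for a boto3 DynamoDB Table."""
    def __init__(self):
        self.items = {}

    def put_item(self, Item):
        # Mirror put_item's upsert behaviour: key items by 'id'.
        self.items[Item['id']] = Item

def create(event, context, table):
    # Same logic as create.py, but with the table injected for testability.
    data = json.loads(event['body'])
    table.put_item(Item={'id': data['id'], 'name': data['name'], 'price': data['price']})
    return {'statusCode': 200, 'body': json.dumps({'message': 'Item created successfully'})}

event = {'body': json.dumps({'id': '1', 'name': "Noble's Bill", 'price': 'R10 000'})}
table = FakeTable()
result = create(event, None, table)
print(result['statusCode'])  # 200
```

&lt;p&gt;The same injection pattern works for the other four handlers.&lt;/p&gt;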



&lt;p&gt;3. Open the delete.py file and update it with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Items')

def delete(event, context):
    item_id = event['pathParameters']['id']

    try:
        table.delete_item(Key={'id': item_id})
        response = {
            'statusCode': 200,
            'body': json.dumps({'message': 'Item deleted successfully'})
        }
    except ClientError as e:
        response = {
            'statusCode': 500,
            'body': json.dumps({'error': str(e)})
        }

    return response
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4. Open the update.py file and update it with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Items')

def update(event, context):
    item_id = event['pathParameters']['id']
    data = json.loads(event['body'])
    item_name = data['name']
    item_price = data['price']

    try:
        response = table.update_item(
            Key={'id': item_id},
            UpdateExpression='set #name = :n, #price = :p',
            ExpressionAttributeNames={'#name': 'name', '#price': 'price'},
            ExpressionAttributeValues={':n': item_name, ':p': item_price},
            ReturnValues='UPDATED_NEW'
        )
        updated_item = response['Attributes']
        response = {
            'statusCode': 200,
            'body': json.dumps(updated_item)
        }
    except ClientError as e:
        response = {
            'statusCode': 500,
            'body': json.dumps({'error': str(e)})
        }

    return response
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
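&lt;p&gt;The update handler aliases the attribute names (#name, #price) because words like name are reserved in DynamoDB expressions. If you later support arbitrary fields, a small helper can build the expression and both maps from a plain dict. This is a hypothetical sketch, not one of the tutorial's files:&lt;/p&gt;

```python
def build_update(data):
    """Build an UpdateExpression plus name/value maps from a dict,
    aliasing every attribute so DynamoDB reserved words are safe."""
    names = {f'#{k}': k for k in data}
    values = {f':{k}': v for k, v in data.items()}
    expression = 'set ' + ', '.join(f'#{k} = :{k}' for k in data)
    return expression, names, values

expr, names, values = build_update({'name': 'Noble', 'price': 'R10 000'})
print(expr)  # set #name = :name, #price = :price
```

&lt;p&gt;The three return values map directly onto UpdateExpression, ExpressionAttributeNames, and ExpressionAttributeValues in table.update_item.&lt;/p&gt;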



&lt;p&gt;5. Open the read_one.py file and update it with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Items')

def read_one(event, context):
    item_id = event['pathParameters']['id']

    try:
        response = table.get_item(Key={'id': item_id})
        item = response.get('Item')
        if item:
            response = {
                'statusCode': 200,
                'body': json.dumps(item)
            }
        else:
            response = {
                'statusCode': 404,
                'body': json.dumps({'error': 'Item not found'})
            }
    except ClientError as e:
        response = {
            'statusCode': 500,
            'body': json.dumps({'error': str(e)})
        }

    return response
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;6. Open the read_all.py file and update it with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Items')

def read_all(event, context):
    try:
        response = table.scan()
        items = response['Items']
        response = {
            'statusCode': 200,
            'body': json.dumps(items)
        }
    except ClientError as e:
        response = {
            'statusCode': 500,
            'body': json.dumps({'error': str(e)})
        }

    return response
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
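&lt;p&gt;One caveat with scan: DynamoDB returns at most 1 MB of data per page, so for larger tables you need to follow LastEvaluatedKey until it disappears. The loop below sketches that pattern; it is shown against a fake two-page scan so it runs locally, but with boto3 you would pass table.scan instead:&lt;/p&gt;

```python
def scan_all(scan):
    """Drain a DynamoDB-style paginated scan: keep calling until the
    response no longer carries a LastEvaluatedKey."""
    items, kwargs = [], {}
    while True:
        page = scan(**kwargs)
        items.extend(page['Items'])
        if 'LastEvaluatedKey' not in page:
            return items
        kwargs = {'ExclusiveStartKey': page['LastEvaluatedKey']}

# Fake two-page scan standing in for table.scan (assumption for local demo).
pages = {
    None: {'Items': [{'id': '1'}], 'LastEvaluatedKey': {'id': '1'}},
    '1':  {'Items': [{'id': '2'}]},
}
def fake_scan(ExclusiveStartKey=None):
    return pages[ExclusiveStartKey['id'] if ExclusiveStartKey else None]

print(scan_all(fake_scan))  # [{'id': '1'}, {'id': '2'}]
```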



&lt;h3&gt;
  
  
  Deploy the Serverless API using the following command:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;serverless deploy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--txO3Ey-P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ucvciic2aifte4930uhu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--txO3Ey-P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ucvciic2aifte4930uhu.png" alt="Image description" width="800" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing the API using Postman
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open Postman and create a new request&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set the request URL to the appropriate endpoint for each CRUD operation, using the endpoints printed when you ran serverless deploy:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Replace the placeholder URL with the actual API endpoint you noted down earlier.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;3. Set the request body for the create and update operations. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "id": "1",
  "name": "Noble's Bill",
  "price": "R10 000"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4. Send the request and observe the responses.&lt;/p&gt;
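&lt;p&gt;If you would rather script the checks than click through Postman, the same requests can be issued with Python's standard library. This is only a sketch: BASE_URL is a placeholder for whichever endpoint serverless deploy printed for your stage:&lt;/p&gt;

```python
import json
import urllib.request

# Placeholder: substitute the API Gateway URL from `serverless deploy`.
BASE_URL = "https://example.execute-api.us-east-1.amazonaws.com/dev"

payload = json.dumps({"id": "1", "name": "Noble's Bill", "price": "R10 000"}).encode()
request = urllib.request.Request(
    f"{BASE_URL}/items",
    data=payload,
    method="POST",
    headers={"Content-Type": "application/json"},
)
# Uncomment once BASE_URL points at your deployed API:
# with urllib.request.urlopen(request) as response:
#     print(response.status, response.read().decode())
```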

&lt;h3&gt;
  
  
  POST Request:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BNS1F2ba--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vp8a20pc6tu8pswhl64n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BNS1F2ba--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vp8a20pc6tu8pswhl64n.png" alt="Image description" width="800" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Get Request:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7Bt3lZBM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2jgoxbw6du52ltwxabic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7Bt3lZBM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2jgoxbw6du52ltwxabic.png" alt="Image description" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Congratulations! You have successfully built a CRUD Python serverless API with DynamoDB using the Serverless Framework. You've also tested the API using Postman.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Feel free to explore and expand upon this foundation to build more complex serverless APIs to suit your specific requirements.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;That concludes the tutorial. We've covered how to create a CRUD Python serverless API with DynamoDB using the Serverless Framework and how to test it with Postman.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Article by Noble Mutuwa Mulaudzi
&lt;/h3&gt;

</description>
      <category>serverless</category>
      <category>python</category>
      <category>dynamodb</category>
    </item>
    <item>
      <title>Leveraging Elastic Kubernetes Service for Scalable Deployments</title>
      <dc:creator>Noble Mutuwa  Mulaudzi</dc:creator>
      <pubDate>Fri, 26 May 2023 05:01:47 +0000</pubDate>
      <link>https://dev.to/mutuwa99/leveraging-elastic-kubernetes-service-for-scalable-deployments-280g</link>
      <guid>https://dev.to/mutuwa99/leveraging-elastic-kubernetes-service-for-scalable-deployments-280g</guid>
      <description>&lt;h3&gt;
  
  
  Hi, my name is Noble Mutuwa, AWS DevOps Engineer and a Linux enthusiast.
&lt;/h3&gt;

&lt;p&gt;AWS EKS (Elastic Kubernetes Service) is a managed Kubernetes service provided by Amazon Web Services. It allows you to easily deploy, manage, and scale containerized applications using Kubernetes. In this article, I will walk you through the steps of deploying a web app on AWS EKS and exposing it using a load balancer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture Diagram&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NcSaJ7t8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w5fuxt34tk6xzjw03io4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NcSaJ7t8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w5fuxt34tk6xzjw03io4.png" alt="Image description" width="685" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Github repo for the source code:
(&lt;a href="https://github.com/Mutuwa99/kubernetes-EKS-"&gt;https://github.com/Mutuwa99/kubernetes-EKS-&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  I will walk you  through the following steps:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Containerize the Web App using docker&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an ECR repository on AWS&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Push the application image to ECR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set Up AWS EKS Cluster (using EKSCTL)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy the Web App to AWS EKS(using kubectl)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Expose the Web App using  a Load Balancer&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Clean up our resources&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Amazon Web Services (AWS) Account&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker desktop installed&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kubectl installed&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;eksctl installed&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS CLI installed and configured&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step 1: Creating a docker file
&lt;/h3&gt;

&lt;p&gt;In the root of our project, we will create a Dockerfile that will build our image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM php:8.1.0-apache


# Mod Rewrite
RUN a2enmod rewrite

# Linux Library
RUN apt-get update -y &amp;amp;&amp;amp; apt-get install -y \
    libicu-dev \
    libmariadb-dev \
    unzip zip \
    zlib1g-dev \
    libpng-dev \
    libjpeg-dev \
    libfreetype6-dev \
    libjpeg62-turbo-dev

# Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer

# PHP Extension
RUN docker-php-ext-install gettext intl pdo_mysql gd

EXPOSE 80

RUN docker-php-ext-configure gd --enable-gd --with-freetype --with-jpeg \
    &amp;amp;&amp;amp; docker-php-ext-install -j$(nproc) gd


COPY --chown=www-data:www-data . /srv/app 
COPY .docker/vhost.conf /etc/apache2/sites-available/000-default.conf 

WORKDIR /srv/app 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;After creating the Dockerfile, we will build our Docker image using the following command:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;NB: Make sure the Docker daemon is running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t myapp .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;After a few minutes our Docker image has been built and is ready to run.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SHWISq2M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g7dlszyuuqxuw6bixrdz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SHWISq2M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g7dlszyuuqxuw6bixrdz.png" alt="Image description" width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We will run the container to test whether we can access our web app in the browser:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d -p 80:80 myapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O0t_zzbn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/34qwnim9jo2925pv0wzl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O0t_zzbn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/34qwnim9jo2925pv0wzl.png" alt="Image description" width="800" height="61"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can access the web application at http://localhost:80.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 2 : Creating ECR repo on AWS:
&lt;/h3&gt;

&lt;p&gt;We will log in to our AWS account and create an ECR repo that will store this image.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Navigate to services and search ECR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select repositories and create repository&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to your repository and select View push commands.&lt;br&gt;
We will use the push commands provided by AWS ECR to log in to ECR, then build, tag, and push our image from the terminal.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Retrieve an authentication token and authenticate your Docker client to your registry:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws/q0k8p3n5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;After a successful login, we build the Docker image using the following command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t myapp .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the build completes, tag your image so you can push the image to this repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker tag myapp:latest public.ecr.aws/q0k8p3n5/myapp:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Run the following command to push this image to your newly created AWS repository:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker push public.ecr.aws/q0k8p3n5/myapp:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--INKDz8KK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/erkog9i445csnuw0v9pu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--INKDz8KK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/erkog9i445csnuw0v9pu.png" alt="Image description" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We will navigate to our ECR repo and see if our image is pushed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m0kas1Sc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8527i3im2ddfeoa6mcp2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m0kas1Sc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8527i3im2ddfeoa6mcp2.png" alt="Image description" width="800" height="163"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Creating our EKS Cluster using EKSCTL and sending manifest files using kubectl
&lt;/h3&gt;

&lt;p&gt;At the root of our project we will create a file called cluster.yaml. This file contains information about our cluster, and we will use it to describe the kind of resources we want in the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-eks-cluster5
  region: us-east-1

nodeGroups:
  - name: ng-1
    instanceType: t3.micro
    desiredCapacity: 2

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we will create the cluster using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl create cluster -f cluster.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This may take close to 15 minutes to finish, so be patient.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After about 15 minutes our cluster is ready.&lt;/p&gt;

&lt;p&gt;Since we communicate with Kubernetes declaratively, we need to create manifest files that we will apply with kubectl. These files serve as declarative blueprints that define the desired state of the application, including the containers, networking, storage, and other resources required for its operation.&lt;/p&gt;

&lt;p&gt;Deployment.yaml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: laravel-app
  template:
    metadata:
      labels:
        app: laravel-app
    spec:
      containers:
        - name: laravel-app
          image: public.ecr.aws/q0k8p3n5/myapp:latest
          ports:
            - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Service.yaml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: laravel-service
  labels:
    app: laravel-app

spec:
  selector:
    app: laravel-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Now we send our manifest files using kubectl&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To apply the deployment files run:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;To apply the service file run:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Monitor the deployment status using:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get deployments

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GCl9EV5A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/26d386h35dbvoalgvg4k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GCl9EV5A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/26d386h35dbvoalgvg4k.png" alt="Image description" width="800" height="198"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To access our application in the browser:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copy the load balancer URL and port shown by kubectl get services:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;a7468c76bd8d74b3885b1b89c01d4f50-45776134.us-west-2.elb.amazonaws.com:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
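&lt;p&gt;The load balancer DNS name can take a minute or two to start answering after the service is created. A small poll loop saves repeated manual refreshes; this is a hypothetical helper using only the Python standard library:&lt;/p&gt;

```python
import time
import urllib.request

def wait_for(url, attempts=10, delay=3.0):
    """Poll the load balancer URL until it returns HTTP 200 or we give up."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=3) as response:
                if response.status == 200:
                    return True
        except OSError:
            pass  # DNS not propagated yet, or connection refused
        time.sleep(delay)
    return False

# Example (substitute your own EXTERNAL-IP from `kubectl get services`):
# wait_for("http://a7468c76bd8d74b3885b1b89c01d4f50-45776134.us-west-2.elb.amazonaws.com")
```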



&lt;p&gt;Yeeey, here is our web application deployed on Kubernetes using EKS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aAQ620zn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wtunfx7b1tpc7z7hs7vc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aAQ620zn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wtunfx7b1tpc7z7hs7vc.png" alt="Image description" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Cleaning up:
&lt;/h3&gt;

&lt;p&gt;We will use eksctl to delete all resources we have created&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl delete cluster --region us-west-2 my-eks-cluster-demo1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;After completing this project you can go on to add it to your resume.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All the best&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Deploying a php application on AWS using EB command line interface(eb cli)</title>
      <dc:creator>Noble Mutuwa  Mulaudzi</dc:creator>
      <pubDate>Mon, 22 May 2023 04:57:56 +0000</pubDate>
      <link>https://dev.to/mutuwa99/deploying-a-php-application-on-aws-using-eb-command-line-interfaceeb-cli-1ooh</link>
      <guid>https://dev.to/mutuwa99/deploying-a-php-application-on-aws-using-eb-command-line-interfaceeb-cli-1ooh</guid>
      <description>&lt;h3&gt;
  
  
  Hi, my name is Noble Mutuwa Mulaudzi, AWS DevOps Engineer and a Linux enthusiast.
&lt;/h3&gt;

&lt;p&gt;In this article I am going to show you how to quickly deploy a PHP application on AWS using the EB CLI (without having to log in to the AWS Management Console).&lt;/p&gt;

&lt;p&gt;Source code: (&lt;a href="https://github.com/Mutuwa99/Ebcli"&gt;https://github.com/Mutuwa99/Ebcli&lt;/a&gt;)&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;Deploying a PHP application on Amazon Web Services (AWS) can be a seamless process with the help of the Elastic Beanstalk Command Line Interface (EB CLI). AWS Elastic Beanstalk provides a platform-as-a-service (PaaS) for deploying and managing applications. In this article, i will walk you through the steps to deploy your PHP application on AWS using the EB CLI, enabling you to easily scale and manage your application infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites:
&lt;/h3&gt;

&lt;p&gt;Before we begin, make sure you have the following prerequisites in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;An AWS account: Sign up for an AWS account if you don't have one already.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS CLI: Install the AWS Command Line Interface (CLI) on your local machine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;EB CLI: Install the Elastic Beanstalk Command Line Interface (EB CLI) on your local machine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A PHP application: Have a PHP application ready for deployment (you can download mine from my GitHub repo).&lt;br&gt;
&lt;a href="https://github.com/Mutuwa99/Ebcli"&gt;https://github.com/Mutuwa99/Ebcli&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Step 1: Set Up AWS CLI and EB CLI:
&lt;/h4&gt;

&lt;p&gt;First, you need to configure the AWS CLI with your AWS account credentials. Open your terminal and run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Provide your AWS Access Key ID, Secret Access Key, default region, and output format when prompted.&lt;/p&gt;

&lt;p&gt;Next, install the EB CLI by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install awsebcli --upgrade --user

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Initialize the EB CLI:
&lt;/h3&gt;

&lt;p&gt;In your terminal, navigate to your PHP application's directory and initialize the EB CLI by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eb init

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 3: Setting up configs  for our application:
&lt;/h4&gt;

&lt;p&gt;When a user accesses a Laravel application via a web browser, the server serves requests directly from the "public" folder. We therefore need to predefine the document root of our application and set it to /public.&lt;/p&gt;

&lt;p&gt;Create a folder called .ebextensions at your project root, add a file named 01setup.config inside it, and paste the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;option_settings:
  - namespace: aws:elasticbeanstalk:container:php:phpini
    option_name: document_root
    value: /public
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--J6H0jPqs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/def5pjxm1wxd8n0xqyau.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--J6H0jPqs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/def5pjxm1wxd8n0xqyau.png" alt="Image description" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Follow the prompts to select your region, application name, and platform. Choose PHP as the platform for your application.
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AF6sw29f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6vjsafit82sl8udckg13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AF6sw29f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6vjsafit82sl8udckg13.png" alt="Image description" width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Create an Environment:
&lt;/h3&gt;

&lt;p&gt;To create an environment for your PHP application, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eb create environment-name

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TS8O3Dpb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/51gciaco54qu4svgejrf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TS8O3Dpb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/51gciaco54qu4svgejrf.png" alt="Image description" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Replace "environment-name" with a suitable name for your environment. This process may take a few minutes to provision the necessary AWS resources.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Deploy Your Application:
&lt;/h3&gt;

&lt;p&gt;Once the environment is created, you can deploy your PHP application by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eb deploy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OmNsdYDE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sg69dpuvlknnovn2dlax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OmNsdYDE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sg69dpuvlknnovn2dlax.png" alt="Image description" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6: Checking environment health:
&lt;/h3&gt;

&lt;p&gt;Once you have deployed your application, you can check the environment health by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eb status

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9Gd6BC_g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/80wxp8q4q9roi99fl9uh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9Gd6BC_g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/80wxp8q4q9roi99fl9uh.png" alt="Image description" width="800" height="164"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 7: Access Your Application:
&lt;/h3&gt;

&lt;p&gt;After the deployment is complete, you can access your application using the environment URL provided by Elastic Beanstalk. Run the following command to open the application in your default browser:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eb open
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y5m7qrqJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nxl1669pao0lge9hmu7n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y5m7qrqJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nxl1669pao0lge9hmu7n.png" alt="Image description" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Yeey, we have successfully deployed our PHP application on AWS through the EB CLI (without logging in to the management console)
&lt;/h3&gt;

&lt;p&gt;Thank you&lt;/p&gt;

&lt;p&gt;Noble Mutuwa Mulaudzi&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
