<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Poojan Mehta</title>
    <description>The latest articles on DEV Community by Poojan Mehta (@poojan18).</description>
    <link>https://dev.to/poojan18</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F493883%2F80b3a607-c248-4b2b-9681-afd830772ba9.jpeg</url>
      <title>DEV Community: Poojan Mehta</title>
      <link>https://dev.to/poojan18</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/poojan18"/>
    <language>en</language>
    <item>
      <title>No More Hardcoded Secrets: Automatic Database Credential Rotation with Vault, AKS and Postgres🔐</title>
      <dc:creator>Poojan Mehta</dc:creator>
      <pubDate>Mon, 17 Feb 2025 02:19:15 +0000</pubDate>
      <link>https://dev.to/poojan18/no-more-hardcoded-secrets-automatic-database-credential-rotation-with-vault-aks-and-postgres-1nmn</link>
      <guid>https://dev.to/poojan18/no-more-hardcoded-secrets-automatic-database-credential-rotation-with-vault-aks-and-postgres-1nmn</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/poojan18/secrets-management-101-a-technical-approach-with-aks-terraform-and-vault-284p"&gt;Part 1 of this series&lt;/a&gt;, we set up HashiCorp Vault in an AKS cluster using Terraform, configured ExternalSecrets, and demonstrated how to fetch secrets from Vault's KV engine into Kubernetes.&lt;/p&gt;

&lt;p&gt;Now, let's take it a step further. Static credentials are risky: they can be leaked, misused, or forgotten.🤯 To mitigate this, Vault provides Dynamic Secrets, allowing credentials to be generated on demand, time-bound, and auto-revoked after expiration.&lt;/p&gt;

&lt;p&gt;✅In this article, we’ll deploy &lt;code&gt;PostgreSQL&lt;/code&gt; in our AKS cluster using Helm, explore Vault's database secrets engine to generate short-lived credentials, and set up &lt;code&gt;externalSecrets&lt;/code&gt; and &lt;code&gt;vaultDynamicSecrets&lt;/code&gt; to natively sync those credentials into the cluster. &lt;/p&gt;

&lt;p&gt;Let's jump right in!🚀&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0oiiw9yft0dswqkfah6.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0oiiw9yft0dswqkfah6.gif" alt="Jump" width="480" height="480"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.giphy.com/media/v1.Y2lkPTc5MGI3NjExenpiYjYydGwzOGV5NWxnZ2RnZXlkdmkzM3ZxbGZndGQzY3E2Z20ybCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/W30fUtuR7aFxuQotfV/giphy.gif" rel="noopener noreferrer"&gt;GIF Credit&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  👉🏻 1) Set up PostgreSQL using Helm:
&lt;/h3&gt;

&lt;p&gt;We will use the &lt;a href="https://github.com/bitnami/charts/tree/main/bitnami/postgresql" rel="noopener noreferrer"&gt;Bitnami Helm chart&lt;/a&gt; with the default parameters for the sake of simplicity. Fine-tuning parameters can be added in a separate &lt;code&gt;values.yaml&lt;/code&gt; file and applied with the installation command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;my-release oci://registry-1.docker.io/bitnamicharts/postgresql &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;hostNetwork&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will generate the default Postgres user password and store it as a secret in the cluster. Use the command below to fetch the password in decoded form.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get secret &lt;span class="nt"&gt;--namespace&lt;/span&gt; default my-release-postgresql &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.data.postgres-password}"&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;📝Note: If a previous PostgreSQL release was deleted through Helm, the password configured on a new installation will be ignored: the old PVC still holds the old password, so setting a new one through Helm won't take effect. Deleting the persistent volumes (PVs) solves the issue.&lt;/p&gt;
&lt;/blockquote&gt;
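
&lt;p&gt;For reference, assuming the default release name used above, the leftover data PVC can be removed like this (a sketch; the exact PVC name may differ in your setup):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# list the PVCs left behind by the old release
kubectl get pvc

# delete the Postgres data PVC so a fresh install picks up the new password
kubectl delete pvc data-my-release-postgresql-0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;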

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvpuiapwtko2fq6wn0lt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvpuiapwtko2fq6wn0lt.png" alt="Helm install Postgres" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Up next, let's create a &lt;u&gt;non-root&lt;/u&gt; user in the database, which we will use for all interactions between Postgres and Vault. Since we are aiming at credential rotation, keeping the root user intact ensures we never lose access to the database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# execute interactive bash shell with the database statefulset&lt;/span&gt;
kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; my-release-postgresql-0 &lt;span class="nt"&gt;--&lt;/span&gt; /bin/bash

&lt;span class="c"&gt;# login to the database &lt;/span&gt;
psql &lt;span class="nt"&gt;-h&lt;/span&gt; 127.0.0.1 &lt;span class="nt"&gt;-U&lt;/span&gt; postgres &lt;span class="nt"&gt;-p&lt;/span&gt; 5432 &lt;span class="nt"&gt;-W&lt;/span&gt;

&lt;span class="c"&gt;#Create user with password&lt;/span&gt;
CREATE USER vault WITH PASSWORD &lt;span class="s1"&gt;'vault123'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c"&gt;# Grant privileges&lt;/span&gt;
GRANT ALL PRIVILEGES ON DATABASE postgres TO vault&lt;span class="p"&gt;;&lt;/span&gt;
ALTER USER vault WITH SUPERUSER&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see the &lt;code&gt;vault&lt;/code&gt; user is created.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffxwop52lnnjsbbz584wc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffxwop52lnnjsbbz584wc.png" alt="Create Vault User" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  👉🏻 2) Create a Database Secrets Engine from the Vault UI:
&lt;/h3&gt;

&lt;p&gt;The Vault database secrets engine dynamically generates database credentials based on configured roles, eliminating the need to hardcode credentials. It supports various databases through plugins and allows for both dynamic and static roles.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Vault's &lt;strong&gt;leasing mechanism&lt;/strong&gt; assigns a Time To Live (TTL) to dynamic secrets and tokens, ensuring they are valid for a specified period. Once the TTL expires, Vault can revoke the secret or token, necessitating periodic lease renewals by clients to maintain access.&lt;/p&gt;
&lt;/blockquote&gt;
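
&lt;p&gt;For a quick reference, leases can be renewed or revoked from the Vault CLI; a minimal sketch (the lease ID is illustrative, and the commands assume &lt;code&gt;VAULT_ADDR&lt;/code&gt; and a valid token are set):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# renew a dynamic secret's lease before it expires
vault lease renew database/creds/dynamicrole/&amp;lt;lease-id&amp;gt;

# revoke it early if the credentials are no longer needed
vault lease revoke database/creds/dynamicrole/&amp;lt;lease-id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;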

&lt;p&gt;Navigate to the Vault UI and create a new secrets engine.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiysdgora4a7fojbqfek1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiysdgora4a7fojbqfek1.png" alt="New database secrets engine" width="800" height="642"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Give an appropriate name and adjust the default lease TTL and Max lease TTL if needed. &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwrmwb4b4o0ddj4t1xyd5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwrmwb4b4o0ddj4t1xyd5.png" alt="Secrets engine configuration" width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, select the database plugin as &lt;code&gt;PostgreSQL&lt;/code&gt; and pass the connection URL &lt;code&gt;postgresql://{{username}}:{{password}}@localhost:5432/database-name&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;💡Here, PostgreSQL is the connection method; the username and password belong to the vault user created in the previous step; the host is the IP address of the ClusterIP service created during the Postgres installation; and the database name is postgres. &lt;/p&gt;

&lt;p&gt;The reason for using the private ClusterIP is that the database runs in the same cluster and can be reached through the service attached to the StatefulSet. &lt;/p&gt;
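
&lt;p&gt;The ClusterIP to plug into the connection URL can be fetched with (assuming the default release name):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get svc my-release-postgresql -o jsonpath='{.spec.clusterIP}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;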

&lt;blockquote&gt;
&lt;p&gt;To further enhance security, TLS configuration can also be added in this step. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8azqe76k2z6carav7gl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8azqe76k2z6carav7gl.png" alt="DynamicTest configuration" width="800" height="542"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🤐Let's configure the role for this connection. Dynamic roles generate unique, time-limited database credentials for each service request. In contrast, static roles map Vault roles to existing database usernames, and Vault manages automatic password rotation for these static credentials.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ Static roles are not recommended for root credentials, as rotating them would break the authentication between Vault and Postgres.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqcac1lrkoerfe1e8uui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqcac1lrkoerfe1e8uui.png" alt="Dynamic Role Config" width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the main part. The role is attached to the database connection, and it generates a dynamic username and password with 10 minutes of validity (default TTL) and a max TTL of 1 day. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbp62q2d75pubt39o8h1h.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbp62q2d75pubt39o8h1h.jpg" alt="Meme" width="687" height="619"&gt;&lt;/a&gt;&lt;a href="https://www.reddit.com/r/adhdmeme/comments/quo8vl/were_all_good_at_keeping_secrets_or_actually_just/" rel="noopener noreferrer"&gt;Image Source&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We have kept the validity at only 10 minutes to verify the rotation. Based on the sensitivity of the application, the TTL duration can be adjusted. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here, the &lt;code&gt;creation statement&lt;/code&gt; and &lt;code&gt;revocation statement&lt;/code&gt; consist of SQL queries that are executed when a new credential request is triggered. In our example, the creation statement creates a database role with a password and an expiry and grants &lt;code&gt;select&lt;/code&gt; privileges on the given table(s). At expiry, the revocation statement revokes all permissions and drops the user. &lt;/p&gt;
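
&lt;p&gt;As an illustration, the CLI equivalent of such a role with a typical creation/revocation statement pair might look like this (a sketch matching the TTLs mentioned above; the mount path, schema, and grants are assumptions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vault write database/roles/dynamicrole \
    db_name=postgres \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
    revocation_statements="REVOKE ALL ON ALL TABLES IN SCHEMA public FROM \"{{name}}\"; DROP ROLE IF EXISTS \"{{name}}\";" \
    default_ttl=10m \
    max_ttl=24h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;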

&lt;p&gt;As shown in the screenshot below, clicking "Generate Credentials" in the Vault UI displays the short-lived credentials one time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3njip8mg48emz2d6rpr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3njip8mg48emz2d6rpr.png" alt="Get Dynamic Credentials" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let's log in with this user and verify whether the granted permissions were applied. As shown below, the temporary users are listed with their expiry dates as attributes, and any operation other than &lt;code&gt;select&lt;/code&gt; fails.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmcsrkahpz5hbwg6ybqk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmcsrkahpz5hbwg6ybqk.png" alt="Dynamic User Login" width="800" height="263"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Okay, so far so good. But how do we natively fetch these secrets in the cluster?🤔🤔 That's where &lt;code&gt;vaultDynamicSecrets&lt;/code&gt; comes into the picture. &lt;/p&gt;
&lt;h3&gt;
  
  
  👉🏻 3) Configure VaultDynamicSecret resource:
&lt;/h3&gt;

&lt;p&gt;Since we already have &lt;code&gt;externalSecrets&lt;/code&gt; set up to natively fetch credentials from external secret stores like Vault, integrating these dynamic credentials with it makes them easier to consume from other Kubernetes resources as native secrets. &lt;/p&gt;

&lt;p&gt;By default, the externalSecrets resource we used in part 1 only supports the KV (key-value) secrets engine. Here, we will use &lt;strong&gt;generators&lt;/strong&gt;, which generate values from a given source. Generators can be defined as custom resources and re-used across different ExternalSecrets.&lt;/p&gt;

&lt;p&gt;The VaultDynamicSecret generator specifically integrates with HashiCorp Vault to retrieve dynamic secrets directly from Vault's database secrets engine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;generators.external-secrets.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;VaultDynamicSecret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db-credentials&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-secrets&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/database/creds/dynamicrole"&lt;/span&gt;
  &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;GET"&lt;/span&gt;
  &lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;VAULT&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;SERVER&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ADDRESS&amp;gt;"&lt;/span&gt;
    &lt;span class="na"&gt;auth&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;tokenSecretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vault-token"&lt;/span&gt;
          &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;external-secrets"&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;token"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This resource definition creates a &lt;code&gt;VaultDynamicSecret&lt;/code&gt;, which dynamically retrieves database credentials from HashiCorp Vault. &lt;/p&gt;

&lt;p&gt;The credentials are fetched from the specified path &lt;code&gt;(/database/creds/dynamicrole)&lt;/code&gt; in Vault using the &lt;strong&gt;GET&lt;/strong&gt; method. &lt;br&gt;
The Vault server address and authentication are provided, with the token stored in a Kubernetes secret named &lt;code&gt;vault-token&lt;/code&gt; within the &lt;code&gt;external-secrets&lt;/code&gt; namespace.&lt;/p&gt;

&lt;p&gt;This is one side of the bridge. On the other side, let's create an &lt;code&gt;externalSecret&lt;/code&gt; resource to connect with the &lt;code&gt;vaultDynamicSecrets&lt;/code&gt; resource and fetch the short-lived credentials natively.&lt;/p&gt;

&lt;p&gt;Here, a native Kubernetes secret named &lt;code&gt;db-credentials&lt;/code&gt; will be created, with the refresh interval kept at 1 hour and the data source referencing the generator we created in the last step.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-secrets.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ExternalSecret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dynamic-external-secret"&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-secrets&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;refreshInterval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1h"&lt;/span&gt;
  &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db-credentials&lt;/span&gt;
  &lt;span class="na"&gt;dataFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;sourceRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;generatorRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;generators.external-secrets.io/v1alpha1&lt;/span&gt;
        &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;VaultDynamicSecret&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;db-credentials"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ready! Let's deploy the resources :)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fys9vd86vc3gp5ino01qz.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fys9vd86vc3gp5ino01qz.jpeg" alt="Kubectl apply meme" width="450" height="680"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://x.com/shreemaan_abhi" rel="noopener noreferrer"&gt;Image Credits&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywlfzrpevea84ijfodjc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywlfzrpevea84ijfodjc.png" alt="Dynamic Secret Deploy" width="800" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's verify the changes by describing the secret resource in the &lt;code&gt;external-secrets&lt;/code&gt; namespace. Two keys are stored, holding a username and a password. This is driven by the creation statement declared in the role attached to the database secrets engine. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovtwmkphtlyvchh41yje.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovtwmkphtlyvchh41yje.png" alt="Get DB Credentials" width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How can we move ahead without verifying the rotation?😮‍💨 Run the commands below to fetch the decoded secret values at 10-minute intervals and see the changes in action.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Fetch username from the secret&lt;/span&gt;
kubectl get secret db-credentials &lt;span class="nt"&gt;-n&lt;/span&gt; external-secrets &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.data.username}"&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;--decode&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt;

&lt;span class="c"&gt;# Fetch password from the secret&lt;/span&gt;
kubectl get secret db-credentials &lt;span class="nt"&gt;-n&lt;/span&gt; external-secrets &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.data.password}"&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;--decode&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
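
&lt;p&gt;To observe the rotation hands-free, the same lookup can be wrapped in &lt;code&gt;watch&lt;/code&gt;, re-checking every 10 minutes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;watch -n 600 'kubectl get secret db-credentials -n external-secrets -o jsonpath="{.data.username}" | base64 --decode; echo'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;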



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fid03fn340tljcepwt6v5.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fid03fn340tljcepwt6v5.gif" alt="Minions Meme" width="498" height="220"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://tenor.com/en-GB/view/yes-happy-fun-minions-gif-14709381" rel="noopener noreferrer"&gt;GIF Credit&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And it worked. We got rotated credentials with fine-grained privileges that expire at the end of each TTL. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faerd9jub5ct23j7794vq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faerd9jub5ct23j7794vq.png" alt="Secret Rotation in Action" width="800" height="138"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you are still reading, thanks for making it to the end.🥹 We've taken secrets management to the next level by introducing dynamic database credentials with Vault! Instead of relying on boring static, long-lived passwords, we now have ephemeral credentials that are automatic, time-bound, rotated by design, and seamlessly injected into Kubernetes pods via ExternalSecrets! Wohoooo 🥳&lt;/p&gt;

&lt;p&gt;This means no more hardcoded database passwords, reduced security risks from leaked credentials, and no manual rotation headaches: everything is automated! 🚀&lt;/p&gt;

&lt;p&gt;With this, our Kubernetes workloads are now safer, more scalable, and fully automated in handling sensitive data.&lt;/p&gt;




&lt;h4&gt;
  
  
  💡 Let’s Connect!
&lt;/h4&gt;

&lt;p&gt;Thanks for reading! 🎉 I hope this guide helped you understand dynamic secrets and automated credential rotation in Kubernetes. If you have feedback, suggestions, or want to discuss more, feel free to reach out! 💬&lt;/p&gt;

&lt;p&gt;Find me on &lt;a href="https://www.linkedin.com/in/poojanmehta18/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; and check out the full project on &lt;a href="https://github.com/poojan1812/secrets-management" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. Let’s build smarter, more secure cloud solutions together! 🚀&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>vault</category>
      <category>azure</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Secrets Management 101: A technical approach with AKS, Terraform, and Vault</title>
      <dc:creator>Poojan Mehta</dc:creator>
      <pubDate>Tue, 14 Jan 2025 00:50:33 +0000</pubDate>
      <link>https://dev.to/poojan18/secrets-management-101-a-technical-approach-with-aks-terraform-and-vault-284p</link>
      <guid>https://dev.to/poojan18/secrets-management-101-a-technical-approach-with-aks-terraform-and-vault-284p</guid>
      <description>&lt;p&gt;🌟 Welcome, fellow DevOps enthusiasts! 🌟&lt;/p&gt;

&lt;p&gt;Welcome to our journey into Kubernetes secrets management! 🚀 In this two-part series, we'll delve into the essentials of securely managing secrets across different Kubernetes clusters. &lt;br&gt;
In this first half, we'll walk you through setting up an &lt;strong&gt;Azure Kubernetes Service (AKS) cluster using Terraform, deploying HashiCorp Vault&lt;/strong&gt;, and utilizing &lt;strong&gt;ExternalSecrets&lt;/strong&gt; to fetch secrets within the cluster. By the end of this article, you'll have a robust setup that ensures your secrets are both secure 🔒 and easily accessible within your AKS environment.&lt;/p&gt;

&lt;p&gt;📝 &lt;strong&gt;Pre-requisites:&lt;/strong&gt; Before we dive into the technical details, make sure you have the following prerequisites in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An active Azure subscription and Azure CLI installed &lt;/li&gt;
&lt;li&gt;Terraform CLI, kubectl CLI, and Helm CLI installed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ensure you have these tools ready to follow along with the steps outlined in this guide. Let's get started! 🚀&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9otyq87icdudaqknpzv7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9otyq87icdudaqknpzv7.jpg" alt="IaC" width="600" height="338"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://airwalkreply.com/using-the-aws-developer-tools-to-deploy-terraform" rel="noopener noreferrer"&gt;Image Source&lt;/a&gt;&lt;/p&gt;


&lt;h4&gt;
  
  
  &lt;strong&gt;Step 1) Setting up AKS with Terraform&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Let's understand the Terraform code for &lt;strong&gt;AKS&lt;/strong&gt;. I have used a &lt;strong&gt;modular&lt;/strong&gt; approach, and the full code can be found in the repository mentioned below. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Provider&lt;/strong&gt; Configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_client_config"&lt;/span&gt; &lt;span class="s2"&gt;"current"&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This data source retrieves metadata about the authenticated Azure client, such as tenant and object IDs, which are crucial for configuring role-based access and other resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Azure Resource Group:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_resource_group"&lt;/span&gt; &lt;span class="s2"&gt;"rg"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;resource_group_location&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg_name&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Defines a resource group using variables for location and name, making the configuration flexible and environment-agnostic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Azure Kubernetes Cluster:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_kubernetes_cluster"&lt;/span&gt; &lt;span class="s2"&gt;"k8s"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_name&lt;/span&gt;
  &lt;span class="nx"&gt;resource_group_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;dns_prefix&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dns_prefix&lt;/span&gt;
  &lt;span class="nx"&gt;kubernetes_version&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kubernetes_version&lt;/span&gt;

  &lt;span class="nx"&gt;azure_active_directory_role_based_access_control&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;azure_rbac_enabled&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="nx"&gt;tenant_id&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azurerm_client_config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tenant_id&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;identity&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"SystemAssigned"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;default_node_pool&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"agentpool"&lt;/span&gt;
    &lt;span class="nx"&gt;vm_size&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vm_size&lt;/span&gt;
    &lt;span class="nx"&gt;node_count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;node_count&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;network_profile&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;network_plugin&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"azure"&lt;/span&gt;
    &lt;span class="nx"&gt;load_balancer_sku&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"standard"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For defining the Kubernetes cluster, we have used the &lt;code&gt;azurerm_kubernetes_cluster&lt;/code&gt; resource and added the location, name, resource group, and Kubernetes version details. &lt;/p&gt;

&lt;p&gt;🔐Security is a crucial aspect, and we integrate &lt;strong&gt;Azure Active Directory (AAD) for role-based access control (RBAC)&lt;/strong&gt;. By enabling Azure AD RBAC, we ensure that access to the Kubernetes cluster is managed using Azure AD. The &lt;u&gt;tenant ID&lt;/u&gt; is referenced from the current Azure client configuration, linking the cluster to the correct Azure AD tenant.&lt;/p&gt;

&lt;p&gt;The remaining node pool, load balancer, and network plugin fields are kept at their defaults. In the final part, we assign the &lt;strong&gt;cluster-admin&lt;/strong&gt; role to the currently authenticated Azure client, granting it administrative privileges.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Role Assignment:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_role_assignment"&lt;/span&gt; &lt;span class="s2"&gt;"aks"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;scope&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_kubernetes_cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;k8s&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;role_definition_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Azure Kubernetes Service RBAC Cluster Admin"&lt;/span&gt;
  &lt;span class="nx"&gt;principal_id&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azurerm_client_config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;object_id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since the code is ready with the minimal required configuration, it is now time to deploy it using the Terraform CLI. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Initialize&lt;/strong&gt; Terraform and set up the environment, downloading necessary plugins and configuring the backend.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgq0hbmau2hofra0ihsfw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgq0hbmau2hofra0ihsfw.png" alt="Terraform init" width="800" height="425"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Validate&lt;/strong&gt; the configuration to detect syntax and internal consistency errors.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform validate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffedn7vrh8z3d9h98za30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffedn7vrh8z3d9h98za30.png" alt="Terraform Validate" width="758" height="184"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Plan&lt;/strong&gt; the deployment to preview the infrastructure changes needed to reach the desired state defined in the code. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4g44wftp63gi6c4kn9g8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4g44wftp63gi6c4kn9g8.jpg" alt="Plan Meme" width="430" height="293"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3o5kcg53qysc6w76tkb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3o5kcg53qysc6w76tkb.png" alt="Terraform Plan" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ready to go&lt;/strong&gt;! Run terraform &lt;strong&gt;apply&lt;/strong&gt; with the subscription ID as an inline variable, followed by the auto-approve argument to bypass the runtime confirmation. Passing this variable inline eliminates the risk of exposing sensitive details like the subscription ID in the Terraform code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform apply &lt;span class="nt"&gt;--var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;subsciption-id&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"pass-your-id"&lt;/span&gt; &lt;span class="nt"&gt;--auto-approve&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3b7asdq47gyaplker9t9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3b7asdq47gyaplker9t9.png" alt="Terraform Apply" width="800" height="238"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Applies the planned changes to reach the desired state, creating and configuring resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Authenticating to the AKS Cluster 🔐&lt;/strong&gt;&lt;br&gt;
Once your AKS cluster is up and running, you'll need to authenticate to it in order to manage and deploy applications.&lt;br&gt;
Use the &lt;code&gt;az aks get-credentials&lt;/code&gt; command to download the kubeconfig file for your AKS cluster. This file allows kubectl to authenticate to your cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;az aks get-credentials &lt;span class="nt"&gt;--resource-group&lt;/span&gt; &amp;lt;resource-group-name&amp;gt; &lt;span class="nt"&gt;--name&lt;/span&gt; &amp;lt;cluster-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7r7ltiyr2clu3invcpoy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7r7ltiyr2clu3invcpoy.png" alt="AKS Get Credentials" width="800" height="56"&gt;&lt;/a&gt;&lt;br&gt;
This command merges the cluster's kubeconfig with your existing kubeconfig file (or creates a new one if it doesn't exist).&lt;/p&gt;

&lt;p&gt;Verify Connection: To verify that you are authenticated and can access the cluster, use the kubectl get nodes command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffswioeajc8szlfwni6g4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffswioeajc8szlfwni6g4.png" alt="Cluster Admin" width="800" height="346"&gt;&lt;/a&gt;&lt;br&gt;
The screenshot above confirms the cluster-admin role assignment to the logged-in tenant. &lt;/p&gt;


&lt;h4&gt;
  
  
  &lt;strong&gt;Step 2) Setting Up HashiCorp Vault on AKS🕵🏻‍♂️&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;HashiCorp Vault is a powerful tool for managing secrets and protecting sensitive data. It allows you to store and tightly control access to tokens, passwords, certificates, and encryption keys. Vault's robust API and comprehensive audit logs make it a vital part of any security infrastructure. By using Vault, you ensure that your secrets are managed securely, minimizing the risk of unauthorized access.&lt;/p&gt;

&lt;p&gt;We'll deploy &lt;strong&gt;HashiCorp&lt;/strong&gt; &lt;strong&gt;Vault&lt;/strong&gt; using &lt;strong&gt;Helm&lt;/strong&gt;, a package manager for Kubernetes. This deployment will include a custom values.yaml file to enable the &lt;strong&gt;Vault UI&lt;/strong&gt; with a LoadBalancer service, making it accessible from outside the cluster.&lt;/p&gt;

&lt;p&gt;-&amp;gt; Create a values.yaml file to override the default configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;ui&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LoadBalancer&lt;/span&gt;
  &lt;span class="na"&gt;externalPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8200&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;-&amp;gt; Install Vault with Helm:&lt;br&gt;
First, add the HashiCorp Helm repository and update it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxt1mz8uzf95y3r3gjcp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxt1mz8uzf95y3r3gjcp.png" alt="Helm Repolist" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-&amp;gt; Next, deploy Vault using the custom values.yaml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;vault hashicorp/vault &lt;span class="nt"&gt;-f&lt;/span&gt; values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjez9ktgqmb4534ktey2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjez9ktgqmb4534ktey2.png" alt="Helm install vault" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This command deploys Vault into your Kubernetes cluster with the settings defined in values.yaml. Verify the deployed version and resources in the screenshots below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbt51zfscoasjrxo5gq4b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbt51zfscoasjrxo5gq4b.png" alt="Kubectl get all" width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After deployment, Vault starts in a &lt;strong&gt;sealed&lt;/strong&gt; state, meaning it can't perform any operations until it's unsealed. Unsealing is a critical security feature that ensures the Vault server remains secure. To unseal Vault, you'll use the kubectl exec command to run the unseal process within the Vault pod.&lt;/p&gt;

&lt;p&gt;First, initialize Vault to make it operational:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;vault-0 &lt;span class="nt"&gt;--&lt;/span&gt; vault operator init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxceanynzvyvay3vzjyl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxceanynzvyvay3vzjyl.png" alt="Vault Init" width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once Vault is initialized, you can access the Vault UI through the LoadBalancer service.&lt;/p&gt;

&lt;p&gt;-&amp;gt; Get the LoadBalancer IP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get svc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Look for the service named "vault" and note the external IP address.&lt;/p&gt;
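
&lt;p&gt;Alternatively, the external IP can be extracted directly (assuming the service is named &lt;code&gt;vault&lt;/code&gt; as above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get svc vault -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;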

&lt;p&gt;-&amp;gt; Access the UI:&lt;br&gt;
Open your browser and navigate to &lt;strong&gt;http://&amp;lt;EXTERNAL-IP&amp;gt;:8200&lt;/strong&gt; (the LoadBalancer IP from the previous step). You'll be prompted to provide 3 unseal keys, which were displayed in the output of the vault operator init command we ran earlier. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F61m4xbfu9xb5ne2plhwq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F61m4xbfu9xb5ne2plhwq.png" alt="Vault Unseal" width="800" height="581"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-&amp;gt; Log in to Vault:&lt;br&gt;
Use the &lt;strong&gt;root token&lt;/strong&gt; to log in. The root token was generated during the Vault initialization process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8a8j7n7q2o5ie5oocb40.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8a8j7n7q2o5ie5oocb40.png" alt="Root token" width="800" height="763"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-&amp;gt; Creating a New Secret Engine:&lt;br&gt;
Vault uses secret engines to store and manage secrets. Let's create a new &lt;strong&gt;KV (key-value)&lt;/strong&gt; secret engine to store our secrets. Navigate to the secret engine option in the UI and create one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffuh3puebjhuk8z2t0pde.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffuh3puebjhuk8z2t0pde.png" alt="Create Secret Engine" width="800" height="502"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once completed, store the key-value pairs containing the sensitive data that will later be injected into the pods as secrets. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz6t94lpdsvufl2paj51g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz6t94lpdsvufl2paj51g.png" alt="Secret Creation" width="800" height="485"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;(p.s.: for simplicity of the demo, we kept the other options in the secrets engine at their defaults and performed the demo through the UI, but the same can be done via CLI and API actions)&lt;/p&gt;
&lt;/blockquote&gt;
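
&lt;p&gt;For example, a minimal CLI equivalent for writing a key-value pair would be (mount name and keys are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# write a secret into the KV v2 engine mounted at secret/
vault kv put secret/demo username=admin password=changeme
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;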


&lt;h4&gt;
  
  
  &lt;strong&gt;Step 3) Installing and Configuring ExternalSecrets on AKS&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnr8t0e9yu45gvkzbxi57.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnr8t0e9yu45gvkzbxi57.gif" alt="Image description" width="220" height="160"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://tenor.com/en-GB/view/boss-baby-movie-its-a-secret-whisper-gif-12772093" rel="noopener noreferrer"&gt;GIF Source&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are ExternalSecrets and Why Do We Need Them?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ExternalSecrets provides a &lt;strong&gt;Kubernetes&lt;/strong&gt;-&lt;strong&gt;native&lt;/strong&gt; way to fetch secrets from external secret management systems like HashiCorp Vault. It helps in maintaining a &lt;strong&gt;centralized&lt;/strong&gt; secrets store, enabling easier management and more secure access controls.&lt;/p&gt;

&lt;p&gt;Two primary resources used in ExternalSecrets are ClusterSecretStore and SecretStore. Here's the difference:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ClusterSecretStore&lt;/strong&gt;: A cluster-wide resource that defines how to connect to an external secret store like HashiCorp Vault. It is accessible across multiple namespaces.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SecretStore&lt;/strong&gt;: A namespace-scoped resource for defining the connection to an external secret store. It is specific to a single namespace.&lt;/p&gt;

&lt;p&gt;Using ClusterSecretStore allows multiple namespaces to share the same external secret store configuration, making it efficient for larger deployments.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploying ExternalSecrets Using Helm&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;-&amp;gt; Install ExternalSecrets with Helm:&lt;br&gt;
First, add the ExternalSecrets Helm repository and update it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add external-secrets https://charts.external-secrets.io
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;-&amp;gt; Next, deploy ExternalSecrets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;external-secrets external-secrets/external-secrets &lt;span class="nt"&gt;-n&lt;/span&gt; external-secrets &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="nt"&gt;--installCRDs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh1p6hmeqdlult56cjeb8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh1p6hmeqdlult56cjeb8.png" alt="Install ExternalSecrets" width="800" height="175"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This command deploys ExternalSecrets into your Kubernetes cluster in a new namespace, and it will install the CRDs as well.&lt;/p&gt;
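&lt;p&gt;You can confirm the release is healthy and the CRDs are registered with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods -n external-secrets
kubectl get crds | grep external-secrets.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;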

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Filhbirg0vnu0kvsvvtju.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Filhbirg0vnu0kvsvvtju.png" alt="Get all ExternalSecrets" width="800" height="329"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-&amp;gt; Creating ClusterSecretStore Resource&lt;br&gt;
We'll create a ClusterSecretStore resource that defines the connection to HashiCorp Vault. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡We have used Vault's root authentication token here, but one can create a non-root token and use that instead. Don't forget to base64-encode your token before placing it in the Secret. &lt;/p&gt;
&lt;/blockquote&gt;
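&lt;p&gt;Encoding the token is a one-liner; for the demo value &lt;code&gt;root&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;echo -n "root" | base64   # prints cm9vdA==
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;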

&lt;p&gt;Here’s the YAML configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-secrets.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterSecretStore&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;vault&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://PUBLIC_IP_OF_VAULT:8200"&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;secret"&lt;/span&gt;
      &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;v2"&lt;/span&gt;
      &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;external-secrets"&lt;/span&gt;
      &lt;span class="na"&gt;auth&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;tokenSecretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vault-token"&lt;/span&gt;
          &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;external-secrets"&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;token"&lt;/span&gt;

&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vault-token&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-secrets&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ROOT TOKEN IN BASE64 ENCODED FORMAT&lt;/span&gt; &lt;span class="c1"&gt;# "root"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save this as clustersecretstore.yaml and apply it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; clustersecretstore.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;-&amp;gt; Creating ExternalSecret Resource&lt;br&gt;
Now we'll create an ExternalSecret resource to fetch secrets from Vault. Here's the YAML configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-secrets.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ExternalSecret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vault-example&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;refreshInterval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1h"&lt;/span&gt;
  &lt;span class="na"&gt;secretStoreRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterSecretStore&lt;/span&gt;
  &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;application-sync&lt;/span&gt;
  &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;secretKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PASSWORD&lt;/span&gt;
      &lt;span class="na"&gt;remoteRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secret&lt;/span&gt;
         &lt;span class="na"&gt;property&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PASSWORD&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save this as externalsecret.yaml and apply it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; externalsecret.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verifying the Configuration&lt;/p&gt;

&lt;p&gt;To ensure everything is set up correctly, we'll use several kubectl commands.&lt;/p&gt;

&lt;p&gt;-&amp;gt; Verify the ExternalSecret Resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get externalsecrets
kubectl get clustersecretstore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hscfsdieeb7v31bnvqq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hscfsdieeb7v31bnvqq.png" alt="Secret app sync" width="800" height="237"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-&amp;gt; Verify the Creation of the Secret Resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl describe secret application-sync
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujm0pjdnw5rx9ajzsfif.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujm0pjdnw5rx9ajzsfif.png" alt="Get Secrets JSONPath" width="800" height="224"&gt;&lt;/a&gt;&lt;br&gt;
-&amp;gt; Fetch the Value of the Secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get secret my-secret &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.data.my-key}"&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;--decode&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command fetches the value of the secret, decoding it from Base64. It is the same value we stored inside the secrets engine in the Vault UI. &lt;/p&gt;

&lt;p&gt;In short, Kubernetes never knew about this secret in HashiCorp Vault, but ExternalSecrets bridged that gap and exposed it as a native Kubernetes Secret. From here, we can inject it into pods like any regular Secret: as an environment variable, a mounted volume, or a secret reference.&lt;/p&gt;
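&lt;p&gt;For instance, a pod can consume the synced secret as an environment variable; a minimal sketch (the pod name and image are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: demo-app                      # illustrative name
spec:
  containers:
    - name: app
      image: nginx                    # any image works for the demo
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: application-sync  # the Secret synced by the ExternalSecret
              key: PASSWORD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;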

&lt;p&gt;At last!🤓 We finally made it to the end of the first half of the demo. &lt;strong&gt;We've set up an AKS cluster with Terraform, deployed HashiCorp Vault for secure secrets, and integrated ExternalSecrets to fetch them.&lt;/strong&gt; Your secrets are now safely managed in Kubernetes. &lt;br&gt;
But the adventure isn't over! Stay tuned for more exciting discoveries! &lt;/p&gt;

&lt;p&gt;Find me on &lt;a href="https://www.linkedin.com/in/poojanmehta18/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; and check out the full project on &lt;a href="https://github.com/poojan1812" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. Let’s build smarter, more secure cloud solutions together! 🚀&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>azure</category>
      <category>terraform</category>
      <category>devops</category>
    </item>
    <item>
      <title>Deploy Single-Node RedHat OpenShift 4.9 Cluster on AWS</title>
      <dc:creator>Poojan Mehta</dc:creator>
      <pubDate>Tue, 12 Jul 2022 16:02:03 +0000</pubDate>
      <link>https://dev.to/poojan18/deploy-single-node-redhat-openshift-49-cluster-on-aws-18lh</link>
      <guid>https://dev.to/poojan18/deploy-single-node-redhat-openshift-49-cluster-on-aws-18lh</guid>
      <description>&lt;p&gt;In this article, I’ll guide you through the installation steps in order to deploy OpenShift 4.9 on a single node EC2 server.&lt;/p&gt;

&lt;p&gt;Before jumping into the details, let's understand what OpenShift actually is. OpenShift is an enterprise-grade container orchestration tool by RedHat, built on &lt;a href="https://poojan-mehta.medium.com/why-so-kubernetes-1e7242537abb" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;. This product provides much more than core Kubernetes capabilities and covers almost every phase of the container management lifecycle. In this article, we will install the open-source flavour of the OpenShift Container Platform 4.9, also known as the &lt;a href="https://github.com/okd-project/okd" rel="noopener noreferrer"&gt;OKD project&lt;/a&gt;. A basic understanding of containers and Kubernetes is suggested for learning and working with OpenShift. We will use the Full-Stack Automation approach to install a single-node cluster on the AWS EC2 service.&lt;/p&gt;

&lt;p&gt;Basically, the Full-Stack Automation approach lets us utilize the power and availability of existing cloud services, club multiple things together in a Terraform script, and deploy a cluster-based solution with just a few clicks. The user can also customize the cluster configuration by setting a few parameters within the script. As OpenShift is considered an enterprise version of Kubernetes, it comes with more features and prerequisites. One of them is a public domain, if you're planning to deploy the solution on a public cloud provider with workloads that are publicly accessible.&lt;/p&gt;

&lt;p&gt;I've already purchased a domain and maintain a DNS hosted zone within the Route53 service, which means this prerequisite is fulfilled. If you've purchased a domain from a provider other than Route53, you should consider creating a hosted zone and updating the nameservers on the original domain provider's site.&lt;/p&gt;

&lt;p&gt;We will provision the single-node OpenShift cluster via an EC2 instance acting as a client node. The client node simply works as an intermediary between the cluster and the admin and enables communication between them.&lt;/p&gt;

&lt;h4&gt;
  
  
Step 1) Create an SSH key on the client node
&lt;/h4&gt;

&lt;p&gt;→ The first step is to create an SSH key on the client node to later bind it with the full-stack automation script. The SSH key is created in case we need to log in to the master node in the future and perform cluster operations. Follow the steps below to create one.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;ssh-keygen&lt;br&gt;&lt;br&gt;
eval "$(ssh-agent -s)"&lt;br&gt;&lt;br&gt;
ssh-add id_rsa&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Give an appropriate name to the SSH key (optional) and press Enter. You can verify the key has been created by checking the &lt;strong&gt;~/.ssh&lt;/strong&gt; folder. With the ssh-add command, we register the identity of the created keyfile with the system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkn3pltnqvp1bmh2d21c1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkn3pltnqvp1bmh2d21c1.png" width="800" height="500"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Generate and add SSH key&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
Step 2) Configure an AWS profile for programmatic access
&lt;/h4&gt;

&lt;p&gt;→ In this step, we will configure an AWS profile with access and secret keys so that the full-stack automation script can make API calls to the AWS console on our behalf and perform infrastructure provisioning tasks.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;aws configure --profile dev&lt;br&gt;&lt;br&gt;
export AWS_PROFILE=dev&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febrr40i11xt6tui9ln3o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febrr40i11xt6tui9ln3o.png" width="800" height="237"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Configure AWS Profile&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;After running this command, pass the AWS access and secret keys🔒, then run the export command to set the current profile for the shell.&lt;/p&gt;
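&lt;p&gt;To confirm the profile works before handing it to the installer, one quick check (assuming the AWS CLI is configured as above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws sts get-caller-identity --profile dev   # should print the IAM user's account ID and ARN
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;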

&lt;h4&gt;
  
  
Step 3) Install the OpenShift binaries from GitHub
&lt;/h4&gt;

&lt;p&gt;→ Since we are using the open-source version of OpenShift, we will take the installation binaries from the official &lt;a href="https://github.com/okd-project/okd" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt; of the OKD project. In this demo we are downloading OpenShift v4.9 as compressed archives; verify the same with the screenshot below. Download the binaries and extract the archives using the tar command.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;tar -xvf openshift-client-linux-4.9.0-0.okd-2022-02-12-140851.tar.gz&lt;br&gt;&lt;br&gt;
tar -xvf openshift-install-linux-4.9.0-0.okd-2022-02-12-140851.tar.gz&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0jd1gjlk7x8c0g6kx9rc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0jd1gjlk7x8c0g6kx9rc.png" width="800" height="378"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;OpenShift installer zip file&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
Step 4) Create an installation config YAML file using the installer
&lt;/h4&gt;

&lt;p&gt;→ Using the openshift-install executable with the required arguments, we will now create a configuration file in YAML format. Run the command below to initiate the config file creation and select the respective options from the dropdown menus.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;./openshift-install create install-config --dir=.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;→ Don't forget to select the SSH key we created in Step 1. In the next option, we have to select the resource provider; this Full-Stack Automation approach supports almost all major cloud providers, and we will select AWS in this case.&lt;br&gt;&lt;br&gt;
 → Afterwards, the script expects the AWS region and the domain name for the cluster; I've given ap-south-1 and my own domain, which has a hosted zone in Route53. In addition, we have to provide a unique name for the cluster, which will be combined with the base domain and eventually exposed as a publicly routed URL for the console and the workloads deployed on the cluster.&lt;/p&gt;

&lt;p&gt;→ The last remaining field is the pull secret. Earlier, OpenShift 4.x versions were not open source, so in order to pull the OpenShift images from the registry, the user had to authenticate with a unique pull secret obtained from their RedHat developer account. The requirement became obsolete once the OKD project moved under open-source guidelines, but the script still asks for a pull secret. A workaround to bypass this authentication is mentioned in this &lt;a href="https://github.com/okd-project/okd/issues/182" rel="noopener noreferrer"&gt;GitHub issue&lt;/a&gt;. I've followed the same and passed the null authentication secret &lt;em&gt;{"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;→ After feeding in all the inputs, a config file named install-config.yaml will be created in the target directory. Here the current directory is used as the target dir.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8qfv9n9yarecj48xr8ql.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8qfv9n9yarecj48xr8ql.png" width="800" height="251"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;OpenShift Install script&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
Step 5) Customize the generated file and edit the number of nodes
&lt;/h4&gt;

&lt;p&gt;→ Open the file in an editor and change the worker node count from 3 to 0. Also, change the master node count from 3 to 1. This means the provisioned cluster will operate on a single node: the same node will act as both master and worker.&lt;/p&gt;
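&lt;p&gt;The relevant fields in the generated file look roughly like this (a trimmed sketch; every other field stays as generated):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;compute:
- name: worker
  replicas: 0       # no dedicated workers
controlPlane:
  name: master
  replicas: 1       # a single master that also runs the workloads
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;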

&lt;p&gt;→ We are keeping all other fields at their defaults, though there is much more to customize based on requirements. You can refer to this article for a detailed explanation of &lt;a href="https://docs.openshift.com/container-platform/4.9/installing/installing_aws/installing-aws-customizations.html" rel="noopener noreferrer"&gt;customized installation.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpe3mdhcr3lp1x79abh8p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpe3mdhcr3lp1x79abh8p.png" width="800" height="498"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Edited YAML file&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
Step 6) Run the create cluster command and initiate the installation
&lt;/h4&gt;

&lt;p&gt;→ We've fulfilled the prerequisites to launch our single-node OpenShift v4.9 over an EC2 instance, and the config is edited to match our current requirement. Now let's run the cluster creation command.&lt;/p&gt;

&lt;p&gt;→ Here we reference the directory containing the config file via the --dir argument. The create cluster command will take its inputs from the config file and provision the single-node cluster using the access and secret keys provided by the admin.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;./openshift-install create cluster --dir=.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrd7vwo4luqjch7at00j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrd7vwo4luqjch7at00j.png" width="800" height="238"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;2 nodes provisioned. One master and one bootstrap.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;→ The installation will take somewhere around 30–40 mins. Hold tight and grab a coffee until then.&lt;br&gt;&lt;br&gt;
 → Also, the installation script will launch an intermediate node for bootstrapping. This node is responsible for configuring the cluster and bringing all the cluster components up. The bootstrap node is destroyed once the master node is running in a healthy state.&lt;/p&gt;

&lt;p&gt;→ During the installation process, the cluster performs internal health checks to make sure each component is up and running in an operational state.&lt;/p&gt;

&lt;p&gt;→ Also, the installation creates a default cluster user named kubeadmin and a corresponding password (~/auth/kubeadmin-password). We can use this user and password to log in to the cluster and perform operations. A publicly routable URL for the OpenShift web UI will also be printed in the shell output.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F79fkqxt1q8y9736d4bph.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F79fkqxt1q8y9736d4bph.png" width="800" height="698"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;post Installation output&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;→ Now, export the kubeconfig file so that the oc client can fetch the cluster information and context from it and perform cluster operations against the cluster endpoints.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;export KUBECONFIG=/root/okd/auth/kubeconfig&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
Step 7) Copy the oc binaries to an executable directory
&lt;/h4&gt;

&lt;p&gt;→ In order to run the oc client from the command line, we first have to copy the client binaries to a location on the system's executable path. Use the commands below for the same.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;cp oc /usr/bin/&lt;br&gt;&lt;br&gt;
cp kubectl /usr/bin/&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fehg7yuclm9oul5t2j6q8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fehg7yuclm9oul5t2j6q8.png" width="800" height="711"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;oc client&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Finally, all the prerequisites and the cluster installation process are completed, and we now have a running single-node OpenShift v4.9 working as expected. We can verify the same by running the status command.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;oc status&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
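&lt;p&gt;A couple of extra checks are worth running at this point: for a single-node install, the node list should show one node carrying both master and worker roles, and every cluster operator should report as available.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc get nodes              # expect one node with the master,worker roles
oc get clusteroperators   # all operators should show AVAILABLE=True
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;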

&lt;p&gt;→ Also, with the printed console URL, we can log in to the web UI using the temporary kubeadmin user and explore the console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk4iq5geyrmv38u9ys7ok.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk4iq5geyrmv38u9ys7ok.png" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Done and Dusted. The OpenShift installation is completed. That’s it from my side for today. There is a lot more to learn in the ecosystem.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;THANKS, A LOT FOR READING THIS SO ATTENTIVELY&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I'll be grateful to have connections like you on &lt;a href="https://www.linkedin.com/in/poojanmehta18/" rel="noopener noreferrer"&gt;&lt;strong&gt;LinkedIn&lt;/strong&gt;&lt;/a&gt; 🧑‍💼&lt;/p&gt;

</description>
      <category>aws</category>
      <category>redhatopenshift</category>
      <category>containers</category>
      <category>containerorchestrati</category>
    </item>
    <item>
      <title>How to Host WordPress with AWS RDS as Database?</title>
      <dc:creator>Poojan Mehta</dc:creator>
      <pubDate>Tue, 02 Feb 2021 06:20:24 +0000</pubDate>
      <link>https://dev.to/poojan18/how-to-host-wordpress-with-aws-rds-as-database-195e</link>
      <guid>https://dev.to/poojan18/how-to-host-wordpress-with-aws-rds-as-database-195e</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzkmunt7g27czxc9s6ker.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzkmunt7g27czxc9s6ker.png" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
In this article, I'll show how to set up WordPress (frontend) and an AWS RDS database, step by step.
&lt;/h4&gt;

&lt;p&gt;Description:&lt;/p&gt;

&lt;p&gt;🔅 Create an AWS EC2 instance and configure it with the Apache web server and a compatible PHP and MySQL environment&lt;br&gt;&lt;br&gt;
🔅 Download and configure the PHP-based application named "WordPress".&lt;br&gt;&lt;br&gt;
🔅 WordPress stores its data in a MySQL database server at the backend. Therefore, you need to set up a MySQL server using the AWS RDS service.&lt;br&gt;&lt;br&gt;
🔅 Provide the endpoint string to the WordPress application to connect it with RDS and make it work.&lt;/p&gt;

&lt;p&gt;First things first, let’s create an EC2 instance from the console and configure the Apache webserver.&lt;/p&gt;

&lt;p&gt;I've used a RHEL8 AMI throughout and performed the task by following all the dependencies. I performed all the steps manually, but the same setup can (and should) be done using IaC tools like Terraform.&lt;/p&gt;

&lt;p&gt;→SSH to the instance and run the following commands to install httpd, PHP, and MySQL(with dependencies) and start the services:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;yum module install httpd php mysql&lt;br&gt;&lt;br&gt;
yum install php-mysqlnd&lt;br&gt;&lt;br&gt;
systemctl start httpd&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgna69avm85qy8qb1i0f0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgna69avm85qy8qb1i0f0.png" width="800" height="439"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;httpd, PHP, MySQL installed and service started for httpd&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;→ Download the latest version of WordPress (written in PHP) and copy it into the document root of httpd (/var/www/html):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[https://wordpress.org/latest.tar.gz](https://wordpress.org/latest.tar.gz) --output wordpress.tar.gz //install wordpress

tar xf wordpress.tar.gz //extract the downloaded file

cp -rv wordpress /var/www/html // copy the content in doc. root
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;-&amp;gt; If all the above steps are successful, we can access the front end at http://PUBLIC_IP/wordpress and the output will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddv4eoi2efac8oejq1z5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddv4eoi2efac8oejq1z5.png" width="800" height="614"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Default page after installation&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Leave this as is for now, and let's configure the AWS RDS database from the console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa83g4iq6rrb8jfn3bp9c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa83g4iq6rrb8jfn3bp9c.png" width="800" height="358"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Selected MySQL flavor&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;→AWS RDS offers various flavors of relational databases that include MySQL, MariaDB, PostgreSQL, Oracle, Microsoft SQL Server, and Amazon Aurora. RDS is a fully managed service from AWS, so users have to only focus on data and databases.&lt;/p&gt;

&lt;p&gt;After selecting the database engine and creation method, set the instance name and admin user and password.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysus6fsgksah79jdrdj9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysus6fsgksah79jdrdj9.png" width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now select the compute and storage units for the DB instance as per requirement. I have used the free-tier RDS option, so the t2.micro flavor and default storage type are selected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fchakviik2nj3tyz2gxmu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fchakviik2nj3tyz2gxmu.png" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, select the network setup for the instance. I have used the default VPC and subnet. Usually, the database runs in the backend of our application and is not meant for public access, so it's suggested to disable public access to RDS. For the security group, it's good practice to open only the specific port and allow only the admin IP.&lt;/p&gt;
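&lt;p&gt;For reference, the same ingress rule can be added from the AWS CLI; the group ID and admin IP below are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 authorize-security-group-ingress --group-id SG_ID --protocol tcp --port 3306 --cidr ADMIN_IP/32
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;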

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3t7o1xugf8n49vieury.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3t7o1xugf8n49vieury.png" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are still additional options that give more customization and post-launch actions; which ones you need depends entirely on your requirements. I've left the rest at defaults. Click on Create Database to launch the RDS instance.&lt;/p&gt;

&lt;p&gt;It takes around 10 mins to launch the DB instance. As this is a fully managed service, the user need not configure anything.&lt;/p&gt;

&lt;p&gt;→ Let's connect to the RDS instance using its unique endpoint. I am using an EC2 instance within the same VPC as the RDS instance, so, being on the same network, we can connect.&lt;/p&gt;

&lt;p&gt;Use the syntax mysql -h RDS_ENDPOINT -u USERNAME -p and enter the password at the prompt.&lt;/p&gt;
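&lt;p&gt;For example, with a hypothetical endpoint and the admin user we created earlier:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql -h mydb.abc123xyz.ap-south-1.rds.amazonaws.com -u admin -p   # endpoint is a placeholder
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;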

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fei4zu06fopwgk7p704va.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fei4zu06fopwgk7p704va.png" width="800" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now create a database that WordPress will connect to. Use the syntax create database DB_NAME.&lt;/p&gt;
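&lt;p&gt;For example, at the mysql prompt (the database name is up to you):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE DATABASE wordpress_db;   # illustrative name
SHOW DATABASES;                 # confirm it was created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;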

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffhwzonb5jkgo3aqn7rdk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffhwzonb5jkgo3aqn7rdk.png" width="800" height="502"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ Database created. Now let's head back to the WordPress dashboard where we left off and enter the database name, username, password, and instance endpoint on that screen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F07pmcybh1l4jlp6ivj2w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F07pmcybh1l4jlp6ivj2w.png" width="800" height="544"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On submit, WordPress internally writes the above entries into the wp-config.php file and connects the front end to the backend database. On a successful connection, the user fills in the WordPress site details and dashboard login details. Run the WordPress installation when done.&lt;/p&gt;
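&lt;p&gt;Under the hood, the resulting entries in wp-config.php look roughly like this (all values below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;define( 'DB_NAME', 'wordpress_db' );       // database created on RDS
define( 'DB_USER', 'admin' );              // RDS admin user
define( 'DB_PASSWORD', 'YOUR_PASSWORD' );  // placeholder
define( 'DB_HOST', 'mydb.abc123xyz.ap-south-1.rds.amazonaws.com' );  // RDS endpoint placeholder
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;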

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flfcz74qgelgaxvndymey.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flfcz74qgelgaxvndymey.png" width="800" height="776"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;😇Perfect. Every requirement of our problem statement is fulfilled. The screen below implies a successful installation and connection with the database. All the user data is now stored in a very reliable and powerful RDS instance, so even if the front end goes down, the most critical user data won't be lost.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F07ftbr2r6tg9ejo6uwyk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F07ftbr2r6tg9ejo6uwyk.png" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s all I have for this article. Thanks a lot for reading.🤗&lt;/p&gt;

&lt;p&gt;I'll be grateful to have connections like you on &lt;a href="http://www.linkedin.com/in/poojan-mehta-17a514191" rel="noopener noreferrer"&gt;&lt;strong&gt;LinkedIn&lt;/strong&gt;&lt;/a&gt; 🧑‍💼&lt;/p&gt;

</description>
      <category>database</category>
      <category>wordpress</category>
      <category>aws</category>
      <category>rds</category>
    </item>
    <item>
      <title>Why so Kubernetes?</title>
      <dc:creator>Poojan Mehta</dc:creator>
      <pubDate>Sat, 26 Dec 2020 11:36:34 +0000</pubDate>
      <link>https://dev.to/poojan18/why-so-kubernetes-4a4e</link>
      <guid>https://dev.to/poojan18/why-so-kubernetes-4a4e</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foafm1fjjj1vgzv0zu4nv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foafm1fjjj1vgzv0zu4nv.png" width="800" height="463"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Credits: ashleymcnamara/gophers&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you've been anywhere near the IT industry, you've very likely heard the term &lt;strong&gt;containers&lt;/strong&gt; 🚢. The adoption of containers is growing exponentially as they are lightweight, portable, and blazingly fast. Nowadays, deployments on containers hold the upper hand over VMs for larger environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7tayt3op94q4puwoig2q.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7tayt3op94q4puwoig2q.jpeg" width="500" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ Containers provide isolated runtime environments; allocated resources are exclusively presented to the container, and any alteration won't affect other running containers.&lt;/p&gt;

&lt;p&gt;→ Such environments provide greater efficiency in running and managing workloads, and in resource consumption.&lt;/p&gt;

&lt;p&gt;→ &lt;strong&gt;With great power comes great responsibility&lt;/strong&gt; ✌️. Imagine a situation where you have been using containers in production and your application starts getting massive traffic. All you need to do is scale the application. How will you perform this? How will you wire containers together in no time? How will you monitor all these containers and manage health checks😥? We need an orchestration platform to scale and manage our containers.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This is where tools like Kubernetes☸️ comes into play.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h4&gt;
  
  
  What exactly is Kubernetes?
&lt;/h4&gt;

&lt;p&gt;→ &lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes(K8s)&lt;/a&gt; an open-source project by &lt;strong&gt;Google&lt;/strong&gt; , hosted by &lt;a href="https://www.cncf.io/" rel="noopener noreferrer"&gt;Cloud Native Computing Foundation&lt;/a&gt; that has become one of the most popular container orchestration (simply container management) tools around; it allows you to deploy and manage fault-tolerant, resource optimal container applications at scale.&lt;/p&gt;

&lt;p&gt;→ In simple words, you can bundle together groups of containers, and Kubernetes helps you easily and effectively manage those containers. It can span across on-premises, public, private, or hybrid clouds. For this reason, Kubernetes is now an undisputed platform for hosting cloud-native applications that require rapid scaling. In practice, K8s is most often used with &lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;Docker&lt;/strong&gt;&lt;/a&gt;. It provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Service Topology&lt;/strong&gt; : Routing the traffic based upon cluster topology.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service Discovery and Load Balancing&lt;/strong&gt; : Service discovery is the process of figuring out how to connect to a service. Kubernetes has its own mechanism for that. It gives pods(basically one or more containers) unique IP addresses and a single IP to the collection of pods that enable load-balancing across the pods.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage Orchestration&lt;/strong&gt; : This allows you to mount the storage system of your choice. Be it public, private cloud storage, NFS, and many more...&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Horizontal Scaling&lt;/strong&gt; : Scale your application up or down with commands, the UI, or via auto-scaling seamlessly (see the one-liner after this list)!&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated Rollouts and Rollbacks&lt;/strong&gt; : Monitoring the health-checks of the application ensures almost no downtime and matches the desired state for deployment solution. Kubernetes will rollout and rollback for you.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self Healing&lt;/strong&gt; : Always takes care of the desired state and restart, replace and reschedule the containers and only ready-to-work containers are advertised to the clients.&lt;/li&gt;
&lt;/ul&gt;
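&lt;p&gt;As an example of scaling by command, a single kubectl call does it (the deployment name is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl scale deployment my-app --replicas=5   # scale out to 5 pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;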

&lt;p&gt;For a successful open-source project, the community is as important as having great code. K8s has a very thriving community across the world, with more than 3000 active contributors (according to the August 2019 &lt;a href="https://www.cncf.io/cncf-kubernetes-project-journey/#:~:text=At%20its%20beginning%2C%20Kubernetes%20had,inception%20and%207x%20since%20acceptance." rel="noopener noreferrer"&gt;report&lt;/a&gt;).&lt;/p&gt;
&lt;h4&gt;
  
  
  How Kubernetes Works?
&lt;/h4&gt;

&lt;p&gt;→ A working Kubernetes deployment is called a cluster. A cluster mainly contains a master node and several worker nodes. The role of the master node is to maintain the desired state, while the worker nodes actually run the application workloads.&lt;/p&gt;

&lt;p&gt;→ If a worker node goes down, Kubernetes starts replacement pods on a functioning worker node. This makes the process of managing containers easy and simple. Your work involves configuring Kubernetes and defining nodes, pods, and the containers within them; Kubernetes handles orchestrating the containers.&lt;/p&gt;
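&lt;p&gt;Defining a pod takes only a few lines of YAML; a minimal sketch (the name and image are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: hello-pod      # illustrative name
spec:
  containers:
    - name: web
      image: nginx     # any container image
      ports:
        - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;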

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fll95pbx5fwxddbcx7c0d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fll95pbx5fwxddbcx7c0d.png" width="752" height="518"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Overview of a basic Kubernetes cluster&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Some common terms for a better understanding of K8s:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpmwu8igt1gt2j36uo49.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpmwu8igt1gt2j36uo49.png" width="800" height="685"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Node&lt;/em&gt;: The machine that performs the tasks&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Pod&lt;/em&gt;: The smallest deployable unit: a group of one or more containers running on a node. Pods abstract network and storage from the underlying containers.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Kubelet&lt;/em&gt;: It is responsible for maintaining a set of pods and ensures the defined containers are started and running.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;etcd&lt;/em&gt;: A consistent and highly available key-value store.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The adoption of Kubernetes as the Go-To platform for hosting production-grade applications is ever increasing. Big brands like The New York Times, HBO, Reddit, Airbnb, Pinterest, Pokemon — all have their own K8s stories to tell. And many more are on their way to join them.&lt;/p&gt;
&lt;h4&gt;
  
  
  Kubernetes meets the real world: Airbnb’s story😄
&lt;/h4&gt;

&lt;p&gt;→ Airbnb is an online marketplace for sharing and renting homes and experiences. Airbnb's transition from a monolithic to a microservice architecture is quite commendable. The organization needed to scale horizontally to ensure continuous delivery and keep scaling up by adding new services. The sole purpose was to enable continuous delivery with a microservices architecture so that a team of over 1000 engineers could deliver faster.&lt;/p&gt;

&lt;p&gt;→ Airbnb adopted Kubernetes to support its developer teams, and configured and deployed over 250 critical production services to it. Airbnb managed to scale with a microservice environment handling over 20,000 deployments per week (all environments, all apps).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllpeu10nwps5wdulsp78.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllpeu10nwps5wdulsp78.png" width="800" height="445"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;All slides available&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;→ Initially, the configuration was manual, rigid, and not very evolved. Then they shifted to configuration management with Chef on the monolith. But with a very complex hierarchy of services, inheriting configuration did not work as expected: modifying the Chef recipes was quite frequent, and it would take down other services on convergence. Finally, Airbnb moved to Kubernetes, which automated the orchestration of their containerized setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv253ev0lq3xqcxnjda9r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv253ev0lq3xqcxnjda9r.png" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ More specifically, with Kubernetes the declarative approach proved more resilient, and an easier scheduling process led to cost optimization. It also came with all the features and advantages of Docker, making the environment more granular. Most importantly, YAML configurations are extremely human-readable and provide a hassle-free development experience.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;→ Kube-gen&lt;/em&gt;, a tool for K8s, helped Airbnb take service parameters (defined in a single YAML file) and generate the complete K8s service manifests containing all the necessary configuration.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The outcome of this shift was quite decent: Kubernetes enabled Airbnb to add a layer of abstraction over the containers and set up an automated management workflow. Today, almost 50% of Airbnb workloads are running on K8s ☸&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For more information and insights, go through the below keynote 👇:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/ytu3aUCwlSg"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Solutions like &lt;strong&gt;Kubernetes&lt;/strong&gt; are buzzing with the spirit of DevOps&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Getting Ready for the Kubernetes-driven future🤩:
&lt;/h4&gt;

&lt;p&gt;Container-based microservices applications are the future, and Kubernetes is their platform. It has reached a level of maturity where organizations can depend on it without hesitation and thrive in the competition. That's why the big three cloud providers have all launched managed versions of K8s, namely &lt;a href="https://aws.amazon.com/eks/?whats-new-cards.sort-by=item.additionalFields.postDateTime&amp;amp;whats-new-cards.sort-order=desc&amp;amp;eks-blogs.sort-by=item.additionalFields.createdDate&amp;amp;eks-blogs.sort-order=desc" rel="noopener noreferrer"&gt;EKS&lt;/a&gt; by AWS, &lt;a href="https://cloud.google.com/kubernetes-engine" rel="noopener noreferrer"&gt;GKE&lt;/a&gt; by GCP, and &lt;a href="https://azure.microsoft.com/en-in/services/kubernetes-service/" rel="noopener noreferrer"&gt;AKS&lt;/a&gt; by Azure. &lt;a href="https://www.openshift.com/" rel="noopener noreferrer"&gt;RedHat OpenShift&lt;/a&gt; is also a Kubernetes distribution contender that one must not neglect.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fetefxjyy9r1art8vtan9.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fetefxjyy9r1art8vtan9.jpeg" width="600" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This seems extremely true and quite relatable given how frequently K8s environments evolve.&lt;/p&gt;

&lt;p&gt;With its latest release in Dec. 2020, K8s &lt;a href="https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/" rel="noopener noreferrer"&gt;deprecated&lt;/a&gt; dockershim (a component of the kubelet) in favour of container runtime interfaces created for K8s. We are yet to witness the full emergence of runtimes like &lt;a href="https://cri-o.io/#:~:text=CRI%2DO%20is%20an%20implementation,as%20the%20runtime%20for%20kubernetes.&amp;amp;text=It%20is%20a%20lightweight%20alternative%20to%20using%20Docker%2C%20Moby%20or,as%20the%20runtime%20for%20Kubernetes." rel="noopener noreferrer"&gt;cri-o&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I really appreciate your time and attention in reading this piece. I'll be grateful to have connections like you on &lt;a href="http://www.linkedin.com/in/poojan-mehta-17a514191" rel="noopener noreferrer"&gt;&lt;strong&gt;LinkedIn&lt;/strong&gt;&lt;/a&gt; 🧑‍💼&lt;/p&gt;

</description>
      <category>airbnb</category>
      <category>containers</category>
      <category>kubernetes</category>
      <category>docker</category>
    </item>
    <item>
      <title>Setting Up Ansible for EC2 With Dynamic Inventory</title>
      <dc:creator>Poojan Mehta</dc:creator>
      <pubDate>Fri, 25 Sep 2020 05:18:35 +0000</pubDate>
      <link>https://dev.to/poojan18/setting-up-ansible-for-ec2-with-dynamic-inventory-4c4o</link>
      <guid>https://dev.to/poojan18/setting-up-ansible-for-ec2-with-dynamic-inventory-4c4o</guid>
      <description>&lt;h3&gt;
  
  
  Setting Up Ansible for EC2 With Dynamic Inventory🙂
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4drasbgm3alhyc7o6ou.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4drasbgm3alhyc7o6ou.png" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
In this article, I will demonstrate how to provision an EC2 instance using ANSIBLE and how to set up a more agile environment using a DYNAMIC INVENTORY.
&lt;/h3&gt;

&lt;p&gt;→Pre-requisites:&lt;/p&gt;

&lt;p&gt;→ RedHat Ansible downloaded and configured on the local system.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Do check out my previous article for Ansible👇👇:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://medium.com/@Poojan_Mehta/linux-automation-using-ansible-a726ad7a71de" rel="noopener noreferrer"&gt;LINUX AUTOMATION WITH ANSIBLE&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  ~Problem Statement:
&lt;/h4&gt;

&lt;p&gt;♦️ Deploy Web Server on AWS through ANSIBLE!&lt;/p&gt;

&lt;p&gt;🔹 Provision EC2 instance through ansible.&lt;/p&gt;

&lt;p&gt;🔹 Retrieve the IP Address of instance using the dynamic inventory concept.&lt;/p&gt;

&lt;p&gt;🔹 Configure the webserver through ansible!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;As Ansible is built on top of python, a Python Software Development Kit (SDK) is required that enables the configuration of AWS services. The package is an object-oriented API named boto3.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip3 install boto3 //assuming python3 is installed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  &lt;strong&gt;→STEP-1)&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;In the first step, I provisioned an EC2 instance with the playbook below.&lt;/li&gt;
&lt;li&gt;Here, the RedHat system itself calls the AWS API for provisioning, and this procedure runs on the local machine; that's why the host is set to localhost.&lt;/li&gt;
&lt;li&gt;For authentication to the AWS account, create an &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html" rel="noopener noreferrer"&gt;IAM&lt;/a&gt; user that has fewer privileges than the root account. The AWS_ACCESS_KEY and AWS_SECRET_KEY are passed explicitly through an Ansible vault named secret.yml&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1503zhewb0euc4t6484.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1503zhewb0euc4t6484.png" width="800" height="213"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Encrypted Vault🔒&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- hosts: localhost
  vars_files:
      - secret.yml
  tasks:
   - name: Provision os in AWS
     ec2:
      key_name: "keytask" //keypair to be attached to the instance  
      instance_type: "t2.micro"
      image: "ami-0ebc1ac48dfd14136" //amazon linux 
      count: 1
      wait: yes
      vpc_subnet_id: "subnet-e7780dab"
      region: "ap-south-1" //asia-pecific-south region of AWS
      state: present
      assign_public_ip: yes
      group_id: "sg-0512d293cfb4af6e4" //security group 
      aws_access_key: "{{ myuser }}"
      aws_secret_key: "{{ mypass }}"
     register: ec2   

- debug:
       var: ec2.instances[0].public_ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ditbpql7y6httue5osr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ditbpql7y6httue5osr.png" width="800" height="477"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;ansible-playbook ec2.yml --ask-vault-pass🚀&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Ansible’s register keyword allows the user to capture a task’s output and store it as a variable that can be reused in different scenarios. The variable will contain the value returned by the task.&lt;/p&gt;

&lt;p&gt;Here, the registered variable is used to print the public IP address of the instance from the facts Ansible gathers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9hdsdr7m4udj07svy9c4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9hdsdr7m4udj07svy9c4.png" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;→STEP-2)&lt;/strong&gt;
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;The instance has been launched! &lt;strong&gt;Next what?&lt;/strong&gt; 🤔🤔&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We need to dump the IP address of this instance into the inventory file and carry on with the rest of the procedure!&lt;/p&gt;

&lt;p&gt;Wondering whether I will simply write the IP in the host file🤭?? &lt;strong&gt;NAH! Not manually&lt;/strong&gt; 🤫🤫&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzm17j7yremzoq8lo5gny.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzm17j7yremzoq8lo5gny.jpeg" width="339" height="149"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  AND THIS IS WHERE 🔥DYNAMIC INVENTORY🔥 COMES TO PLAY:
&lt;/h4&gt;

&lt;p&gt;→An Ansible dynamic inventory is a script that works against a provider’s external API and pulls the information (facts) about the hosts running there.&lt;/p&gt;

&lt;p&gt;→The gathered facts are dynamically dumped into the host file, and we can then group these hosts according to requirement.&lt;/p&gt;

&lt;p&gt;→Copy the following files into the controller node to enable dynamic inventory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.py](https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.py)

[https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.ini](https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.ini)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
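
&lt;p&gt;→For example, one way to fetch them, assuming wget is available on the controller node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.py
wget https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.ini
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;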



&lt;p&gt;→Both files need to be executable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x ec2.py
chmod +x ec2.ini
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;→Also, for account authentication, pass the AWS_ACCESS_KEY and AWS_SECRET_KEY in the ec2.ini file. The script will then contact AWS on our behalf and retrieve the information of the EC2 instance.&lt;/p&gt;
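
&lt;p&gt;→Alternatively, since ec2.py talks to AWS through boto, the credentials can also be exported as environment variables (the values below are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export AWS_ACCESS_KEY_ID='AKIAxxxxxxxx'          # placeholder value
export AWS_SECRET_ACCESS_KEY='xxxxxxxxxxxxxxxx'  # placeholder value
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;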

&lt;p&gt;→Also, point the inventory entry in the ANSIBLE.CFG configuration file at these files.&lt;/p&gt;
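
&lt;p&gt;→A minimal sketch of that ansible.cfg entry (the path is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[defaults]
inventory = /etc/ansible/ec2.py   # path to the dynamic inventory script
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;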

&lt;p&gt;→Now, to see the output, run ./ec2.py --list&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0hvghl0i42k5rgiyv7v.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0hvghl0i42k5rgiyv7v.jpeg" width="800" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fteobpk02i5lhjwuihx69.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fteobpk02i5lhjwuihx69.png" width="800" height="555"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→Also, run ansible all --list-hosts to see the available hosts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqpv0tw4mf1dgk3ymqi9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqpv0tw4mf1dgk3ymqi9.png" width="800" height="479"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Host added dynamically😃&lt;/em&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;→STEP-3)&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;→With a defined host, the final step is to deploy our application! In this example, I am deploying an Apache webserver.&lt;/p&gt;

&lt;p&gt;→Before that, enter the key file in the ansible configuration file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private_key_file= /root/path-to-private-key 🔒
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This key file also needs restricted permissions: chmod 600 key_name.pem (read/write for the owner only, which is what SSH expects).&lt;/p&gt;

&lt;h4&gt;
  
  
  🙌Out of the box yet important information about file access:
&lt;/h4&gt;

&lt;p&gt;These numbers represent the permissions given to a file or a directory.&lt;br&gt;&lt;br&gt;
The format is: chmod XYZ&lt;br&gt;&lt;br&gt;
X is the owner (user) permissions&lt;br&gt;&lt;br&gt;
Y is the group permissions&lt;br&gt;&lt;br&gt;
Z is the permissions for other users&lt;br&gt;&lt;br&gt;
Now let’s see what these numbers mean. There are three types of permissions: read (r), write (w), and execute (x)&lt;br&gt;&lt;br&gt;
Each digit denotes a combination of permissions:&lt;br&gt;&lt;br&gt;
0 = no permission (---)&lt;br&gt;&lt;br&gt;
1 = execute only (--x)&lt;br&gt;&lt;br&gt;
2 = write only (-w-)&lt;br&gt;&lt;br&gt;
3 = write and execute (-wx)&lt;br&gt;&lt;br&gt;
4 = read only (r--)&lt;br&gt;&lt;br&gt;
5 = read and execute (r-x)&lt;br&gt;&lt;br&gt;
6 = read and write (rw-)&lt;br&gt;&lt;br&gt;
7 = all (rwx)&lt;br&gt;&lt;br&gt;
So chmod 777 means the rwx permission is given to all three (owner, group, and others); similarly, you can work out the meaning of any other number.&lt;/p&gt;
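
&lt;p&gt;For instance, here is how the 600 used above decodes (the ls output line is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 6 = read + write (rw-) for the owner, 0 = nothing for group and others
chmod 600 key_name.pem
ls -l key_name.pem    # -rw-------. 1 root root 1675 ... key_name.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;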

&lt;p&gt;Now, run a playbook that downloads the required packages onto the instance and copies the code into the document root of the webserver.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- hosts: all
  become: yes
  remote_user: ec2-user //login as this user in the instance
  tasks:

- name: Download Httpd and Git in remote system
        package:
         name:
           - httpd
           - git
         state: present

- name: Clone code from GitHub
        git:
         repo: '[https://username:password@github.com/poojan1812/Ansible.git']
         dest: "/var/www/html/"

- name: start the services of httpd
        service:
         name: "httpd"
         state: restarted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7pk2f8z7dlak6cepyp29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7pk2f8z7dlak6cepyp29.png" width="800" height="612"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;ansible-playbook server.yml&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;→The output of this playbook -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmp46hwwqj52vlzsuga4n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmp46hwwqj52vlzsuga4n.png" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2eovs1yl9dh68v5s1q0q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2eovs1yl9dh68v5s1q0q.png" width="800" height="612"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Service started and code copied from GitHub to the doc. root&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  FINAL OUTPUT-
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4t2q3uni05i9y25vkm5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4t2q3uni05i9y25vkm5.png" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;THAT’S IT&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;→🤗All steps completed and the problem statement satisfied successfully!!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;THANKS A LOT FOR READING THIS SO ATTENTIVELY&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I’ll be grateful to have connections like you on &lt;a href="http://www.linkedin.com/in/poojan-mehta-17a514191" rel="noopener noreferrer"&gt;&lt;strong&gt;LinkedIn&lt;/strong&gt;&lt;/a&gt; 🧑‍💼&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faszq74d6qy9v20a2reli.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faszq74d6qy9v20a2reli.jpeg" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;




</description>
      <category>aws</category>
      <category>ansible</category>
      <category>linux</category>
      <category>webserver</category>
    </item>
    <item>
      <title>LINUX AUTOMATION WITH ANSIBLE</title>
      <dc:creator>Poojan Mehta</dc:creator>
      <pubDate>Sat, 05 Sep 2020 11:13:53 +0000</pubDate>
      <link>https://dev.to/poojan18/linux-automation-with-ansible-2h64</link>
      <guid>https://dev.to/poojan18/linux-automation-with-ansible-2h64</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9he43fstn1cpn1ghcfnz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9he43fstn1cpn1ghcfnz.png" width="800" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  This article is a demonstration of Automation using ANSIBLE
&lt;/h3&gt;

&lt;h4&gt;
  
  
  ~WHAT IS ANSIBLE? 🤔🤔
&lt;/h4&gt;

&lt;p&gt;→Ansible is an infrastructure automation tool from &lt;strong&gt;REDHAT.&lt;/strong&gt; It is widely used for configuring systems and setting up deployment environments.&lt;/p&gt;

&lt;p&gt;→Ansible is an abstraction layer that covers all operating systems under its umbrella, which helps in configuring large heterogeneous environments. It is built on top of PYTHON 🐍.&lt;/p&gt;

&lt;p&gt;→Ansible has &lt;strong&gt;Modules&lt;/strong&gt; that enable performing various tasks on a system. The power⚡ of Ansible is &lt;strong&gt;Playbooks.&lt;/strong&gt; Playbooks are nothing but YAML files containing modules as per user requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problem Statement For This Hands-On-&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Write an Ansible PlayBook that does the following operations in the managed nodes:&lt;/p&gt;

&lt;p&gt;🔹 Configure Docker&lt;/p&gt;

&lt;p&gt;🔹 Start and enable Docker services&lt;/p&gt;

&lt;p&gt;🔹 Pull the httpd server image from the Docker Hub&lt;/p&gt;

&lt;p&gt;🔹 Copy the html code in /var/www/html directory and start the web server&lt;/p&gt;

&lt;p&gt;🔹 Run the httpd container and expose it to the public&lt;/p&gt;

&lt;p&gt;→The system from which the user operates Ansible is called the &lt;strong&gt;Controller Node&lt;/strong&gt;, and all working nodes under the controller node are called &lt;strong&gt;Managed Nodes.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;→Here I have taken only one Managed node. The file containing information about the managed nodes is called the &lt;strong&gt;Inventory.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- hosts: DockerSlave
  vars_files:
    - secret.yml //will discuss about secret later in this article
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
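
&lt;p&gt;For reference, a minimal sketch of what such an inventory file can look like (the IP is illustrative; the group name matches the play above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[DockerSlave]
192.168.1.10  ansible_user=root   # the managed node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;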



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmvyvo4pyxymcc1kn2ce.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmvyvo4pyxymcc1kn2ce.jpeg" width="800" height="593"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Managed node and inventory file&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;→Mention the inventory file in the Ansible configuration file&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcq21ec1xw0gfh0zga8e5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcq21ec1xw0gfh0zga8e5.png" width="800" height="649"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  →&lt;strong&gt;STEP -1)&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Configure the yum repository for Docker on the slave system
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: yum configuration in slave system
    yum_repository:
      name: DockerRepo
      baseurl: [https://download.docker.com/linux/centos/7/x86\_64/stable/](https://download.docker.com/linux/centos/7/x86_64/stable/)
      description: docker repo
      enabled: true
      gpgcheck: no

- name: Docker installation
    command: "yum install docker-ce --nobest -y"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6atk3srr2shy7jvdhnny.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6atk3srr2shy7jvdhnny.png" width="800" height="205"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Yum repo configured in slave node&lt;/em&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;→STEP-2)&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Install &lt;strong&gt;Docker&lt;/strong&gt; 🐋 and start the services.&lt;/li&gt;
&lt;li&gt;As Ansible is built on top of Python, it is required to install the Docker SDK for Python (docker-py).&lt;/li&gt;
&lt;li&gt;Ansible’s command and service modules are used in this step.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Docker installation
    command: "yum install docker-ce --nobest -y"

- name: Start docker services
    service:
     name: "docker"
     state: started
     enabled: yes

- name: Install python36 package
    package:
     name: python36
     state: present

- name: Install python library for docker
    pip:
     name: docker-py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4frv419pv1kiyalyc8ai.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4frv419pv1kiyalyc8ai.png" width="800" height="114"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Docker installed&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foibmw3hk6hgb0m6tyr30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foibmw3hk6hgb0m6tyr30.png" width="800" height="649"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Docker services started&lt;/em&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;→STEP-3)&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;→Here, the &lt;strong&gt;Httpd&lt;/strong&gt; web server image is pulled from &lt;a href="http://hub.docker.com" rel="noopener noreferrer"&gt;DockerHub&lt;/a&gt; 🐋&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Pull docker image
    docker_image:
     name: httpd:latest
     source: pull
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  &lt;strong&gt;→STEP-4)&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;→In this step, the git module is used to clone the code from GitHub. One way is to pass the credentials in the URL to get authenticated with GitHub. But it’s not a good practice to put them as plain text in the playbook😕.&lt;/p&gt;

&lt;p&gt;→Instead, use the Ansible Vault concept to encrypt the secret information and pass it into the playbook as parameters. I have created one Vault that contains my GitHub credentials.&lt;/p&gt;
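
&lt;p&gt;→A minimal sketch of creating such a vault (the variable names match the playbook below; the values are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-vault create secret.yml    # prompts for a vault password
# contents (stored encrypted on disk):
#   gituser: your-github-username
#   gitpass: your-github-token
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;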

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fitb2ss8iz3a8lrg92uwk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fitb2ss8iz3a8lrg92uwk.png" width="800" height="364"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Encrypted vault🤗&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Clone code from GitHub
    git:
     repo: '[@github](https://{{gituser}}:{{gitpass}}&amp;lt;a%20href=).com/poojan1812/hybrid-cloud.git'"&amp;gt;https://{{gituser}}:{{gitpass}}[@github](http://twitter.com/github).com/poojan1812/hybrid-cloud.git'
     dest: "/root/code_html/"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This will clone the repository into the destination folder on the slave system.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0n2wibcyyn2ma3tq70xm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0n2wibcyyn2ma3tq70xm.png" width="792" height="122"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Code cloned in slave node&lt;/em&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;→STEP-5)&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;The last step will launch a &lt;strong&gt;docker container&lt;/strong&gt; and expose its port to the public world.&lt;/li&gt;
&lt;li&gt;The docker_container module is used here to launch and manage the container.&lt;/li&gt;
&lt;li&gt;The HTML code cloned onto the managed node is mounted into the document root of the web server.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Launch container
    docker_container:
     name: img_httpd
     image: httpd:latest
     state: started
     exposed_ports:
      - "80"
     ports:
      - "2025:80"
     volumes:
      - /root/code_html:/usr/local/apache2/htdocs/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqa564b0xj4qayb6vbdj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqa564b0xj4qayb6vbdj.png" width="800" height="164"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Launched container🚀&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Now, run the following command to apply the playbook on the managed node
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook &amp;lt;file-name&amp;gt;.yml --ask-vault-pass 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxtfqv34zxaph6meqnl4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxtfqv34zxaph6meqnl4.png" width="800" height="518"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxe9wv4i5u6k0rzcvu1bf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxe9wv4i5u6k0rzcvu1bf.png" width="800" height="535"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;DONE🤩🤩🤩&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;FINAL OUTPUT-&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl &amp;lt;ip-container&amp;gt;/index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faznjzwsrbvxkxm3qedx0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faznjzwsrbvxkxm3qedx0.png" width="800" height="645"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6rb2za5ffuisum00zt7t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6rb2za5ffuisum00zt7t.png" width="410" height="157"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Thank you guys for taking the time out to read my rant lol!🥰😅
&lt;/h4&gt;

&lt;p&gt;I’ll be grateful to have connections like you on &lt;a href="http://www.linkedin.com/in/poojan-mehta-17a514191" rel="noopener noreferrer"&gt;&lt;strong&gt;LinkedIn&lt;/strong&gt;&lt;/a&gt; 🧑‍💼&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3zg4fdkuarmgm3jn1b65.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3zg4fdkuarmgm3jn1b65.jpeg" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>git</category>
      <category>automation</category>
      <category>linux</category>
      <category>docker</category>
    </item>
    <item>
      <title>DevOps Automation 2</title>
      <dc:creator>Poojan Mehta</dc:creator>
      <pubDate>Wed, 12 Aug 2020 05:43:58 +0000</pubDate>
      <link>https://dev.to/poojan18/devops-automation-2-4o4b</link>
      <guid>https://dev.to/poojan18/devops-automation-2-4o4b</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbrv5zplgy37gt8lvm12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbrv5zplgy37gt8lvm12.png" width="700" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→WHAT IS THE PROJECT?&lt;/p&gt;

&lt;p&gt;TOOLS USED — GIT, JENKINS, DOCKER&lt;/p&gt;

&lt;p&gt;Problem Statement-&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;JOB1:&lt;/strong&gt; Pull the GitHub repo automatically when some developer commits.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JOB2:&lt;/strong&gt; By looking at the code or program file, Jenkins should automatically start a container from the image that has the respective language interpreter installed to deploy the code (e.g., if the code is PHP, then Jenkins should start the container that already has PHP installed)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JOB3:&lt;/strong&gt; Test your application if it is working or not.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JOB4:&lt;/strong&gt; If the application is not working, then send an email to the developer with an error message, and if it is running fine, send a success message.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JOB5:&lt;/strong&gt; Monitor the container where the application is hosted; if it fails due to any reason, this job will automatically start the container again.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fahd49c8hod03rq6hbmw9.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fahd49c8hod03rq6hbmw9.jpeg" width="300" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, create Dockerfiles for 3 different image requirements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Dockerfile for Jenkins image&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Forc9gh1d7i9zc5ygegfk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Forc9gh1d7i9zc5ygegfk.png" width="800" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;~Here, a CentOS image is taken as the base, and git, python, sudo, net-tools, and similar essential tools are downloaded.&lt;/p&gt;

&lt;p&gt;~Also, install Jenkins and grant sudo powers to the jenkins user, and add the command to start the Jenkins service automatically as soon as the container is launched.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Dockerfile for Html image&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fes8tuduk95otuq22n4pg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fes8tuduk95otuq22n4pg.png" width="800" height="609"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;~Here also, a CentOS image is taken, the important packages are downloaded, and the HTTPD service starts after the container is launched.&lt;/p&gt;
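
&lt;p&gt;~The screenshot above carries the actual file; a rough sketch of such an HTML image, with the package list assumed, could look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# illustrative Dockerfile for the HTML image
FROM centos:7
RUN yum install -y httpd net-tools
EXPOSE 80
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;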

&lt;p&gt;&lt;strong&gt;3. Dockerfile for PHP image&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lxqancpl4oj2n2laged.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lxqancpl4oj2n2laged.png" width="800" height="606"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;~It uses the same base image and packages as the HTML image, with the PHP package added on top.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I’ve passed the -y option in every yum command to skip the user prompt while downloading.&lt;/li&gt;
&lt;li&gt;The final output of all images after the build —&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsuxv52xq8q5kfffcl0c5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsuxv52xq8q5kfffcl0c5.png" width="800" height="606"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo50po8s4vkaelqa64qnl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo50po8s4vkaelqa64qnl.png" width="800" height="604"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Jenkins_task, html_img, and php_img are dockerfile created and Jenkins service is also started automatically&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;JOB1-&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;→This job will pull the code from GitHub and store it in the specified folder on the host system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqkw6clfmu9pkaj8anqsf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqkw6clfmu9pkaj8anqsf.png" width="800" height="614"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyee4f37kxa90injlvuo8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyee4f37kxa90injlvuo8.png" width="800" height="606"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcmykhhfwar1nnoalszvy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcmykhhfwar1nnoalszvy.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4bek0mgrklphj7g9us4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4bek0mgrklphj7g9us4.png" width="800" height="612"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Here, any already running HTML or PHP container is stopped, and the code is copied into the folder named data_folder.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Poll SCM keeps checking the Git repository on a schedule and pulls the code soon after it is pushed to GitHub.&lt;/p&gt;
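
&lt;p&gt;The Poll SCM schedule uses cron syntax; for instance, an aggressive but illustrative setting that checks every minute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Jenkins → Job → Build Triggers → Poll SCM → Schedule
* * * * *
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;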

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5rbtu2ugj7sqp5kgfg3z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5rbtu2ugj7sqp5kgfg3z.png" width="800" height="614"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Job1 success and job 2 triggered&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;JOB2-&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;→This job will detect the code and will launch a docker container accordingly.&lt;/p&gt;

&lt;p&gt;→If HTML code is pulled from Git, the html_img image is used; if PHP code is pulled, the php_img image is used.&lt;/p&gt;
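
&lt;p&gt;A minimal sketch of such an “Execute shell” step (the container name, port, and folder are illustrative; html_img and php_img are the images built above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# pick the image based on the file extension found in the copied code
if ls /data_folder | grep -q '\.php$'; then
  docker run -dit --name webserver -p 8081:80 -v /data_folder:/var/www/html php_img
elif ls /data_folder | grep -q '\.html$'; then
  docker run -dit --name webserver -p 8081:80 -v /data_folder:/var/www/html html_img
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;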

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fverukbbrljnlnavuync0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fverukbbrljnlnavuync0.png" width="800" height="597"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jbgpupd436koblge6wt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jbgpupd436koblge6wt.png" width="800" height="604"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4cjyh5mhykh0kfdart9a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4cjyh5mhykh0kfdart9a.png" width="800" height="618"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhz5s3hiaq2yoqvq17cug.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhz5s3hiaq2yoqvq17cug.png" width="800" height="619"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This job is chained to job1. So, it’ll automatically run just after a successful build of job1.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93hu09bzt6vjth2muw2r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93hu09bzt6vjth2muw2r.png" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwhx8vwtl5ykk1ht3tac3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwhx8vwtl5ykk1ht3tac3.png" width="800" height="604"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Job1 pulled HTML data, so a docker container for HTML is launched&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4sfm9514wwzofrch4z3r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4sfm9514wwzofrch4z3r.png" width="800" height="610"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;JOB3-&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;→This is the main job!! It is chained to the upstream job2 and will trigger after a successful build of job2.&lt;/p&gt;

&lt;p&gt;→Here, the HTTP status code returned by the webserver is used in the job’s condition.&lt;/p&gt;

&lt;p&gt;→The status code is stored in a variable by capturing the output of the curl command.&lt;/p&gt;
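
&lt;p&gt;A sketch of that check (the URL placeholder stands for the container’s address):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# -s silences progress, -o discards the body, -w prints only the status code
status=$(curl -s -o /dev/null -w "%{http_code}" http://&amp;lt;container-ip&amp;gt;:8081/index.html)
if [ "$status" -eq 200 ]; then
  exit 0    # success: job3 passes
else
  exit 1    # failure: triggers the error mail
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;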

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv38bdsiqyoibya0xqnki.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv38bdsiqyoibya0xqnki.png" width="800" height="582"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tjqkef1zyn9tgl3gkjo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tjqkef1zyn9tgl3gkjo.png" width="800" height="622"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo92lxup80hyc6areu5r3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo92lxup80hyc6areu5r3.png" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhnsjw1epzoz65f9ad39u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhnsjw1epzoz65f9ad39u.png" width="800" height="606"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Also, the Jenkins mail plugin can be used to send customized mail notifications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fciwf78bn2as0hf4nuk52.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fciwf78bn2as0hf4nuk52.jpeg" width="799" height="622"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Configure Jenkins as a mail client. Jenkins will contact the Google SMTP server.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;→Jenkins is a third-party application from a Google account’s point of view. Create a 16-character app password and provide it to Jenkins.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add POST BUILD ACTION and select MAIL NOTIFICATION and add the recipient mail address.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhjrfc0amr0tr1gewgug.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhjrfc0amr0tr1gewgug.png" width="800" height="601"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc7ho7jbmr6bjq44utbgg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc7ho7jbmr6bjq44utbgg.png" width="800" height="553"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv5k6ljgm2c8f9g95fu24.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv5k6ljgm2c8f9g95fu24.png" width="699" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;JOB4-&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;→This is an independent job that continuously monitors the Jenkins container and launches a new container if the running one fails for any reason!!&lt;/p&gt;
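
&lt;p&gt;In spirit, that monitoring step boils down to a shell check like this sketch (the container and image names are assumed for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# relaunch the container if it is no longer in the running list
if ! docker ps --format '{{.Names}}' | grep -qx 'jenkins_task'; then
  docker start jenkins_task || docker run -dit --name jenkins_task -p 8080:8080 jenkins_task
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;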

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9dvvnoj752vgkmoy56a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9dvvnoj752vgkmoy56a.png" width="800" height="607"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbv4vf8sesvemcgpyvfn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbv4vf8sesvemcgpyvfn.png" width="800" height="597"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;THAT’S IT !! AUTOMATIC JENKINS PIPELINE SET UP IS DONE.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;FUTURE SCOPE — INTEGRATING THIS PIPELINE WITH MULTIPLE DOCKERFILES AND LAUNCHING DIFFERENT OS IMAGES ON THE FLY, AS REQUIRED!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjv60qfyiabjqjpm28n9u.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjv60qfyiabjqjpm28n9u.jpeg" width="304" height="166"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitHub URL for reference — &lt;a href="https://github.com/poojan182/jenkins-pipeline" rel="noopener noreferrer"&gt;https://github.com/poojan182/jenkins-pipeline&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;THANKS A LOT FOR YOUR ATTENTION FOR SO LONG!!&lt;/p&gt;

</description>
      <category>automation</category>
      <category>docker</category>
      <category>git</category>
      <category>pipeline</category>
    </item>
    <item>
      <title>Face Recognition Using Transfer Learning</title>
      <dc:creator>Poojan Mehta</dc:creator>
      <pubDate>Sat, 06 Jun 2020 06:56:00 +0000</pubDate>
      <link>https://dev.to/poojan18/face-recognition-using-transfer-learning-1fmc</link>
      <guid>https://dev.to/poojan18/face-recognition-using-transfer-learning-1fmc</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbvkq37pvxwdllppynvq.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbvkq37pvxwdllppynvq.jpeg" width="500" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→First of all, many thanks to MR.VIMAL DAGA SIR for mentoring and training in Machine Learning from a very basic to an advanced level&lt;/p&gt;

&lt;p&gt;→I have completed Face Recognition Using Transfer Learning&lt;/p&gt;

&lt;p&gt;→Environment requirements —&lt;/p&gt;

&lt;p&gt;Keras, TensorFlow, Numpy, Jupyter, cv2&lt;/p&gt;

&lt;p&gt;→In this, I have used the pre-trained VGG16 model and, on top of it, implemented face recognition over my own dataset&lt;/p&gt;

&lt;p&gt;→I have taken 2 faces, Virat Kohli and Rohit Sharma, and collected some images of each.&lt;/p&gt;

&lt;p&gt;→Training and testing data are both required; an 80:20 ratio of training to testing data is suggested.&lt;/p&gt;
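
&lt;p&gt;→The core transfer-learning idea, as a minimal Keras sketch (the layer sizes and image shape here are illustrative, not the exact code from the screenshots below):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

# load VGG16 pre-trained on ImageNet, without its classifier head
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False   # freeze the convolutional layers

# attach a new head for our 2 classes (Virat Kohli, Rohit Sharma)
x = Flatten()(base.output)
x = Dense(256, activation="relu")(x)
out = Dense(2, activation="softmax")(x)

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;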

&lt;p&gt;→Screenshots of whole code and workflow&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fii9szpplfxlwfnduocr6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fii9szpplfxlwfnduocr6.png" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk4b1lhs901var3z4hrmg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk4b1lhs901var3z4hrmg.png" width="800" height="479"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;dataset&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;→snapshots of code&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6m407o6778f2ss33za91.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6m407o6778f2ss33za91.png" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7951d5vnnn96gqc9exx3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7951d5vnnn96gqc9exx3.png" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5nvbmrfbseciy2jvog11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5nvbmrfbseciy2jvog11.png" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fygmz9ly3bmwim50lc7ys.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fygmz9ly3bmwim50lc7ys.png" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fluk4kgdgy8q27seyzvof.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fluk4kgdgy8q27seyzvof.png" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29zc5vxl2frrcpi848wy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29zc5vxl2frrcpi848wy.png" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1pizmmezftyhobaziyce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1pizmmezftyhobaziyce.png" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu10kpgymjfuww09cifuy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu10kpgymjfuww09cifuy.png" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→Still, adding more CNN layers may lead us towards higher accuracy&lt;/p&gt;

&lt;p&gt;final output-&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fty2ltwhqjja7krxmmj3g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fty2ltwhqjja7krxmmj3g.png" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitHub URL for reference- &lt;a href="https://github.com/poojan182/face-recognition" rel="noopener noreferrer"&gt;https://github.com/poojan182/face-recognition&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→Thank you for your attention!&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>transferlearning</category>
      <category>vgg16</category>
      <category>facerecognition</category>
    </item>
    <item>
      <title>Machine learning and DevOps integration</title>
      <dc:creator>Poojan Mehta</dc:creator>
      <pubDate>Sat, 30 May 2020 11:42:15 +0000</pubDate>
      <link>https://dev.to/poojan18/machine-learning-and-devops-integration-155j</link>
      <guid>https://dev.to/poojan18/machine-learning-and-devops-integration-155j</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkqu772zcfomqzvx3kmwp.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkqu772zcfomqzvx3kmwp.jpeg" width="768" height="587"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→First of all, many thanks to Mr. Vimal Daga Sir for mentoring and training in Machine Learning and DevOps, from the very basics to an advanced level&lt;/p&gt;

&lt;p&gt;→Using this valuable knowledge, I have completed the task of integrating Machine Learning with DevOps&lt;/p&gt;

&lt;p&gt;→Tools used for DevOps — Git, Jenkins, Docker&lt;/p&gt;

&lt;p&gt;→Concept used for Machine Learning- Deep learning&lt;/p&gt;

&lt;p&gt;SYSTEM CONFIGURATION/REQUIREMENTS-&lt;/p&gt;

&lt;p&gt;→Base OS: Windows; virtual OS: RedHat Linux&lt;/p&gt;

&lt;p&gt;→Jenkins installed in your RedHat system (including GitHub plugin)&lt;/p&gt;

&lt;p&gt;→Docker installed, with an image that can run Python code&lt;/p&gt;

&lt;p&gt;→Git installed in Windows and Redhat os&lt;/p&gt;

&lt;p&gt;Problem statement- While creating a deep learning model, the process becomes time-consuming and largely manual. And we all know, DevOps is used when we need automation!!&lt;/p&gt;

&lt;p&gt;→Here, deep learning will keep on creating the model and DevOps will keep on monitoring it; whenever the model's accuracy falls below a certain level, Jenkins will automatically enhance the DL code and increase the accuracy on its own&lt;/p&gt;

&lt;p&gt;→STEP 1 →Create the deep learning code and push it to GitHub with a .py file extension&lt;/p&gt;

&lt;p&gt;→I have used the CIFAR-10 dataset&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhl52n4a9ejc2lentifkf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhl52n4a9ejc2lentifkf.png" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68mit715boowhtfsgq87.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68mit715boowhtfsgq87.png" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyfuczkjb0h77ivnvl0np.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyfuczkjb0h77ivnvl0np.png" width="800" height="430"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;deep-learning code&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Steps to get your code into GitHub (a consolidated shell sketch follows the list)-&lt;/p&gt;

&lt;p&gt;1) git clone  — to clone the repository to your local system&lt;/p&gt;

&lt;p&gt;2) ml_devops_integration1_py.py — the code mentioned above&lt;/p&gt;

&lt;p&gt;3) git add  — to add the file into the staging area&lt;/p&gt;

&lt;p&gt;4) git commit -m “message” — to make your changes permanent&lt;/p&gt;

&lt;p&gt;5) git push origin — to push your changes into the remote repository&lt;/p&gt;
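
&lt;p&gt;Putting those steps together, here is a minimal shell sketch (the repository URL and commit message are placeholders, not the actual ones):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# clone the repository to the local system (placeholder URL)
git clone https://github.com/YOUR_USER/YOUR_REPO.git
cd YOUR_REPO

# add the deep learning code mentioned above to the staging area
git add ml_devops_integration1_py.py

# make the change permanent with a message
git commit -m "add deep learning model code"

# push the change to the remote repository
git push origin master
&lt;/code&gt;&lt;/pre&gt;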

&lt;p&gt;NOW HEADING TOWARDS CREATING A DOCKER IMAGE WHICH IS COMPATIBLE WITH PYTHON CODE&lt;/p&gt;

&lt;p&gt;CREATE A DOCKERFILE WITH THE FOLLOWING&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7c8n25ik89psa5hr4ec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7c8n25ik89psa5hr4ec.png" width="800" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the docker build command after creating the Dockerfile&lt;/p&gt;
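
&lt;p&gt;The screenshot above shows the actual Dockerfile; the block below is only a rough sketch of the same idea (the CentOS base and package names are assumptions, not the exact file), followed by the build command that produces the &lt;code&gt;python_img:v1&lt;/code&gt; image the Jenkins jobs use later:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# write a minimal Dockerfile (assumed sketch: CentOS 7 base with Python 3 and Keras)
cat &amp;gt; Dockerfile &amp;lt;&amp;lt;'EOF'
FROM centos:7
RUN yum install -y python3 python3-pip
RUN pip3 install --upgrade pip
RUN pip3 install numpy keras tensorflow
CMD ["/bin/bash"]
EOF

# build the image with the tag referenced by the jobs below
sudo docker build -t python_img:v1 .
&lt;/code&gt;&lt;/pre&gt;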

&lt;p&gt;→NOW THERE IS A PIPELINE OF 6 JENKINS JOBS&lt;/p&gt;

&lt;p&gt;JOB1&amp;gt; This job will simply copy the code from GitHub and paste it into a specific folder on the RedHat OS&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F060jxe29mmerdzhgmp9p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F060jxe29mmerdzhgmp9p.png" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcgcde9jt30ru9jxe8j56.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcgcde9jt30ru9jxe8j56.png" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1o8apnqca9uojllqvygk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1o8apnqca9uojllqvygk.png" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4k4u0kk8jg8mf2i8re80.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4k4u0kk8jg8mf2i8re80.png" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;job 1 detail&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;JOB2&amp;gt; This is a critical and important job&lt;/p&gt;

&lt;p&gt;→This job will first search the code for Keras; if it is found, it will launch a container from the Docker image built above, so that only deep learning code runs in that environment&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xwe1bphs3gzf5e6izy1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xwe1bphs3gzf5e6izy1.png" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59xa9we3a6c0i90z7iar.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59xa9we3a6c0i90z7iar.png" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1nxwxdm73oryha0b798.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1nxwxdm73oryha0b798.png" width="800" height="410"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;job 2 detail&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Here job 1 is set as the upstream job for this one, because the code must first be fetched from GitHub&lt;/p&gt;

&lt;p&gt;Execute shell for reference-&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# proceed only if the code really contains Keras (i.e. it is a deep learning script)
if sudo grep -q Keras /MlOps_workspace/ml_devops_integration1_py.py
then
  # if a container named img1 already exists, remove it before relaunching
  if sudo docker ps -a | grep img1
  then
    sudo docker rm -f img1
    sudo docker run --name img1 -dit -v /MlOps_workspace:/Mlworkspace python_img:v1
  else
    sudo docker run --name img1 -dit -v /MlOps_workspace:/Mlworkspace python_img:v1
  fi
else
  echo "This is not a deep learning code"
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;→Here we have to mount a persistent volume so that our data survives container restarts&lt;/p&gt;

&lt;p&gt;JOB-3&amp;gt; This job will run the Python code for the initial training run&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbtsgpwspa87n31jdkif.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbtsgpwspa87n31jdkif.png" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flf5rcnnkbwnm9hauppt6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flf5rcnnkbwnm9hauppt6.png" width="800" height="411"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;job3 detail&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Execute shell for reference-&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo docker exec img1 python3 /Mlworkspace/ml_devops_integration1_py.py
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;JOB-4&amp;gt;&amp;gt; This job will check the accuracy of our initial run&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgww35g5qo00bclzcji0s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgww35g5qo00bclzcji0s.png" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3oj5ha3byldrx67m9om.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3oj5ha3byldrx67m9om.png" width="800" height="414"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;job 4 detail&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;→If accuracy is less than 95%, then Jenkins will make changes in the code, like increasing epochs and adding convolutional layers, to reach higher accuracy&lt;/p&gt;

&lt;p&gt;Execute shell for reference-&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# read the accuracy that the training run wrote to result1.txt
read=$(sudo cat /MlOps_workspace/result1.txt)

# print the accuracy as a percentage
echo "$read*100" | bc -l

if (( $(echo "$read &amp;gt; 0.95" | bc -l) ));
then
  echo "Achieved accuracy"
  # exit non-zero on purpose, so the downstream retraining job is not triggered
  exit 1
else
  cd /MlOps_workspace/
  # enhance the model: 5 more epochs and one extra convolution + pooling block
  sudo sed -i '/^epoch=.*/a epoch=epoch+5' ml_devops_integration1_py.py
  sudo sed -i "62i model.add(Conv2D(filters=32, kernel_size=3, padding='same', activation='relu', input_shape=input_shape))" ml_devops_integration1_py.py
  sudo sed -i "63i model.add(MaxPool2D(pool_size=2))" ml_devops_integration1_py.py
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;→The sed command is used to insert the extra code into our original file&lt;/p&gt;

&lt;p&gt;JOB-5&amp;gt; This job will run the edited code again and find the new accuracy&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxern82rbq6v5ejq1ktpv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxern82rbq6v5ejq1ktpv.png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yeqe04q0gb3wfrlknad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yeqe04q0gb3wfrlknad.png" width="800" height="410"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;job5 detail&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;→Job 4 is set as the upstream job for this one, and the cycle keeps running until we get 95% accuracy in our model&lt;/p&gt;

&lt;p&gt;→It will keep on adding epochs and convolutional layers until the target accuracy is achieved&lt;/p&gt;

&lt;p&gt;JOB-6&amp;gt; This is the final job of the whole process&lt;/p&gt;

&lt;p&gt;→This job will monitor the docker container&lt;/p&gt;

&lt;p&gt;→In any case, if the container goes down or fails to launch then this job will launch a new container&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihu37zljqex72rojar0f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihu37zljqex72rojar0f.png" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;
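
&lt;p&gt;A minimal sketch of what this monitoring shell step could look like, reusing the &lt;code&gt;img1&lt;/code&gt; container and &lt;code&gt;python_img:v1&lt;/code&gt; image from the earlier jobs:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# relaunch the container only if it is not in the list of running containers
if sudo docker ps | grep img1
then
  echo "container is running"
else
  # clear any stopped leftover (ignore errors), then launch a fresh container
  sudo docker rm -f img1 || true
  sudo docker run --name img1 -dit -v /MlOps_workspace:/Mlworkspace python_img:v1
fi
&lt;/code&gt;&lt;/pre&gt;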

&lt;p&gt;THAT’S IT&lt;/p&gt;

&lt;p&gt;→THE WHOLE PROCESS IS AUTOMATED UP TO A CERTAIN ACCURACY&lt;/p&gt;

&lt;p&gt;→LEARNED MANY NEW CONCEPTS THROUGH THIS TASK AND LOOKING FORWARD TO ENHANCING THIS EVEN FURTHER&lt;/p&gt;

&lt;p&gt;GitHub link for reference- &lt;a href="https://github.com/poojan182/workspace_repo" rel="noopener noreferrer"&gt;https://github.com/poojan182/workspace_repo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks for reading!! Kudos to you&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Getting Started with CI/CD</title>
      <dc:creator>Poojan Mehta</dc:creator>
      <pubDate>Thu, 07 May 2020 08:58:00 +0000</pubDate>
      <link>https://dev.to/poojan18/getting-started-with-cicd-4f7d</link>
      <guid>https://dev.to/poojan18/getting-started-with-cicd-4f7d</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbrv5zplgy37gt8lvm12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbrv5zplgy37gt8lvm12.png" width="700" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→WHAT IS THE PROBLEM STATEMENT?&lt;/p&gt;

&lt;p&gt;TOOLS USED — GIT, JENKINS, DOCKER.&lt;/p&gt;

&lt;p&gt;→Jenkins runs 3 jobs in this system&lt;/p&gt;

&lt;p&gt;job-1) Jenkins will keep monitoring and keep deploying the site to a TESTING server&lt;/p&gt;

&lt;p&gt;job-2) Jenkins will keep monitoring and keep deploying the site to the PRODUCTION server&lt;/p&gt;

&lt;p&gt;job-3) This job runs only when it is triggered by the testing team; it will merge the branches, run job 2, and finally destroy the testing server!&lt;/p&gt;

&lt;p&gt;SYSTEM CONFIGURATION/REQUIREMENTS-&lt;/p&gt;

&lt;p&gt;→Base OS: Windows; virtual OS: RedHat Linux&lt;/p&gt;

&lt;p&gt;→Jenkins installed in your RedHat system (including GitHub plugin)&lt;/p&gt;

&lt;p&gt;→Docker installed with https configuration&lt;/p&gt;

&lt;p&gt;→Git installed in Windows and Redhat os&lt;/p&gt;

&lt;p&gt;→Ngrok (optional)&lt;/p&gt;

&lt;p&gt;DETAILED DESCRIPTION OF THE WHOLE PROCESS-&lt;/p&gt;

&lt;p&gt;→ Starting with GIT&lt;/p&gt;

&lt;p&gt;| → Create a git repository on GitHub and clone it into your local system&lt;/p&gt;

&lt;p&gt;| →Create files in the local system and push them to GitHub&lt;/p&gt;

&lt;p&gt;COMMANDS USED-&lt;/p&gt;

&lt;p&gt;1) git clone  — to clone the repository to your local system&lt;/p&gt;

&lt;p&gt;2) notepad file1.txt — to create a file for our use&lt;/p&gt;

&lt;p&gt;3) git add file1.txt — to add the file into the staging area&lt;/p&gt;

&lt;p&gt;4) git commit -m “message” — to make your changes permanent&lt;/p&gt;

&lt;p&gt;5) git push origin — to push your changes into the remote repository&lt;/p&gt;

&lt;p&gt;6) git branch  — to create a branch in the local system; then make changes, commit, and push them too&lt;/p&gt;

&lt;p&gt;7) git merge — to merge the branch with the master branch (a consolidated sketch follows below)&lt;/p&gt;

&lt;p&gt;Some useful commands-&lt;/p&gt;

&lt;p&gt;1) git log — to list all versions (commits) of our files&lt;/p&gt;

&lt;p&gt;2) git status — to get tracking information for all files&lt;/p&gt;
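
&lt;p&gt;As referenced above, a consolidated sketch of the branch-and-merge flow (the branch name dev1 is a placeholder; file1.txt is the file created earlier):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# create a branch in the local system and switch to it
git branch dev1
git checkout dev1

# make changes, then commit and push the branch too
git add file1.txt
git commit -m "change made on dev1"
git push origin dev1

# merge the branch with the master branch and push the result
git checkout master
git merge dev1
git push origin master
&lt;/code&gt;&lt;/pre&gt;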

&lt;h3&gt;
  
  
  JOB 1-
&lt;/h3&gt;

&lt;p&gt;→ Job 1 of Jenkins is to take the files from GitHub and deploy them to the TESTING SERVER&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ged4ad1zmbkgsbncrcg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ged4ad1zmbkgsbncrcg.png" width="174" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ Click on “new item” to get started&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo48hz8pv8xb8wbj3m2cu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo48hz8pv8xb8wbj3m2cu.png" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→Click on “freestyle project” and add a name&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feybnfy3nmk8eb3cyauob.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feybnfy3nmk8eb3cyauob.png" width="640" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ Select GIT and then link the GitHub repository from which you want to download the files&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcjd8a26frqaxcys82jvg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcjd8a26frqaxcys82jvg.png" width="640" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→Set the build trigger to “POLL SCM” to make Jenkins watch your GitHub repository all the time&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmvckcilbpjph3z0oxg41.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmvckcilbpjph3z0oxg41.png" width="640" height="338"&gt;&lt;/a&gt;&lt;/p&gt;
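
&lt;p&gt;The POLL SCM schedule uses standard cron syntax; for example, to poll GitHub every minute:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Poll SCM schedule (cron syntax): check the repository every minute
* * * * *
&lt;/code&gt;&lt;/pre&gt;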

&lt;p&gt;→The last step is to “EXECUTE SHELL” and tell Jenkins where to put the downloaded files&lt;/p&gt;

&lt;p&gt;→In Linux, the Jenkins user does not have all the privileges to manipulate files&lt;/p&gt;

&lt;p&gt;| →Using the “SUDO” command gives Jenkins the required privileges&lt;/p&gt;

&lt;p&gt;→With this job, Jenkins will deploy the files to the testing server (here we launch a new OS only if our testing server is not already running)&lt;/p&gt;
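
&lt;p&gt;A minimal sketch of what this shell step could look like (the /testing_dir path, the testing_os container name, and the httpd image are assumptions, not the exact setup; job 2 below would be analogous with a production directory):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# copy the freshly pulled files from the Jenkins workspace to the testing web root
sudo cp -rvf * /testing_dir

# launch the testing server only if it is not already running
if sudo docker ps | grep testing_os
then
  echo "testing server already running"
else
  sudo docker run -dit --name testing_os -p 8081:80 -v /testing_dir:/usr/local/apache2/htdocs/ httpd
fi
&lt;/code&gt;&lt;/pre&gt;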

&lt;p&gt;&lt;strong&gt;JOB 2-&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;→&lt;/strong&gt; Job 2 of Jenkins is to copy the files and deploy them to the “PRODUCTION SERVER”&lt;/p&gt;

&lt;p&gt;→Create a new job and give an appropriate name&lt;/p&gt;

&lt;p&gt;→Link job 2 with GitHub&lt;/p&gt;

&lt;p&gt;→Use POLLSCM to keep monitoring GitHub&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhfr59bi2sf4ihj04iryc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhfr59bi2sf4ihj04iryc.png" width="640" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→As shown in the image, use commands which copy the files from GitHub and deploy them into the webserver&lt;/p&gt;

&lt;p&gt;→Here job 2 is completed&lt;/p&gt;

&lt;p&gt;→But job 2 has an “UPSTREAM PROJECT” configured, so it will only run once job 3 builds successfully&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;JOB 3-&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;→Job 3 is the main part of this system&lt;/p&gt;

&lt;p&gt;→Job 2 is totally dependent on job 3. This is called “JOB CHAINING”&lt;/p&gt;

&lt;p&gt;→First link your repository and store the credentials&lt;/p&gt;

&lt;p&gt;→Then specify the branch you need to monitor, and in the advanced options select “MERGE BEFORE BUILD”, so Jenkins will merge the branches and then build the job&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh6va52sfq1vn21g9liuk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh6va52sfq1vn21g9liuk.png" width="640" height="329"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→Job 3 is the most sensitive one, because if this job executes wrongly, our whole site on the production server gets harmed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95inj6wm8pxjz6mr8azn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95inj6wm8pxjz6mr8azn.png" width="640" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→In this job we use the “TRIGGER BUILDS REMOTELY” option&lt;/p&gt;

&lt;p&gt;→So it will trigger only after the token is provided manually by the testing team&lt;/p&gt;

&lt;p&gt;→As soon as the testing team passes the build and fires the trigger, job 3 will run and our site will be on the production server&lt;/p&gt;
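
&lt;p&gt;Jenkins exposes this remote trigger as a URL of the form JENKINS_URL/job/JOB_NAME/build?token=TOKEN, so the testing team can fire it with a single command (the host, job name, and token below are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# fire the remote build trigger for job 3 (all values are placeholders)
curl "http://JENKINS_HOST:8080/job/job3/build?token=TESTING_TOKEN"
&lt;/code&gt;&lt;/pre&gt;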

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcf7nlnaak8oi27ch39fs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcf7nlnaak8oi27ch39fs.png" width="640" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→After the testing team approves, we no longer need the testing server&lt;/p&gt;

&lt;p&gt;→So in EXECUTE SHELL we run the script to destroy the testing server&lt;/p&gt;
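
&lt;p&gt;Assuming the testing server runs as the container sketched in job 1, that script can be a one-liner:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# tear down the testing container now that the site is in production
sudo docker rm -f testing_os
&lt;/code&gt;&lt;/pre&gt;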

&lt;h3&gt;
  
  
  THAT’S IT!! OUR WORK IS DONE THROUGH THE AUTOMATION SYSTEM
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0l6sbsrg7h4nt61kfys.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0l6sbsrg7h4nt61kfys.png" width="640" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→FINAL OUTPUT OF OUR FILE&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo85wttp5saucy46fvpqd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo85wttp5saucy46fvpqd.png" width="640" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→OUTPUT OF THE DESTROYED TESTING SERVER!!&lt;/p&gt;

&lt;p&gt;→GITHUB URL FOR REFERENCE- &lt;a href="https://github.com/poojan1812/testing" rel="noopener noreferrer"&gt;https://github.com/poojan1812/testing&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→HAD A GOOD EXPERIENCE OF LEARNING AND BUILDING THIS PROJECT!!&lt;/p&gt;

&lt;p&gt;FUTURE SCOPE -&lt;/p&gt;

&lt;p&gt;→FURTHER, WE CAN INTEGRATE THIS SYSTEM WITH VARIOUS TOOLS AND AUTOMATION SYSTEMS.&lt;/p&gt;

&lt;p&gt;→WE CAN INTEGRATE THIS WITH MACHINE LEARNING MODELS AND LINK THE WHOLE THING WITH CLOUD COMPUTING TO CREATE A MASTERPIECE AUTOMATION SYSTEM WITH STORAGE EFFICIENCY, INCLUDING ARTIFICIAL INTELLIGENCE&lt;/p&gt;

&lt;p&gt;→THIS RAW SYSTEM CAN BE USED DIRECTLY AND EFFORTLESSLY IN THE DevOps WORLD&lt;/p&gt;

</description>
      <category>automation</category>
      <category>jenkins</category>
      <category>summerinternships</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
