<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: taqiyeddinedj</title>
    <description>The latest articles on DEV Community by taqiyeddinedj (@taqiyeddinedj).</description>
    <link>https://dev.to/taqiyeddinedj</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1026207%2F7b09da2e-e42a-4d51-906c-ab827446ae1c.png</url>
      <title>DEV Community: taqiyeddinedj</title>
      <link>https://dev.to/taqiyeddinedj</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/taqiyeddinedj"/>
    <language>en</language>
    <item>
      <title>SQUID PROXY SERVER</title>
      <dc:creator>taqiyeddinedj</dc:creator>
      <pubDate>Wed, 30 Aug 2023 14:54:36 +0000</pubDate>
      <link>https://dev.to/taqiyeddinedj/squid-proxy-server-3513</link>
      <guid>https://dev.to/taqiyeddinedj/squid-proxy-server-3513</guid>
      <description>&lt;p&gt;&lt;strong&gt;HMM, Now it is time to create a caching/proxy server using squid proxy open source project&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt install squid&lt;/code&gt;&lt;br&gt;
&lt;code&gt;sudo systemctl enable --now squid&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touk@ubuntu:/etc/squid$ tree
.
├── conf.d
│   ├── debian.conf
│   └── local.conf
├── errorpage.css
└── squid.conf

1 directory, 4 files
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For simplicity, we’ll keep our custom configuration in conf.d; as the tree above shows, I have a local.conf file under that directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touk@ubuntu:/etc/squid/conf.d$ cat local.conf
http_port 8080
cache_dir ufs /var/spool/squid 100 16 256
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, I have set up Squid to listen on port 8080 and act as my caching server.&lt;br&gt;
For now this works only for localhost; to serve other clients I still need to create some ACLs and decide which of them may use the proxy.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo systemctl restart squid&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Without an http_port directive, Squid would listen on its default port 3128; here it is bound to 8080 (the UDP sockets below are in UNCONN state).&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touk@ubuntu:/etc/squid/conf.d$ sudo ss -tunap | less -s | grep 8080
tcp    LISTEN  0       256                                             *:8080                                              *:*                                   users:(("squid",pid=117073,fd=17))
touk@ubuntu:/etc/squid/conf.d$ sudo ss -tunap | less -s | grep squid
udp    UNCONN  0       0                                         0.0.0.0:53267                                       0.0.0.0:*                                   users:(("squid",pid=117073,fd=9))
udp    UNCONN  0       0                                               *:38167                                             *:*                                   users:(("squid",pid=117073,fd=5))
udp    ESTAB   0       0                                           [::1]:48837                                         [::1]:47244                               users:(("squid",pid=117073,fd=20))
tcp    LISTEN  0       256                                             *:8080                                              *:*                                   users:(("squid",pid=117073,fd=17))
tcp    LISTEN  0       256   
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And sure enough, the access log shows the proxy at work (HTTPS requests are tunneled via CONNECT, which is why they appear as TCP_TUNNEL):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@ubuntu:/etc/squid/conf.d# tail -f /var/log/squid/access.log
1693403379.468 293103 127.0.0.1 TCP_TUNNEL/200 2459 CONNECT incoming.telemetry.mozilla.org:443 - HIER_DIRECT/34.120.208.123 -
1693403379.468 293171 127.0.0.1 TCP_TUNNEL/200 5155 CONNECT incoming.telemetry.mozilla.org:443 - HIER_DIRECT/34.120.208.123 -
1693403381.469 171233 127.0.0.1 TCP_TUNNEL/200 5919 CONNECT adservice.google.dz:443 - HIER_DIRECT/142.251.143.98 -
1693403381.469 289049 127.0.0.1 TCP_TUNNEL/200 6976 CONNECT googleads.g.doubleclick.net:443 - HIER_DIRECT/142.251.143.162 -
1693403381.469 171521 127.0.0.1 TCP_TUNNEL/200 8278 CONNECT adservice.google.com:443 - HIER_DIRECT/142.251.143.162 -
1693403390.429 175780 127.0.0.1 TCP_TUNNEL/200 18484 CONNECT encrypted-tbn0.gstatic.com:443 - HIER_DIRECT/142.251.143.206 -
1693403391.429 171131 127.0.0.1 TCP_TUNNEL/200 8341 CONNECT id.google.com:443 - HIER_DIRECT/142.251.143.131 -
1693403391.429 286197 127.0.0.1 TCP_TUNNEL/200 18109 CONNECT www.gstatic.com:443 - HIER_DIRECT/142.251.143.99 -
1693403392.429 294374 127.0.0.1 TCP_TUNNEL/200 1654131 CONNECT www.google.com:443 - HIER_DIRECT/142.251.143.100 -
1693403478.431 171006 127.0.0.1 TCP_TUNNEL/200 5449 CONNECT contile.services.mozilla.com:443 - HIER_DIRECT/34.117.237.239 -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
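&lt;p&gt;Each line of Squid's native access.log has a fixed field order: UNIX timestamp, elapsed milliseconds, client address, result-code/status, bytes, request method, URL, user, hierarchy/peer, and content type. As an illustration (not part of the original setup), a few lines of standard-library Python are enough to pull the interesting fields out of such a line:&lt;/p&gt;

```python
# Parse one line of Squid's native access.log format (whitespace-separated).
def parse_access_line(line):
    fields = line.split()
    return {
        "timestamp": float(fields[0]),  # UNIX epoch seconds with milliseconds
        "elapsed_ms": int(fields[1]),   # how long the request took
        "client": fields[2],
        "result": fields[3],            # e.g. TCP_TUNNEL/200
        "bytes": int(fields[4]),
        "method": fields[5],
        "url": fields[6],
    }

entry = parse_access_line(
    "1693403379.468 293103 127.0.0.1 TCP_TUNNEL/200 2459 "
    "CONNECT incoming.telemetry.mozilla.org:443 - HIER_DIRECT/34.120.208.123 -"
)
print(entry["result"], entry["url"])
```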



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dxVQx89M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7l5ofe0sy776x4zav46p.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dxVQx89M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7l5ofe0sy776x4zav46p.jpg" alt="Image description" width="800" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  RESTRICTING SERVER ACCESS
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@ubuntu:/etc/squid/conf.d# cat local.conf
http_port 8080
cache_dir ufs /var/spool/squid 800 16 256
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/htpasswd
auth_param basic realm proxy

acl internal src 192.168.1.0/255.255.255.0
acl authenticated proxy_auth REQUIRED


acl blocked_websites dstdomain facebook.com fb.com linux.com
http_access deny blocked_websites
http_access allow internal authenticated
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
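&lt;p&gt;The config above points &lt;code&gt;basic_ncsa_auth&lt;/code&gt; at /etc/squid/htpasswd, a file we still have to create; the usual tool for that is &lt;code&gt;htpasswd&lt;/code&gt; from apache2-utils (httpd-tools on RHEL-family systems). As a hedged stdlib sketch of what goes into that file, here is how an entry in the {SHA} scheme, one of the formats the NCSA helper accepts, is built (the user name and password are made up):&lt;/p&gt;

```python
import base64
import hashlib

# Build one htpasswd line in the {SHA} scheme: user:{SHA}base64(sha1(password)).
# (In practice you would run: htpasswd -c /etc/squid/htpasswd alice)
def htpasswd_sha1(user, password):
    digest = hashlib.sha1(password.encode()).digest()
    return "%s:{SHA}%s" % (user, base64.b64encode(digest).decode())

print(htpasswd_sha1("alice", "s3cret"))
```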



&lt;p&gt;&lt;strong&gt;Let me explain this:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The proxy listens on port 8080.&lt;/li&gt;
&lt;li&gt;The cache lives in /var/spool/squid; 800 is the cache size in megabytes, and 16 and 256 are the numbers of first- and second-level subdirectories.&lt;/li&gt;
&lt;li&gt;Then come the ACLs: first I define the hosts (the 'internal' ACL), then the action that will be applied to them
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;acl internal src 192.168.1.0/255.255.255.0
http_access allow internal
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
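&lt;p&gt;One thing to keep in mind: Squid evaluates &lt;code&gt;http_access&lt;/code&gt; rules top-down and the first match wins, so the order of the lines matters. A common pattern (sketched here, not taken verbatim from my config) is to put the deny rules first and finish with an explicit deny-all:&lt;/p&gt;

```plaintext
# Rules are checked in order; the first matching rule decides.
http_access deny blocked_websites
http_access allow internal authenticated
http_access deny all    # anything not explicitly allowed is refused
```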


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jTUgJUbA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ukxr7iq20dsarp64pt9m.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jTUgJUbA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ukxr7iq20dsarp64pt9m.jpg" alt="Image description" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With Squid you can restrict access by domain name or by regular expression.&lt;br&gt;
If the list of domains is long, you can keep them in a file (perhaps populated by a script).&lt;br&gt;
In this example, I'll block video streaming from the domains listed in a file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;acl video_streaming dstdomain "/etc/squi/streaming.list"
http_access deny video_streaming
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
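&lt;p&gt;The file referenced by the ACL simply lists one domain per line; a leading dot makes an entry match subdomains as well. The domains below are just placeholders:&lt;/p&gt;

```plaintext
# /etc/squid/streaming.list -- one domain per line
.youtube.com
.netflix.com
.twitch.tv
```

&lt;p&gt;After editing the list, &lt;code&gt;sudo squid -k reconfigure&lt;/code&gt; reloads the configuration without a full restart.&lt;/p&gt;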



&lt;p&gt;You can view the full list of ACL types on this &lt;a href="https://wiki.squid-cache.org/SquidFaq/SquidAcl"&gt;page&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>networking</category>
      <category>security</category>
    </item>
    <item>
      <title>My First CI/CD Pipeline with JENKINS!</title>
      <dc:creator>taqiyeddinedj</dc:creator>
      <pubDate>Fri, 18 Aug 2023 09:44:11 +0000</pubDate>
      <link>https://dev.to/taqiyeddinedj/my-first-cicd-pipeline-with-jenkins-58dl</link>
      <guid>https://dev.to/taqiyeddinedj/my-first-cicd-pipeline-with-jenkins-58dl</guid>
      <description>&lt;h2&gt;
  
  
  Project Scope and Tools Used:
&lt;/h2&gt;

&lt;p&gt;This project centers around a comprehensive CI/CD pipeline, showcasing the integration of popular tools that are widely utilized in the field.&lt;/p&gt;

&lt;h2&gt;
  
  
  GitHub Repository Overview:
&lt;/h2&gt;

&lt;p&gt;When you take a look at my GitHub repository, you'll notice several key files that play a crucial role in this pipeline's functionality.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;project-root/
├── templates/
│   └── index.html
├── Dockerfile
├── app.py
├── Jenkinsfile
├── deployment-service.yml
├── script.groovy
├── .gitignore
└── requirements.txt

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://github.com/taqiyeddinedj/ci-cd_project"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The repository contains files such as 'app.py,' which is a Flask web application."&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Image Creation:
&lt;/h2&gt;

&lt;p&gt;To build the application image, I leverage a Dockerfile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.7-alpine
COPY . /app
WORKDIR /app 
RUN pip install flask
CMD ["python", "app.py"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
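&lt;p&gt;The Dockerfile expects an app.py in the build context. The real application lives in the repository; as a hedged stand-in, a minimal Flask app of that shape could look like this (the route, the message, and the port are assumptions; 8080 matches the containerPort in the Kubernetes manifest used later):&lt;/p&gt;

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from the CI/CD pipeline!"

# The Dockerfile's CMD ["python", "app.py"] would start the server with
# something like: app.run(host="0.0.0.0", port=8080)
```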



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y0vyqvLR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gdgre7455wu87byeuuvg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y0vyqvLR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gdgre7455wu87byeuuvg.png" alt="Image description" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check my Docker Hub repo:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://hub.docker.com/repository/docker/taqiyeddinedj/my-repo"&gt;taqiyeddinedj/my-repo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the deployment stage of our CI/CD pipeline, I've crafted a single Kubernetes manifest, &lt;strong&gt;&lt;code&gt;deployment-service.yml&lt;/code&gt;&lt;/strong&gt;, which holds both objects:&lt;/p&gt;

&lt;p&gt;a Deployment, which defines the specifications for our application's pods, and a Service, which defines how our application can be accessed&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  namespace: default 
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 2
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: taqiyeddinedj/my-repo:webapp-2.0
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m" 
        ports:
        - containerPort: 8080

---

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  namespace: default
spec:
  type: NodePort 
  selector:
    app: myapp
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Jenkins Setup and Building Triggers:
&lt;/h2&gt;

&lt;p&gt;For continuous integration, I've set up Jenkins on a separate local machine using a Docker container.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sREq0PN3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zss2lzhwylqyu60wegzu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sREq0PN3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zss2lzhwylqyu60wegzu.png" alt="Image description" width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Inside the Jenkins container, I've included the Docker runtime, enabling it to build Docker images directly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vTd3GFW0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ci64hyg2q4j8ykknskeo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vTd3GFW0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ci64hyg2q4j8ykknskeo.png" alt="Image description" width="800" height="84"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using a webhook, any push to the repository automatically triggers the build process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pipeline Stages:
&lt;/h2&gt;

&lt;p&gt;The pipeline consists of several distinct stages, each serving a specific purpose.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0sta02cg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e381lochqsyjhjzrdaa2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0sta02cg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e381lochqsyjhjzrdaa2.png" alt="Image description" width="800" height="263"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These stages include initialization, testing (which identifies the active branch), building and pushing to Docker Hub, and the final deployment to a Kubernetes cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Jenkins File and Groovy Syntax:
&lt;/h2&gt;

&lt;p&gt;The pipeline is orchestrated using a Jenkinsfile, written in Groovy syntax.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def buildDockerImage() {
    echo "Building the docker image...."
    withCredentials([usernamePassword(credentialsId:'dockr-hub-repo', passwordVariable: 'PASS', usernameVariable: 'USER')]){
        sh "docker build -t taqiyeddinedj/my-repo:webapp-2.0 ."
        sh " echo $PASS | docker login -u $USER --password-stdin"
        sh "docker push taqiyeddinedj/my-repo:webapp-2.0"
    }
}

def deploytok8s() {
    echo "Deploying the application to the Kubernetes cluster"
    kubernetesDeploy (configs: 'deployment-service.yml', kubeconfigId: 'kubernetes')
}

return this
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setting up Your Own Kubernetes Cluster:
&lt;/h2&gt;

&lt;p&gt;Establishing my personal Kubernetes cluster was quite challenging, but I made it work.&lt;br&gt;
  Troubleshooting was a significant aspect of getting the cluster operational.&lt;/p&gt;
&lt;h2&gt;
  
  
  Integration of Kubernetes with Jenkins:
&lt;/h2&gt;

&lt;p&gt;Connecting Kubernetes with Jenkins was a critical step. I discovered a helpful plugin on Stack Overflow, conveniently provided by the Jenkins community.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stackoverflow.com/questions/71084850/jenkins-pipeline-to-deploy-on-kubernetes#:~:text=Download%20Kubernetes%20Continuous%20Plugin%201.0.0%20version%20from%20https%3A%2F%2Fupdates.jenkins.io%2Fdownload%2Fplugins%2Fkubernetes-cd%2F1.0.0%2Fkubernetes-cd.hpi,%22Deploy%22%20button%20as%20shown%20below%3A%20Then%20run%20manually%3A"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An obstacle I encountered was a certificate signing issue, likely related to port forwarding. This led me to seek a cluster accessible from the public network.&lt;/p&gt;
&lt;h2&gt;
  
  
  Transition to Azure and Deploying a Cluster:
&lt;/h2&gt;

&lt;p&gt;To address these challenges, I migrated to Microsoft Azure and successfully deployed a Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4rrGO1i---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/reobwtr84hrd6f0i9omy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4rrGO1i---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/reobwtr84hrd6f0i9omy.png" alt="Image description" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the full Jenkinsfile that drives the pipeline against this Azure-based cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/usr/bin/env groovy
def gv

pipeline {
    agent any
    stages {
        stage('Init') {
            steps {
                script {
                    gv = load "script.groovy"
                }
            }
        }
        stage('test') {
            steps {
                script {
                    echo "Testing the application"
                    echo "Executing pipeline for branch $BRANCH_NAME"
                }
            }
        }
        stage ('Build &amp;amp; pushing'){
            steps {
                script {
                    gv.buildDockerImage()

                }
            }
        }
        stage ('Deploy to K8S'){
            steps {
                script {
                    gv.deploytok8s()

                }
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To ensure seamless connectivity, the kubeconfig file is stored in the hidden .kube directory within your home directory, and its contents are uploaded to Jenkins as special credentials.&lt;/p&gt;

&lt;h2&gt;
  
  
  Azure Cluster Status:
&lt;/h2&gt;

&lt;p&gt;Currently, the Azure-based Kubernetes cluster is up and running, serving as the backbone of our robust CI/CD pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion and Summary:
&lt;/h2&gt;

&lt;p&gt;In summary, I have taken you through the intricate process of establishing a comprehensive CI/CD pipeline.&lt;br&gt;
  From the initial setup of Jenkins and Docker, to overcoming Kubernetes integration challenges, and finally transitioning to a reliable Azure-based cluster, we've covered a wealth of insights and practical steps.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>jenkins</category>
      <category>docker</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Automating Infrastructure with Ansible</title>
      <dc:creator>taqiyeddinedj</dc:creator>
      <pubDate>Sat, 29 Jul 2023 12:45:22 +0000</pubDate>
      <link>https://dev.to/taqiyeddinedj/deploy-load-balancer-using-ansible-4ale</link>
      <guid>https://dev.to/taqiyeddinedj/deploy-load-balancer-using-ansible-4ale</guid>
      <description>&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;In this article, we will explore how to set up a load balancing infrastructure using Ansible on three Azure instances. The manager instance will serve as the load balancer, while the other two instances will act as web servers. To streamline the process, we will organize our playbooks into a single file and leverage Ansible's import feature for modularity and better management.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aDTDRBve--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tsesh5gg27rc87a1hz5s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aDTDRBve--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tsesh5gg27rc87a1hz5s.png" alt="Image description" width="794" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up Azure Instances:
&lt;/h2&gt;

&lt;p&gt;To begin, we provision three Azure instances: one as the manager and the other two as web servers. The manager instance will be responsible for handling the load balancing, while the web server instances will serve the application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing Ansible on the Manager:
&lt;/h2&gt;

&lt;p&gt;Once the instances are up and running, we install Ansible on the manager instance. Ansible provides a simple and efficient way to automate the configuration and management of our infrastructure.&lt;/p&gt;

&lt;p&gt;Since Ansible is available in the Extra Packages for Enterprise Linux (EPEL) repository, we first enable EPEL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install epel-release

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;and then install Ansible itself:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install ansible

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configuring the Ansible Inventory:
&lt;/h2&gt;

&lt;p&gt;Next, we configure the Ansible inventory to include the two web server instances. The inventory allows Ansible to know which hosts it should target for various tasks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--U3gM7bKs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0eclw9swd46xgop6z856.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--U3gM7bKs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0eclw9swd46xgop6z856.png" alt="Hosts" width="259" height="92"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Organizing Playbooks:
&lt;/h2&gt;

&lt;p&gt;To keep our playbooks organized and modular, we create a single playbook file named "all-playbooks.yml." This file will include multiple playbooks using the import feature in Ansible.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#all-playbooks.yml
---
  - import-playbook: update.yml
  - import-playbook: install-services.yml
  - import-playbook: setup-app.yml
  - import-playbook: setup-lb.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Package Management:
&lt;/h2&gt;

&lt;p&gt;In our "all-playbooks.yml" file, the first imported playbook is "update.yml." This playbook is responsible for updating the system packages on all three instances. It ensures that our instances have the latest updates before proceeding with the configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- name: updating nodes
  hosts: all
  become: true
  tasks:
  - name: updating
    yum:
      name: '*'
      state: latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Installing Apache and PHP:&lt;/strong&gt;&lt;br&gt;
In the same "all-playbooks.yml" file, we import the "install-services.yml" playbook. It installs Apache and PHP on all instances: the web servers need them to serve the application, and the manager needs Apache to act as the load balancer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- hosts: all
  become: true
  tasks:
  - name: install apache
    yum:
      name: 
        - httpd
        - php
      state: present

  - name: Ensure apache starts
    service:
      name: httpd
      state: started
      enabled: yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Uploading Application Files:
&lt;/h2&gt;

&lt;p&gt;We upload the application file "index.php" to the web server instances. To ensure that Apache restarts whenever changes happen, we use handlers, which are triggered when specific events occur.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- hosts: nodes
  become: true
  tasks:
  - name: upload application file
    copy: 
      src: ../index.php
      dest: /var/www/html
      mode: 0755
    notify: restart apache

  handlers:
    - name: restart apache
      service: name=httpd state=restarted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Load Balancer Configuration:
&lt;/h2&gt;

&lt;p&gt;To set up the manager as a load balancer, we leverage the power of Jinja templates within Ansible. In the "setup-lb.yml" playbook, we render a Jinja loop inside an Apache &lt;code&gt;Proxy balancer://&lt;/code&gt; section. This configuration enables the manager to distribute incoming requests across the two web server instances, effectively balancing the load.&lt;/p&gt;

&lt;p&gt;Here is the Jinja template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ProxyRquests off
&amp;lt;Proxy balancer://webcluster &amp;gt;
    {% for hosts in groups ['nodes'] %}
        BalancerMember http://{{hostvars[hosts]['anible_host']}}
    {% endfor %}
    ProxySet lbmethod=byrequests
&amp;lt;/Proxy&amp;gt;

# Optional
&amp;lt;Location /balancer-manager&amp;gt;
  SetHandler balancer-manager
&amp;lt;/Location&amp;gt;
ProxyPass /balancer-manager !
ProxyPass / balancer://webcluster/

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
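&lt;p&gt;The template itself gets rendered onto the manager by the "setup-lb.yml" playbook. A sketch of what that playbook can look like, assuming the Jinja snippet above is saved as lb-config.j2 (the file name and the "manager" group are assumptions):&lt;/p&gt;

```yaml
---
- hosts: manager
  become: true
  tasks:
  - name: configure apache as a load balancer
    template:
      src: ../lb-config.j2
      dest: /etc/httpd/conf.d/lb.conf
    notify: restart apache

  handlers:
    - name: restart apache
      service: name=httpd state=restarted
```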



&lt;p&gt;Now, let's take a look at the result:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Jv0c_-Ms--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/22mtdochdeitpp3qsjxy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Jv0c_-Ms--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/22mtdochdeitpp3qsjxy.png" alt="Image description" width="800" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---wPeZlGo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/spkzy4eem77205v2t2mk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---wPeZlGo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/spkzy4eem77205v2t2mk.png" alt="Image description" width="800" height="82"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QCz_-M39--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rkpuy859fona3znoxobm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QCz_-M39--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rkpuy859fona3znoxobm.png" alt="Image description" width="800" height="112"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;By following the steps outlined in this article, we have successfully configured a load balancing infrastructure using Ansible and three Azure instances. The manager instance acts as the load balancer, while the other two instances serve as web servers. Organizing playbooks into a single file using Ansible's import feature makes the management and maintenance of the infrastructure much more efficient. With this setup, we have a scalable and robust system that can handle increased traffic and ensure high availability for our application.&lt;/p&gt;

</description>
      <category>ansible</category>
      <category>automation</category>
      <category>devops</category>
    </item>
    <item>
      <title>DOCKER SWARM CLUSTER &amp; NFS</title>
      <dc:creator>taqiyeddinedj</dc:creator>
      <pubDate>Mon, 24 Jul 2023 16:39:11 +0000</pubDate>
      <link>https://dev.to/taqiyeddinedj/docker-swarm-cluster-nfs-491f</link>
      <guid>https://dev.to/taqiyeddinedj/docker-swarm-cluster-nfs-491f</guid>
      <description>&lt;h2&gt;
  
  
  Docker Swarm emerges as an excellent solution, offering a simple and scalable cluster management system. To complement this, utilizing NFS (Network File System) as a shared volume storage brings significant advantages, providing seamless data sharing and robustness across the Swarm cluster. Moreover, it is essential to emphasize the importance of referring to official documentation for both Docker Swarm and NFS, as it ensures a well-informed and successful setup.
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The web app code :&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sUaBcFv7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nitgli0l4u9626cgwkgf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sUaBcFv7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nitgli0l4u9626cgwkgf.png" alt="Image description" width="439" height="227"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;My Cluster:&lt;/strong&gt; &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--66Xc43YI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s0viu1lnoykq7fe3f9qh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--66Xc43YI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s0viu1lnoykq7fe3f9qh.png" alt="Image description" width="800" height="63"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;On the NFS Server (NFS Server Host):&lt;/strong&gt;&lt;br&gt;
The necessary steps and commands to set up an NFS server and mount the NFS share on the client node (Docker Swarm node):&lt;br&gt;
&lt;em&gt;Install NFS Server:&lt;/em&gt;&lt;br&gt;
&lt;code&gt;sudo yum install nfs-utils&lt;/code&gt;&lt;br&gt;
&lt;em&gt;Create the Shared Directory:&lt;/em&gt;&lt;br&gt;
&lt;code&gt;sudo mkdir -p /shared_dir&lt;/code&gt;&lt;br&gt;
&lt;em&gt;Export the Shared Directory at /etc/exports:&lt;/em&gt;&lt;br&gt;
&lt;code&gt;/shared_dir  *(rw,sync,no_root_squash)&lt;/code&gt;&lt;br&gt;
&lt;em&gt;Apply the NFS Export Changes:&lt;/em&gt;&lt;br&gt;
&lt;code&gt;sudo exportfs -a&lt;/code&gt;&lt;br&gt;
&lt;em&gt;Enable and Start the NFS Server:&lt;/em&gt;&lt;br&gt;
&lt;code&gt;sudo systemctl enable --now nfs-server&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;On the NFS Client (Docker Swarm Node):&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;sudo yum install nfs-utils&lt;/code&gt;&lt;br&gt;
&lt;em&gt;Create the Target Mount Directory:&lt;/em&gt;&lt;br&gt;
&lt;code&gt;sudo mkdir -p /mnt/shared_dir&lt;/code&gt;&lt;br&gt;
&lt;em&gt;Mount the NFS Share:&lt;/em&gt;&lt;br&gt;
&lt;code&gt;sudo mount -t nfs nfs_server:/shared_dir /mnt/shared_dir&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE: for NFSv3 and NFSv4 you should explicitly allow port 2049 through any firewall between the client and the server&lt;/strong&gt;&lt;/p&gt;
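
&lt;p&gt;On firewalld-based systems (like the yum-based hosts used here), one way to open the port on the server is to allow the predefined NFS service:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo firewall-cmd --permanent --add-service=nfs
# NFSv3 may additionally need: --add-service=rpc-bind --add-service=mountd
sudo firewall-cmd --reload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;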

&lt;p&gt;Now we create the NFS Docker volume on the worker node:&lt;br&gt;
&lt;code&gt;docker volume create --driver local --name website_volume --opt type=nfs4 --opt device=:/shared_dir --opt o=addr=20.111.58.70,rw,nolock&lt;/code&gt;&lt;br&gt;
&lt;em&gt;Now Create Docker service with NFS volume :&lt;/em&gt;&lt;br&gt;
&lt;code&gt;docker service create --replicas=2 --name hostname_service --restart-condition on-failure -p 80:80 --mount type=volume,source=website_volume,target=/app taqiyeddinedj/hostname_web_app&lt;/code&gt;&lt;/p&gt;
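
&lt;p&gt;For reference, the two commands above can also be written declaratively as a stack file for &lt;code&gt;docker stack deploy&lt;/code&gt;. This is a rough sketch: the IP and names are the ones from this setup, the file name is up to you:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3.8"
services:
  hostname_service:
    image: taqiyeddinedj/hostname_web_app
    ports:
      - "80:80"
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
    volumes:
      - website_volume:/app
volumes:
  website_volume:
    driver: local
    driver_opts:
      type: nfs4
      device: ":/shared_dir"
      o: "addr=20.111.58.70,rw,nolock"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Deploy it with &lt;code&gt;docker stack deploy -c stack.yml hostname_stack&lt;/code&gt;.&lt;/p&gt;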

</description>
      <category>docker</category>
      <category>devops</category>
      <category>swarm</category>
    </item>
    <item>
      <title>AWS HA WEB APP</title>
      <dc:creator>taqiyeddinedj</dc:creator>
      <pubDate>Fri, 14 Jul 2023 22:21:23 +0000</pubDate>
      <link>https://dev.to/taqiyeddinedj/aws-ha-web-app-5dod</link>
      <guid>https://dev.to/taqiyeddinedj/aws-ha-web-app-5dod</guid>
      <description>&lt;p&gt;Experience seamless performance and uninterrupted availability as you interact with the app's cutting-edge features&lt;br&gt;
The servers are not mine tho lol &lt;br&gt;
&lt;strong&gt;SO&lt;/strong&gt;&lt;br&gt;
We start with the creation of the RDS instance, because it takes some time to deploy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G9vXclUD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fwfui2r4gd7hpu696qpu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G9vXclUD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fwfui2r4gd7hpu696qpu.png" alt="Image description" width="800" height="251"&gt;&lt;/a&gt;&lt;br&gt;
then we go to S3 and create two buckets:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--j2v3qYqG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zis9f9l5c8hxs40ixadi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--j2v3qYqG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zis9f9l5c8hxs40ixadi.png" alt="Image description" width="800" height="233"&gt;&lt;/a&gt;&lt;br&gt;
The deployment of the WordPress site can take 15 minutes; I strongly recommend following the official steps!&lt;br&gt;
&lt;a href="https://aws.amazon.com/getting-started/hands-on/deploy-wordpress-with-amazon-rds/"&gt;https://aws.amazon.com/getting-started/hands-on/deploy-wordpress-with-amazon-rds/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some Useful TIPS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Launch the EC2 instance from a launch template; you will need the template for the Auto Scaling group&lt;/li&gt;
&lt;li&gt;The ALB is created to span 3 subnets for the ASG instances&lt;/li&gt;
&lt;li&gt;It's very helpful to use the ALB health checks&lt;/li&gt;
&lt;/ul&gt;
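
&lt;p&gt;If you prefer the CLI over the console, the ASG wiring from the tips above can be sketched like this (every name, subnet and ARN below is a placeholder, not a real resource from this deployment):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name wp-asg \
  --launch-template LaunchTemplateName=wp-template,Version='$Latest' \
  --min-size 2 --max-size 4 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaa,subnet-bbb,subnet-ccc" \
  --target-group-arns "arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/wp-tg/ID" \
  --health-check-type ELB --health-check-grace-period 300
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;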

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A73w55jf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n7ikrpqi1grj93le4t05.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A73w55jf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n7ikrpqi1grj93le4t05.png" alt="Image description" width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And finally, this is my WordPress site, HIGHLY AVAILABLE!&lt;br&gt;
And as you can see (check the link URI), I am using CloudFront, which serves the media from the S3 bucket through its edge caches so images load faster!!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R4HrdIg---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x0wkcxndtir6b7iolkfc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R4HrdIg---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x0wkcxndtir6b7iolkfc.png" alt="Image description" width="800" height="224"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Final Result:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GiA4OvOo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jjf5rr67ct5dvur78k1z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GiA4OvOo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jjf5rr67ct5dvur78k1z.png" alt="Image description" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>linux</category>
    </item>
  </channel>
</rss>
