<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Wilson Anorue</title>
    <description>The latest articles on DEV Community by Wilson Anorue (@wiley19).</description>
    <link>https://dev.to/wiley19</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1198194%2Ff861d460-b8bd-4023-976b-81aa7fcd6109.png</url>
      <title>DEV Community: Wilson Anorue</title>
      <link>https://dev.to/wiley19</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/wiley19"/>
    <language>en</language>
    <item>
      <title>4 Mistakes to Avoid When Setting Up a CI/CD Pipeline</title>
      <dc:creator>Wilson Anorue</dc:creator>
      <pubDate>Thu, 15 Aug 2024 12:35:17 +0000</pubDate>
      <link>https://dev.to/wiley19/4-mistakes-to-avoid-when-setting-up-a-cicd-pipeline-38ah</link>
      <guid>https://dev.to/wiley19/4-mistakes-to-avoid-when-setting-up-a-cicd-pipeline-38ah</guid>
      <description>&lt;p&gt;Continuous Integration and Continuous Deployment (CI/CD) pipelines automate the process of building, testing, and deploying code to remote servers, streamlining software delivery. Having built CI/CD pipelines for many applications in real business environments, I’ve made mistakes and seen colleagues make them too — each providing valuable lessons.&lt;/p&gt;

&lt;p&gt;These experiences build up the expertise of any DevOps engineer, so I wanted to share what I’ve learned.&lt;/p&gt;

&lt;p&gt;Making these mistakes can ruin your projects, disrupt your application environment, or cause your projects to drag on far longer than they should.&lt;/p&gt;

&lt;p&gt;Based on my experience, here’s what you should do — or watch out for — when setting up your CI/CD pipeline. These tips are vendor-agnostic: they apply whether you’re using GitHub Actions, Jenkins, Travis CI, AWS Amplify, CircleCI, or others.&lt;/p&gt;

&lt;p&gt;Let’s explore these common mistakes and how to avoid them.&lt;/p&gt;

&lt;p&gt;It’s standard practice to use environment variables or secrets in your pipeline rather than hard-coding passwords, SSH keys, connection strings, and other sensitive details. Since this is widely understood, I won’t dwell on it here.&lt;/p&gt;

&lt;h2&gt;
  
  
  Take a Snapshot of Your Server Before You Begin
&lt;/h2&gt;

&lt;p&gt;If you’re planning to make changes to your server environment — such as adding or deleting files — especially if you’re new to this, it’s crucial to take a snapshot of your server before setting up your CI/CD pipeline. Snapshots are quick and straightforward to create, and they provide an easy way to restore your server to its previous state if something goes wrong.&lt;/p&gt;

&lt;p&gt;I once witnessed a colleague accidentally delete our server environment and critical system files while configuring a CI/CD pipeline to sync code changes. This could have been easily avoided with a simple snapshot.&lt;/p&gt;

&lt;h2&gt;
  
  
  Set Up Your SSH Key Properly
&lt;/h2&gt;

&lt;p&gt;To connect to the remote server where you’ll deploy your application, you’ll need both a private key and a public key. You can either create a new key pair specifically for your pipeline or use the existing key pair you already use to access your instance. Either option works, but you’ll need to copy the contents of your private key file and paste them into GitHub Secrets (or your CI tool’s equivalent secret store).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1fbo1rbffi3wlnrx6fdl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1fbo1rbffi3wlnrx6fdl.png" alt="Connecting to server through SSH" width="311" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here are two important things to keep in mind:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Avoid Opening Your Key File as a Text File&lt;/strong&gt;&lt;br&gt;
When copying your private key, it’s better to display its contents in your terminal and copy it from there, especially if you’re using a Windows system. Opening the file directly as a text file can lead to formatting issues. Use the following commands to display the private key file content: &lt;code&gt;cat key.pem&lt;/code&gt; for Linux or &lt;code&gt;type key.pem&lt;/code&gt; for Windows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Copy the Entire Key Content, Including Headers&lt;/strong&gt;&lt;br&gt;
Ensure you copy the entire content of the key file, including the header and footer lines such as &lt;code&gt;-----BEGIN RSA PRIVATE KEY-----&lt;/code&gt; and &lt;code&gt;-----END RSA PRIVATE KEY-----&lt;/code&gt;. Paste the complete content into your GitHub Secrets box without any extra spaces.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
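&lt;p&gt;A quick way to catch both problems before pasting is a small shell check. This is a hypothetical helper of my own, not part of any tool, and &lt;code&gt;key.pem&lt;/code&gt; is a placeholder for your own key file:&lt;/p&gt;

```shell
# check_key: hypothetical sanity check for a private key file before
# pasting it into a secrets store. It verifies the BEGIN/END lines are
# present and that no Windows carriage returns crept in during copying.
check_key() {
  keyfile="$1"
  if ! grep -q "BEGIN" "$keyfile"; then
    echo "missing BEGIN header"; return 1
  fi
  if ! grep -q "END" "$keyfile"; then
    echo "missing END footer"; return 1
  fi
  # Windows editors often add carriage returns that break SSH auth
  if grep -q "$(printf '\r')" "$keyfile"; then
    echo "found Windows line endings (CR characters)"; return 1
  fi
  echo "key file looks OK"
}
```

&lt;p&gt;Run it as &lt;code&gt;check_key key.pem&lt;/code&gt;; a non-zero exit code means the pasted key is likely to fail in your pipeline.&lt;/p&gt;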

&lt;p&gt;Additionally, it’s better to append the key content to a file using commands like &lt;code&gt;echo&lt;/code&gt; or &lt;code&gt;cat&lt;/code&gt; rather than copying and pasting it manually into a text editor. For example, if you need to add a new SSH public key to your &lt;code&gt;~/.ssh/authorized_keys&lt;/code&gt; file, the command-line approach is more reliable and less error-prone. Here’s how to do it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "ssh-rsa ***your key content***" &amp;gt;&amp;gt; ~/.ssh/authorized_keys
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or using &lt;strong&gt;cat&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat ~/.ssh/rsa_key.pub &amp;gt;&amp;gt; ~/.ssh/authorized_keys
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This method reduces the risk of errors compared to manual editing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Carefully Review Your Deployment Path
&lt;/h2&gt;

&lt;p&gt;It’s crucial to thoroughly review and understand the deployment path where your code or files will be affected, especially when deleting or syncing files on your server. If you’re resyncing files to your server, create a dedicated folder where your code files will be resynced, and always double-check the path before testing the pipeline.&lt;/p&gt;

&lt;p&gt;I once worked on a project where a colleague mistakenly resynced files directly to the user’s home directory &lt;strong&gt;(/home/bitnami/)&lt;/strong&gt;. This error deployed the code at the top level of the home directory and inadvertently deleted other essential folders, including our &lt;strong&gt;/.ssh/&lt;/strong&gt; directory, environment paths, and other critical files.&lt;/p&gt;

&lt;p&gt;This led to significant work to regain SSH access to the server and recreate the SSH public and private keys. Since we had done extensive configurations on the server, starting from scratch would have been far more stressful and time-consuming.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Upload new files 
        run: |
          rsync -avz --no-times --delete-after --exclude '.git' ./ bitnami@${{ secrets.YOUR_SERVER_IP }}:/home/bitnami
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example, the step above will delete every file in the home directory &lt;strong&gt;(/home/bitnami)&lt;/strong&gt; that isn’t in the source folder, including your SSH keys, which usually live in that directory.&lt;/p&gt;
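&lt;p&gt;One way to harden a deployment script against this class of mistake is a guard that refuses obviously dangerous destinations before rsync ever runs. The helper below is a sketch of my own, not part of rsync or any CI tool, and the path rules are assumptions you should adapt:&lt;/p&gt;

```shell
# safe_deploy_path: hypothetical guard that rejects the filesystem root,
# /root, and bare home directories (e.g. /home/bitnami) as deploy
# targets, while allowing dedicated subfolders (e.g. /home/bitnami/app).
safe_deploy_path() {
  path="$1"
  case "$path" in
    "" | "/" | "/root" | "/root/" | "/home" | "/home/" )
      echo "refusing: $path is not a safe deploy target"; return 1 ;;
  esac
  case "$path" in
    "/home/"* )
      sub="${path#/home/}"   # e.g. "bitnami" or "bitnami/app"
      sub="${sub%/}"         # drop a trailing slash
      case "$sub" in
        */* ) echo "ok" ;;   # at least one level below the home dir
        *   ) echo "refusing: $path is a home directory"; return 1 ;;
      esac
      ;;
    * ) echo "ok" ;;
  esac
}
```

&lt;p&gt;Calling &lt;code&gt;safe_deploy_path /home/bitnami&lt;/code&gt; fails while &lt;code&gt;safe_deploy_path /home/bitnami/app&lt;/code&gt; succeeds, so an accidental sync into the home directory is caught before a destructive delete can happen.&lt;/p&gt;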

&lt;p&gt;To avoid such scenarios, always keep a snapshot of your server as a backup.&lt;/p&gt;

&lt;p&gt;Here are the high-level steps to recreate SSH keys for your server if you find yourself in a similar situation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisite&lt;/strong&gt;: Ensure you can connect to the server via SSH through an alternative method, such as browser-based sessions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generate an SSH RSA Key Pair&lt;/strong&gt;:&lt;br&gt;
Use your terminal to generate a new SSH RSA key pair.&lt;/p&gt;
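&lt;p&gt;For instance, with the OpenSSH client the whole pair can be generated in one command; the file name &lt;code&gt;deploy_key&lt;/code&gt; and the comment are placeholders you should change:&lt;/p&gt;

```shell
# Generate a 4096-bit RSA key pair with no passphrase.
# -f sets the output file; the public key lands next to it as .pub
mkdir -p "$HOME/.ssh"
ssh-keygen -t rsa -b 4096 -f "$HOME/.ssh/deploy_key" -N "" -C "ci-deploy" -q
ls "$HOME/.ssh/deploy_key" "$HOME/.ssh/deploy_key.pub"
```

&lt;p&gt;The &lt;code&gt;.pub&lt;/code&gt; file is what you append to the server, and the private file is what goes into your pipeline secrets.&lt;/p&gt;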

&lt;p&gt;&lt;strong&gt;Add the Public Key Content to Your Server&lt;/strong&gt;:&lt;br&gt;
Append the public key file content (.pub) to your &lt;code&gt;~/.ssh/authorized_keys&lt;/code&gt; file. It’s recommended to use the echo or cat commands, as discussed earlier:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "ssh-rsa ***your key content***" &amp;gt;&amp;gt; ~/.ssh/authorized_keys
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat ~/.ssh/rsa_key.pub &amp;gt;&amp;gt; ~/.ssh/authorized_keys
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Add the Private Key to Your Local Machine or Pipeline Secrets&lt;/strong&gt;:&lt;br&gt;
Store the private key content on your local machine or copy it to the secrets environment of your pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test the New Key&lt;/strong&gt;:&lt;br&gt;
Attempt to connect to your server using the new SSH key to confirm it works.&lt;/p&gt;

&lt;p&gt;By following these steps and being meticulous about your deployment paths, you can avoid costly mistakes and ensure smoother operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Server Configuration Management Tools for Your Environment
&lt;/h2&gt;

&lt;p&gt;In one of the challenging experiences I mentioned earlier, we could have saved ourselves a lot of stress if we had set up our environment using configuration management tools like Ansible, Chef, or Puppet.&lt;/p&gt;

&lt;p&gt;These tools would have allowed us to easily replicate the same configuration on another server when we lost SSH access to the previous one.&lt;/p&gt;

&lt;p&gt;Instead of struggling to regain access, we could have simply spun up a new server and run the configuration playbook or cookbook to restore our setup.&lt;/p&gt;

&lt;p&gt;Although DevOps engineers typically don’t create configuration scripts for a single server, it’s still a best practice to do so. It’s not only important but also incredibly useful for recreating your server configuration in various scenarios.&lt;/p&gt;
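&lt;p&gt;Even without a full tool, you can get part of the benefit by keeping your server setup in a single idempotent script, i.e. one that is safe to run repeatedly and converges to the same state. The sketch below only manages directories and a marker file; real configuration management (packages, services, users) is what Ansible, Chef, or Puppet are for, and &lt;code&gt;APP_DIR&lt;/code&gt; is a hypothetical path:&lt;/p&gt;

```shell
# bootstrap: a minimal idempotent setup function. Running it a second
# time changes nothing, which is the property that lets you rebuild a
# fresh server to a known state at any time.
APP_DIR="${APP_DIR:-/opt/myapp}"   # hypothetical install location
bootstrap() {
  mkdir -p "$APP_DIR/releases" "$APP_DIR/shared"
  if [ -f "$APP_DIR/shared/.bootstrapped" ]; then
    echo "already configured"
  else
    echo "deploy marker" > "$APP_DIR/shared/.bootstrapped"
    echo "configured $APP_DIR"
  fi
}
```

&lt;p&gt;The same idea, applied to your whole server setup and kept in version control, is what lets you spin up a replacement instance in minutes instead of hours.&lt;/p&gt;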

&lt;h2&gt;
  
  
  Build Fast, Fail Fast, and Enable Detailed Monitoring
&lt;/h2&gt;

&lt;p&gt;Building your code quickly, testing it promptly, and making necessary changes is essential. This approach enables you to deploy more often, reducing context switching, which is a best practice in DevOps. Regular deployments ensure that code is tested in staging and production as soon as possible.&lt;/p&gt;

&lt;p&gt;Detailed monitoring of your builds and deployments allows you to quickly spot issues and address them directly, minimizing guesswork. Trust me, this will save you a significant amount of time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Some Additional Useful Tips
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Build Once&lt;/strong&gt;: Ensure you build your code once, run your tests, and deploy the same artifact to staging and production if successful. Avoid building the code separately for each stage, as this might introduce inconsistencies. You can store your artifacts or outputs in repositories like Docker Hub, ECR, or S3. Also, make sure to version your code appropriately, ensuring that the code you deploy is the same as what you built and tested, so it will perform consistently.&lt;/p&gt;
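&lt;p&gt;The "build once" idea can be illustrated with a tiny sketch: one build produces one versioned artifact, and both environments receive byte-identical copies of it. The tar and checksum mechanics below stand in for a real pipeline pushing an image to Docker Hub, ECR, or S3; the function name and paths are hypothetical:&lt;/p&gt;

```shell
# build_once_deploy: build a single versioned artifact from $1 and
# deploy the SAME file to staging and production under $2.
build_once_deploy() {
  src="$1"; out="$2"
  version=$(date +%Y%m%d%H%M%S)      # in real life, a git SHA or tag
  artifact="$out/app-$version.tar.gz"
  mkdir -p "$out/staging" "$out/production"
  tar -czf "$artifact" -C "$src" .   # build exactly once
  cp "$artifact" "$out/staging/app.tar.gz"
  cp "$artifact" "$out/production/app.tar.gz"
  # identical checksums prove both environments run the same build
  sha256sum "$out/staging/app.tar.gz" "$out/production/app.tar.gz"
}
```

&lt;p&gt;If each stage rebuilt from source instead, a dependency update between builds could silently make production differ from what was tested.&lt;/p&gt;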

&lt;p&gt;&lt;strong&gt;Code, Build, and Deploy Frequently&lt;/strong&gt;: Frequent coding, building, and deployment are at the heart of DevOps. This approach ensures that mistakes and errors are spotted and corrected quickly, and it provides immediate feedback from both testing teams and customers.&lt;/p&gt;

&lt;p&gt;These are the tips I have for you. I have personally experienced how these practices can save you time, improve your DevOps experience, prevent unnecessary mistakes, and help you quickly remediate any errors that do occur.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By implementing these best practices, you’ll streamline your DevOps workflow, minimize costly errors, and enhance your deployment efficiency. Embrace these tips to improve your DevOps experience, ensuring faster, more reliable, and consistent software delivery.&lt;/p&gt;

&lt;p&gt;Please share any additional tips you might have, or let me know if I missed something important!&lt;/p&gt;

&lt;p&gt;I write articles and helper notes for cloud engineers, DevOps engineers, and system administrators. Please check out my blog to see more: &lt;a href="https://digitalspeed.com.ng" rel="noopener noreferrer"&gt;https://digitalspeed.com.ng&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>devops</category>
      <category>ssh</category>
    </item>
    <item>
      <title>Creating a CICD Pipeline using Jenkins on AWS EC2, Monitoring using Prometheus, and Grafana</title>
      <dc:creator>Wilson Anorue</dc:creator>
      <pubDate>Sun, 04 Aug 2024 11:12:04 +0000</pubDate>
      <link>https://dev.to/wiley19/creating-a-cicd-pipeline-using-jenkins-on-aws-ec2-monitoring-using-prometheus-and-grafana-2p7</link>
      <guid>https://dev.to/wiley19/creating-a-cicd-pipeline-using-jenkins-on-aws-ec2-monitoring-using-prometheus-and-grafana-2p7</guid>
      <description>&lt;p&gt;In this post we will deploy a simple containerized web application through Jenkins to an EC2 server, we deploy Prometheus and Grafana as containers to monitor the web application for security, operational state, and others. Note that the web app runs on two containers MySQL and Node application.&lt;/p&gt;

&lt;p&gt;When you complete this project, your EC2 instance will be running five containers: the Node application, MySQL, Grafana, Node Exporter, and Prometheus. All of them are connected through the network we defined in our docker-compose file.&lt;/p&gt;

&lt;p&gt;If you are building this as a project with the intention to learn from it, I suggest you build it first as I explain here; then, once everything is working as required, change something about the project or try something in a different way than I have done it.&lt;/p&gt;

&lt;p&gt;I designed the architecture for this project myself rather than following any particular tutorial, and I built it incrementally: first with a few containers, then adding others along the way. I learned a lot while building it, which is why I suggest you modify the project heavily so you can learn a lot too.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisite
&lt;/h2&gt;

&lt;p&gt;You need familiarity with AWS, as I won't go into the details of the AWS steps. For any EC2 instance, please select the free-tier-eligible t2.micro type so you don't get charged. Check out this post to &lt;a href="https://www.digitalspeed.online/all-articles/how-to-connect-to-your-aws-ec2-linux-instance-try-out-various-ways/" rel="noopener noreferrer"&gt;learn more about Amazon EC2&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We will use Jenkins as our CICD pipeline to automate the deployment of our containerized application to our EC2 instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Jenkins
&lt;/h2&gt;

&lt;p&gt;Jenkins is an open-source automation server that helps automate software development processes such as building, testing, and deploying code changes to production. It is widely used in the DevOps industry to achieve continuous integration and continuous delivery (CI/CD) of software applications and one of the most important attributes is that it runs on our infrastructure.&lt;/p&gt;

&lt;p&gt;Jenkins provides a user-friendly web-based interface, which allows developers to create automated jobs or tasks called pipelines. A pipeline in Jenkins is a set of instructions that define the stages of the software delivery process. It is a powerful tool that helps to streamline the software development process, improve productivity, and reduce errors.&lt;/p&gt;

&lt;p&gt;A Jenkins pipeline is a combination of plugins that supports the integration and implementation of continuous delivery pipelines using Jenkins. It provides a way to define the entire software delivery process as code, which can be easily reviewed, versioned, and shared across the team.&lt;/p&gt;

&lt;p&gt;A pipeline can be thought of as a sequence of stages that represent the steps in the software delivery process such as build, test, and deploy. The pipeline is written in a domain-specific language called Groovy, which makes it easy to define complex software delivery workflows. With the Jenkins pipeline, teams can automate the entire software delivery process, making it more efficient, reliable, and scalable.&lt;/p&gt;

&lt;p&gt;The workflow in this case is simple: a GitHub webhook triggers Jenkins whenever there's an accepted change in the codebase; Jenkins pulls the repository, builds it, runs any necessary tests, and deploys the code to our EC2 instance using the key pair we attached.&lt;/p&gt;

&lt;p&gt;We start with our GitHub repository. Clone the repository to your local computer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/willie191998/to-do-app-with-docker-jerkins-prometheus.git

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If your project folder is not already a Git repository, initialize it with &lt;code&gt;git init&lt;/code&gt; (a freshly cloned repository is already initialized).&lt;/p&gt;

&lt;p&gt;Check the current branch with &lt;code&gt;git branch&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Make small changes, then stage and commit them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git commit -m &amp;lt;change details&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Log in to your GitHub account and create a new repository. Copy the link to your new repository (it should end in .git)&lt;/p&gt;

&lt;p&gt;Connect your local repository to its remote GitHub repository with &lt;strong&gt;git remote add origin &amp;lt;repository-url&amp;gt;&lt;/strong&gt;, then push your files or changes with &lt;strong&gt;git push origin master&lt;/strong&gt;.&lt;br&gt;
Ensure you use the correct branch name, most likely master or main.&lt;/p&gt;

&lt;p&gt;You should now see the files in your GitHub repository.&lt;/p&gt;
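&lt;p&gt;The push sequence above, end to end, looks like this. To keep it self-contained, a local bare repository stands in for the GitHub remote; with a real repository you would use its HTTPS URL as &lt;code&gt;origin&lt;/code&gt; instead:&lt;/p&gt;

```shell
# Simulate the GitHub remote with a local bare repository
tmp=$(mktemp -d)
git init --bare "$tmp/remote.git"

# Create a working repository, commit, and push to the "remote"
mkdir "$tmp/work"
cd "$tmp/work"
git init .
git symbolic-ref HEAD refs/heads/master   # make the first branch "master"
git config user.email "you@example.com"   # placeholder identity
git config user.name "Your Name"
echo "hello" > README.md
git add .
git commit -m "initial commit"
git remote add origin "$tmp/remote.git"
git push origin master
```

&lt;p&gt;After the push, the commit is visible on the remote, which is exactly what you should see in your GitHub repository.&lt;/p&gt;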

&lt;p&gt;&lt;strong&gt;Installation and Set Up of Jenkins&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create an EC2 instance. I used Amazon Linux 2, but you can use Ubuntu or another OS. Ensure you allow HTTP and SSH traffic for the EC2 instance, and also open a custom port (8080) so we can access the Jenkins interface running on the server. Please check out the previous post to learn &lt;a href="https://www.digitalspeed.online/all-articles/how-to-connect-to-your-aws-ec2-linux-instance-try-out-various-ways/" rel="noopener noreferrer"&gt;how to connect up an EC2 instance&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Next, install Java and add the Jenkins repository: connect to your EC2 instance through SSH and run the following commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo dnf install -y java-11-amazon-corretto
sudo wget -O /etc/yum.repos.d/jenkins.repo  https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The GPG key imported above ensures that the packages you're downloading are from the Jenkins repository and haven't been tampered with.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Install Jenkins&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now that the GPG key is imported, you can proceed with the Jenkins installation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo dnf install -y Jenkins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Start and Enable Jenkins&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once Jenkins is installed, you can start the service and enable it to start on boot.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl start jenkins
sudo systemctl enable jenkins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Check Jenkins Status&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Verify that Jenkins is running correctly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status jenkins

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynin634nbnf42miarnwt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynin634nbnf42miarnwt.png" alt="Image description" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install Docker and docker-compose&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum update -y
sudo amazon-linux-extras install docker
sudo service docker start
sudo usermod -a -G docker ec2-user
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Configure Jenkins&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open Jenkins Web Interface&lt;br&gt;
Visit &lt;strong&gt;&lt;a href="http://your-ec2-public-ip:8080" rel="noopener noreferrer"&gt;http://your-ec2-public-ip:8080&lt;/a&gt;&lt;/strong&gt; in your web browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9377kjkj2e8rwfsym54.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9377kjkj2e8rwfsym54.png" alt="Image description" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Retrieve the initial admin password from the SSH CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo cat /var/lib/jenkins/secrets/initialAdminPassword
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Complete the Jenkins Setup Wizard:&lt;/p&gt;

&lt;p&gt;Enter the admin password when prompted.&lt;br&gt;
Install suggested plugins.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7mjmvtgia3f41qcf59yp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7mjmvtgia3f41qcf59yp.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create the first admin user and complete the setup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install Plugins&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Jenkins plugins extend functionality, enabling seamless integration with tools, enhancing CI/CD pipelines, and automating tasks. Over 1,700 plugins support various stages of the development lifecycle.&lt;/p&gt;

&lt;p&gt;These are the plugins you will need for this project; &lt;strong&gt;Git plugin&lt;/strong&gt;, &lt;strong&gt;GitHub&lt;/strong&gt;, &lt;strong&gt;Git Pipeline&lt;/strong&gt;, &lt;strong&gt;Build Timeout&lt;/strong&gt;, and &lt;strong&gt;Docker Pipeline&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Set up credentials for Jenkins&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwidtd7i52rr6xijbwyqb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwidtd7i52rr6xijbwyqb.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go through this route in your Jenkins dashboard; &lt;strong&gt;Jenkins&lt;/strong&gt; &amp;gt;&amp;gt; &lt;strong&gt;Manage Jenkins&lt;/strong&gt; &amp;gt;&amp;gt; &lt;strong&gt;Manage Credentials&lt;/strong&gt; &amp;gt;&amp;gt; &lt;strong&gt;Global&lt;/strong&gt; &amp;gt;&amp;gt; &lt;strong&gt;New Credentials&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create new credentials: select username and password as the credentials type and put in your GitHub username and password. This will be used for GitHub access.&lt;/p&gt;

&lt;p&gt;Create another credential for SSH access: select the SSH username with private key type and paste in the private key from your AWS key pair file (you can print its content with the &lt;code&gt;cat&lt;/code&gt; command).&lt;/p&gt;

&lt;p&gt;Copy the whole content exactly as printed and paste it into the credential's key field. Remember that you will use this same key pair when creating the other EC2 instance.&lt;br&gt;
Create another credential for your Docker Hub account, again of the username-and-password type, and put in your correct Docker Hub details.&lt;/p&gt;

&lt;p&gt;Also, don't forget to give each credential a descriptive ID, as that is what you will use to reference it in your Jenkins pipeline.&lt;/p&gt;

&lt;p&gt;You can also set environment variables to hold details like your Docker Hub username, but we won't do that now.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Set up Jenkins Pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;From the Jenkins dashboard select &lt;strong&gt;New Item&lt;/strong&gt; and choose &lt;strong&gt;Pipeline&lt;/strong&gt;.&lt;br&gt;
Tick the GitHub project option and add the link of your GitHub repository without the .git suffix.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0jy5mfy5jdaj7ljqmyua.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0jy5mfy5jdaj7ljqmyua.png" alt="Image description" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under Definition &lt;strong&gt;select Pipeline script from SCM&lt;/strong&gt;, under &lt;strong&gt;SCM&lt;/strong&gt; select &lt;strong&gt;Git&lt;/strong&gt;, put your &lt;strong&gt;GitHub repository link&lt;/strong&gt; (the HTTPS form ending with .git) into the git repository field, and select the credentials you created earlier that have your GitHub username and password.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ec8snlqy4sasx9g2nmt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ec8snlqy4sasx9g2nmt.png" alt="Image description" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Set the script path to &lt;strong&gt;Jenkinsfile&lt;/strong&gt;, the pipeline script we have in our repo.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7r00vt7x4b5v2r204y3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7r00vt7x4b5v2r204y3.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ensure you select the GitHub hook trigger for GITScm polling.&lt;br&gt;
Apply the details and save the pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkgtihb5jq1x1e4xq141o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkgtihb5jq1x1e4xq141o.png" alt="Image description" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Connect GitHub to Jenkins
&lt;/h2&gt;

&lt;p&gt;Log in to your GitHub account and locate your repository, select Settings from the options for that repository, then choose Webhooks and Add webhook.&lt;/p&gt;

&lt;p&gt;Set the payload URL to &lt;strong&gt;http://&amp;lt;your-jenkins-ec2-ip&amp;gt;:8080/github-webhook/&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Set the &lt;strong&gt;Content type&lt;/strong&gt; to &lt;strong&gt;application/json&lt;/strong&gt;&lt;br&gt;
Choose Just the push event&lt;br&gt;
Click Add webhook&lt;/p&gt;

&lt;p&gt;Remember to change the URL to use your Jenkins EC2 instance IP.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create your Hosting EC2 Instance&lt;/strong&gt;&lt;br&gt;
Create a new EC2 instance as you did before, with some minor changes, and ensure you use the same key pair as before:&lt;/p&gt;

&lt;p&gt;In the instance's security group, open the custom TCP port range 1000–9090.&lt;/p&gt;

&lt;p&gt;Also expose SSH (port 22) and HTTP (port 80) as before.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqiwgjwr3txl1hd7cu2e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqiwgjwr3txl1hd7cu2e.png" alt="Image description" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to the advanced settings while creating the instance and add the following script under the user-data field to set up a Docker and docker-compose environment on your new EC2 instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
# Update package information
sudo yum update -y
# Install Docker
sudo yum install -y docker
# Start Docker service
sudo systemctl start docker
sudo systemctl enable docker
# Add ec2-user to the docker group to run docker without Sudo
sudo usermod -a -G docker ec2-user
# Install Python 3 and pip
sudo yum install -y python3 python3-pip
# Install Docker Compose using pip
sudo pip3 install docker-compose
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No worries if you can't do this when you create your EC2 instance: you can connect to it through SSH, run each of these commands individually, and get the same result.&lt;/p&gt;

&lt;p&gt;You can connect to your instance through SSH and confirm the environment has been configured, i.e. Docker and Docker Compose are installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker --version
docker-compose --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Modify your Jenkinsfile&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open the Jenkinsfile in your local repository and update the following with your own details:&lt;br&gt;
the credentials IDs for GitHub and Docker Hub, and the public key for the EC2 instance.&lt;br&gt;
Also update the values of these parameters: DOCKER_USERNAME, AWS_REGION, EC2_USER, EC2_IP, DOCKER_IMAGE_NAME, DOCKER_REPO.&lt;/p&gt;

&lt;p&gt;Ensure you use the appropriate values for the following variables.&lt;/p&gt;
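&lt;p&gt;As a rough sketch, these parameters typically live in the Jenkinsfile's environment block; every value below is a placeholder you must replace with your own details:&lt;/p&gt;

```groovy
// Sketch of a Jenkinsfile environment block; all values are placeholders.
pipeline {
    agent any
    environment {
        DOCKER_USERNAME   = 'your-dockerhub-username'
        AWS_REGION        = 'eu-west-3'
        EC2_USER          = 'ec2-user'
        EC2_IP            = '203.0.113.10'          // your hosting instance's public IP
        DOCKER_IMAGE_NAME = 'todo-app'
        DOCKER_REPO       = 'your-dockerhub-username/todo-app'
        // Credential ID as configured under Jenkins > Manage Credentials
        DOCKERHUB_CREDENTIALS = credentials('dockerhub-creds-id')
    }
    stages { /* build, test, push, and deploy stages go here */ }
}
```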

&lt;p&gt;Make your changes in the local repo and push them to GitHub with the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add .
git commit -m "&amp;lt;changes details&amp;gt;"
git push origin master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see your code build and run tests (if you defined any), then the pipeline will push the image to your Docker Hub and deploy the containers to your other EC2 instance. It automatically stops any running containers and deletes the current image before starting the new one, so your instance always runs your latest software.&lt;/p&gt;

&lt;p&gt;You can SSH into your EC2 instance and confirm the containers are running with the command &lt;strong&gt;docker ps&lt;/strong&gt;; you should see all five containers running on your EC2 instance.&lt;/p&gt;

&lt;p&gt;If any container is missing, check its logs to find out why it isn't running: &lt;strong&gt;docker logs &amp;lt;container-name&amp;gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Recall that I built the containers incrementally: first the web app and MySQL, then Prometheus and Node Exporter, then Grafana. Now all the containers run on your instance, but you can simply modify the docker-compose file if you want to start with only some of them.&lt;/p&gt;

&lt;p&gt;Access your containers running on your EC2 instance through the instance IP and the port the container is running on.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Web App - :3000&lt;/li&gt;
&lt;li&gt;Prometheus - :9090&lt;/li&gt;
&lt;li&gt;Grafana - :4000&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note that Node Exporter and MySQL do not have a web interface so you can't access them directly on your browser.&lt;/p&gt;
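&lt;p&gt;For reference, the port mappings above correspond to entries like these in the docker-compose file; the service names here are illustrative and may differ from those in the repo:&lt;/p&gt;

```yaml
# Illustrative docker-compose port mappings; service names may differ in the repo.
services:
  web:
    ports:
      - "3000:3000"   # web app
  prometheus:
    ports:
      - "9090:9090"   # Prometheus UI
  grafana:
    ports:
      - "4000:3000"   # Grafana (host port 4000 mapped to Grafana's default container port 3000)
```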

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8cj7hhsy5njul75q49u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8cj7hhsy5njul75q49u.png" alt="Your containerised web application running on EC2" width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Connecting Prometheus as a Data Source to Grafana
&lt;/h2&gt;

&lt;p&gt;Access your Prometheus interface on one window and your Grafana interface on another window&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fru5xxg90vwwrgh70zpng.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fru5xxg90vwwrgh70zpng.png" alt="Prometheus Dashboard running as container" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On Grafana, log in with the default username and password (both are &lt;strong&gt;admin&lt;/strong&gt;); you will then be asked to create a new password.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqeb5p9t1d74lvu5tzwy4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqeb5p9t1d74lvu5tzwy4.png" alt="Grafana Dashboard running as container" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you finally log in, you can explore the interface before connecting Prometheus as a data source.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Add Prometheus Data Source&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Click on the gear icon (Configuration) in the left sidebar. Select Data Sources from the dropdown menu&lt;br&gt;
Click on the Add data source button&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure the Prometheus Data Source
&lt;/h2&gt;

&lt;p&gt;From the list of available data sources, select Prometheus.&lt;br&gt;
In the URL field, enter the address of your Prometheus server. Since Prometheus runs in a Docker container alongside Grafana, use &lt;strong&gt;&lt;a href="http://prometheus:9090" rel="noopener noreferrer"&gt;http://prometheus:9090&lt;/a&gt;&lt;/strong&gt;; if it runs directly on the VM, use the instance IP with port &lt;strong&gt;9090&lt;/strong&gt; instead.&lt;br&gt;
Scroll down and click the Save &amp;amp; Test button to ensure Grafana can connect to Prometheus.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fblzb0ayd52z2492t9lg9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fblzb0ayd52z2492t9lg9.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After clicking Save &amp;amp; Test you should see a message indicating that the data source was successfully added and is working.&lt;/p&gt;
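&lt;p&gt;If you would rather not click through the UI each time, Grafana can also pick up the data source from a provisioning file; a minimal sketch, assuming the standard provisioning directory is mounted into the container:&lt;/p&gt;

```yaml
# Minimal Grafana provisioning file,
# e.g. /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
```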

&lt;p&gt;&lt;strong&gt;Create and save a Dashboard in Grafana&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Click on the &lt;strong&gt;+&lt;/strong&gt; icon in the left sidebar and select &lt;strong&gt;Dashboard&lt;/strong&gt;, Click on &lt;strong&gt;Add new panel&lt;/strong&gt;.&lt;br&gt;
In the Query section, select the Prometheus data source you just added. &lt;br&gt;
Enter a Prometheus query to fetch the metrics you want to visualize.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2xsh1a4v7bs9k5rd1vk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2xsh1a4v7bs9k5rd1vk.png" alt="Image description" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, &lt;strong&gt;node_cpu_seconds_total&lt;/strong&gt; to visualize CPU usage and &lt;strong&gt;node_memory_Active_bytes&lt;/strong&gt; to visualize memory usage. Customize the panel settings, including visualization type (e.g. graph, gauge, table). &lt;br&gt;
Click on the Save button (disk icon) in the top right corner.&lt;br&gt;
Provide a name for your dashboard and save it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;While replicating this project, you may make a few mistakes, especially if you are using these services for the first time, so here are some useful commands for Docker containers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;docker pull: Downloads an image from a registry.&lt;/li&gt;
&lt;li&gt;docker build: Builds an image from a Dockerfile.&lt;/li&gt;
&lt;li&gt;docker run: Runs a container from an image.&lt;/li&gt;
&lt;li&gt;docker push: Uploads an image to a registry.&lt;/li&gt;
&lt;li&gt;docker ps: Lists running containers.&lt;/li&gt;
&lt;li&gt;docker stop: Stops a running container.&lt;/li&gt;
&lt;li&gt;docker rm: Removes a stopped container.&lt;/li&gt;
&lt;li&gt;docker rmi: Removes an image from the local repository.&lt;/li&gt;
&lt;li&gt;docker logs: Fetches logs of a container.&lt;/li&gt;
&lt;li&gt;docker exec: Runs a command in a running container.&lt;/li&gt;
&lt;li&gt;docker-compose up: Starts and runs containers defined in a docker-compose.yml file.&lt;/li&gt;
&lt;li&gt;docker-compose down: Stops and removes containers, networks, and volumes defined in a docker-compose.yml file.&lt;/li&gt;
&lt;li&gt;docker-compose build: Builds or rebuilds services defined in a docker-compose.yml file.&lt;/li&gt;
&lt;li&gt;docker-compose logs: Displays logs from services defined in a docker-compose.yml file.&lt;/li&gt;
&lt;/ul&gt;
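&lt;p&gt;A typical debugging loop combines several of the commands above; for instance, rebuilding and restarting the stack after editing the compose file (the service name "web" is illustrative):&lt;/p&gt;

```shell
# Rebuild and restart the stack after changing docker-compose.yml
docker-compose down          # stop and remove the current containers
docker-compose build         # rebuild images for services with a build section
docker-compose up -d         # start everything in the background
docker-compose logs -f web   # follow the logs of one service ("web" is illustrative)
```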

&lt;p&gt;I assume you have some basic Linux skills (changing directories, creating directories, deleting files, checking and changing file permissions and ownership, etc.), because you will need them.&lt;/p&gt;

&lt;h2&gt;
  
  
  GitHub Repo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/willie191998/to-do-app-with-docker-jerkins-prometheus.git" rel="noopener noreferrer"&gt;https://github.com/willie191998/to-do-app-with-docker-jerkins-prometheus.git&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Other Relevant links
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.jenkins.io/doc/tutorials/tutorial-for-installing-jenkins-on-AWS/" rel="noopener noreferrer"&gt;https://www.jenkins.io/doc/tutorials/tutorial-for-installing-jenkins-on-AWS/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.cloudbees.com/blog/how-to-schedule-a-jenkins-job" rel="noopener noreferrer"&gt;https://www.cloudbees.com/blog/how-to-schedule-a-jenkins-job&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dev.to/shersi32/how-deploy-a-containerized-app-on-aws-using-jenkins-3eje"&gt;https://dev.to/shersi32/how-deploy-a-containerized-app-on-aws-using-jenkins-3eje&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My personal blog&lt;br&gt;
&lt;a href="https://www.digitalspeed.com.ng/all-articles/how-to-create-cicd-pipeline-with-jenkins-setup-prometheus-and-grafana-for-monitoring-10-steps/" rel="noopener noreferrer"&gt;https://www.digitalspeed.com.ng/all-articles/how-to-create-cicd-pipeline-with-jenkins-setup-prometheus-and-grafana-for-monitoring-10-steps/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>jenkins</category>
      <category>cicd</category>
      <category>devops</category>
      <category>prometheus</category>
    </item>
    <item>
      <title>How to deploy a containerized app on AWS EKS clusters with Ingress-NGINX enabled </title>
      <dc:creator>Wilson Anorue</dc:creator>
      <pubDate>Wed, 15 May 2024 07:42:39 +0000</pubDate>
      <link>https://dev.to/wiley19/how-to-deploy-a-containerized-app-on-aws-eks-clusters-with-ingress-nginx-enabled-15dh</link>
      <guid>https://dev.to/wiley19/how-to-deploy-a-containerized-app-on-aws-eks-clusters-with-ingress-nginx-enabled-15dh</guid>
      <description>&lt;h2&gt;
  
  
  Prerequisite
&lt;/h2&gt;

&lt;p&gt;You have an AWS account with billing enabled. &lt;/p&gt;

&lt;p&gt;You have installed the AWS CLI on your local computer, unless you use AWS CloudShell from your management console. See the AWS documentation on how to install and update the AWS CLI.&lt;/p&gt;

&lt;p&gt;We have an existing containerized application set up in the GitHub link below. This app runs 2 client servers, 2 backend servers, 1 worker server where the application's calculations are made, 1 Redis server, and 1 Postgres server.&lt;/p&gt;

&lt;p&gt;Don't worry, you won't be building the application from scratch; the application files and modules have been created, and the application image has been deployed to Docker Hub. The files available in the GitHub repo are just the cluster configuration files for Kubernetes. &lt;/p&gt;

&lt;p&gt;We will be working with the AWS CLI, although you can use CloudShell for the configuration just as illustrated in this article.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fghgv4gbr3qs7ca5dsdr3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fghgv4gbr3qs7ca5dsdr3.png" alt="AWS cloudshell button" width="800" height="161"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's get started&lt;/p&gt;

&lt;p&gt;Clone the repo to your local computer or the console environment if you're using the console command line.&lt;br&gt;
&lt;br&gt;
 &lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/willie191998/Kubernetes-cluster-ingress-nginx/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Set up roles and user 
&lt;/h2&gt;

&lt;p&gt;You need to set up a role for your node group, a role for your EKS cluster, and a role for your local command-line user access. If you are using Cloudshell, you need to ensure your user has the minimum permissions to carry out this activity.&lt;/p&gt;
&lt;h2&gt;
  
  
  Set up Local Command Line user
&lt;/h2&gt;

&lt;p&gt;I assume you already have a command line user for using AWS CLI. However, depending on the permissions you've granted this user, you might want to review them to ensure they have the necessary access to create, modify, and interact with the resources used in this tutorial. &lt;/p&gt;

&lt;p&gt;Personally, I couldn't figure out the exact permissions to add to my user, so I temporarily gave them administrative access and removed it once I was done. Giving administrative permissions to a command line user is risky, so be sure to limit these permissions whenever possible. &lt;/p&gt;
&lt;h2&gt;
  
  
  Set up EKS role 
&lt;/h2&gt;
&lt;p&gt;Navigate to your IAM dashboard from the search box.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Select &lt;strong&gt;Roles&lt;/strong&gt; from the IAM dashboard, and click &lt;strong&gt;Create Role&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose AWS Service as the trusted entity type, select Elastic Kubernetes Service (EKS) from the list of services, and click Next. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Attach permissions policies section, search for and select the appropriate policies for your EKS cluster (&lt;strong&gt;AmazonEKSClusterPolicy&lt;/strong&gt; and &lt;strong&gt;AmazonEKSServicePolicy&lt;/strong&gt;), then click Next. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Review and click &lt;strong&gt;Create role&lt;/strong&gt;. &lt;/p&gt;
&lt;h2&gt;
  
  
  Set up EC2 Nodegroup role 
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Navigate to your IAM dashboard from the search box &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Roles from the IAM dashboard, and click &lt;strong&gt;Create Role&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose AWS Service as the trusted entity type, select Elastic Compute Cloud (EC2) from the list of services, and click Next.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Attach permissions policies section, search for and select the appropriate policies for your node group (&lt;strong&gt;AmazonEKSWorkerNodePolicy&lt;/strong&gt;, &lt;strong&gt;AmazonEKS_CNI_Policy&lt;/strong&gt;, and &lt;strong&gt;AmazonEC2ContainerRegistryReadOnly&lt;/strong&gt;), then click &lt;strong&gt;Next&lt;/strong&gt;. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Review and click &lt;strong&gt;Create role&lt;/strong&gt;. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Set up infrastructure and platform
&lt;/h2&gt;

&lt;p&gt;After installing the AWS CLI, configure it by running aws configure. Enter your user access key ID and secret access key. You'll be prompted to set a default region; use the short representation for the region, such as &lt;strong&gt;eu-west-3&lt;/strong&gt; for Paris.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7e67ghj02wvg26lbh7jx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7e67ghj02wvg26lbh7jx.png" alt="Set up AWS CLI" width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create the EKS cluster from your command line; otherwise, you won't be able to deploy files to it using your command line user. Changing the user later is possible but stressful and tedious. Here’s a forum discussion of the issue: &lt;a href="https://repost.aws/knowledge-center/eks-api-server-unauthorized-error" rel="noopener noreferrer"&gt;https://repost.aws/knowledge-center/eks-api-server-unauthorized-error&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws eks create-cluster --name cluster-name --role-arn role-arn --resources-vpc-config "subnetIds=subnet1,subnet2,securityGroupIds=sg-1"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Replace cluster-name, role-arn, subnet1, subnet2, sg-1 with appropriate values.&lt;/p&gt;
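&lt;p&gt;With the placeholders filled in, the command looks something like this; every identifier below is invented, so use your own role ARN, subnets, and security group:&lt;/p&gt;

```shell
# All identifiers here are invented examples; substitute your own values.
aws eks create-cluster \
  --name my-cluster \
  --role-arn arn:aws:iam::123456789012:role/eks-cluster-role \
  --resources-vpc-config "subnetIds=subnet-0abc1234,subnet-0def5678,securityGroupIds=sg-0123abcd"
```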

&lt;p&gt;Confirm that your cluster has been provisioned and is active before you continue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frcdnyi1j282en8teyzp1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frcdnyi1j282en8teyzp1.png" alt="Confirm EKS cluster is up" width="800" height="217"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Point kubectl at your cluster: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws eks --region &amp;lt;your-region&amp;gt; update-kubeconfig --name &amp;lt;your-cluster-name&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To confirm kubectl's current config context, run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl config get-contexts&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The context should be the ARN of your EKS cluster.&lt;/p&gt;

&lt;p&gt;Set up a namespace file; you can name the namespace anything you want, and save it as a file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1 
kind: Namespace 
metadata:
  name: my-namespace 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change the name in the file appropriately, then apply the namespace:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f my-namespace.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;I didn't use a namespace because I was only working on one cluster, so all my instances and deployments were stored in the default namespace. However, I urge you to use namespaces to prevent future errors.&lt;/p&gt;

&lt;p&gt;Create your node group if it hasn't been created yet. To check if a node group already exists, open your EKS cluster, select Compute from the options at the bottom, then select Nodegroup. If there's none, click Create.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Give your node group a name, select the role you created for the EC2 node group, and choose the instance type (t2.micro if you want to benefit from the free tier). Set the maximum, desired, and minimum number of nodes; mine were 4, 14, and 15 respectively. Now you can create your node group and confirm that it’s active.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3krzkbe57bi9luzf7nj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3krzkbe57bi9luzf7nj.png" alt="Node Group creation confirmation" width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy your cluster configuration files. These files contain definitions for the deployments and services in the cluster. As long as you haven't added any new files to the folder, you can run the command below to create all the deployments and services in one run. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f ./k8s&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Otherwise, you can create each service and deployment individually, like this: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f client-cluster-ip-service.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f client-deployment.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Do the same for the Redis, Postgres, worker, and server files.&lt;/p&gt;

&lt;p&gt;Confirm your deployments and services are running with the command below.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get all -n &amp;lt;namespace&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You will see all the deployed pods, deployments, services, etc. Note that you should replace the namespace in the command above with your actual namespace. &lt;/p&gt;

&lt;p&gt;Confirm that the pods are ready, not just created, as shown in the image below; otherwise, rerun the command to recreate the deployment or service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc04k6xeac3di5b4ye9ul.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc04k6xeac3di5b4ye9ul.png" alt="Confirm that the pods are running" width="800" height="531"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now your cluster is running, but you cannot access the deployments via an IP address or DNS name because they were not configured to receive traffic from external sources directly.&lt;/p&gt;

&lt;p&gt;To receive traffic you need to enable Ingress-NGINX, an open source project developed by the Kubernetes community. &lt;/p&gt;

&lt;p&gt;The Ingress-NGINX controller will create an external load balancer outside your cluster and an NGINX server, which we will soon configure to send traffic to our deployments. &lt;/p&gt;

&lt;p&gt;To configure the Ingress-NGINX controller, download the manifest from the terminal:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml &lt;/code&gt;&lt;/p&gt;

&lt;p&gt;I use Windows, so this command did not work from the Command Prompt; I used a WSL terminal to download the file into the k8s folder. &lt;/p&gt;

&lt;p&gt;The way you configure Ingress-NGINX depends on your local OS, cloud provider (AWS, Azure, GCP), whether you are testing locally, etc. See the main documentation page here: &lt;a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="noopener noreferrer"&gt;https://kubernetes.github.io/ingress-nginx/deploy/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After downloading the file, we need to configure it as stated in the documentation for AWS. Open the file downloaded, deploy.yaml, and get ready to edit.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Locate proxy-real-ip-cidr (around line 329 for me; it should be near there for you too) and change its value to the CIDR block of your VPC.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw47amgbw6omvxt3vdo3b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw47amgbw6omvxt3vdo3b.png" alt="Change IP-CIDR of Ingress-nginx file" width="800" height="87"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Locate service.beta.kubernetes.io/aws-load-balancer-ssl-cert at around line 348. Replace the dummy value there with the ARN of an AWS-verified domain certificate. The certificate I used is associated with another server/website entirely, but it worked fine provided it was verified.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuuaq4l2nqluelm7pov2f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuuaq4l2nqluelm7pov2f.png" alt="Change SSl certificate of Ingress-nginx file" width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Save your changes and create the ingress-nginx controller:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f deploy.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You should see that different pods and services are created successfully. &lt;/p&gt;

&lt;p&gt;Confirm Ingress-NGINX is running &lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get all -n ingress-nginx&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You should see details of a newly deployed load balancer, which you can confirm in your EC2 dashboard: select Load Balancers at the bottom left, and you should see one created by Ingress-NGINX.&lt;/p&gt;

&lt;p&gt;Wait for your load balancer to become active, then copy its DNS address and enter it in a browser tab; if everything has worked correctly, you will see a default NGINX 404 page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff2uwx2jumly74bpgvmku.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff2uwx2jumly74bpgvmku.png" alt="EKS deployed load balancer" width="800" height="146"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open the LB link.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgd2mxl8mit7u33zggqgn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgd2mxl8mit7u33zggqgn.png" alt="Default NGINX page" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The default NGINX page you see is a placeholder served by ingress-nginx as the endpoint while you have not yet configured the NGINX server. &lt;/p&gt;

&lt;p&gt;To configure the ingress-nginx controller, you need an ingress file, which is given below. It enables NGINX routing for our project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    # Ensure LoadBalancer creation
    nginx.ingress.kubernetes.io/service-type: "LoadBalancer"
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /?(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: client-cluster-ip-service
                port:
                  number: 3000
          - path: /api/?(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: server-cluster-ip-service
                port:
                  number: 5000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy the content above into a new file, name it ingress-service.yaml, and apply it to the cluster:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f ingress-service.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now refresh the load balancer DNS link; you should have access to our web app running on the EKS cluster, served via Ingress-NGINX.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqvera9iurwx1hf25o7ks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqvera9iurwx1hf25o7ks.png" alt="EKS hosted multi deployment react app" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can test things around the website, but remember that you might be billed for the EKS cluster and load balancer. &lt;/p&gt;

&lt;p&gt;Delete your resources in this order:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node group&lt;/li&gt;
&lt;li&gt;Load balancer&lt;/li&gt;
&lt;li&gt;EKS cluster&lt;/li&gt;
&lt;li&gt;IAM roles and maybe the user&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You don't get charged for IAM roles and user access, so you might not need to delete them. &lt;/p&gt;
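&lt;p&gt;From the CLI, the cleanup can be sketched like this; the cluster, node group, and region names are placeholders:&lt;/p&gt;

```shell
# Placeholders throughout; substitute your own cluster, node group, and region.
aws eks delete-nodegroup --cluster-name my-cluster --nodegroup-name my-nodes --region eu-west-3
# Removing the controller manifest also deletes the load balancer it created
kubectl delete -f deploy.yaml
aws eks delete-cluster --name my-cluster --region eu-west-3
```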

&lt;p&gt;Let us hear from you if you did this.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You have deployed a containerized app to an EKS cluster and configured it to use EC2 instances in a node group. You also set up Ingress-NGINX to forward external traffic to the cluster through a load balancer. This is quite a standard setup for a Kubernetes cluster today.&lt;/p&gt;

&lt;p&gt;Share your comment below.&lt;br&gt;
See the article from my blog &lt;a href="https://www.digitalspeed.com.ng/all-articles/how-to-deploy-a-containerized-app-on-aws-eks-clusters-with-ingress-nginx-enabled/" rel="noopener noreferrer"&gt;DigitalSpeed&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>kubernetes</category>
      <category>eks</category>
      <category>nginx</category>
    </item>
    <item>
      <title>How to create a publicly available bucket on AWS using Terraform</title>
      <dc:creator>Wilson Anorue</dc:creator>
      <pubDate>Tue, 14 May 2024 09:46:20 +0000</pubDate>
      <link>https://dev.to/wiley19/how-to-create-a-publicly-available-bucket-on-aws-using-terraform-38g5</link>
      <guid>https://dev.to/wiley19/how-to-create-a-publicly-available-bucket-on-aws-using-terraform-38g5</guid>
      <description>&lt;p&gt;It's really common to create publicly accessible S3 bucket from the the management console, this time we want to create an S3 bucket, upload an object to it, and make it publicly accessible using Terraform. &lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You have an AWS account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You have the AWS CLI installed on your computer. To check, type &lt;strong&gt;aws&lt;/strong&gt; on your command line; you should see a reply explaining how to structure a command. Otherwise, follow this guide to install or update to the latest version of the AWS CLI: &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;AWS Command Line Interface (amazon.com)&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is Terraform
&lt;/h2&gt;

&lt;p&gt;Terraform is a tool that lets you define infrastructure using code, offering a structured way to represent your desired states while benefiting from features like versioning, collaboration, and automation.&lt;/p&gt;

&lt;p&gt;This code-centric approach makes infrastructure changes more reliable and easier to track.&lt;/p&gt;

&lt;p&gt;Moreover, Terraform integrates seamlessly with AWS, providing comprehensive support for provisioning and managing a wide array of AWS resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why use Terraform
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Infrastructure as Code (IaC)&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Terraform allows you to define infrastructure configurations using code, providing a consistent, automated approach to managing resources. This makes it easier to apply best practices, track changes, and collaborate with others, reducing the risk of manual errors. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Easy Deployment of Infrastructure&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Terraform streamlines infrastructure deployment through a simple command-line interface. By defining your desired state in code, you can deploy new environments or make changes to existing ones with a few commands, making it efficient and straightforward. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Recreate Infrastructure Consistently&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Terraform’s code-based approach enables you to create identical infrastructure multiple times. This is useful for spinning up identical test, staging, or production environments, ensuring consistency across deployments. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Error-Free Deployment Once Configured&lt;/strong&gt;&lt;br&gt;
Once you’ve properly defined your Terraform configuration, deploying infrastructure becomes more predictable and less prone to errors. Terraform’s declarative nature ensures that the desired state is achieved without unexpected behavior, making deployments reliable. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resource Dependencies and Orchestration&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Terraform manages dependencies between infrastructure resources, ensuring that they are created, modified, or destroyed in the correct order. This built-in orchestration reduces the risk of misconfigurations and simplifies complex infrastructure setups, allowing you to focus on the overall design rather than manual sequencing. &lt;/p&gt;

&lt;h2&gt;
  
  
  Create an AWS IAM user
&lt;/h2&gt;

&lt;p&gt;You need to create an AWS user that you can use to create or deploy AWS resources through the command line. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the AWS Console, go to the “Services” menu and select “IAM” (Identity and Access Management). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on &lt;strong&gt;Users&lt;/strong&gt; in the IAM dashboard, then click the &lt;strong&gt;Add users&lt;/strong&gt; button to create a new IAM user.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provide a username for the new user. &lt;br&gt;
Select &lt;strong&gt;Access key – Programmatic access&lt;/strong&gt; to create access keys for CLI access. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Next: Permissions&lt;/strong&gt;.&lt;br&gt;
Under &lt;strong&gt;Set permissions&lt;/strong&gt;, choose &lt;strong&gt;Attach policies directly&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the search box, type &lt;strong&gt;AmazonS3FullAccess&lt;/strong&gt; and select the &lt;strong&gt;AmazonS3FullAccess&lt;/strong&gt; policy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Next: Review&lt;/strong&gt;. &lt;br&gt;
After creating the user, you’ll be presented with the user’s access key ID and secret access key. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Download the CSV file or copy the keys and store them securely. These keys are essential for CLI access and cannot be retrieved after this step. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Close&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open a terminal and run &lt;strong&gt;aws configure&lt;/strong&gt;. If the AWS CLI is installed on your computer, you will be prompted for your credentials.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Enter the access key ID, secret access key, desired AWS region, and output format (e.g., JSON).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnhgw3gm62wjnkeglccil.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnhgw3gm62wjnkeglccil.png" alt="Using your IAM user on AWS CLI" width="768" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Test the setup by running a simple AWS CLI command, like &lt;strong&gt;aws s3 ls&lt;/strong&gt;, to verify that the user has access to S3. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Ensure you have Terraform installed on your computer; type &lt;strong&gt;terraform version&lt;/strong&gt; to confirm. If you don’t have Terraform installed, follow this link:&lt;br&gt;
&lt;a href="https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli" rel="noopener noreferrer"&gt;https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli&lt;/a&gt; &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create your Terraform configuration file. Here’s mine; I’ll explain its contents below.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
    region = "eu-west-2"
}

resource "aws_s3_bucket" "my_public_bucket" {
    bucket = "public-documents"
    tags = {
        Name = "public-documents"
    }
}

resource "aws_s3_object" "test_object" {
    bucket = aws_s3_bucket.my_public_bucket.id
    key = "test.txt"
    content = "Just a test file"
}

resource "aws_s3_bucket_ownership_controls" "public_access" {
  bucket = aws_s3_bucket.my_public_bucket.id
  rule {
    object_ownership = "BucketOwnerPreferred"
  }
}

resource "aws_s3_bucket_public_access_block" "public_access" {
    bucket = aws_s3_bucket.my_public_bucket.id

    block_public_acls       = false
    block_public_policy     = false
    ignore_public_acls      = false
    restrict_public_buckets = false
}

resource "aws_s3_bucket_acl" "public-read" {
    depends_on = [
        aws_s3_bucket_ownership_controls.public_access,
        aws_s3_bucket_public_access_block.public_access,
    ]

    bucket = aws_s3_bucket.my_public_bucket.id
    acl    = "public-read"
}

resource "aws_s3_bucket_policy" "bucket_public_policy" {
    bucket = aws_s3_bucket.my_public_bucket.id
    policy = data.aws_iam_policy_document.public_bucket_policy_statement.json
}

data "aws_iam_policy_document" "public_bucket_policy_statement" {
  statement {
    effect = "Allow"

    principals {
      type        = "AWS"
      identifiers = ["*"]  # Allowing public access
    }

    actions = [
      "s3:GetObject",
    ]

    resources = [
      "${aws_s3_bucket.my_public_bucket.arn}/*",  # Apply the policy to all objects in the bucket
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The provider block configures a plugin that allows Terraform to interact with different service APIs, like AWS, Azure, or Google Cloud Platform. When you define a provider, you’re setting the context for Terraform to work with a specific cloud service. &lt;/p&gt;

&lt;p&gt;Setting &lt;strong&gt;aws&lt;/strong&gt; as the provider specifies that the configuration interacts with Amazon Web Services (AWS); here, we simply set the region to eu-west-2. &lt;/p&gt;

&lt;p&gt;The resource block in Terraform defines the actual infrastructure elements that Terraform creates, manages, modifies, or deletes. This block contains the parameters and settings associated with the resource type, as specified in the Terraform documentation. A resource may have mandatory attributes that must be set, along with optional attributes that can be configured as needed. &lt;/p&gt;

&lt;p&gt;In this particular configuration file, there are six resources and one data source, several of which depend on one another to achieve the desired outcome. These dependencies are crucial for ensuring proper orchestration and sequencing during deployment.&lt;/p&gt;

&lt;p&gt;Although the resource has a name in the Terraform code to keep track of it, this name does not necessarily correspond to the name of the infrastructure component that Terraform creates on the cloud (AWS in this case).&lt;/p&gt;

&lt;p&gt;Instead, it’s a unique identifier within the Terraform configuration to manage dependencies and relationships between resources. &lt;/p&gt;

&lt;p&gt;The data block allows you to retrieve or reference existing infrastructure or external information within your configuration.&lt;/p&gt;

&lt;p&gt;This is useful when you need to work with resources that are not created by Terraform but are still part of your deployment process. &lt;/p&gt;
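&lt;p&gt;For example, a data block can look up a resource that Terraform did not create. This is a hypothetical sketch; the bucket name is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Read an existing bucket that was created outside Terraform
data "aws_s3_bucket" "existing" {
  bucket = "some-existing-bucket"
}

# Its attributes can then be referenced like any resource's
output "existing_bucket_arn" {
  value = data.aws_s3_bucket.existing.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;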

&lt;p&gt;Now that you have your file, follow the commands below to create and configure your infrastructure.&lt;/p&gt;

&lt;p&gt;Execute &lt;strong&gt;terraform init&lt;/strong&gt;&lt;br&gt;
This command initializes your Terraform working directory, downloading provider plugins and preparing your workspace.&lt;/p&gt;

&lt;p&gt;Execute &lt;strong&gt;terraform plan&lt;/strong&gt;&lt;br&gt;
This command shows the proposed changes to your infrastructure and checks that your configuration file has no errors relative to the provider’s specifications. It gives you an overview of what Terraform will do, allowing you to confirm the expected actions before making any changes.&lt;/p&gt;

&lt;p&gt;Execute &lt;strong&gt;terraform apply&lt;/strong&gt;&lt;br&gt;
This command carries out the plan, creating or updating the infrastructure described in your Terraform configuration.&lt;br&gt;
You should see the response Apply complete as shown in the screenshot below.&lt;/p&gt;
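&lt;p&gt;Put together, the full run looks like this. The &lt;strong&gt;-out&lt;/strong&gt; flag is optional; it saves the reviewed plan so that apply executes exactly those actions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run from the directory containing your .tf file
terraform init              # download the AWS provider plugin
terraform plan -out=tfplan  # preview the changes without applying them
terraform apply tfplan      # create the bucket, object, ACL, and policy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;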

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5gxm2uo0iaoc7j7acuk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5gxm2uo0iaoc7j7acuk.png" alt="Terraform apply successful response" width="768" height="182"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Locate your S3 page in your AWS account, and confirm that the bucket has been created and an object was uploaded to it. &lt;/p&gt;

&lt;p&gt;Locate the public link of the object and confirm that you can download the object from any computer using the public URL.&lt;/p&gt;
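&lt;p&gt;As a quick sketch (assuming the bucket name and eu-west-2 region from the config above), S3 serves public objects at a predictable virtual-hosted-style URL, so you can fetch the file with curl from any machine:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Public S3 objects follow the pattern:
#   https://BUCKET.s3.REGION.amazonaws.com/KEY
curl https://public-documents.s3.eu-west-2.amazonaws.com/test.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If the deployment succeeded, this should print the object’s content, Just a test file.&lt;/p&gt;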

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futhrb66s2ga8qzv9eezq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futhrb66s2ga8qzv9eezq.png" alt="Access S3 dashboard to see your bucket as deployed on Terraform" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To remove this particular infrastructure you created using Terraform, ensure you are in the directory containing the config file and run the command &lt;strong&gt;terraform destroy&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Below is the relevant Terraform documentation I used when writing this config file; it contains more details for setting up AWS S3 resources with Terraform. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_policy" rel="noopener noreferrer"&gt;aws_s3_bucket_policy | Resources | hashicorp/aws | Terraform Registry&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_acl" rel="noopener noreferrer"&gt;aws_s3_bucket_acl | Resources | hashicorp/aws | Terraform Registry&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket" rel="noopener noreferrer"&gt;Terraform Registry S3&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_object" rel="noopener noreferrer"&gt;https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_object&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We have successfully set up an S3 bucket and made it public using Terraform. We did not need the AWS management console for anything except confirming that the resources were set up as required. &lt;/p&gt;

&lt;p&gt;Thanks for reading through to the end. Let me hear your comments.&lt;/p&gt;

&lt;p&gt;Read the story from my blog&lt;br&gt;
&lt;a href="https://www.digitalspeed.com.ng/all-articles/how-to-create-a-publicly-accessible-bucket-on-aws-using-terraform/" rel="noopener noreferrer"&gt;DigitalSpeed&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>serverless</category>
      <category>storage</category>
    </item>
  </channel>
</rss>
