<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hamza Nasir</title>
    <description>The latest articles on DEV Community by Hamza Nasir (@hamza_nasir_06a03aac148a4).</description>
    <link>https://dev.to/hamza_nasir_06a03aac148a4</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1834477%2F27107707-7062-4cb8-bafc-547e5a689613.jpg</url>
      <title>DEV Community: Hamza Nasir</title>
      <link>https://dev.to/hamza_nasir_06a03aac148a4</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hamza_nasir_06a03aac148a4"/>
    <language>en</language>
    <item>
      <title>Accessing RDS from phpadmin Container</title>
      <dc:creator>Hamza Nasir</dc:creator>
      <pubDate>Fri, 30 Aug 2024 15:03:49 +0000</pubDate>
      <link>https://dev.to/hamza_nasir_06a03aac148a4/accessing-rds-from-phpadmin-container-36fh</link>
      <guid>https://dev.to/hamza_nasir_06a03aac148a4/accessing-rds-from-phpadmin-container-36fh</guid>
      <description>&lt;p&gt;&lt;strong&gt;I built an AWS lab to access RDS from a phpMyAdmin container.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's a breakdown of my recent AWS project:&lt;/p&gt;

&lt;p&gt;𝗩𝗣𝗖 𝗦𝗲𝘁𝘂𝗽: I configured a VPC with two Availability Zones to enhance fault tolerance. Within each zone, I created both public and private subnets, even though initially only one zone was required. This decision was made with future scalability in mind.&lt;br&gt;
𝗥𝗗𝗦 𝗗𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁: A MySQL RDS instance was created and placed in the private subnet of Zone 1 to isolate the database from public internet access.&lt;br&gt;
𝗘𝗖𝗦 𝗦𝗲𝗿𝘃𝗶𝗰𝗲 𝗖𝗿𝗲𝗮𝘁𝗶𝗼𝗻: Given the free tier benefits, I opted for the EC2 launch type for my ECS service and attached it to the public subnet of the same AZ where the database is placed.&lt;br&gt;
𝗽𝗵𝗽𝗠𝘆𝗔𝗱𝗺𝗶𝗻 𝗖𝗼𝗻𝘁𝗮𝗶𝗻𝗲𝗿 𝗧𝗮𝘀𝗸: I defined a task for a phpMyAdmin container, specifying parameters such as CPU, memory, port mappings, and the Docker image. This task was then run on the ECS machines.&lt;br&gt;
𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗚𝗿𝗼𝘂𝗽 𝗖𝗼𝗻𝗳𝗶𝗴𝘂𝗿𝗮𝘁𝗶𝗼𝗻: I adjusted the VPC security group to permit traffic from the ECS machines so the container could reach the database. The ECS machine's security group was configured to accept incoming connections from my local machine.&lt;/p&gt;
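The security-group wiring described above can be sketched with the AWS CLI. The group IDs and the workstation IP below are hypothetical placeholders, not values from the original lab:

```shell
# Hypothetical security-group IDs (placeholders).
RDS_SG=sg-0aaa111bbb222ccc3   # security group attached to the RDS instance
ECS_SG=sg-0ddd444eee555fff6   # security group attached to the ECS container instance

# Let the ECS machines reach MySQL (port 3306) on the RDS instance.
aws ec2 authorize-security-group-ingress \
  --group-id "$RDS_SG" --protocol tcp --port 3306 --source-group "$ECS_SG"

# Let my local machine reach the phpMyAdmin container's published port.
aws ec2 authorize-security-group-ingress \
  --group-id "$ECS_SG" --protocol tcp --port 8080 --cidr 203.0.113.10/32
```

Referencing the ECS security group as the source (rather than a CIDR) keeps the database rule valid even if the container instances change IP addresses.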

&lt;p&gt;𝗢𝘂𝘁𝗰𝗼𝗺𝗲: With the above steps in place, I was able to access my RDS instance through the phpMyAdmin dashboard running in the container.&lt;/p&gt;

</description>
      <category>database</category>
      <category>phpadmin</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Implementing an AWS Client VPN Solution</title>
      <dc:creator>Hamza Nasir</dc:creator>
      <pubDate>Fri, 30 Aug 2024 15:00:46 +0000</pubDate>
      <link>https://dev.to/hamza_nasir_06a03aac148a4/implementing-an-aws-client-vpn-solution-566l</link>
      <guid>https://dev.to/hamza_nasir_06a03aac148a4/implementing-an-aws-client-vpn-solution-566l</guid>
      <description>&lt;p&gt;I implemented a secure AWS Client VPN solution aimed at providing seamless, secure remote access to resources within a Virtual Private Cloud (VPC). This project involved several complex steps, from setting up authentication mechanisms to configuring VPN endpoints and managing certificates. Here’s a detailed breakdown of the approach I took, the tools and services I utilized, and the knowledge I gained throughout the process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecting a Secure Network Environment&lt;/strong&gt;&lt;br&gt;
The primary objective was to create a secure network environment that allowed authorized users to connect to the internal resources of the VPC securely. To achieve this, I began by defining the architecture of the VPN solution and identifying the key AWS services required for the implementation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Establishing AWS Directory Service for User Authentication&lt;/strong&gt;&lt;br&gt;
One of the critical components of the VPN setup was establishing a reliable user authentication mechanism. I opted for AWS Directory Service, which offers a managed, scalable directory solution that integrates seamlessly with AWS Client VPN. I created a new directory and configured it to manage user identities, leveraging Active Directory's existing security protocols.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Managing Certificate Authorities and Configuring AWS Certificate Manager&lt;/strong&gt;&lt;br&gt;
Secure communication over a VPN requires proper management of certificates to authenticate and encrypt connections. To manage this aspect, I used AWS Certificate Manager (ACM) to create and manage public and private certificates needed for the VPN endpoint and clients.&lt;/p&gt;

&lt;p&gt;Additionally, I used the easy-rsa command-line tool to create a private certificate authority (CA). This step involved generating server and client certificates and keys, which were later imported into AWS Certificate Manager. Managing certificates in this way ensured that all data transmitted through the VPN was encrypted, protecting it from unauthorized access or interception.&lt;/p&gt;
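The certificate workflow above looks roughly like this with easy-rsa v3 and the AWS CLI; the client name is illustrative, not taken from the original project:

```shell
# Build a private CA plus server and client certificates with easy-rsa v3
# (run from inside an easy-rsa checkout; names below are illustrative).
./easyrsa init-pki
./easyrsa build-ca nopass
./easyrsa build-server-full server nopass
./easyrsa build-client-full client1.vpn.example.com nopass

# Import the server certificate into ACM; note the returned certificate ARN,
# which is needed later when creating the Client VPN endpoint.
aws acm import-certificate \
  --certificate fileb://pki/issued/server.crt \
  --private-key fileb://pki/private/server.key \
  --certificate-chain fileb://pki/ca.crt
```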

&lt;p&gt;&lt;strong&gt;Configuring AWS VPN Endpoints&lt;/strong&gt;&lt;br&gt;
The next critical step was configuring the VPN endpoints. I created an AWS Client VPN endpoint within the VPC, which served as the gateway for remote clients to connect securely to the internal network. This configuration involved defining the CIDR range for the VPN clients, associating the endpoint with the appropriate subnets, and attaching the security groups to control traffic flow.&lt;/p&gt;

&lt;p&gt;Once the VPN endpoint was configured, I ensured that routing was correctly set up to allow traffic from VPN clients to reach the necessary VPC resources. I also configured authorization rules to define which clients could access specific network resources, based on user identity and group membership in the AWS Directory Service.&lt;/p&gt;
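A sketch of the endpoint configuration steps just described, using the AWS CLI; the certificate ARN, directory ID, endpoint ID, subnet ID, and CIDR ranges are all placeholders:

```shell
# Create the Client VPN endpoint with directory-service authentication.
# SERVER_CERT_ARN is the ACM ARN of the imported server certificate.
aws ec2 create-client-vpn-endpoint \
  --client-cidr-block 10.100.0.0/22 \
  --server-certificate-arn "$SERVER_CERT_ARN" \
  --authentication-options Type=directory-service-authentication,ActiveDirectory={DirectoryId=d-1234567890} \
  --connection-log-options Enabled=false

# Associate a VPC subnet so VPN clients can reach the network.
aws ec2 associate-client-vpn-target-network \
  --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0 \
  --subnet-id subnet-0123456789abcdef0

# Authorization rule: allow authenticated clients into the VPC CIDR.
aws ec2 authorize-client-vpn-ingress \
  --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0 \
  --target-network-cidr 10.0.0.0/16 \
  --authorize-all-groups
```

In place of `--authorize-all-groups`, an `--access-group-id` can restrict a rule to a specific directory group, which matches the group-membership-based access described above.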

&lt;p&gt;&lt;strong&gt;Deploying VPN Clients&lt;/strong&gt;&lt;br&gt;
With the VPN endpoint in place, the next step was to deploy VPN clients. I created and distributed configuration files to authorized users, allowing them to connect to the VPN using compatible client applications. These configuration files contained all necessary details, such as the endpoint address, authentication method, and client certificates.&lt;/p&gt;

&lt;p&gt;To streamline the deployment process, I provided step-by-step instructions for users on how to install and configure the VPN client software, ensuring that they could securely connect to the VPC without any technical difficulties.&lt;/p&gt;
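The configuration files mentioned above can be generated directly from the endpoint; the endpoint ID here is a placeholder:

```shell
# Export the client configuration file for distribution to users.
aws ec2 export-client-vpn-client-configuration \
  --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0 \
  --output text > client-config.ovpn
# Users import client-config.ovpn into the AWS-provided VPN client or
# another compatible OpenVPN-based client; with mutual TLS, the client
# certificate and key are appended to this file first.
```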

</description>
      <category>aws</category>
      <category>vpn</category>
      <category>cloud</category>
      <category>security</category>
    </item>
    <item>
      <title>Building a Pipeline-as-Code Infrastructure in Jenkins: A Learning Journey</title>
      <dc:creator>Hamza Nasir</dc:creator>
      <pubDate>Fri, 30 Aug 2024 14:42:56 +0000</pubDate>
      <link>https://dev.to/hamza_nasir_06a03aac148a4/building-a-pipeline-as-code-infrastructure-in-jenkins-a-learning-journey-3l0b</link>
      <guid>https://dev.to/hamza_nasir_06a03aac148a4/building-a-pipeline-as-code-infrastructure-in-jenkins-a-learning-journey-3l0b</guid>
      <description>&lt;p&gt;Recently, I completed a comprehensive project focused on building a pipeline-as-code infrastructure in Jenkins. This was an exciting endeavour that involved a wide range of DevOps tools and practices to create a streamlined, automated development and deployment process. Here's a closer look at the key steps I took and the technologies I utilized to bring this project to life.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploying Infrastructure with Terraform on AWS&lt;/strong&gt;&lt;br&gt;
The first step was to set up the underlying infrastructure on AWS using Terraform, an Infrastructure as Code (IaC) tool. Terraform allowed me to define and provision the infrastructure in a consistent, automated way, ensuring that all resources were deployed and managed efficiently. By writing reusable and version-controlled Terraform configurations, I automated the creation of resources such as EC2 instances, VPCs, subnets, security groups, and S3 buckets.&lt;/p&gt;

&lt;p&gt;This approach not only saved time but also reduced human errors, providing a solid and repeatable infrastructure foundation for the project.&lt;/p&gt;
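A minimal sketch of what such a Terraform configuration might look like; the region, CIDRs, AMI ID, and resource names are illustrative, not from the original project:

```hcl
provider "aws" {
  region = "us-east-1" # assumed region
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_security_group" "jenkins" {
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 8080 # Jenkins web UI
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # tighten to known IPs in real use
  }
}

resource "aws_instance" "jenkins" {
  ami                    = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.public.id
  vpc_security_group_ids = [aws_security_group.jenkins.id]
}
```

Because the configuration is plain text, it can be version-controlled alongside the application code, which is what makes the setup repeatable.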

&lt;p&gt;&lt;strong&gt;Configuring Jenkins for CI/CD&lt;/strong&gt;&lt;br&gt;
With the infrastructure in place, I configured Jenkins to serve as the core CI/CD tool. Jenkins was set up to automate the build, test, and deployment processes. To achieve this, I integrated Jenkins with several essential tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maven was used as the build automation tool, handling dependencies, compiling code, and packaging the application.&lt;/li&gt;
&lt;li&gt;Git was integrated to enable seamless version control and source code management.&lt;/li&gt;
&lt;li&gt;OpenJDK was installed to ensure compatibility with Java-based applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Setting Up SonarQube for Code Quality Analysis&lt;/strong&gt;&lt;br&gt;
To maintain and improve code quality, I integrated SonarQube into the Jenkins pipeline. SonarQube is a powerful tool that performs static code analysis, identifying code smells, bugs, and security vulnerabilities early in the development process. By configuring SonarQube as part of the CI/CD pipeline, I ensured that every build was automatically analyzed, providing actionable feedback to the development team to continuously improve code quality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating a Robust Pipeline-as-Code Workflow&lt;/strong&gt;&lt;br&gt;
One of the highlights of this project was creating a pipeline-as-code workflow in Jenkins. This involved writing Jenkinsfiles, which are declarative or scripted files that define the entire CI/CD process as code. By using Jenkinsfiles, I was able to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Version control the CI/CD pipeline, allowing for better collaboration and transparency.&lt;/li&gt;
&lt;li&gt;Make the pipeline more flexible and adaptable to changes in requirements or infrastructure.&lt;/li&gt;
&lt;li&gt;Implement complex workflows with stages for building, testing, and deploying the application.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The pipeline-as-code approach made it easier to manage and maintain the CI/CD process, reducing the risk of configuration drift and making it simpler to replicate the pipeline in different environments.&lt;/p&gt;
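A declarative Jenkinsfile covering the stages described above might look like this. The tool names, SonarQube server ID, and repository URL are assumptions for the sketch, matching however they are configured in a given Jenkins instance:

```groovy
pipeline {
    agent any
    tools {
        maven 'maven-3'    // Maven installation name configured in Jenkins (assumed)
        jdk   'openjdk-11' // JDK installation name (assumed)
    }
    stages {
        stage('Checkout') {
            steps {
                // placeholder repository URL
                git branch: 'main', url: 'https://github.com/example/app.git'
            }
        }
        stage('Build') {
            steps {
                sh 'mvn -B clean package' // compile, test, and package via Maven
            }
        }
        stage('SonarQube Analysis') {
            steps {
                // 'sonar-server' is the server ID configured in Jenkins (assumed)
                withSonarQubeEnv('sonar-server') {
                    sh 'mvn sonar:sonar'
                }
            }
        }
        stage('Deploy') {
            steps {
                sh 'echo "deploy step depends on the target environment"'
            }
        }
    }
}
```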

&lt;p&gt;&lt;strong&gt;Reflecting on the Journey&lt;/strong&gt;&lt;br&gt;
This project was a challenging but immensely rewarding experience. It required learning new tools, troubleshooting unexpected issues, and continuously iterating on the setup to achieve the desired outcome. Along the way, I gained deeper insights into the best practices for building scalable, maintainable, and efficient CI/CD pipelines.&lt;/p&gt;

&lt;p&gt;If you have any questions about this project or would like to learn more about any specific aspect of it, feel free to leave a comment. I'm always happy to share what I’ve learned and help others on their DevOps journey!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS CI/CD Pipeline for Node.js Application</title>
      <dc:creator>Hamza Nasir</dc:creator>
      <pubDate>Fri, 30 Aug 2024 14:32:15 +0000</pubDate>
      <link>https://dev.to/hamza_nasir_06a03aac148a4/aws-cicd-pipeline-for-nodejs-application-og8</link>
      <guid>https://dev.to/hamza_nasir_06a03aac148a4/aws-cicd-pipeline-for-nodejs-application-og8</guid>
      <description>&lt;p&gt;Successfully implemented an AWS CI/CD pipeline! 🚀&lt;br&gt;
I have gained hands-on experience with Elastic Beanstalk, AWS CodeBuild, and AWS CodePipeline. Here's a project overview:&lt;/p&gt;

&lt;p&gt;𝗚𝗶𝘁 𝗥𝗲𝗽𝗼𝘀𝗶𝘁𝗼𝗿𝘆 𝗦𝗲𝘁𝘂𝗽:&lt;br&gt;
- Forked a Node.js app repo on GitHub&lt;br&gt;
- Cloned the repo locally using Git Bash&lt;br&gt;
𝗗𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁 𝗮𝗻𝗱 𝗧𝗲𝘀𝘁𝗶𝗻𝗴:&lt;br&gt;
- Made changes to app.js&lt;br&gt;
- Committed changes using Git (add, commit, push)&lt;br&gt;
𝗘𝗹𝗮𝘀𝘁𝗶𝗰 𝗕𝗲𝗮𝗻𝘀𝘁𝗮𝗹𝗸 𝗖𝗼𝗻𝗳𝗶𝗴𝘂𝗿𝗮𝘁𝗶𝗼𝗻:&lt;br&gt;
- Created the application and environment&lt;br&gt;
- Selected the Node.js platform&lt;br&gt;
- Configured network settings (VPC, subnet)&lt;br&gt;
𝗔𝗪𝗦 𝗖𝗼𝗱𝗲𝗕𝘂𝗶𝗹𝗱 𝗦𝗲𝘁𝘂𝗽:&lt;br&gt;
- Created a build project&lt;br&gt;
- Connected the GitHub repo via OAuth&lt;br&gt;
- Used Amazon Linux as the operating system&lt;br&gt;
- Created a build spec file&lt;br&gt;
𝗔𝗪𝗦 𝗖𝗼𝗱𝗲𝗣𝗶𝗽𝗲𝗹𝗶𝗻𝗲 𝗦𝗲𝘁𝘂𝗽:&lt;br&gt;
- Created a pipeline with Source, Build, and Deploy stages&lt;br&gt;
- Integrated GitHub, CodeBuild, and Elastic Beanstalk&lt;br&gt;
- Added a Review stage for enhanced quality control&lt;/p&gt;
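The build spec file mentioned above generally takes this shape for a Node.js app. This is a minimal sketch; the original file is not shown in the post, and the Node.js version and npm scripts are assumptions:

```yaml
# buildspec.yml (CodeBuild buildspec version 0.2)
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 18            # assumed runtime version
  pre_build:
    commands:
      - npm ci              # install dependencies from package-lock.json
  build:
    commands:
      - npm test            # run the app's test suite, if defined
      - npm run build --if-present

artifacts:
  files:
    - '**/*'                # hand the whole workspace to the Deploy stage
```

CodeBuild reads this file from the repository root, so the Source stage's output is enough for the Build stage to run unattended.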

&lt;p&gt;Looking forward to applying these skills to future projects and automating deployments!&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>github</category>
      <category>git</category>
      <category>aws</category>
    </item>
    <item>
      <title>Kubernetes Volumes</title>
      <dc:creator>Hamza Nasir</dc:creator>
      <pubDate>Wed, 28 Aug 2024 14:50:14 +0000</pubDate>
      <link>https://dev.to/hamza_nasir_06a03aac148a4/kubernetes-volumes-2dp7</link>
      <guid>https://dev.to/hamza_nasir_06a03aac148a4/kubernetes-volumes-2dp7</guid>
      <description>&lt;p&gt;While studying Kubernetes, I found the section on Volumes a bit confusing. As it is crucial, I delved into it and simplified it as much as possible. I want to share the following concepts and hope it will be helpful for others learning about Kubernetes as well.&lt;/p&gt;

&lt;p&gt;𝗪𝗵𝗮𝘁 𝗶𝘀 𝗩𝗼𝗹𝘂𝗺𝗲 𝗶𝗻 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀?&lt;br&gt;
At its core, a volume is a directory, possibly with some data in it, which is accessible to the containers in a pod. Multiple types of volumes can be used, e.g. AWS EBS or Azure Disk.&lt;/p&gt;

&lt;p&gt;𝗪𝗵𝗮𝘁 𝗶𝘀 𝗮 𝗦𝘁𝗼𝗿𝗮𝗴𝗲𝗖𝗹𝗮𝘀𝘀?&lt;br&gt;
A StorageClass in Kubernetes is a blueprint that tells the system how to provision storage for your applications automatically. It simplifies the process by managing the type and creation of storage behind the scenes.&lt;/p&gt;
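For example, a StorageClass for dynamically provisioned AWS EBS volumes might look like this (the class name is illustrative, and the EBS CSI driver is assumed to be installed in the cluster):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3                  # illustrative name
provisioner: ebs.csi.aws.com     # AWS EBS CSI driver
parameters:
  type: gp3                      # EBS volume type
reclaimPolicy: Delete            # delete the EBS volume when the PV is released
volumeBindingMode: WaitForFirstConsumer
```

`WaitForFirstConsumer` delays volume creation until a pod is scheduled, so the EBS volume lands in the same Availability Zone as the node that needs it.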

&lt;p&gt;𝗪𝗵𝗮𝘁 𝗶𝘀 𝗣𝗲𝗿𝘀𝗶𝘀𝘁𝗲𝗻𝘁 𝗩𝗼𝗹𝘂𝗺𝗲?&lt;br&gt;
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using StorageClasses. It is a resource in the cluster, just like a node is a cluster resource.&lt;/p&gt;

&lt;p&gt;𝗪𝗵𝗮𝘁 𝗶𝘀 𝗮 𝗣𝗲𝗿𝘀𝗶𝘀𝘁𝗲𝗻𝘁 𝗩𝗼𝗹𝘂𝗺𝗲 𝗖𝗹𝗮𝗶𝗺?&lt;br&gt;
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod: Pods consume node resources, and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany, ReadWriteMany, or ReadWriteOncePod; see Access Modes).&lt;/p&gt;
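Putting the concepts together, here is a sketch of a claim and a pod that mounts it. All names are illustrative, and the StorageClass name is an assumption about what exists in the cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce            # mountable read-write by a single node
  storageClassName: ebs-gp3    # assumed StorageClass name
  resources:
    requests:
      storage: 5Gi             # requested size
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc    # binds the pod to the claim above
```

When the claim is created, Kubernetes either binds it to a matching existing PV or asks the StorageClass to provision one dynamically; the pod only ever refers to the claim, never to the underlying volume.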

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>docker</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
