<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Binoy</title>
    <description>The latest articles on DEV Community by Binoy (@binoy_59380e698d318).</description>
    <link>https://dev.to/binoy_59380e698d318</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1815622%2F3254819c-4f96-4b39-bf95-077cd26cbf88.png</url>
      <title>DEV Community: Binoy</title>
      <link>https://dev.to/binoy_59380e698d318</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/binoy_59380e698d318"/>
    <language>en</language>
    <item>
      <title>Create the Admin Terraform Module and Terraform Admin User</title>
      <dc:creator>Binoy</dc:creator>
      <pubDate>Sun, 20 Jul 2025 17:34:59 +0000</pubDate>
      <link>https://dev.to/binoy_59380e698d318/create-the-admin-terraform-module-and-terraform-admin-user-il6</link>
      <guid>https://dev.to/binoy_59380e698d318/create-the-admin-terraform-module-and-terraform-admin-user-il6</guid>
      <description>&lt;p&gt;&lt;strong&gt;Purpose&lt;/strong&gt;&lt;br&gt;
This document outlines the architectural decision to create a dedicated &lt;strong&gt;Terraform admin module&lt;/strong&gt; and an &lt;strong&gt;admin user&lt;/strong&gt;, which are responsible for provisioning administrative resources, roles, and users. These components are foundational and will support other Terraform modules across different functional areas.&lt;br&gt;
The key objective is to &lt;strong&gt;enable modular, independently deployable infrastructure&lt;/strong&gt; components that avoid cross-environment impact and align with best practices in security, maintainability, and compliance.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Scope and Context&lt;/strong&gt;&lt;br&gt;
The Terraform admin module will provision privileged users and roles that are categorized to support the following functional areas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Shared Resources&lt;/strong&gt;: Networking components such as VPCs, subnets, domains, firewall rules, etc.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DevOps Resources&lt;/strong&gt;: Tools and services needed for development pipelines and deployment operations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring Resources&lt;/strong&gt;: Centralized logging, alerting, and observability for management, development, and production environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Application Environments&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Development&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Production&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;By segregating users and roles by domain and environment, we reduce the risk of cross-impact between critical systems. For example, changes in the development environment must not affect production.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Security and Compliance Objectives&lt;/strong&gt;&lt;br&gt;
This approach supports the following best practices and compliance requirements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Least Privilege Principle&lt;/strong&gt;

&lt;ol&gt;
&lt;li&gt;Users and roles are scoped with minimal necessary access, and can be reused across modules responsible for different operational domains.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Maintainability, Testability, and Deployability&lt;/strong&gt;

&lt;ol&gt;
&lt;li&gt;Each module can be developed, tested, and deployed in isolation.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Resource Clean-up&lt;/strong&gt;

&lt;ol&gt;
&lt;li&gt;Modularization simplifies identifying and removing obsolete or unused resources.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Documentation of Administrative Tasks&lt;/strong&gt;

&lt;ol&gt;
&lt;li&gt;Automation replaces manual configuration, ensuring consistency and clarity.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Auditability and Monitoring&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;All administrative activities are captured via cloud audit logs.&lt;/li&gt;
&lt;li&gt;Role-based access control enforces accountability.&lt;/li&gt;
&lt;li&gt;Environment separation aids incident analysis and root cause tracing.&lt;/li&gt;
&lt;li&gt;Log anomaly detection is enabled to support log review.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Single Responsibility and Bounded Context&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Each module, user, and role has a clearly defined scope and responsibility.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ol&gt;




&lt;p&gt;&lt;strong&gt;ISO &amp;amp; Regulatory Compliance Alignment&lt;/strong&gt;&lt;br&gt;
This design supports alignment with &lt;strong&gt;ISO/IEC 27001&lt;/strong&gt; and &lt;strong&gt;27002&lt;/strong&gt; standards, as well as industry benchmarks such as &lt;strong&gt;CIS&lt;/strong&gt;. Specifically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ISO/IEC 27001&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;✔️ Segregation of duties (A.6.1.2)&lt;/li&gt;
&lt;li&gt;✔️ Access control policy (A.9.1)&lt;/li&gt;
&lt;li&gt;✔️ Secure user access management (A.9.2)&lt;/li&gt;
&lt;li&gt;✔️ Monitoring and logging (A.12.4, A.12.7)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;ISO/IEC 27002&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;✔️ Emphasis on least privilege, auditability, accountability&lt;/li&gt;
&lt;li&gt;✔️ Secure administration of systems and services&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Audit capability must cover &lt;strong&gt;who&lt;/strong&gt;, &lt;strong&gt;when&lt;/strong&gt;, &lt;strong&gt;where&lt;/strong&gt;, and &lt;strong&gt;which changes&lt;/strong&gt;—which is addressed by enabling cloud-native audit logging (e.g., CloudTrail, Activity Logs) and alerting on sensitive operations.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Root User Setup&lt;/strong&gt;: The root user should be used only to provision the initial &lt;code&gt;admin-terraform-deployer&lt;/code&gt; user and role:

&lt;ul&gt;
&lt;li&gt;User: &lt;code&gt;admin-terraform-deployer&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Role: &lt;code&gt;admin-terraform-deployer-role&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Root Usage Mitigation&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Avoid using the root account in ongoing operations, per &lt;strong&gt;CIS security benchmarks&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
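
&lt;p&gt;On GCP, where such a "user" is typically a service account, the one-time root-driven bootstrap could be sketched in Terraform roughly as follows (the project ID and permission list are illustrative assumptions, not the actual configuration):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Run once with root/organization-owner credentials, then never again.
resource "google_service_account" "admin_terraform_deployer" {
  project      = "admin-project"              # illustrative project ID
  account_id   = "admin-terraform-deployer"
  display_name = "Terraform admin deployer"
}

resource "google_project_iam_custom_role" "admin_terraform_deployer_role" {
  project = "admin-project"
  role_id = "adminTerraformDeployerRole"
  title   = "admin-terraform-deployer-role"
  permissions = [                             # minimal, illustrative permission set
    "iam.serviceAccounts.create",
    "iam.roles.create",
    "storage.buckets.create",
  ]
}

resource "google_project_iam_member" "deployer_binding" {
  project = "admin-project"
  role    = google_project_iam_custom_role.admin_terraform_deployer_role.id
  member  = "serviceAccount:${google_service_account.admin_terraform_deployer.email}"
}
&lt;/code&gt;&lt;/pre&gt;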




&lt;p&gt;&lt;strong&gt;Admin Module Responsibilities&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provision the &lt;strong&gt;admin-terraform-deployer&lt;/strong&gt; user/role.&lt;/li&gt;
&lt;li&gt;Create and manage the &lt;strong&gt;KMS key&lt;/strong&gt; (&lt;code&gt;admin-terraform-kms-key&lt;/code&gt;) for encryption and rotation.&lt;/li&gt;
&lt;li&gt;Provision the &lt;strong&gt;Terraform state GCP bucket&lt;/strong&gt; with:

&lt;ul&gt;
&lt;li&gt;Versioning enabled (retain last 5 versions)&lt;/li&gt;
&lt;li&gt;Retention and deletion policies&lt;/li&gt;
&lt;li&gt;At-rest encryption using &lt;code&gt;admin-terraform-kms-key&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Separate buckets or folders for &lt;code&gt;dev&lt;/code&gt;, &lt;code&gt;prod&lt;/code&gt;, and &lt;code&gt;management&lt;/code&gt; states&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Store secrets in a &lt;strong&gt;Secrets Manager&lt;/strong&gt; with encryption using the KMS key&lt;/li&gt;

&lt;li&gt;Enable &lt;strong&gt;Cloud Audit Logging&lt;/strong&gt; for all admin operations&lt;/li&gt;

&lt;li&gt;Define a &lt;strong&gt;retention policy&lt;/strong&gt; for logs (e.g., 365 days)&lt;/li&gt;

&lt;li&gt;Configure &lt;strong&gt;alerting and monitoring&lt;/strong&gt; for:

&lt;ul&gt;
&lt;li&gt;Changes to GCP bucket state files&lt;/li&gt;
&lt;li&gt;Modifications to the KMS key or key rotations&lt;/li&gt;
&lt;li&gt;Changes to IAM roles and user privileges&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
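
&lt;p&gt;A minimal Terraform sketch of the KMS key and state bucket responsibilities might look like this (bucket name, location, and rotation period are illustrative assumptions; the GCS service agent must additionally be granted encrypt/decrypt on the key):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;resource "google_kms_key_ring" "admin" {
  name     = "admin-terraform-keyring"   # illustrative name
  location = "us-central1"
}

resource "google_kms_crypto_key" "admin_terraform_kms_key" {
  name            = "admin-terraform-kms-key"
  key_ring        = google_kms_key_ring.admin.id
  rotation_period = "7776000s"           # rotate every 90 days (illustrative)
}

resource "google_storage_bucket" "terraform_state" {
  name     = "example-terraform-state"   # illustrative bucket name
  location = "US"

  versioning {
    enabled = true
  }

  # Keep only the last 5 noncurrent object versions.
  lifecycle_rule {
    condition {
      num_newer_versions = 5
    }
    action {
      type = "Delete"
    }
  }

  encryption {
    default_kms_key_name = google_kms_crypto_key.admin_terraform_kms_key.id
  }
}
&lt;/code&gt;&lt;/pre&gt;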




&lt;p&gt;&lt;strong&gt;Mitigation Strategies&lt;/strong&gt;&lt;br&gt;
To address risks and challenges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maintain clear documentation of:

&lt;ul&gt;
&lt;li&gt;Terraform modules&lt;/li&gt;
&lt;li&gt;User and role definitions&lt;/li&gt;
&lt;li&gt;Execution flows and state handling&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Enforce &lt;strong&gt;Terraform state security&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Use encrypted remote backends&lt;/li&gt;
&lt;li&gt;Implement state locking and restricted access policies&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Implement robust &lt;strong&gt;monitoring and alerting&lt;/strong&gt;
&lt;/li&gt;

&lt;li&gt;Schedule regular &lt;strong&gt;security audits&lt;/strong&gt; to ensure compliance with least privilege principles&lt;/li&gt;

&lt;/ul&gt;
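
&lt;p&gt;For the state backend itself, at-rest encryption comes from the bucket's default KMS key and state locking is native to the &lt;code&gt;gcs&lt;/code&gt; backend; a minimal sketch (bucket name and prefix are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;terraform {
  backend "gcs" {
    bucket = "example-terraform-state"  # encrypted bucket; locking is built in
    prefix = "management"               # separate prefix per environment state
  }
}
&lt;/code&gt;&lt;/pre&gt;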

</description>
    </item>
    <item>
      <title>Data Compliance (PII and PHI) Network Architecture</title>
      <dc:creator>Binoy</dc:creator>
      <pubDate>Mon, 14 Jul 2025 18:25:07 +0000</pubDate>
      <link>https://dev.to/binoy_59380e698d318/pii-compliant-network-architecture-184d</link>
      <guid>https://dev.to/binoy_59380e698d318/pii-compliant-network-architecture-184d</guid>
      <description>&lt;p&gt;This architecture is designed to maintain PII (Personally Identifiable Information) data compliance, adhering to strict security and regulatory requirements. It leverages a multi-VPC, shared networking approach with granular subnet isolation and robust security controls.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmza82gii3y5jq1gyjegf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmza82gii3y5jq1gyjegf.png" alt=" " width="800" height="624"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. VPC Structure and Isolation&lt;/strong&gt;&lt;br&gt;
The architecture employs a logical separation of environments through dedicated Virtual Private Clouds (VPCs):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Development VPC:&lt;/strong&gt; Hosts development and QA environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Management (DevOps &amp;amp; Monitoring) VPC:&lt;/strong&gt; Centralized VPC for DevOps tooling, monitoring, and administrative tasks across all environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Staging VPC:&lt;/strong&gt; Mirrors the Production VPC configuration; a pre-production environment for rigorous testing before changes are deployed to production.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Production VPC:&lt;/strong&gt; Dedicated for live production environments handling critical applications and PII data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This multi-VPC strategy ensures strong isolation, preventing unauthorized access and impact between different stages of the software development lifecycle.&lt;br&gt;
&lt;strong&gt;2. Shared Networking Approach&lt;/strong&gt;&lt;br&gt;
The design adopts a shared networking approach, enabling the deployment of multiple projects within the same network infrastructure. This promotes reusability and simplifies network management while maintaining segregation through strict access controls at the project and subnet levels.&lt;br&gt;
&lt;strong&gt;3. Application Deployment (Development &amp;amp; Production VPCs)&lt;/strong&gt;&lt;br&gt;
Applications deployed within the Development and Production VPCs adhere to a highly segmented subnet strategy with strict Network Access Control Lists (NACLs) to enforce an "air gap" and minimize the attack surface. This granular isolation ensures data compliance and auditability.&lt;br&gt;
Subnet Isolation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Public Subnet:&lt;/strong&gt; Hosts internet-facing resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Frontend Subnet:&lt;/strong&gt; Runs all frontend-related services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Microservices Subnet:&lt;/strong&gt; Dedicated for application-related microservices.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Controller Subnet:&lt;/strong&gt; Manages Kubernetes master nodes, VM discovery services, and other cluster control resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DB Subnet:&lt;/strong&gt; Contains database resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PII Service Subnet:&lt;/strong&gt; Specifically for services handling PII data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PII DB Subnet:&lt;/strong&gt; Stores PII databases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tools &amp;amp; Monitoring Subnet:&lt;/strong&gt; For application-specific tools and monitoring resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;External Service Subnet:&lt;/strong&gt; For external API services requiring internet access. All other private subnets are strictly isolated without direct internet access.&lt;/li&gt;
&lt;/ul&gt;
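
&lt;p&gt;As an illustrative Terraform sketch of this subnet strategy (names, region, and CIDR ranges are assumptions, not the actual layout):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;resource "google_compute_network" "production" {
  name                    = "production-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "microservices" {
  name          = "microservices-subnet"
  network       = google_compute_network.production.id
  region        = "us-central1"
  ip_cidr_range = "10.10.30.0/24"
}

resource "google_compute_subnetwork" "pii_service" {
  name                     = "pii-service-subnet"
  network                  = google_compute_network.production.id
  region                   = "us-central1"
  ip_cidr_range            = "10.10.60.0/24"
  private_ip_google_access = true   # private subnet: no direct internet access
}
&lt;/code&gt;&lt;/pre&gt;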

&lt;p&gt;This stringent network segmentation ensures logical separation and single responsibility for each subnet, simplifying monitoring and auditing.&lt;br&gt;
PII Data Compliance Controls:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Strict Access Controls:&lt;/strong&gt; PII resources have stringent roles and permissions, controlling which services and ports can connect. For example, PII services accept connections only from the Microservices Subnet.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authentication and Authorization:&lt;/strong&gt; Strictly monitored via ID tokens, enabling detailed auditing of "who, where, and when" access, critical for data compliance and regulatory reporting.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service Mesh Architecture:&lt;/strong&gt; Implemented for centralized end-to-end encryption (transit-level encryption) and comprehensive monitoring of requests and responses.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PII Data Classification:&lt;/strong&gt; PII data is classified, and access is purpose-bound (e.g., email addresses and phone numbers are decrypted only when sending emails or SMS).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Encryption at Rest and Column-Level Encryption:&lt;/strong&gt; The PII service and DB implement both column-level encryption and encryption at rest.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PII DB Visibility:&lt;/strong&gt; PII DB provides visibility into data access, recording which columns are accessed by which microservices during queries.&lt;/li&gt;
&lt;/ul&gt;
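
&lt;p&gt;The rule that PII services accept connections only from the Microservices Subnet could be expressed as a firewall rule along these lines (network name, CIDR, port, and tag are illustrative assumptions):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;resource "google_compute_firewall" "allow_microservices_to_pii" {
  name      = "allow-microservices-to-pii"
  network   = "production-vpc"        # illustrative network name
  direction = "INGRESS"

  allow {
    protocol = "tcp"
    ports    = ["443"]
  }

  source_ranges = ["10.10.30.0/24"]   # Microservices Subnet CIDR (illustrative)
  target_tags   = ["pii-service"]     # applied to PII service instances
}
&lt;/code&gt;&lt;/pre&gt;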

&lt;p&gt;&lt;strong&gt;4. Management VPC (DevOps &amp;amp; Monitoring)&lt;/strong&gt;&lt;br&gt;
The Management VPC centralizes resources for monitoring all environments and managing DevOps operations, including software and application deployment to development and production.&lt;br&gt;
Subnet Isolation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Public Subnet:&lt;/strong&gt; Hosts internet-facing management resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DevOps Subnet:&lt;/strong&gt; Runs DevOps tools (e.g., Jenkins, Packer, Puppet).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Artifact Subnet:&lt;/strong&gt; Manages artifact repositories (e.g., JFrog, Nexus, Harbor).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring Subnet:&lt;/strong&gt; Hosts monitoring services (e.g., Prometheus, Thanos, Grafana, Elastic Service).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Controller Subnet:&lt;/strong&gt; Manages Kubernetes master nodes and other control resources for the management cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DB Subnet:&lt;/strong&gt; Contains database resources for management tools.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;High-Level Responsibilities of Management VPC:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Golden Image Creation:&lt;/strong&gt; Generates base images for development and production VMs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vulnerability Checks:&lt;/strong&gt; Performs vulnerability scanning of software and libraries before deployment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SAST and DAST Execution:&lt;/strong&gt; Conducts Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Synthetic Testing:&lt;/strong&gt; Executes synthetic transactions to monitor application availability and performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Artifact Maintenance:&lt;/strong&gt; Manages central artifact repositories.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Monitoring:&lt;/strong&gt; Monitors resources and services across all environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Provisioning:&lt;/strong&gt; Provisions resources in development and production.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source Code Management:&lt;/strong&gt; Manages source code repositories.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Pipeline:&lt;/strong&gt; Manages CI/CD pipelines for software, patching, application, container, and configuration deployments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Temporary Agents/VMs/Containers:&lt;/strong&gt; Creates temporary resources for building applications, golden images, and vulnerability checks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Software Management Tools:&lt;/strong&gt; Runs tools like SaltStack, Puppet, and Ansible for configuration management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Patching &amp;amp; Versioning:&lt;/strong&gt; Manages security patching and versioning of golden images.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Centralized Artifact Management:&lt;/strong&gt; Manages artifacts for central repositories (e.g., Maven, Pip).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Common Networking Security and Monitoring&lt;/strong&gt;&lt;br&gt;
This architecture incorporates common security and monitoring controls across all environments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SSH Tunnel:&lt;/strong&gt; For secure access to private VMs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Common Firewall Rules:&lt;/strong&gt; Consistent firewall policies across environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flow Logs:&lt;/strong&gt; To monitor network traffic and identify anomalies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Log Monitoring:&lt;/strong&gt; Centralized logging for comprehensive auditing and troubleshooting.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Alerting:&lt;/strong&gt; Based on monitoring logs for proactive incident response.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Web Application Firewall (WAF):&lt;/strong&gt; To protect internet-facing applications from common web exploits.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Common IAM Roles:&lt;/strong&gt; For consistent management of networking resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shared Resources:&lt;/strong&gt; Secure sharing of resources via buckets between environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;6. High-Level Quality Attributes and Compliance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This networking architecture is designed with the following quality attributes and compliance standards in mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ISO Standards Alignment:&lt;/strong&gt; The stringent controls, documentation, and auditing capabilities inherent in this architecture contribute significantly to meeting various ISO standards, particularly ISO 27001 for Information Security Management Systems. The focus on data classification, access control, and continuous monitoring aligns with ISO 27001 principles.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Air Gap Architecture:&lt;/strong&gt; The strict network segmentation, particularly the absence of direct internet access for most private subnets and the isolation of PII data, effectively creates an "air gap" for sensitive data. This significantly reduces the potential attack surface.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reduced Attack Surface:&lt;/strong&gt; Through micro-segmentation, minimal open ports, strict NACLs, and the isolation of PII data, the attack surface is drastically minimized. This limits the potential entry points for malicious actors and contains the blast radius of any successful attack.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring:&lt;/strong&gt; Comprehensive monitoring is central to this architecture.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Network Flow Logs:&lt;/strong&gt; Provide visibility into network traffic patterns.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Centralized Logging:&lt;/strong&gt; For all application, system, and security events.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Application Performance Monitoring (APM):&lt;/strong&gt; Via service mesh and dedicated monitoring tools.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Information and Event Management (SIEM):&lt;/strong&gt; (Implied by log monitoring and alerting) for correlation and analysis of security events.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Alerting:&lt;/strong&gt; Proactive notification of security incidents and operational issues.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Data Compliance (PII and PHI) and Data Regularity (GDPR, CCPA):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PII/PHI Segregation:&lt;/strong&gt; Dedicated subnets for PII services and databases are a cornerstone.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Encryption:&lt;/strong&gt; In-transit (service mesh) and at-rest (column-level and disk encryption) for PII/PHI.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Access Control:&lt;/strong&gt; Granular, audited access controls to PII/PHI resources, enforced through IAM and service mesh.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auditability:&lt;/strong&gt; Detailed logging of all access to PII/PHI, enabling full audit trails for compliance reporting.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Classification:&lt;/strong&gt; Understanding and classifying PII/PHI to apply appropriate controls.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regulatory Alignment:&lt;/strong&gt; The architecture's emphasis on data minimization, access control, encryption, and audit trails directly supports compliance with regulations like GDPR (General Data Protection Regulation) and CCPA (California Consumer Privacy Act).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Security:&lt;/strong&gt; Multi-layered security approach:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Network Security:&lt;/strong&gt; VPCs, subnets, NACLs, Firewall Rules, WAF.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Identity and Access Management (IAM):&lt;/strong&gt; Least privilege principle, ID token-based authentication.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Encryption:&lt;/strong&gt; In-transit and at-rest.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vulnerability Management:&lt;/strong&gt; SAST, DAST, Golden Image vulnerability checks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Monitoring:&lt;/strong&gt; Flow logs, centralized logging, alerting.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Patch Management:&lt;/strong&gt; Automated patching of golden images.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Auditing:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Comprehensive Logging:&lt;/strong&gt; All network traffic, system events, application logs, and access attempts are logged.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Immutable Logs:&lt;/strong&gt; Logs are designed to be tamper-proof for integrity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Centralized Log Management:&lt;/strong&gt; For easy access and analysis during audits.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audit Trails:&lt;/strong&gt; Detailed records of all changes, deployments, and access attempts.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Testability:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Staging Environment:&lt;/strong&gt; A replica of production for rigorous testing of changes before deployment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Isolated Development/QA Environments:&lt;/strong&gt; Allow for independent testing without impacting production.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Synthetic Testing:&lt;/strong&gt; Automated checks for application health and performance.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Deployment:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automated CI/CD Pipelines:&lt;/strong&gt; Managed by the Management VPC for consistent and repeatable deployments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Golden Images:&lt;/strong&gt; Ensures consistent and secure base environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure as Code (IaC):&lt;/strong&gt; (Implied by automated provisioning and configuration management tools) for repeatable and version-controlled infrastructure.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Performance:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Distributed Architecture:&lt;/strong&gt; Microservices and segregated databases can be scaled independently.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Design:&lt;/strong&gt; Efficient routing within VPCs and subnets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring:&lt;/strong&gt; Proactive identification and resolution of performance bottlenecks.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Cost:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Shared Networking:&lt;/strong&gt; Can lead to cost efficiencies by centralizing certain network services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Optimization:&lt;/strong&gt; Monitoring and management tools help optimize resource utilization.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability:&lt;/strong&gt; Allows for scaling resources up or down based on demand, potentially reducing unnecessary expenditure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operational Efficiency:&lt;/strong&gt; Automation of deployment and management tasks reduces operational costs.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

</description>
      <category>architecture</category>
      <category>networking</category>
    </item>
    <item>
      <title>DevOps - Software Deployment Pipeline</title>
      <dc:creator>Binoy</dc:creator>
      <pubDate>Sun, 13 Jul 2025 16:55:35 +0000</pubDate>
      <link>https://dev.to/binoy_59380e698d318/devops-software-deployment-pipeline-m7g</link>
      <guid>https://dev.to/binoy_59380e698d318/devops-software-deployment-pipeline-m7g</guid>
      <description>&lt;h1&gt;
  
  
  DevOps - Software Deployment Pipeline
&lt;/h1&gt;

&lt;p&gt;This diagram outlines a robust DevOps software deployment pipeline, seemingly designed for a GCP-K8s environment, with a strong emphasis on security, performance, and maintainability. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1av4fjhjvg1w815mkwyw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1av4fjhjvg1w815mkwyw.png" width="581" height="621"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;DevOps Software Deployment Pipeline Explanation:&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Code Version [Git]&lt;/strong&gt;: The pipeline begins with version-controlled source code, likely stored in a Git repository. This is the single source of truth for all software assets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Pipeline [Jenkins]&lt;/strong&gt;: Jenkins acts as the central orchestration engine for the entire pipeline. It triggers builds and manages the flow of the software through various stages.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Path 1: Infrastructure as Code (IaC) for Temporary VMs&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create Temp VM [Terraform]&lt;/strong&gt;: Jenkins initiates the creation of temporary Virtual Machines (VMs) using Terraform. This indicates an IaC approach, ensuring consistency and repeatability in VM provisioning.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vulnerabilities check Softwares and Tools [VM Agent Temporary]&lt;/strong&gt;: Once a temporary VM is up, an agent on the VM performs security checks, including vulnerability scanning of installed software and tools. This is a crucial security control early in the pipeline.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Artifact [Sonatype Nexus Repository]&lt;/strong&gt;: After the security checks, artifacts (potentially vetted software components or dependencies) are stored in Sonatype Nexus, an artifact repository. This ensures that only approved and scanned components are used downstream.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Path 2: VM Image Building and Configuration Management&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Puppet &amp;amp; SaltStack&lt;/strong&gt;: These tools represent configuration management. They are used to configure and provision the VMs with necessary software and settings, ensuring consistency across environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Packer - Build Image&lt;/strong&gt;: Packer is used to automate the creation of machine images (e.g., AMI for AWS, custom images for GCP). It takes the configured VMs (from Puppet/SaltStack or a temporary VM) and bakes them into reusable images.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create Temp VM [Terraform]&lt;/strong&gt;: Another instance of Terraform creating temporary VMs, likely to facilitate the image building process with Packer. This temporary VM might be where Puppet/SaltStack configurations are applied before Packer creates the final image.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create Golden Images [VM Agent Temporary]&lt;/strong&gt;: This is a critical step where "Golden Images" are created. These are hardened, pre-configured, and tested VM images that contain the application and all its dependencies, along with security configurations. The "VM Agent Temporary" here might imply final validation or hardening steps on a temporary VM before the image is finalized. This also likely incorporates the vetted artifacts from Sonatype Nexus.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Upload Golden Image - Artifact [Sonatype Nexus Repository]&lt;/strong&gt;: The Golden Images are then uploaded to the version-enabled artifact repository. Versioning is crucial for rollbacks and maintaining a history of images.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy Golden Image on Dev / Production VMs&lt;/strong&gt;: Finally, the Golden Images are deployed onto Development, Staging, and Production VMs. The "VPCs" (Virtual Private Clouds) mentioned in the prompt are implied here, indicating separate, isolated network environments for each stage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Pipeline [Jenkins]&lt;/strong&gt;: Jenkins again orchestrates this final deployment step, ensuring that the correct Golden Image is deployed to the appropriate environment.&lt;/li&gt;
&lt;/ol&gt;
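
&lt;p&gt;The Packer stage of this pipeline could be sketched in HCL as follows (project ID, base image family, and the Puppet manifest path are illustrative assumptions):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;locals {
  ts = formatdate("YYYYMMDDhhmmss", timestamp())
}

source "googlecompute" "golden" {
  project_id          = "example-project"   # illustrative project
  source_image_family = "debian-12"
  zone                = "us-central1-a"
  image_name          = "app-golden-${local.ts}"
  ssh_username        = "packer"
}

build {
  sources = ["source.googlecompute.golden"]

  # Apply configuration management during the bake (Puppet shown; SaltStack analogous).
  provisioner "shell" {
    inline = ["sudo puppet apply /etc/puppetlabs/code/environments/production/manifests/site.pp"]
  }
}
&lt;/code&gt;&lt;/pre&gt;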

&lt;h2&gt;Quality Attributes Explanation:&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Security (Secure Compliance with No Air-Gap Architecture):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Vulnerability Scanning (Early Detection)&lt;/strong&gt;: The "Vulnerabilities check Softwares and Tools [VM Agent Temporary]" step is crucial. By performing these checks early on temporary VMs, the pipeline identifies and addresses security flaws before they are baked into Golden Images. This is vital in a "no air gap" architecture where direct internet exposure might be present.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Golden Images for Reduced Attack Surface&lt;/strong&gt;: Golden Images are inherently more secure than building VMs from scratch for each deployment. They are pre-hardened, patched, and include only necessary components, significantly reducing the attack surface.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Artifact Repository (Sonatype Nexus)&lt;/strong&gt;: Using Nexus ensures that all external dependencies and internal artifacts are scanned and approved before use. This prevents the introduction of known vulnerabilities from third-party libraries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configuration Management (Puppet &amp;amp; SaltStack)&lt;/strong&gt;: These tools enforce desired state configurations, preventing configuration drift and ensuring security baselines are consistently applied.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IaC (Terraform)&lt;/strong&gt;: Terraform defines infrastructure as code, which can be version-controlled and peer-reviewed, reducing manual errors that often lead to security misconfigurations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Patching&lt;/strong&gt;: The architecture facilitates easy security patching. By updating the base image or specific components within the Packer/Puppet/SaltStack stages, a new Golden Image can be created and rolled out, ensuring all VMs receive the latest patches. This process is automated, reducing the time to patch.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring VPC&lt;/strong&gt;: The mention of a "monitoring VPC" implies dedicated infrastructure for security monitoring, intrusion detection, and log analysis, providing continuous visibility into the environment's security posture.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Performance for Deploying Software:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automated Pipeline&lt;/strong&gt;: The entire pipeline is automated, significantly reducing deployment time compared to manual processes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Golden Images for Rapid Deployment&lt;/strong&gt;: Deploying pre-built Golden Images is much faster than provisioning and configuring VMs from scratch for each deployment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spot Instances for Temporary VMs&lt;/strong&gt;: Utilizing "spot instances" for temporary VMs (as mentioned) is a cost-effective strategy for burstable workloads like image building and vulnerability scanning. While spot instances can be interrupted, the temporary nature of these VMs and the idempotent nature of the image building process make them suitable for this use case.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parallelism (Implicit)&lt;/strong&gt;: Jenkins can orchestrate parallel execution of certain steps (e.g., multiple temporary VMs for different vulnerability checks), further speeding up the process.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Deployment:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automated and Repeatable&lt;/strong&gt;: The pipeline ensures consistent and repeatable deployments across all environments (Dev, Staging, Production).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Versioned Deployments&lt;/strong&gt;: Golden Images are versioned, enabling easy rollbacks to previous stable versions if issues arise in a new deployment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Environment Support&lt;/strong&gt;: The pipeline explicitly supports deployment to Dev, Staging, and Production VPCs, allowing for a phased release strategy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Maintainability:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure as Code (Terraform)&lt;/strong&gt;: Managing infrastructure through code makes it easier to understand, modify, and maintain.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configuration Management (Puppet &amp;amp; SaltStack)&lt;/strong&gt;: Centralized configuration management simplifies updating and maintaining the software and configurations on VMs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modular Design&lt;/strong&gt;: The pipeline appears modular, with distinct stages for code, image building, security checks, and deployment, making it easier to troubleshoot and update specific parts without affecting the entire pipeline.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Versioned Artifacts&lt;/strong&gt;: Storing artifacts in Nexus and Golden Images in a version-enabled storage bucket simplifies tracking and managing different versions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Cost:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Spot Instances for Temporary VMs&lt;/strong&gt;: As mentioned, using spot instances for temporary VMs significantly reduces the cost of compute resources during image building and scanning phases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated Processes&lt;/strong&gt;: Automation reduces manual effort and associated labor costs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimized Resource Utilization&lt;/strong&gt;: By using temporary VMs only when needed, the pipeline avoids continuously running expensive instances.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Golden Images for Efficient Resource Use&lt;/strong&gt;: Golden Images reduce the need for extensive post-deployment configuration, leading to faster startup times and potentially less compute time needed for new instances.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;6. Monitoring:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring VPC&lt;/strong&gt;: The explicit mention of a "monitoring VPC" indicates a dedicated environment for collecting metrics, logs, and traces from all other VPCs and pipeline components. This allows for comprehensive operational monitoring and alerting.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VM Agent Temporary&lt;/strong&gt;: The "VM Agent Temporary" suggests agents are deployed on temporary VMs, which can collect logs and metrics during the build and scan phases, providing insights into the pipeline's health and security.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Jenkins Dashboard&lt;/strong&gt;: Jenkins provides a central dashboard for monitoring the status of pipeline runs, build failures, and deployment successes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;7. Testability:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automated Security Checks&lt;/strong&gt;: The "Vulnerabilities check Softwares and Tools" step represents automated security testing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Golden Image Creation (Implicit Testing)&lt;/strong&gt;: The creation of Golden Images implies that these images undergo rigorous testing (functional, performance, security) before being deemed "golden."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Staging Environment&lt;/strong&gt;: The deployment to "Dev / Production VMs" implicitly includes a staging environment (often between Dev and Production) where more extensive testing (e.g., integration, performance, user acceptance testing) can occur before deploying to production.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reproducible Environments (IaC)&lt;/strong&gt;: Terraform ensures that environments are reproducible, making it easier to isolate and debug issues during testing.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  ISO Standards
&lt;/h2&gt;

&lt;p&gt;The architecture provided demonstrates many practices and controls that align closely with the requirements of these ISO standards:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Strong Security Controls (ISO 27001 &amp;amp; 27701):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Vulnerability Scanning:&lt;/strong&gt; "Vulnerabilities check Softwares and Tools" directly addresses a key control for identifying and mitigating security weaknesses.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Golden Images:&lt;/strong&gt; Creating hardened, pre-configured images reduces the attack surface, improves consistency, and ensures that security configurations are applied uniformly, which are all critical for ISO 27001.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configuration Management (Puppet, SaltStack):&lt;/strong&gt; These tools ensure consistent and secure configurations, preventing drift and enforcing security baselines as required by ISO.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure as Code (Terraform):&lt;/strong&gt; Version-controlled and auditable infrastructure definitions contribute to the "plan-do-check-act" cycle of ISO 27001 by making infrastructure changes transparent and reviewable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Artifact Repository (Nexus):&lt;/strong&gt; Managing and scanning artifacts helps ensure the integrity and security of software components, preventing the introduction of vulnerable dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Segregated VPCs and Monitoring VPC:&lt;/strong&gt; Network segmentation and dedicated monitoring infrastructure are fundamental for isolating environments and detecting security incidents, both core to ISO 27001.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated Patching:&lt;/strong&gt; The ability to easily perform security patching via Golden Images is crucial for maintaining security posture and addressing vulnerabilities in a timely manner.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Access Controls (Implicit):&lt;/strong&gt; While not explicitly detailed, the pipeline implies proper access controls for Jenkins, Git, Nexus, and cloud environments, which are central to ISO 27001.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>architecture</category>
      <category>devops</category>
    </item>
    <item>
      <title>Run the K8s in the private VPC setup</title>
      <dc:creator>Binoy</dc:creator>
      <pubDate>Sat, 14 Jun 2025 18:54:24 +0000</pubDate>
      <link>https://dev.to/binoy_59380e698d318/run-the-k8s-in-the-private-vpc-setup-4d6a</link>
      <guid>https://dev.to/binoy_59380e698d318/run-the-k8s-in-the-private-vpc-setup-4d6a</guid>
      <description>&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;This section explains how to set up a Kubernetes (K8s) cluster on Google Cloud Platform (GCP) using Terraform. Instead of using GCP's managed Kubernetes service, we will install open-source Kubernetes manually inside virtual machines (VMs).&lt;/p&gt;

&lt;h1&gt;
  
  
  Objective
&lt;/h1&gt;

&lt;p&gt;In software and microservice architecture design, it is essential to consider the scalability, availability, and performance of the application. I am proposing the following design considerations when selecting Kubernetes as part of the software architecture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When the application requires high performance, availability, scalability, and security&lt;/li&gt;
&lt;li&gt;When greater control over and maintenance of the infrastructure is needed&lt;/li&gt;
&lt;li&gt;When the application architecture involves complex microservices&lt;/li&gt;
&lt;li&gt;When opting for open-source solutions (optional)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When Kubernetes may not be the right choice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If the application is small and can be supported by a cloud provider’s managed services, such as AWS ECS with Fargate (serverless technology)&lt;/li&gt;
&lt;li&gt;If the application does not require high throughput&lt;/li&gt;
&lt;li&gt;If the goal is to minimize IT operational costs and reduce infrastructure management responsibilities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this setup, I considered the following points while building Kubernetes (K8s) on GCP:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform: Used to easily deploy, manage, and destroy the cluster infrastructure.&lt;/li&gt;
&lt;li&gt;HAProxy: Implemented an open-source load balancer instead of relying on GCP’s native load balancer. HAProxy provides high-performance load balancing.&lt;/li&gt;
&lt;li&gt;Consul: Implemented VM discovery to automatically register Kubernetes worker nodes with the HAProxy load balancer.&lt;/li&gt;
&lt;li&gt;GCP Auto Scaling and VM Health Checks: Set up an autoscaling group with TCP-based health checks to ensure the availability and reliability of virtual machines.&lt;/li&gt;
&lt;li&gt;GCP RBAC: Leveraged GCP’s Role-Based Access Control to simplify node joining, manage Kubernetes-related files in GCP Buckets, and associate service accounts with bucket roles and virtual machines.&lt;/li&gt;
&lt;li&gt;Minimal Permissions: As a best practice, configured minimal necessary roles for infrastructure components to enhance security.&lt;/li&gt;
&lt;li&gt;Firewall Rules: Define and control inbound (ingress) and outbound (egress) network traffic to secure communication between resources.&lt;/li&gt;
&lt;li&gt;Private VPC: Create a dedicated Virtual Private Cloud (VPC) to isolate and secure resources, avoiding use of the default VPC. Resources in the private VPC, such as VMs, do not require external IP addresses and are accessed via internal IPs.&lt;/li&gt;
&lt;li&gt;IAP (Identity-Aware Proxy) TCP Forwarding: Enable secure SSH access to virtual machines running in private subnets without exposing them to the public internet.&lt;/li&gt;
&lt;li&gt;Private Google Access: Allow resources in private subnets to access Google services (e.g., Cloud Storage) over internal IPs by enabling this option at the subnet level.&lt;/li&gt;
&lt;li&gt;Public and Private Subnets: Segregate application components and manage network traffic more securely by deploying resources into distinct public and private subnets.&lt;/li&gt;
&lt;li&gt;Impersonate Service Account: Set up a service account that can be impersonated, granting it the necessary permissions to create and manage resources in Google Cloud Platform (GCP).

&lt;ul&gt;
&lt;li&gt;Advantage: This approach enhances security by allowing controlled, auditable access without needing to share long-lived credentials. It also enables role-based delegation, allowing users or services to act with limited, predefined permissions.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
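&lt;p&gt;The impersonation approach above can be wired directly into the Terraform Google provider. This is a minimal sketch, assuming the &lt;code&gt;terraform-deployer&lt;/code&gt; service account used in this setup and a &lt;code&gt;gcp_project_id&lt;/code&gt; input variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;provider "google" {
  project = var.gcp_project_id
  region  = "us-east1"

  # Terraform acts as this service account; no long-lived keys are distributed
  impersonate_service_account = "terraform-deployer@{project}-{project_id}.iam.gserviceaccount.com"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;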

&lt;h2&gt;
  
  
  Network Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmrqd8543lennwi0frt9t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmrqd8543lennwi0frt9t.png" width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1207ga63yrvkfxr3sgj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1207ga63yrvkfxr3sgj.png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This is not a production-ready architecture. In production environments, you should create your own VPCs, such as separate networks for Management and Production environments (e.g., following HIPAA-compliant network architecture practices).&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Set up the following tools and accounts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a GCP account (a free-tier account works)&lt;/li&gt;
&lt;li&gt;Install Terraform on your machine (minimum Terraform v1.9.3)&lt;/li&gt;
&lt;li&gt;Install the gcloud CLI on your machine
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Create the service account and permissions in GCP
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Enable Identity and Access Management (IAM) API

&lt;ol&gt;
&lt;li&gt;Navigate to "APIs &amp;amp; Services": In the left-hand navigation menu, click on APIs &amp;amp; Services.&lt;/li&gt;
&lt;li&gt;Go to the "Library": On the "APIs &amp;amp; Services" overview page, click on + ENABLE APIS AND SERVICES or select Library from the sub-menu on the left.&lt;/li&gt;
&lt;li&gt;Search for the IAM API: In the search bar, type "Identity and Access Management (IAM) API".&lt;/li&gt;
&lt;li&gt;Select the API: Click on the "Identity and Access Management (IAM) API" from the search results. You'll be taken to the API's overview page.&lt;/li&gt;
&lt;li&gt;Enable the API: On the API overview page, you will see an Enable button if the API is not already enabled for your project. Click this button.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;Select the project&lt;/li&gt;

&lt;li&gt;Select GCP-&amp;gt;Service Account

&lt;ol&gt;
&lt;li&gt;Click "Create service account"&lt;/li&gt;
&lt;li&gt;Set the service account name to "terraform-deployer"&lt;/li&gt;
&lt;li&gt;Set the service account ID to "terraform-deployer"&lt;/li&gt;
&lt;li&gt;Set the service account description to "Terraform deployer for creating the resources and networking"

&lt;ol&gt;
&lt;li&gt;The resulting email ID will be: terraform-deployer@{project}-{project_id}.iam.gserviceaccount.com&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;/ol&gt;

&lt;/li&gt;

&lt;li&gt;Grant access to the root user

&lt;ol&gt;
&lt;li&gt;GCP-&amp;gt;IAM &lt;/li&gt;
&lt;li&gt;Click "Grand Access"&lt;/li&gt;
&lt;li&gt;Give new principle name "&lt;a href="mailto:root@email.com"&gt;root@email.com&lt;/a&gt;"&lt;/li&gt;
&lt;li&gt;Assign Roles: Click “Select a role” dropdown.

&lt;ol&gt;
&lt;li&gt;Common roles:&lt;/li&gt;
&lt;li&gt;Service Account Token Creator - for generating access tokens and calling GCP services on behalf of the service account&lt;/li&gt;
&lt;li&gt;IAP Role

&lt;ol&gt;
&lt;li&gt;IAP-secured Tunnel User - This is the essential role. It allows a user or service account to connect to resources (like Compute Engine VMs) through IAP's TCP forwarding feature.&lt;/li&gt;
&lt;li&gt;Compute OS Login - basic user access, no sudo&lt;/li&gt;
&lt;li&gt;Compute OS Admin Login - admin access, with sudo&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;/ol&gt;

&lt;/li&gt;

&lt;/ol&gt;

&lt;/li&gt;

&lt;li&gt;Grant access to the management terraform-deployer service account so it can access shared service resources

&lt;ol&gt;
&lt;li&gt;GCP-&amp;gt;IAM &lt;/li&gt;
&lt;li&gt;Click "Grand Access"&lt;/li&gt;
&lt;li&gt;Give new principle name "terraform-deployer@{project}-{project_id}.iam.gserviceaccount.com"&lt;/li&gt;
&lt;li&gt;Assign Roles: Click “Select a role” dropdown.

&lt;ol&gt;
&lt;li&gt;Common roles:&lt;/li&gt;
&lt;li&gt;Viewer – for read-only access&lt;/li&gt;
&lt;li&gt;Editor – for general write access&lt;/li&gt;
&lt;li&gt;Service Account Admin – for service account admin access&lt;/li&gt;
&lt;li&gt;Compute Network Admin – for networking admin access&lt;/li&gt;
&lt;li&gt;Compute Security Admin – for security admin access&lt;/li&gt;
&lt;li&gt;Service Account Token Creator - for generating access tokens and calling GCP services on behalf of the service account&lt;/li&gt;
&lt;li&gt;Compute Instance Admin (v1) - for creating instances&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;/ol&gt;

&lt;/li&gt;

&lt;/ol&gt;
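&lt;p&gt;The console steps above can also be scripted with the gcloud CLI. This is a sketch, assuming &lt;code&gt;PROJECT_ID&lt;/code&gt; stands in for your actual project ID:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Create the service account
gcloud iam service-accounts create terraform-deployer \
    --display-name="terraform-deployer" \
    --description="Terraform deployer for creating the resources and networking"

# Grant a role to the service account on the project (repeat per role)
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:terraform-deployer@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/compute.networkAdmin"

# Allow a user to create tokens for (impersonate) the service account
gcloud iam service-accounts add-iam-policy-binding \
    terraform-deployer@PROJECT_ID.iam.gserviceaccount.com \
    --member="user:root@email.com" \
    --role="roles/iam.serviceAccountTokenCreator"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;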

&lt;h2&gt;
  
  
  Log in to the GCP account
&lt;/h2&gt;

&lt;p&gt;Log in to the GCP account through the gcloud CLI by executing the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud auth login 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a gcloud configuration to isolate this project's settings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud config configurations create devops-k8s
gcloud config configurations activate devops-k8s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Authenticate for a specific project. To log in and set a GCP project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud config &lt;span class="nb"&gt;set &lt;/span&gt;project &amp;lt;PROJECT_ID&amp;gt;  &lt;span class="nt"&gt;--configuration&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;devops-k8s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;PROJECT_ID&amp;gt;&lt;/code&gt; with your actual GCP project ID.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Follow this procedure if you encounter quota issues:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;*&lt;/span&gt; WARNING: Your active project does not match the quota project &lt;span class="k"&gt;in &lt;/span&gt;your &lt;span class="nb"&gt;local &lt;/span&gt;Application Default Credentials file. This might result &lt;span class="k"&gt;in &lt;/span&gt;unexpected quota issues.
&lt;span class="k"&gt;*&lt;/span&gt; Update the quota project associated with ADC:

gcloud auth application-default set-quota-project &amp;lt;PROJECT_ID&amp;gt; 
gcloud config &lt;span class="nb"&gt;set &lt;/span&gt;project &amp;lt;PROJECT_ID&amp;gt; &lt;span class="nt"&gt;--configuration&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;devops-k8s
&lt;span class="c"&gt;#If we are using env variable CLOUDSDK_CORE_PROJECT in .bash_profile, need to unset&lt;/span&gt;
&lt;span class="nb"&gt;unset &lt;/span&gt;CLOUDSDK_CORE_PROJECT
gcloud auth login &lt;span class="nt"&gt;--configuration&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;devops-k8s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configure the &lt;code&gt;env&lt;/code&gt; variables
&lt;/h3&gt;

&lt;p&gt;The following environment variables need to be configured.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;GOOGLE_IMPERSONATE_SERVICE_ACCOUNT&lt;/code&gt;: Configures the service account that Terraform impersonates&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TF_VAR_gcp_project_id&lt;/code&gt;: Configures the GCP project ID for Terraform to use
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vi ~/.bash_profile
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GOOGLE_IMPERSONATE_SERVICE_ACCOUNT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;terraform-deployer@&lt;span class="o"&gt;{&lt;/span&gt;project&lt;span class="o"&gt;}&lt;/span&gt;-&lt;span class="o"&gt;{&lt;/span&gt;project_id&lt;span class="o"&gt;}&lt;/span&gt;.iam.gserviceaccount.com
&lt;span class="nb"&gt;export set &lt;/span&gt;&lt;span class="nv"&gt;TF_VAR_terrafor_impersonate_service_account&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;terraform-deployer@&lt;span class="o"&gt;{&lt;/span&gt;project&lt;span class="o"&gt;}&lt;/span&gt;-&lt;span class="o"&gt;{&lt;/span&gt;project_id&lt;span class="o"&gt;}&lt;/span&gt;.iam.gserviceaccount.com
&lt;span class="nb"&gt;export set &lt;/span&gt;&lt;span class="nv"&gt;TF_VAR_gcp_project_id&lt;/span&gt;&lt;span class="o"&gt;={&lt;/span&gt;your-gcp-project-id&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="nb"&gt;source&lt;/span&gt; ~/.bash_profile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create the Terraform workspace
&lt;/h3&gt;

&lt;p&gt;The following command is used to create a Terraform workspace. A workspace in Terraform functions similarly to managing different environments in software development, such as &lt;code&gt;dev&lt;/code&gt; and &lt;code&gt;prod&lt;/code&gt;. In this setup, the workspace helps differentiate resources for each environment within the same GCP project.&lt;/p&gt;

&lt;p&gt;We are following a consistent naming convention for resource creation in the GCP account:&lt;br&gt;
Naming Pattern: &lt;code&gt;{project-id}-{env}-{resource-name}&lt;/code&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Examples for the &lt;code&gt;dev&lt;/code&gt; environment:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;myp-dev-secure-bucket&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;myp-dev-k8s-master-node&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;myp-dev-k8s-worker-node-1&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Examples for the &lt;code&gt;prod&lt;/code&gt; environment:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;myp-prod-secure-bucket&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;myp-prod-k8s-master-node&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;myp-prod-k8s-worker-node-1&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
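&lt;p&gt;Inside Terraform, this naming pattern can be derived from the active workspace. This is a minimal sketch, assuming &lt;code&gt;project_id&lt;/code&gt; and &lt;code&gt;gcp_region&lt;/code&gt; input variables as used elsewhere in this setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;locals {
  # e.g. "myp-dev" when the "dev" workspace is active
  name_prefix = "${var.project_id}-${terraform.workspace}"
}

resource "google_storage_bucket" "secure" {
  name     = "${local.name_prefix}-secure-bucket" # e.g. myp-dev-secure-bucket
  location = var.gcp_region
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;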

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; As a best practice, production workloads should be managed in a separate GCP project. This approach improves production performance, enhances security, and ensures complete isolation between development and production environments.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform workspace new dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Terraform Configuration
&lt;/h3&gt;

&lt;p&gt;Resource configurations can be defined in a &lt;code&gt;dev.tfvars&lt;/code&gt; variable file. Different variable files can be maintained for different environments (e.g., &lt;code&gt;dev.tfvars&lt;/code&gt;, &lt;code&gt;prod.tfvars&lt;/code&gt;).&lt;br&gt;
For example, project-specific values such as the project ID and project name for each environment can be configured in these files.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#--------------------- Development Project Configuration ---------------------&lt;/span&gt;
&lt;span class="c"&gt;#Development Project configuration, this project configuration is used to maintain resources for this project. eg: project_id will be used to create the GCP resources&lt;/span&gt;
project_id   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"myp"&lt;/span&gt;
project_name &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"My Project"&lt;/span&gt;
&lt;span class="c"&gt;# --------------------- GCP Project and Regsion Configuration ---------------------&lt;/span&gt;
gcp_region &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east1"&lt;/span&gt;
gcp_zone   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east1-b"&lt;/span&gt;
&lt;span class="c"&gt;#--------------------- Network Configuration ---------------------&lt;/span&gt;
nw_network_name          &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"default"&lt;/span&gt;
nw_subnet_public_address_range &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.0.0/24"&lt;/span&gt;
nw_subnet_private_address_range &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.1.0/24"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Setup the resources in GCP
&lt;/h3&gt;

&lt;p&gt;Run the following Terraform commands to create the resources in GCP.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init &lt;span class="nt"&gt;--upgrade&lt;/span&gt;
terraform plan &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"dev.tfvars"&lt;/span&gt;
terraform apply &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"dev.tfvars"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setup and connect IAP TCP Forwarding
&lt;/h2&gt;

&lt;p&gt;To establish an SSH tunnel using IAP TCP Forwarding, we use the following command. During the initial execution, a password will be generated for the SSH connection. This password should be securely stored, as it will be required for future SSH access to the VM through the IAP tunnel.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute ssh &amp;lt;YOUR_PRIVATE_VM_NAME&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;YOUR_VM_ZONE&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--tunnel-through-iap&lt;/span&gt;

User-Pro:infra user&lt;span class="nv"&gt;$ &lt;/span&gt;gcloud compute ssh myp-dev-k8s-master-node &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-east1-b &lt;span class="nt"&gt;--tunnel-through-iap&lt;/span&gt;
WARNING: The private SSH key file &lt;span class="k"&gt;for &lt;/span&gt;gcloud does not exist.
WARNING: The public SSH key file &lt;span class="k"&gt;for &lt;/span&gt;gcloud does not exist.
WARNING: You &lt;span class="k"&gt;do &lt;/span&gt;not have an SSH key &lt;span class="k"&gt;for &lt;/span&gt;gcloud.
WARNING: SSH keygen will be executed to generate a key.
This tool needs to create the directory &lt;span class="o"&gt;[&lt;/span&gt;/Users/user/.ssh] before being able to generate SSH keys.
Do you want to &lt;span class="k"&gt;continue&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;Y/n&lt;span class="o"&gt;)&lt;/span&gt;?  Y
Generating public/private rsa key pair.
Enter passphrase &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="s2"&gt;"/Users/user/.ssh/google_compute_engine"&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;empty &lt;span class="k"&gt;for &lt;/span&gt;no passphrase&lt;span class="o"&gt;)&lt;/span&gt;: xxxxxxxx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Service Verification
&lt;/h2&gt;

&lt;p&gt;Ensure that the virtual machines (VMs) are properly created, linked, and connected. This step helps identify any issues during VM provisioning or while installing necessary services. The following Git repository contains detailed debugging notes and documentation: &lt;a href="https://github.com/developerhelperhub/setup-k8s-cluster-on-gcp/tree/main/debugs" rel="noopener noreferrer"&gt;Debug Notes&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Key Verification Steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Confirm VM creation and successful setup&lt;/li&gt;
&lt;li&gt;Validate storage configuration&lt;/li&gt;
&lt;li&gt;Check Consul installation and status&lt;/li&gt;
&lt;li&gt;Verify the Kubernetes master node&lt;/li&gt;
&lt;li&gt;Ensure HAProxy is properly configured and running&lt;/li&gt;
&lt;/ul&gt;
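&lt;p&gt;These checks can be run on the VMs over the IAP tunnel. This is a sketch, assuming Consul and HAProxy were installed as systemd services (service names are assumptions):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Consul cluster membership - every registered node should appear
consul members

# HAProxy service health on the load balancer VM
sudo systemctl status haproxy

# Kubernetes control plane health on the master node
kubectl get nodes
kubectl get pods -n kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;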

&lt;h2&gt;
  
  
  Install Web Applications on the K8s Cluster
&lt;/h2&gt;

&lt;p&gt;We deployed &lt;code&gt;nginx-web&lt;/code&gt; as a sample application on the Kubernetes worker nodes and accessed it through the HAProxy load balancer. As HAProxy is the chosen load balancer in our architecture, we configured an HAProxy Ingress Controller during the master node setup. This controller efficiently routes incoming traffic to the appropriate applications running within the pods.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install the HAProxy ingress controller on the master VM via the GCP console
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Log in as the ubuntu user and change to the home directory
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute ssh myp-dev-k8s-master-node &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-east1-b &lt;span class="nt"&gt;--tunnel-through-iap&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;su ubuntu
&lt;span class="nb"&gt;cd&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Ensure the worker nodes have joined the master node
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes

&lt;span class="c"&gt;#Output&lt;/span&gt;
NAME                           STATUS   ROLES           AGE     VERSION
myp-dev-k8s-master-node        Ready    control-plane   6m50s   v1.32.4
myp-dev-k8s-worker-node-0q3j   Ready    &amp;lt;none&amp;gt;          5m26s   v1.32.4
myp-dev-k8s-worker-node-c3js   Ready    &amp;lt;none&amp;gt;          5m28s   v1.32.4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Ensure the HAProxy pods are running properly on all worker nodes
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; haproxy-controller &lt;span class="nt"&gt;-o&lt;/span&gt; wide

&lt;span class="c"&gt;#Output&lt;/span&gt;
NAME                                        READY   STATUS      RESTARTS   AGE     IP                NODE                           NOMINATED NODE   READINESS GATES
haproxy-kubernetes-ingress-27lwj            1/1     Running     0          5m27s   192.168.100.1     myp-dev-k8s-worker-node-c3js   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
haproxy-kubernetes-ingress-crdjob-1-5rwd4   0/1     Completed   0          6m25s   192.168.100.2     myp-dev-k8s-worker-node-c3js   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
haproxy-kubernetes-ingress-khrxt            1/1     Running     0          4m56s   192.168.209.129   myp-dev-k8s-worker-node-0q3j   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Ensure the HAProxy service is listening on port 30080
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#Verify the service&lt;/span&gt;
kubectl get svc &lt;span class="nt"&gt;-n&lt;/span&gt; haproxy-controller haproxy-kubernetes-ingress

&lt;span class="c"&gt;#Output&lt;/span&gt;
NAME                         TYPE       CLUSTER-IP     EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;                                                   AGE
haproxy-kubernetes-ingress   NodePort   10.97.64.228   &amp;lt;none&amp;gt;        80:30080/TCP,443:30443/TCP,443:30443/UDP,1024:30002/TCP   7m35s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Ensure kube-proxy is running properly. If HAProxy is running on the worker node but the port is not open, it could be due to issues with kube-proxy, which manages the network routing for services in Kubernetes. Check whether kube-proxy is running on the worker nodes:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system &lt;span class="nt"&gt;-o&lt;/span&gt; wide | &lt;span class="nb"&gt;grep &lt;/span&gt;kube-proxy

&lt;span class="c"&gt;#Output&lt;/span&gt;
kube-proxy-9hqbv                                  1/1     Running   0          7m15s   10.0.1.4          myp-dev-k8s-worker-node-c3js   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-proxy-cnlbl                                  1/1     Running   0          8m29s   10.0.1.3          myp-dev-k8s-master-node        &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-proxy-jrh2p                                  1/1     Running   0          7m13s   10.0.1.5          myp-dev-k8s-worker-node-0q3j   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
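&lt;p&gt;For context, the external HAProxy VM forwards public traffic to this NodePort on every worker. A hedged sketch of the relevant &lt;code&gt;haproxy.cfg&lt;/code&gt; section (server names and IPs are illustrative; in this architecture, Consul discovery keeps the worker entries up to date):&lt;/p&gt;

```
frontend http_in
    bind *:80
    default_backend k8s_workers

backend k8s_workers
    balance roundrobin
    # One entry per worker node, targeting the ingress controller NodePort
    server worker-0q3j 10.0.1.5:30080 check
    server worker-c3js 10.0.1.4:30080 check
```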



&lt;h3&gt;
  
  
  Install sample nginx server in the K8s and create ingress resource of HAProxy
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Install the &lt;code&gt;nginx-web&lt;/code&gt; server on K8s and expose port &lt;code&gt;80&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create deployment nginx-web &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80
kubectl get deployments.apps &lt;span class="nt"&gt;-o&lt;/span&gt; wide
kubectl expose deployment nginx-web &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80 &lt;span class="nt"&gt;--target-port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80 &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ClusterIP
kubectl get  svc

&lt;span class="c"&gt;#Output&lt;/span&gt;
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;   AGE
kubernetes   ClusterIP   10.96.0.1        &amp;lt;none&amp;gt;        443/TCP   60m
nginx-web    ClusterIP   10.110.173.152   &amp;lt;none&amp;gt;        80/TCP    16m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
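&lt;p&gt;The imperative commands above can also be captured declaratively. A rough equivalent manifest (a sketch, not taken from the repository):&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-web
  template:
    metadata:
      labels:
        app: nginx-web
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-web
spec:
  type: ClusterIP
  selector:
    app: nginx-web
  ports:
  - port: 80
    targetPort: 80
```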



&lt;ol&gt;
&lt;li&gt;Scale the deployment to spread pods across multiple nodes
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl scale deployment nginx-web &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2
kubectl get pods &lt;span class="nt"&gt;-o&lt;/span&gt; wide &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx-web

&lt;span class="c"&gt;#Output&lt;/span&gt;

NAME                         READY   STATUS    RESTARTS   AGE   IP                NODE                           NOMINATED NODE   READINESS GATES
nginx-web-8684b95849-j59tv   1/1     Running   0          11s   192.168.209.130   myp-dev-k8s-worker-node-0q3j   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
nginx-web-8684b95849-wn224   1/1     Running   0          32s   192.168.100.3     myp-dev-k8s-worker-node-c3js   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
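&lt;p&gt;Scaling alone does not guarantee placement on different nodes; the scheduler usually spreads replicas, but the behaviour can be made explicit. A hedged sketch using &lt;code&gt;topologySpreadConstraints&lt;/code&gt; in the Deployment's pod template:&lt;/p&gt;

```yaml
spec:
  template:
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: nginx-web
```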



&lt;ol&gt;
&lt;li&gt;Create the ingress resource to route requests to the &lt;code&gt;nginx-web:80&lt;/code&gt; service
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#Install VI editor to create the resource file&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;vim &lt;span class="nt"&gt;-y&lt;/span&gt;
vi nginx-ingress.yaml

&lt;span class="c"&gt;#Copy following configuration insuide `nginx-ingress.yaml`&lt;/span&gt;
&lt;span class="c"&gt;#nginx-ingress.yaml&lt;/span&gt;
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default
  annotations:
    ingress.class: &lt;span class="s2"&gt;"haproxy"&lt;/span&gt;
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-web
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
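&lt;p&gt;The &lt;code&gt;ingress.class&lt;/code&gt; annotation above is the legacy form; on newer Kubernetes versions, &lt;code&gt;spec.ingressClassName&lt;/code&gt; is preferred when the controller has registered an IngressClass (the class name &lt;code&gt;haproxy&lt;/code&gt; is an assumption here):&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default
spec:
  ingressClassName: haproxy
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-web
            port:
              number: 80
```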



&lt;ol&gt;
&lt;li&gt;Apply the ingress resource
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; nginx-ingress.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Identify the worker node IPs and test access to the app through the worker nodes' NodePort
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide
curl http://&amp;lt;worker-node1-ip&amp;gt;:30080
curl http://&amp;lt;worker-node2-ip&amp;gt;:30080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We should see the following output from the Nginx home page&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;h1&amp;gt;&lt;/span&gt;Welcome to nginx!&lt;span class="nt"&gt;&amp;lt;/h1&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;p&amp;gt;&lt;/span&gt;If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.&lt;span class="nt"&gt;&amp;lt;/p&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;p&amp;gt;&lt;/span&gt;For online documentation and support please refer to
&lt;span class="nt"&gt;&amp;lt;a&lt;/span&gt; &lt;span class="na"&gt;href=&lt;/span&gt;&lt;span class="s"&gt;"http://nginx.org/"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;nginx.org&lt;span class="nt"&gt;&amp;lt;/a&amp;gt;&lt;/span&gt;.&lt;span class="nt"&gt;&amp;lt;br/&amp;gt;&lt;/span&gt;
Commercial support is available at
&lt;span class="nt"&gt;&amp;lt;a&lt;/span&gt; &lt;span class="na"&gt;href=&lt;/span&gt;&lt;span class="s"&gt;"http://nginx.com/"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;nginx.com&lt;span class="nt"&gt;&amp;lt;/a&amp;gt;&lt;/span&gt;.&lt;span class="nt"&gt;&amp;lt;/p&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;&lt;/span&gt;Thank you for using nginx.&lt;span class="nt"&gt;&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/body&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/html&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Identify the &lt;code&gt;haproxy&lt;/code&gt; VM's external IP address from the GCP console and open the following URL in a browser
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;http://&amp;lt;haproxy-vm-external-ip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Destroy the resources in GCP account
&lt;/h2&gt;

&lt;p&gt;Execute the following command to destroy the resources created on GCP&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform destroy &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"dev.tfvars"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Source Github&lt;/strong&gt; - &lt;a href="https://github.com/developerhelperhub/setup-k8s-cluster-on-gcp" rel="noopener noreferrer"&gt;https://github.com/developerhelperhub/setup-k8s-cluster-on-gcp&lt;/a&gt;&lt;/p&gt;

</description>
      <category>gcp</category>
      <category>terraform</category>
      <category>kubernetes</category>
      <category>architecture</category>
    </item>
    <item>
      <title>K8s cluster setup in GCP with worker nodes autoscale and its discovery</title>
      <dc:creator>Binoy</dc:creator>
      <pubDate>Thu, 08 May 2025 03:46:43 +0000</pubDate>
      <link>https://dev.to/binoy_59380e698d318/k8s-cluster-setup-in-gcp-with-worker-nodes-autoscale-and-its-discovery-262</link>
      <guid>https://dev.to/binoy_59380e698d318/k8s-cluster-setup-in-gcp-with-worker-nodes-autoscale-and-its-discovery-262</guid>
      <description>&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;This section explains how to set up a Kubernetes (K8s) cluster on Google Cloud Platform (GCP) using Terraform. Instead of using GCP's managed Kubernetes service, we will install open-source Kubernetes manually inside virtual machines (VMs).&lt;/p&gt;

&lt;h1&gt;
  
  
  Objective
&lt;/h1&gt;

&lt;p&gt;In software and microservice architecture design, it is essential to consider the scalability, availability, and performance of the application. I am proposing the following design considerations when selecting Kubernetes as part of the software architecture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When the application requires high performance, availability, scalability, and security&lt;/li&gt;
&lt;li&gt;When greater control over and maintenance of the infrastructure is needed&lt;/li&gt;
&lt;li&gt;When the application architecture involves complex microservices&lt;/li&gt;
&lt;li&gt;When opting for open-source solutions (optional)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When Kubernetes may not be the right choice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If the application is small and can be supported by a cloud provider’s managed services, such as AWS ECS with Fargate (serverless technology)&lt;/li&gt;
&lt;li&gt;If the application does not require high throughput&lt;/li&gt;
&lt;li&gt;If the goal is to minimize IT operational costs and reduce infrastructure management responsibilities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this setup, I considered the following points while building Kubernetes (K8s) on GCP:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform: Used to easily deploy, manage, and destroy the cluster infrastructure.&lt;/li&gt;
&lt;li&gt;HAProxy: Implemented an open-source load balancer instead of relying on GCP’s native load balancer. HAProxy provides high-performance load balancing.&lt;/li&gt;
&lt;li&gt;Consul: Implemented VM discovery to automatically register Kubernetes worker nodes with the HAProxy load balancer.&lt;/li&gt;
&lt;li&gt;GCP Auto Scaling and VM Health Checks: Set up an autoscaling group with TCP-based health checks to ensure the availability and reliability of virtual machines.&lt;/li&gt;
&lt;li&gt;GCP RBAC: Leveraged GCP’s Role-Based Access Control to simplify node joining, manage Kubernetes-related files in GCP Buckets, and associate service accounts with bucket roles and virtual machines.&lt;/li&gt;
&lt;li&gt;Minimal Permissions: As a best practice, configured minimal necessary roles for infrastructure components to enhance security.&lt;/li&gt;
&lt;li&gt;Firewall Rules: Configured the following rules: 

&lt;ul&gt;
&lt;li&gt;HAProxy &lt;/li&gt;
&lt;li&gt;Ingress rule

&lt;ul&gt;
&lt;li&gt;Port: 80&lt;/li&gt;
&lt;li&gt;Public access to the HAProxy endpoint&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;HAProxy → Kubernetes Worker Nodes (port 30080).&lt;/li&gt;

&lt;li&gt;Consul rule &lt;/li&gt;

&lt;li&gt;Ingress and Egress

&lt;ul&gt;
&lt;li&gt;For all agents based on VM tag (HAProxy, K8s Worker Nodes)&lt;/li&gt;
&lt;li&gt;Port: 8301, 8600, 8300 for Ingress and Egress rule&lt;/li&gt;
&lt;li&gt;Private and public network&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;K8s Rule&lt;/li&gt;

&lt;li&gt;For master node and all worker nodes based on VM tag 

&lt;ul&gt;
&lt;li&gt;Ingress &lt;/li&gt;
&lt;li&gt;Port: 10250, 30000-32767, 10255, 6443&lt;/li&gt;
&lt;li&gt;Private network only&lt;/li&gt;
&lt;li&gt;Egress&lt;/li&gt;
&lt;li&gt;Port: 443, 6443&lt;/li&gt;
&lt;li&gt;Private network only&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;For all worker nodes connecting to HAProxy based on VM tag

&lt;ul&gt;
&lt;li&gt;Ingress &lt;/li&gt;
&lt;li&gt;Port: 30080&lt;/li&gt;
&lt;li&gt;Public network only&lt;/li&gt;
&lt;li&gt;Egress&lt;/li&gt;
&lt;li&gt;Port: 30080&lt;/li&gt;
&lt;li&gt;Private network only&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Health Check API of GCP based on k8s nodes tag

&lt;ul&gt;
&lt;li&gt;Ingress &lt;/li&gt;
&lt;li&gt;Port: 10250&lt;/li&gt;
&lt;li&gt;GCP health check network&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;Public and Private Subnets: Separated application workloads and network traffic by isolating resources into public and private subnets.&lt;/li&gt;

&lt;/ul&gt;
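&lt;p&gt;As an illustration of the firewall rules above, the public HTTP rule in front of HAProxy might look like this in Terraform (the resource name and network tag are assumptions, not taken from the repository):&lt;/p&gt;

```hcl
resource "google_compute_firewall" "haproxy_lb_allow_http" {
  name    = "myp-dev-haproxy-lb-allow-http"
  network = "default"

  allow {
    protocol = "tcp"
    ports    = ["80"]
  }

  # Public ingress to the HAProxy VM only, selected by its network tag
  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["haproxy-lb"]
}
```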

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22ybmbmen1jd69ld7a3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22ybmbmen1jd69ld7a3k.png" alt=" " width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This is not a production-ready architecture. In production environments, the default network should not be used. Instead, you should create your own VPCs, such as separate networks for Management, Development, and Production environments (e.g., following HIPAA-compliant network architecture practices).&lt;/p&gt;

&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;p&gt;We have to set up the following tools and accounts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a GCP account [a free-tier account works]&lt;/li&gt;
&lt;li&gt;Install Terraform on your machine [min Terraform v1.9.3]&lt;/li&gt;
&lt;li&gt;Install the gcloud command line on your machine
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Login GCP account
&lt;/h3&gt;

&lt;p&gt;Log in to the GCP account through the gcloud CLI by executing the following command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud auth login 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(Optional) Re-authenticate for a Specific Project&lt;br&gt;
To log in and set a GCP project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud config &lt;span class="nb"&gt;set &lt;/span&gt;project PROJECT_ID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;PROJECT_ID&lt;/code&gt; with your actual GCP project id.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure the &lt;code&gt;env&lt;/code&gt; variables
&lt;/h3&gt;

&lt;p&gt;The following environment variables need to be configured. Replace &lt;code&gt;{your-gcp-project-id}&lt;/code&gt; with your GCP project id.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;CLOUDSDK_CORE_PROJECT&lt;/code&gt;: This variable configures the GCP project id used by the &lt;code&gt;gcloud&lt;/code&gt; CLI&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TF_VAR_gcp_project_id&lt;/code&gt;: This variable configures the GCP project id used by Terraform
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CLOUDSDK_CORE_PROJECT&lt;/span&gt;&lt;span class="o"&gt;={&lt;/span&gt;your-gcp-project-id&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="nb"&gt;export set &lt;/span&gt;&lt;span class="nv"&gt;TF_VAR_gcp_project_id&lt;/span&gt;&lt;span class="o"&gt;={&lt;/span&gt;your-gcp-project-id&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
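&lt;p&gt;To fail fast when these variables are missing, a small guard can be placed at the top of a wrapper script before any &lt;code&gt;terraform&lt;/code&gt; call. A sketch (the project id shown is illustrative):&lt;/p&gt;

```shell
# Illustrative values; in practice these come from your environment.
export CLOUDSDK_CORE_PROJECT='my-gcp-project'
export TF_VAR_gcp_project_id="$CLOUDSDK_CORE_PROJECT"

# Abort with a clear message if either variable is unset or empty.
: "${CLOUDSDK_CORE_PROJECT:?set CLOUDSDK_CORE_PROJECT to your GCP project id}"
: "${TF_VAR_gcp_project_id:?set TF_VAR_gcp_project_id to your GCP project id}"
echo "using project: ${TF_VAR_gcp_project_id}"
```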



&lt;h3&gt;
  
  
  Terraform Structure
&lt;/h3&gt;

&lt;p&gt;The following Terraform module structure is maintained to deploy the resources into the GCP account&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxx4mnqgclcrzie0pth0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxx4mnqgclcrzie0pth0.png" alt=" " width="510" height="982"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the Terraform workspace
&lt;/h3&gt;

&lt;p&gt;The following command is used to create a Terraform workspace. A workspace in Terraform functions similarly to managing different environments in software development, such as &lt;code&gt;dev&lt;/code&gt; and &lt;code&gt;prod&lt;/code&gt;. In this setup, the workspace helps differentiate resources for each environment within the same GCP project.&lt;/p&gt;

&lt;p&gt;We are following a consistent naming convention for resource creation in the GCP account:&lt;br&gt;
Naming Pattern: &lt;code&gt;{project-id}-{env}-{resource-name}&lt;/code&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Examples for the &lt;code&gt;dev&lt;/code&gt; environment:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;myp-dev-secure-bucket&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;myp-dev-k8s-master-node&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;myp-dev-k8s-worker-node-1&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Examples for the &lt;code&gt;prod&lt;/code&gt; environment:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;myp-prod-secure-bucket&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;myp-prod-k8s-master-node&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;myp-prod-k8s-worker-node-1&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; As a best practice, production workloads should be managed in a separate GCP project. This approach improves production performance, enhances security, and ensures complete isolation between development and production environments.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform workspace new dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
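&lt;p&gt;Inside the modules, this naming pattern is easy to centralize with a Terraform local that combines the project id with the current workspace (a sketch; the variable and resource names are assumptions, not taken from the repository):&lt;/p&gt;

```hcl
locals {
  # {project-id}-{env} prefix, e.g. "myp-dev" in the dev workspace
  name_prefix = "${var.project_id}-${terraform.workspace}"
}

resource "google_storage_bucket" "secure" {
  name     = "${local.name_prefix}-secure-bucket"
  location = var.gcp_region
}
```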



&lt;h3&gt;
  
  
  Terraform Configuration
&lt;/h3&gt;

&lt;p&gt;Resource configurations can be defined in a &lt;code&gt;dev.tfvars&lt;/code&gt; variable file. Different variable files can be maintained for different environments (e.g., &lt;code&gt;dev.tfvars&lt;/code&gt;, &lt;code&gt;prod.tfvars&lt;/code&gt;).&lt;br&gt;
For example, project-specific values such as the project ID and project name for each environment can be configured in these files.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#--------------------- Development Project Configuration ---------------------&lt;/span&gt;
&lt;span class="c"&gt;#Development Project configuration, this project configuration is used to maintain resources for this project. eg: project_id will be used to create the GCP resources&lt;/span&gt;
project_id   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"myp"&lt;/span&gt;
project_name &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"My Project"&lt;/span&gt;
&lt;span class="c"&gt;# --------------------- GCP Project and Regsion Configuration ---------------------&lt;/span&gt;
gcp_region &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east1"&lt;/span&gt;
gcp_zone   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east1-b"&lt;/span&gt;
&lt;span class="c"&gt;#--------------------- Network Configuration ---------------------&lt;/span&gt;
nw_network_name          &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"default"&lt;/span&gt;
nw_subnet_public_address_range &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.0.0/24"&lt;/span&gt;
nw_subnet_private_address_range &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.1.0/24"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
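&lt;p&gt;Each value in &lt;code&gt;dev.tfvars&lt;/code&gt; needs a matching declaration in the module. A hedged sketch of the corresponding &lt;code&gt;variables.tf&lt;/code&gt;:&lt;/p&gt;

```hcl
variable "project_id"   { type = string }
variable "project_name" { type = string }
variable "gcp_region"   { type = string }
variable "gcp_zone"     { type = string }

variable "nw_network_name" {
  type    = string
  default = "default"
}
variable "nw_subnet_public_address_range"  { type = string }
variable "nw_subnet_private_address_range" { type = string }
```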



&lt;h3&gt;
  
  
  Setup the resources in GCP
&lt;/h3&gt;

&lt;p&gt;Run the following Terraform commands to create the resources in the GCP account.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init &lt;span class="nt"&gt;--upgrade&lt;/span&gt;
terraform plan &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"dev.tfvars"&lt;/span&gt;
terraform apply &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"dev.tfvars"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates the following resources in the GCP project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Service Account:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="mailto:myp-k8s-master-sa@xxxxxx.gserviceaccount.com"&gt;myp-k8s-master-sa@xxxxxx.gserviceaccount.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="mailto:myp-k8s-worker-sa@xxxxxx.gserviceaccount.com"&gt;myp-k8s-worker-sa@xxxxxx.gserviceaccount.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="mailto:myp-consul-sa@xxxxxx.gserviceaccount.com"&gt;myp-consul-sa@xxxxxx.gserviceaccount.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="mailto:myp-lb-haproxy-sa@xxxxxx.gserviceaccount.com"&gt;myp-lb-haproxy-sa@xxxxxx.gserviceaccount.com&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;VPC Network (default)→Subnet:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;myp-dev-private-subnet&lt;/li&gt;
&lt;li&gt;myp-dev-public-subnet&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;VPC Network (default)→Firewall:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;myp-dev-consol-incoming-from-consol-server&lt;/li&gt;
&lt;li&gt;myp-dev-consol-outgoing-to-consol-server&lt;/li&gt;
&lt;li&gt;myp-dev-haproxy-lb-allow-http&lt;/li&gt;
&lt;li&gt;myp-dev-k8s-incoming-from-haproxy-lb&lt;/li&gt;
&lt;li&gt;myp-dev-k8s-wroker-incoming-from-k8s-master&lt;/li&gt;
&lt;li&gt;myp-dev-k8s-wroker-egress-to-k8s-master&lt;/li&gt;
&lt;li&gt;myp-dev-k8s-outgoing-to-haproxy-lb-allow&lt;/li&gt;
&lt;li&gt;myp-dev-k8s-incoming-from-gcp-helath-service&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Buckets:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Bucket name: myp-dev-{uniqueid}-secure-bucket&lt;/li&gt;
&lt;li&gt;Unique id can be configured in the dev.tfvars&lt;/li&gt;
&lt;li&gt;Folder name: /k8s/master-node&lt;/li&gt;
&lt;li&gt;Permission&lt;/li&gt;
&lt;li&gt;&lt;a href="mailto:myp-k8s-master-sa@xxxxxx.gserviceaccount.com"&gt;myp-k8s-master-sa@xxxxxx.gserviceaccount.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="mailto:myp-k8s-worker-sa@xxxxxx.gserviceaccount.com"&gt;myp-k8s-worker-sa@xxxxxx.gserviceaccount.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="mailto:myp-consul-sa@xxxxxx.gserviceaccount.com"&gt;myp-consul-sa@xxxxxx.gserviceaccount.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="mailto:myp-lb-haproxy-sa@xxxxxx.gserviceaccount.com"&gt;myp-lb-haproxy-sa@xxxxxx.gserviceaccount.com&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;VM Instances&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;myp-dev-consul-server&lt;/li&gt;
&lt;li&gt;myp-dev-haproxy-lb-1&lt;/li&gt;
&lt;li&gt;myp-dev-k8s-master-node&lt;/li&gt;
&lt;li&gt;myp-dev-k8s-worker-node-mig&lt;/li&gt;
&lt;li&gt;myp-dev-k8s-worker-node-bxsp
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Instance Templates&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;myp-dev-k8s-worker-node-template&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Instance Groups&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;myp-dev-k8s-worker-node-mig&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Health Checks&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;myp-dev-k8s-worker-node-health-check&lt;/li&gt;
&lt;li&gt;All nodes should report as 100% healthy&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Install Web Applications K8s cluster
&lt;/h2&gt;

&lt;p&gt;We deployed &lt;code&gt;nginx-web&lt;/code&gt; as a sample application on the Kubernetes worker nodes and accessed it through the HAProxy load balancer.&lt;br&gt;
As HAProxy is the chosen load balancer in our architecture, we configured an HAProxy Ingress Controller during the master node setup. This controller efficiently routes incoming traffic to the appropriate applications running within the pods.&lt;/p&gt;
&lt;h3&gt;
  
  
  Install the HAProxy ingress controller into master VM in the GCP console
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Log in as the ubuntu user and change to the home directory
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;su ubuntu
&lt;span class="nb"&gt;cd&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Ensure the worker nodes have joined the master node
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes

&lt;span class="c"&gt;#Output&lt;/span&gt;
NAME                           STATUS   ROLES           AGE     VERSION
myp-dev-k8s-master-node        Ready    control-plane   6m50s   v1.32.4
myp-dev-k8s-worker-node-0q3j   Ready    &amp;lt;none&amp;gt;          5m26s   v1.32.4
myp-dev-k8s-worker-node-c3js   Ready    &amp;lt;none&amp;gt;          5m28s   v1.32.4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Ensure the HAProxy pods are running on all worker nodes
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; haproxy-controller &lt;span class="nt"&gt;-o&lt;/span&gt; wide

&lt;span class="c"&gt;#Output&lt;/span&gt;
NAME                                        READY   STATUS      RESTARTS   AGE     IP                NODE                           NOMINATED NODE   READINESS GATES
haproxy-kubernetes-ingress-27lwj            1/1     Running     0          5m27s   192.168.100.1     myp-dev-k8s-worker-node-c3js   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
haproxy-kubernetes-ingress-crdjob-1-5rwd4   0/1     Completed   0          6m25s   192.168.100.2     myp-dev-k8s-worker-node-c3js   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
haproxy-kubernetes-ingress-khrxt            1/1     Running     0          4m56s   192.168.209.129   myp-dev-k8s-worker-node-0q3j   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Ensure the HAProxy service is listening on port 30080
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#Verify the service&lt;/span&gt;
kubectl get svc &lt;span class="nt"&gt;-n&lt;/span&gt; haproxy-controller haproxy-kubernetes-ingress

&lt;span class="c"&gt;#Output&lt;/span&gt;
NAME                         TYPE       CLUSTER-IP     EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;                                                   AGE
haproxy-kubernetes-ingress   NodePort   10.97.64.228   &amp;lt;none&amp;gt;        80:30080/TCP,443:30443/TCP,443:30443/UDP,1024:30002/TCP   7m35s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
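&lt;p&gt;The HTTP NodePort can also be extracted programmatically from the &lt;code&gt;PORT(S)&lt;/code&gt; column, which is handy in setup scripts. A sketch run against the sample value above:&lt;/p&gt;

```shell
# Extract the NodePort mapped to service port 80 from a PORT(S) string.
ports='80:30080/TCP,443:30443/TCP,443:30443/UDP,1024:30002/TCP'
http_nodeport=$(echo "$ports" | tr ',' '\n' | awk -F'[:/]' '$1 == 80 {print $2}')
echo "HTTP NodePort: ${http_nodeport}"
```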


&lt;ol&gt;
&lt;li&gt;Ensure kube-proxy is running properly.
If HAProxy is running on the worker node but the port is not open, it could be due to issues with kube-proxy, which manages the network routing for services in Kubernetes.
Check whether kube-proxy is running on the worker nodes:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system &lt;span class="nt"&gt;-o&lt;/span&gt; wide | &lt;span class="nb"&gt;grep &lt;/span&gt;kube-proxy

&lt;span class="c"&gt;#Output&lt;/span&gt;
kube-proxy-9hqbv                                  1/1     Running   0          7m15s   10.0.1.4          myp-dev-k8s-worker-node-c3js   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-proxy-cnlbl                                  1/1     Running   0          8m29s   10.0.1.3          myp-dev-k8s-master-node        &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-proxy-jrh2p                                  1/1     Running   0          7m13s   10.0.1.5          myp-dev-k8s-worker-node-0q3j   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Install sample nginx server in the K8s and create ingress resource of HAProxy
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Install the &lt;code&gt;nginx-web&lt;/code&gt; server on K8s and expose port &lt;code&gt;80&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create deployment nginx-web &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80
kubectl get deployments.apps &lt;span class="nt"&gt;-o&lt;/span&gt; wide
kubectl expose deployment nginx-web &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80 &lt;span class="nt"&gt;--target-port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80 &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ClusterIP
kubectl get  svc

&lt;span class="c"&gt;#Output&lt;/span&gt;
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;   AGE
kubernetes   ClusterIP   10.96.0.1        &amp;lt;none&amp;gt;        443/TCP   9m3s
nginx-web    ClusterIP   10.111.110.192   &amp;lt;none&amp;gt;        80/TCP    1s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
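&lt;p&gt;The imperative commands above can equivalently be expressed as a declarative manifest, which is easier to version-control. This is a sketch, not part of the original setup; the field values mirror the commands above:&lt;/p&gt;

```yaml
# Sketch: declarative equivalent of `kubectl create deployment nginx-web`
# and `kubectl expose deployment nginx-web`.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-web
  template:
    metadata:
      labels:
        app: nginx-web
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-web
spec:
  type: ClusterIP
  selector:
    app: nginx-web
  ports:
  - port: 80
    targetPort: 80
```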


&lt;ol&gt;
&lt;li&gt;Scale the deployment to spread pods across multiple nodes
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl scale deployment nginx-web &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2

kubectl get pods &lt;span class="nt"&gt;-o&lt;/span&gt; wide &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx-web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAME                         READY   STATUS    RESTARTS   AGE   IP                NODE                           NOMINATED NODE   READINESS GATES
nginx-web-8684b95849-j59tv   1/1     Running   0          11s   192.168.209.130   myp-dev-k8s-worker-node-0q3j   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
nginx-web-8684b95849-wn224   1/1     Running   0          32s   192.168.100.3     myp-dev-k8s-worker-node-c3js   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
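&lt;p&gt;Scaling alone relies on the default scheduler to spread the replicas; it usually does, but it is not guaranteed. A topology spread constraint can enforce at most a one-pod skew per node. This fragment is a sketch and an assumption, not part of the original setup; it would be merged into the &lt;code&gt;nginx-web&lt;/code&gt; Deployment's pod template:&lt;/p&gt;

```yaml
# Sketch (not in the original setup): enforce spreading nginx-web pods
# across nodes instead of relying on default scheduler behaviour.
spec:
  template:
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: nginx-web
```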



&lt;ol&gt;
&lt;li&gt;Create the ingress resource to route requests to the &lt;code&gt;nginx-web:80&lt;/code&gt; application
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#Install VI editor to create the resource file&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;vim &lt;span class="nt"&gt;-y&lt;/span&gt;
vi nginx-ingress.yaml

&lt;span class="c"&gt;#Copy following configuration insuide `nginx-ingress.yaml`&lt;/span&gt;
&lt;span class="c"&gt;#nginx-ingress.yaml&lt;/span&gt;
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default
  annotations:
    ingress.class: &lt;span class="s2"&gt;"haproxy"&lt;/span&gt;
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-web
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Apply the ingress resource
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; nginx-ingress.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Identify the worker node IPs and test access to the app through the NodePort on each worker node
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide
curl http://&amp;lt;worker-node1-ip&amp;gt;:30080
curl http://&amp;lt;worker-node2-ip&amp;gt;:30080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the following output from the Nginx server home page:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;h1&amp;gt;&lt;/span&gt;Welcome to nginx!&lt;span class="nt"&gt;&amp;lt;/h1&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;p&amp;gt;&lt;/span&gt;If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.&lt;span class="nt"&gt;&amp;lt;/p&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;p&amp;gt;&lt;/span&gt;For online documentation and support please refer to
&lt;span class="nt"&gt;&amp;lt;a&lt;/span&gt; &lt;span class="na"&gt;href=&lt;/span&gt;&lt;span class="s"&gt;"http://nginx.org/"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;nginx.org&lt;span class="nt"&gt;&amp;lt;/a&amp;gt;&lt;/span&gt;.&lt;span class="nt"&gt;&amp;lt;br/&amp;gt;&lt;/span&gt;
Commercial support is available at
&lt;span class="nt"&gt;&amp;lt;a&lt;/span&gt; &lt;span class="na"&gt;href=&lt;/span&gt;&lt;span class="s"&gt;"http://nginx.com/"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;nginx.com&lt;span class="nt"&gt;&amp;lt;/a&amp;gt;&lt;/span&gt;.&lt;span class="nt"&gt;&amp;lt;/p&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;&lt;/span&gt;Thank you for using nginx.&lt;span class="nt"&gt;&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/body&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/html&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Identify the &lt;code&gt;haproxy&lt;/code&gt; VM's external IP address in the GCP console and open the following URL in a browser
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;http://&amp;lt;haproxy-vm-external-ip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Destroy the resources in GCP account
&lt;/h2&gt;

&lt;p&gt;Execute the following command to destroy and clean up the resources created on GCP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform destroy &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"dev.tfvars"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Source code: &lt;a href="https://github.com/developerhelperhub/setup-k8s-cluster-on-gcp" rel="noopener noreferrer"&gt;setup-k8s-cluster-on-gcp&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Debug the services
&lt;/h1&gt;

&lt;p&gt;The following Git repository contains documents for debugging common issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/developerhelperhub/setup-k8s-cluster-on-gcp/debugs" rel="noopener noreferrer"&gt;Debug Notes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Verify the VM installation&lt;/li&gt;
&lt;li&gt;Verify the storage&lt;/li&gt;
&lt;li&gt;Verify Consul&lt;/li&gt;
&lt;li&gt;Verify the master node&lt;/li&gt;
&lt;li&gt;Verify the HAProxy&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>architecture</category>
      <category>kubernetes</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Setup K8s cluster on GCP VMs</title>
      <dc:creator>Binoy</dc:creator>
      <pubDate>Mon, 28 Apr 2025 17:07:13 +0000</pubDate>
      <link>https://dev.to/binoy_59380e698d318/setup-k8s-cluster-on-gcp-vms-4lip</link>
      <guid>https://dev.to/binoy_59380e698d318/setup-k8s-cluster-on-gcp-vms-4lip</guid>
      <description>&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;This section explains how to set up a Kubernetes (K8s) cluster on Google Cloud Platform (GCP) using Terraform. Instead of using GCP's managed Kubernetes service, we will install open-source Kubernetes manually inside virtual machines (VMs).&lt;/p&gt;

&lt;h1&gt;
  
  
  Objective
&lt;/h1&gt;

&lt;p&gt;In software and microservice architecture design, it is essential to consider the scalability, availability, and performance of the application. I am proposing the following design considerations when selecting Kubernetes as part of the software architecture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When the application requires high performance, availability, scalability, and security&lt;/li&gt;
&lt;li&gt;When greater control over and maintenance of the infrastructure is needed&lt;/li&gt;
&lt;li&gt;When the application architecture involves complex microservices&lt;/li&gt;
&lt;li&gt;When opting for open-source solutions (optional)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When Kubernetes may not be the right choice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If the application is small and can be supported by a cloud provider’s managed services, such as AWS ECS with Fargate (serverless technology)&lt;/li&gt;
&lt;li&gt;If the application does not require high throughput&lt;/li&gt;
&lt;li&gt;If the goal is to minimize IT operational costs and reduce infrastructure management responsibilities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this setup, I considered the following points while building Kubernetes (K8s) on GCP:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform: Used to easily deploy, manage, and destroy the cluster infrastructure.&lt;/li&gt;
&lt;li&gt;HAProxy: Implemented an open-source load balancer instead of relying on GCP’s native load balancer. HAProxy provides high-performance load balancing.&lt;/li&gt;
&lt;li&gt;GCP RBAC: Leveraged GCP’s Role-Based Access Control to simplify node joining, manage Kubernetes-related files in GCP Buckets, and associate service accounts with bucket roles and virtual machines.&lt;/li&gt;
&lt;li&gt;Minimal Permissions: As a best practice, configured minimal necessary roles for infrastructure components to enhance security.&lt;/li&gt;
&lt;li&gt;Firewall Rules: Configured the following rules: Public → HAProxy (port 80) and HAProxy → Kubernetes Worker Nodes (port 30080).&lt;/li&gt;
&lt;li&gt;Public and Private Subnets: Separated application workloads and network traffic by isolating resources into public and private subnets.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3zql56a6iby89f3pe2ho.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3zql56a6iby89f3pe2ho.png" width="800" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This is not a production-ready architecture. In production environments, the default network should not be used. Instead, you should create your own VPCs, such as separate networks for Management, Development, and Production environments (e.g., following HIPAA-compliant network architecture recommendations).&lt;/p&gt;

&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;p&gt;Set up the following tools and accounts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a GCP account (a free-tier account is sufficient)&lt;/li&gt;
&lt;li&gt;Install Terraform on your machine (minimum Terraform v1.9.3)&lt;/li&gt;
&lt;li&gt;Install the gcloud command-line tool on your machine
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Login GCP account
&lt;/h3&gt;

&lt;p&gt;Log in to your GCP account through the gcloud CLI by executing the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud auth login 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(Optional) Re-authenticate for a Specific Project&lt;br&gt;
To log in and set a GCP project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud config &lt;span class="nb"&gt;set &lt;/span&gt;project PROJECT_ID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;PROJECT_ID&lt;/code&gt; with your actual GCP project ID.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure the &lt;code&gt;env&lt;/code&gt; variables
&lt;/h3&gt;

&lt;p&gt;The following environment variables need to be configured. Replace &lt;code&gt;{your-gcp-project-id}&lt;/code&gt; with your GCP project ID.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;CLOUDSDK_CORE_PROJECT&lt;/code&gt;: Configures the GCP project ID used by the &lt;code&gt;gcloud&lt;/code&gt; CLI&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TF_VAR_gcp_project_id&lt;/code&gt;: Configures the GCP project ID used by Terraform
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CLOUDSDK_CORE_PROJECT&lt;/span&gt;&lt;span class="o"&gt;={&lt;/span&gt;your-gcp-project-id&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="nb"&gt;export set &lt;/span&gt;&lt;span class="nv"&gt;TF_VAR_gcp_project_id&lt;/span&gt;&lt;span class="o"&gt;={&lt;/span&gt;your-gcp-project-id&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
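&lt;p&gt;Since both variables must always point at the same project, it can help to set them from a single value. This is a minimal sketch; the project ID below is a placeholder:&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: set the GCP project id once and derive both variables from it,
# so the gcloud CLI and Terraform always target the same project.
# "my-gcp-project" is a placeholder, not a real project id.
GCP_PROJECT_ID="my-gcp-project"
export CLOUDSDK_CORE_PROJECT="$GCP_PROJECT_ID"
export TF_VAR_gcp_project_id="$GCP_PROJECT_ID"

echo "gcloud project:    $CLOUDSDK_CORE_PROJECT"
echo "terraform project: $TF_VAR_gcp_project_id"
```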



&lt;h3&gt;
  
  
  Terraform Structure
&lt;/h3&gt;

&lt;p&gt;The following Terraform module structure is maintained to deploy the resources in the GCP account:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Favqd4q2bk759er94pguo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Favqd4q2bk759er94pguo.png" width="730" height="1358"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Create the Terraform workspace
&lt;/h3&gt;

&lt;p&gt;The following command is used to create a Terraform workspace. A workspace in Terraform functions similarly to managing different environments in software development, such as &lt;code&gt;dev&lt;/code&gt; and &lt;code&gt;prod&lt;/code&gt;. In this setup, the workspace helps differentiate resources for each environment within the same GCP project.&lt;/p&gt;

&lt;p&gt;We are following a consistent naming convention for resource creation in the GCP account:&lt;br&gt;
Naming Pattern: &lt;code&gt;{project-id}-{env}-{resource-name}&lt;/code&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Examples for the &lt;code&gt;dev&lt;/code&gt; environment:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;myp-dev-secure-bucket&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;myp-dev-k8s-master-node&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;myp-dev-k8s-worker-node-1&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Examples for the &lt;code&gt;prod&lt;/code&gt; environment:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;myp-prod-secure-bucket&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;myp-prod-k8s-master-node&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;myp-prod-k8s-worker-node-1&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
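&lt;p&gt;The naming pattern above can be sketched as a small shell helper (illustrative only, not part of the repository):&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: build a GCP resource name following the
# {project-id}-{env}-{resource-name} pattern used throughout this setup.
resource_name() {
  # $1 = project id, $2 = environment (Terraform workspace), $3 = resource
  printf '%s-%s-%s\n' "$1" "$2" "$3"
}

resource_name myp dev secure-bucket      # -> myp-dev-secure-bucket
resource_name myp prod k8s-master-node   # -> myp-prod-k8s-master-node
```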

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; As a best practice, production workloads should be managed in a separate GCP project. This approach improves production performance, enhances security, and ensures complete isolation between development and production environments.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform workspace new dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Terraform Configuration
&lt;/h3&gt;

&lt;p&gt;Resource configurations can be defined in a &lt;code&gt;dev.tfvars&lt;/code&gt; variable file. Different variable files can be maintained for different environments (e.g., &lt;code&gt;dev.tfvars&lt;/code&gt;, &lt;code&gt;prod.tfvars&lt;/code&gt;).&lt;br&gt;
For example, project-specific values such as the project ID and project name for each environment can be configured in these files.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#--------------------- Development Project Configuration ---------------------&lt;/span&gt;
&lt;span class="c"&gt;#Development Project configuration, this project configuration is used to maintain resources for this project. eg: project_id will be used to create the GCP resources&lt;/span&gt;
project_id   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"myp"&lt;/span&gt;
project_name &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"My Project"&lt;/span&gt;
&lt;span class="c"&gt;# --------------------- GCP Project and Regsion Configuration ---------------------&lt;/span&gt;
gcp_region &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east1"&lt;/span&gt;
gcp_zone   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east1-b"&lt;/span&gt;
&lt;span class="c"&gt;#--------------------- Network Configuration ---------------------&lt;/span&gt;
nw_network_name          &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"default"&lt;/span&gt;
nw_subnet_public_address_range &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.0.0/24"&lt;/span&gt;
nw_subnet_private_address_range &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.1.0/24"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Setup the resources in GCP
&lt;/h3&gt;

&lt;p&gt;Run the following Terraform commands to create the resources in GCP.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init &lt;span class="nt"&gt;--upgrade&lt;/span&gt;
terraform plan &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"dev.tfvars"&lt;/span&gt;
terraform apply &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"dev.tfvars"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates the following resources in the GCP project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Service Account:

&lt;ul&gt;
&lt;li&gt;&lt;a href="mailto:k8s-master-node-sa@xxxxxx.gserviceaccount.com"&gt;k8s-master-node-sa@xxxxxx.gserviceaccount.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="mailto:k8s-worker-node-sa@xxxxxx.gserviceaccount.com"&gt;k8s-worker-node-sa@xxxxxx.gserviceaccount.com&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;VPC Network (default)→Subnet:

&lt;ul&gt;
&lt;li&gt;myp-dev-private-subnet&lt;/li&gt;
&lt;li&gt;myp-dev-public-subnet&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;VPC Network (default)→Firewall:

&lt;ul&gt;
&lt;li&gt;myp-dev-haproxy-lb-allow-http&lt;/li&gt;
&lt;li&gt;myp-dev-k8s-incoming-from-haproxy-lb&lt;/li&gt;
&lt;li&gt;myp-dev-k8s-wroker-incoming-from-k8s-master&lt;/li&gt;
&lt;li&gt;myp-dev-k8s-wroker-egress-to-k8s-master&lt;/li&gt;
&lt;li&gt;myp-dev-k8s-outgoing-to-haproxy-lb-allow&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Buckets:

&lt;ul&gt;
&lt;li&gt;Bucket name: myp-dev-{uniqueid}-secure-bucket

&lt;ul&gt;
&lt;li&gt;Unique id can be configured in the &lt;code&gt;dev.tfvars&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Folder name: /k8s/master-node&lt;/li&gt;

&lt;li&gt;Permission

&lt;ul&gt;
&lt;li&gt;&lt;a href="mailto:k8s-master-node-sa@xxxxxx.gserviceaccount.com"&gt;k8s-master-node-sa@xxxxxx.gserviceaccount.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="mailto:k8s-worker-node-sa@xxxxxx.gserviceaccount.com"&gt;k8s-worker-node-sa@xxxxxx.gserviceaccount.com&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;VM Instances

&lt;ul&gt;
&lt;li&gt;myp-dev-haproxy-lb-1&lt;/li&gt;
&lt;li&gt;myp-dev-k8s-master-node&lt;/li&gt;
&lt;li&gt;myp-dev-k8s-worker-node-1&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Verify the services are installed in the K8s VM and HAProxy VM
&lt;/h2&gt;

&lt;p&gt;We can verify whether the services have been installed successfully. The following steps can help monitor and validate the status of the services.&lt;/p&gt;

&lt;h3&gt;
  
  
  SSH into Master VM in the GCP console
&lt;/h3&gt;

&lt;p&gt;Verify that all startup scripts executed properly while the VM was starting.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#For Google Cloud VM Instances:&lt;/span&gt;
&lt;span class="c"&gt;#When using the metadata_startup_script in a GCP VM, the startup script output is logged to:&lt;/span&gt;
&lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-500f&lt;/span&gt; /var/log/syslog

&lt;span class="nb"&gt;cat&lt;/span&gt; /var/log/syslog | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"Intalled the tools!"&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; /var/log/syslog | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"Container configure default configuration!"&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; /var/log/syslog | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"Enabiling IP forwarding"&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; /var/log/syslog | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"Configured the paramters of VM for K8s!"&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; /var/log/syslog | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"Kubernetes has initialized successfully!"&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; /var/log/syslog | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"Uploaded token into bucket!"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
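&lt;p&gt;The per-marker &lt;code&gt;grep&lt;/code&gt; commands above can be collapsed into one loop that reports which markers are present or missing. This is a sketch; the sample log written here is only for illustration (on a real VM, point &lt;code&gt;LOG&lt;/code&gt; at &lt;code&gt;/var/log/syslog&lt;/code&gt;), and the marker strings must match the startup script's log lines exactly, typos included:&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: check a startup log for all expected progress markers in one pass.
# If LOG is not set, write a tiny sample log for illustration; on a real VM,
# run with LOG=/var/log/syslog instead.
if [ -z "${LOG:-}" ]; then
  LOG=sample-syslog.txt
  cat > "$LOG" <<'EOF'
startup-script: Intalled the tools!
startup-script: Kubernetes has initialized successfully!
EOF
fi

# Marker strings (typos included) must match the startup script exactly.
for m in "Intalled the tools!" \
         "Kubernetes has initialized successfully!" \
         "Uploaded token into bucket!"; do
  if grep -q "$m" "$LOG"; then
    echo "OK:      $m"
  else
    echo "MISSING: $m"
  fi
done
```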



&lt;h3&gt;
  
  
  SSH into Worker VM in the GCP console
&lt;/h3&gt;

&lt;p&gt;Verify that all startup scripts executed properly while the VM was starting.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-500f&lt;/span&gt; /var/log/syslog
&lt;span class="nb"&gt;cat&lt;/span&gt; /var/log/syslog | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"Joined the master node successfully!"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  SSH into HAProxy VM in the GCP console
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Verify that all startup scripts executed properly while the VM was starting
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    &lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-500f&lt;/span&gt; /var/log/syslog
    &lt;span class="nb"&gt;cat&lt;/span&gt; /var/log/syslog | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"Restart HAPorxy!"&lt;/span&gt;

    &lt;span class="c"&gt;##Output:&lt;/span&gt;
    startup-script: Restart HAPorxy!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Verify the HAProxy configuration and check the internal IP configured for the K8s backend service
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /etc/haproxy/haproxy.cfg

&lt;span class="c"&gt;#Following information will be added in the configuration, Output:&lt;/span&gt;
frontend http_front
        &lt;span class="nb"&gt;bind&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;:80
        mode http
        default_backend haproxy_ingress_backend
backend haproxy_ingress_backend
    mode http
    balance roundrobin
    server k8s-node1 &amp;lt;worker-node-internip&amp;gt;:30080 check maxconn 32
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
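&lt;p&gt;If more worker nodes are added, the backend needs one &lt;code&gt;server&lt;/code&gt; line per node. A sketch of generating those lines from a list of internal IPs (the IPs below are placeholders; in the real setup the configuration is generated during provisioning):&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: render one HAProxy backend `server` line per worker-node internal
# IP, matching the haproxy.cfg format shown above. Placeholder IPs.
i=1
for ip in 10.0.1.4 10.0.1.5; do
  printf '    server k8s-node%s %s:30080 check maxconn 32\n' "$i" "$ip"
  i=$((i + 1))
done
```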



&lt;ol&gt;
&lt;li&gt;Verify the TCP connection on port 30080
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;telnet &lt;span class="nt"&gt;-y&lt;/span&gt;
telnet &amp;lt;worker-node-internip&amp;gt; 30080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verify the Bucket
&lt;/h3&gt;

&lt;p&gt;Check the bucket storage to verify whether the file &lt;code&gt;/k8s/master-node/join_node.sh&lt;/code&gt; has been uploaded.&lt;br&gt;
This shell script is used to join worker nodes to the Kubernetes cluster. It will be downloaded and executed on each worker node during the setup process.&lt;/p&gt;
&lt;h3&gt;
  
  
  Verify that the nodes have joined
&lt;/h3&gt;

&lt;p&gt;We can verify that the worker nodes have joined the master node. The following steps help monitor the services.&lt;br&gt;
SSH into the master VM in the GCP console.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Verify that the nodes have joined the K8s cluster by executing the following command
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#Login into ubuntu user&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;su ubuntu
&lt;span class="c"&gt;##Check the nodes&lt;/span&gt;
kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAME                            STATUS   ROLES           AGE   VERSION
myp-dev-k8s-master-node     Ready    control-plane   26m   v1.32.3
myp-dev-k8s-worker-node-1   Ready    &amp;lt;none&amp;gt;          25m   v1.32.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Verify that the pods are installed, e.g. the network plugin (Calico in this case)
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;##Check the pods&lt;/span&gt;
kubectl get pods &lt;span class="nt"&gt;--all-namespaces&lt;/span&gt;

&lt;span class="c"&gt;##Following Output:&lt;/span&gt;
NAMESPACE     NAME                                                  READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7498b9bb4c-44pf8              1/1     Running   0          29m
kube-system   calico-node-b4bnh                                     1/1     Running   0          28m
kube-system   calico-node-bhpv8                                     1/1     Running   0          29m
kube-system   coredns-668d6bf9bc-8tdsb                              1/1     Running   0          29m
kube-system   coredns-668d6bf9bc-j6q4p                              1/1     Running   0          29m
kube-system   etcd-myp-default-k8s-master-node                      1/1     Running   0          29m
kube-system   kube-apiserver-myp-default-k8s-master-node            1/1     Running   0          29m
kube-system   kube-controller-manager-myp-default-k8s-master-node   1/1     Running   0          29m
kube-system   kube-proxy-4qgkc                                      1/1     Running   0          28m
kube-system   kube-proxy-zvlsb                                      1/1     Running   0          29m
kube-system   kube-scheduler-myp-default-k8s-master-node            1/1     Running   0          29m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Install a Web Application on the K8s Cluster
&lt;/h2&gt;

&lt;p&gt;We are deploying &lt;code&gt;nginx-web&lt;/code&gt; as a sample application on the worker nodes and connecting to it through the HAProxy load balancer.&lt;br&gt;
Since we are using HAProxy as the load balancer in our architecture, we need to configure an HAProxy Ingress Controller to properly route incoming requests to the applications running inside the pods.&lt;/p&gt;
&lt;h3&gt;
  
  
  Install the HAProxy ingress controller on the master VM in the GCP console
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Install helm
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#Login to ubuntu user&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;su ubuntu

&lt;span class="c"&gt;#Install helm&lt;/span&gt;
curl https://baltocdn.com/helm/signing.asc | &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-key add -
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;apt-transport-https &lt;span class="nt"&gt;--yes&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb https://baltocdn.com/helm/stable/debian/ all main"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/helm-stable-debian.list
&lt;span class="c"&gt;# Install&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;helm

&lt;span class="c"&gt;#Check the version&lt;/span&gt;
helm version

&lt;span class="c"&gt;#Output:&lt;/span&gt;
version.BuildInfo&lt;span class="o"&gt;{&lt;/span&gt;Version:&lt;span class="s2"&gt;"v3.17.2"&lt;/span&gt;, GitCommit:&lt;span class="s2"&gt;"cc0bbbd6d6276b83880042c1ecb34087e84d41eb"&lt;/span&gt;, GitTreeState:&lt;span class="s2"&gt;"clean"&lt;/span&gt;, GoVersion:&lt;span class="s2"&gt;"go1.23.7"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Add HAProxy repo
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Install the HAProxy ingress controller and expose the HTTP and HTTPS NodePorts so the HAProxy load balancer can connect
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;haproxy-kubernetes-ingress haproxytech/kubernetes-ingress &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--namespace&lt;/span&gt; haproxy-controller &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; controller.service.type&lt;span class="o"&gt;=&lt;/span&gt;NodePort &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; controller.service.nodePorts.http&lt;span class="o"&gt;=&lt;/span&gt;30080 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; controller.service.nodePorts.https&lt;span class="o"&gt;=&lt;/span&gt;30443 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; controller.service.nodePorts.stat&lt;span class="o"&gt;=&lt;/span&gt;30002 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; controller.service.nodePorts.prometheus&lt;span class="o"&gt;=&lt;/span&gt;30003

&lt;span class="c"&gt;#Verify the service&lt;/span&gt;
kubectl get svc &lt;span class="nt"&gt;-n&lt;/span&gt; haproxy-controller haproxy-kubernetes-ingress
&lt;span class="c"&gt;#Output&lt;/span&gt;
haproxy-kubernetes-ingress   NodePort   10.109.135.187   &amp;lt;none&amp;gt;        80:30080/TCP,443:30443/TCP,443:30443/UDP,1024:30002/TCP   103s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Install a sample nginx server in K8s and create an HAProxy ingress resource
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Install the &lt;code&gt;nginx-web&lt;/code&gt; server on K8s and expose port &lt;code&gt;80&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create deployment nginx-web &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80
kubectl get deployments.apps &lt;span class="nt"&gt;-o&lt;/span&gt; wide
kubectl expose deployment nginx-web &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80 &lt;span class="nt"&gt;--target-port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80 &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ClusterIP
kubectl get  svc

&lt;span class="c"&gt;#Output&lt;/span&gt;
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;   AGE
kubernetes   ClusterIP   10.96.0.1        &amp;lt;none&amp;gt;        443/TCP   60m
nginx-web    ClusterIP   10.110.173.152   &amp;lt;none&amp;gt;        80/TCP    16m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Create the ingress resource to route requests to the &lt;code&gt;nginx-web:80&lt;/code&gt; application
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#Install VI editor to create the resource file&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;vim &lt;span class="nt"&gt;-y&lt;/span&gt;
vi nginx-ingress.yaml

&lt;span class="c"&gt;#Copy following configuration insuide `nginx-ingress.yaml`&lt;/span&gt;
&lt;span class="c"&gt;#nginx-ingress.yaml&lt;/span&gt;
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
    name: nginx-ingress
    namespace: default
    annotations:
    ingress.class: &lt;span class="s2"&gt;"haproxy"&lt;/span&gt;
spec:
    rules:
    - http:
        paths:
        - path: /
        pathType: Prefix
        backend:
            service:
            name: nginx-web
            port:
                number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Apply the ingress resource
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; nginx-ingress.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Identify the worker node IP and test access to the app through the worker node's NodePort
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide
curl http://&amp;lt;worker-node-ip&amp;gt;:30080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We will see the following output from the Nginx server home page:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;h1&amp;gt;&lt;/span&gt;Welcome to nginx!&lt;span class="nt"&gt;&amp;lt;/h1&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;p&amp;gt;&lt;/span&gt;If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.&lt;span class="nt"&gt;&amp;lt;/p&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;p&amp;gt;&lt;/span&gt;For online documentation and support please refer to
&lt;span class="nt"&gt;&amp;lt;a&lt;/span&gt; &lt;span class="na"&gt;href=&lt;/span&gt;&lt;span class="s"&gt;"http://nginx.org/"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;nginx.org&lt;span class="nt"&gt;&amp;lt;/a&amp;gt;&lt;/span&gt;.&lt;span class="nt"&gt;&amp;lt;br/&amp;gt;&lt;/span&gt;
Commercial support is available at
&lt;span class="nt"&gt;&amp;lt;a&lt;/span&gt; &lt;span class="na"&gt;href=&lt;/span&gt;&lt;span class="s"&gt;"http://nginx.com/"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;nginx.com&lt;span class="nt"&gt;&amp;lt;/a&amp;gt;&lt;/span&gt;.&lt;span class="nt"&gt;&amp;lt;/p&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;&lt;/span&gt;Thank you for using nginx.&lt;span class="nt"&gt;&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/body&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/html&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Identify the external IP address of the &lt;code&gt;haproxy&lt;/code&gt; VM from the GCP console and open the following URL in a browser
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;http://&amp;lt;haproxy-vm-external-ip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Destroy the resources in GCP account
&lt;/h2&gt;

&lt;p&gt;Execute the following command to destroy and clean up the resources created on GCP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform destroy &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"dev.tfvars"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Git Repo: &lt;a href="https://github.com/developerhelperhub/setup-k8s-cluster-on-gcp" rel="noopener noreferrer"&gt;setup-k8s-cluster-on-gcp&lt;/a&gt;&lt;/p&gt;

</description>
      <category>gcp</category>
      <category>terraform</category>
      <category>kubernetes</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Spring Boot 3.4.1 Keycloak introspect and method level security</title>
      <dc:creator>Binoy</dc:creator>
      <pubDate>Tue, 21 Jan 2025 18:26:42 +0000</pubDate>
      <link>https://dev.to/binoy_59380e698d318/spring-boot-341-keycloak-introspect-and-method-level-security-530o</link>
      <guid>https://dev.to/binoy_59380e698d318/spring-boot-341-keycloak-introspect-and-method-level-security-530o</guid>
      <description>&lt;h1&gt;
  
  
  Spring Boot 3.4.1 Keycloak introspect and method level security
&lt;/h1&gt;

&lt;p&gt;This section outlines the process of implementing the Keycloak introspection flow in Spring Boot version 3.4.1 and demonstrates how to enable method-level security based on roles configured in Keycloak.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integration
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Spring Boot 3.4.1&lt;/li&gt;
&lt;li&gt;Java 22&lt;/li&gt;
&lt;li&gt;Keycloak Integration and RBAC implementation&lt;/li&gt;
&lt;li&gt;Spring Oauth2 Resource Server and Method Level Security&lt;/li&gt;
&lt;li&gt;MongoDB NoSQL DB and Auditing&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Installing Dependency Services
&lt;/h2&gt;

&lt;p&gt;The following services are required dependencies before starting the Spring Boot application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Keycloak Installation in Docker
&lt;/h3&gt;

&lt;p&gt;Execute the following commands to start the Keycloak service in a Docker container. The admin username and password are set to "admin" and "admin," respectively.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; klight-authentication-server &lt;span class="nt"&gt;-p&lt;/span&gt; 8080:8080 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;KEYCLOAK_ADMIN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;admin &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;KEYCLOAK_ADMIN_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;admin &lt;span class="se"&gt;\&lt;/span&gt;
  quay.io/keycloak/keycloak:26.0 start-dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  MongoDB installation in Docker
&lt;/h3&gt;

&lt;p&gt;Run the following commands to start the MongoDB service in a Docker container. The root username and password are both set to &lt;code&gt;klight-api-gateway&lt;/code&gt;. After installing MongoDB, you need to create a database named "klight-api-gateway" to enable connection with the Spring Boot application, as this database is referenced in the &lt;code&gt;application-dev.yaml&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--name&lt;/span&gt; klight-api-gateway-mongo &lt;span class="nt"&gt;-p&lt;/span&gt; 27017:27017 &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;MONGO_INITDB_ROOT_USERNAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;klight-api-gateway &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;MONGO_INITDB_ROOT_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;klight-api-gateway &lt;span class="nt"&gt;-d&lt;/span&gt; amd64/mongo:8.0.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
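&lt;p&gt;On the Spring Boot side, these credentials end up in a Mongo connection URI. A sketch of how the pieces fit together (the &lt;code&gt;authSource=admin&lt;/code&gt; part is an assumption based on the root user being created by the container, and the exact property layout in &lt;code&gt;application-dev.yaml&lt;/code&gt; may differ):&lt;/p&gt;

```shell
# Hypothetical URI assembled from the credentials used in the docker run
# above; the root user created via MONGO_INITDB_ROOT_* lives in the
# "admin" database, hence authSource=admin.
MONGO_USER='klight-api-gateway'
MONGO_PASS='klight-api-gateway'
MONGO_DB='klight-api-gateway'
MONGO_URI="mongodb://${MONGO_USER}:${MONGO_PASS}@127.0.0.1:27017/${MONGO_DB}?authSource=admin"
echo "$MONGO_URI"
```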



&lt;h2&gt;
  
  
  Spring Security Dependencies
&lt;/h2&gt;

&lt;p&gt;The following dependencies must be configured to enable the &lt;code&gt;introspection&lt;/code&gt; flow. The admin service is responsible for validating tokens with the Keycloak service. A token must be generated from Keycloak and included in the &lt;code&gt;Authorization&lt;/code&gt; header of the API request.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.springframework.boot&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;spring-boot-starter-oauth2-resource-server&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Keycloak configuration in YAML file
&lt;/h2&gt;

&lt;p&gt;We have to configure the following attributes in the YAML file. This configuration is used in the &lt;code&gt;SecurityConfig.java&lt;/code&gt; class to set up the introspection flow.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;keycloak&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;introspection-uri&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://127.0.0.1:8080/realms/klight-api-gateway/protocol/openid-connect/token/introspect&lt;/span&gt;
  &lt;span class="na"&gt;client-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;klight-api-gateway-admin-connect&lt;/span&gt;
  &lt;span class="na"&gt;client-secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HDYMxWu2G2VWn5u59MJKwECIBd6VTO7E&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Security Configuration
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;SecurityConfig.java&lt;/code&gt; class below uses those YAML attributes to set up the introspection flow and enable method-level security.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Configuration&lt;/span&gt;
&lt;span class="nd"&gt;@EnableWebSecurity&lt;/span&gt;
&lt;span class="nd"&gt;@EnableMethodSecurity&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prePostEnabled&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;securedEnabled&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;jsr250Enabled&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;SecurityConfig&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="nd"&gt;@Value&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"${keycloak.introspection-uri}"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;introspectionUrl&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="nd"&gt;@Value&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"${keycloak.client-id}"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;clientId&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="nd"&gt;@Value&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"${keycloak.client-secret}"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;clientSecret&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="nd"&gt;@Autowired&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;ExceptionAccessDeniedHandler&lt;/span&gt; &lt;span class="n"&gt;accessDeniedHandler&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="nd"&gt;@Autowired&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;ExcetionAuthenticationEntryPoint&lt;/span&gt; &lt;span class="n"&gt;entryPoint&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="nd"&gt;@Bean&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;SecurityFilterChain&lt;/span&gt; &lt;span class="nf"&gt;securityFilterChain&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;HttpSecurity&lt;/span&gt; &lt;span class="n"&gt;httpSecurity&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;OpaqueTokenAuthenticationConverter&lt;/span&gt; &lt;span class="n"&gt;converter&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="kd"&gt;throws&lt;/span&gt; &lt;span class="nc"&gt;Exception&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;


        &lt;span class="n"&gt;httpSecurity&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;csrf&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nl"&gt;AbstractHttpConfigurer:&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="n"&gt;disable&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;authorizeHttpRequests&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
                        &lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;anyRequest&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;authenticated&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;oauth2ResourceServer&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
                        &lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;opaqueToken&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
                                &lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;introspectionUri&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;introspectionUrl&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
                                        &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;introspectionClientCredentials&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
                                                &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;clientId&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                                                &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;clientSecret&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
                                        &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;authenticationConverter&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;converter&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
                                &lt;span class="o"&gt;)&lt;/span&gt;

                                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;authenticationEntryPoint&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;entryPoint&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;handler&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt;
                                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;accessDeniedHandler&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;accessDeniedHandler&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;handler&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt;
                                &lt;span class="o"&gt;)&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;exceptionHandling&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ex&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
                        &lt;span class="n"&gt;ex&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;authenticationEntryPoint&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;entryPoint&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;handler&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt;
                                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;accessDeniedHandler&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;accessDeniedHandler&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;handler&lt;/span&gt;&lt;span class="o"&gt;()))&lt;/span&gt;
        &lt;span class="o"&gt;;&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;httpSecurity&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key configurations in the above setup are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable method-level security in the Spring Boot application using &lt;code&gt;@EnableMethodSecurity(prePostEnabled = true, securedEnabled = true, jsr250Enabled = true)&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Implement the &lt;code&gt;OpaqueTokenAuthenticationConverter&lt;/code&gt; converter to process &lt;code&gt;realm_access&lt;/code&gt; and &lt;code&gt;resource_access&lt;/code&gt; from the token and create a list of &lt;code&gt;GrantedAuthority&lt;/code&gt; based on the roles configured in the Keycloak service.&lt;/li&gt;
&lt;li&gt;Disable &lt;code&gt;csrf&lt;/code&gt; protection for the REST API service.&lt;/li&gt;
&lt;li&gt;Authenticate all incoming requests using &lt;code&gt;authorizeHttpRequests&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Configure the introspection flow with &lt;code&gt;oauth2ResourceServer&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Handle authentication exceptions using &lt;code&gt;authenticationEntryPoint&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Set up an access denied handler with &lt;code&gt;accessDeniedHandler&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Method Level Configuration
&lt;/h2&gt;

&lt;p&gt;We secure the &lt;code&gt;ApiService.java&lt;/code&gt; methods using the following configuration. The &lt;code&gt;@PreAuthorize("hasAuthority('ROLE_ADMIN')")&lt;/code&gt; annotation should be added to service class methods to validate role permissions. The &lt;code&gt;ROLE_ADMIN&lt;/code&gt; name referenced in the annotation must match the role created in the Keycloak service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@PreAuthorize&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"hasAuthority('ROLE_ADMIN')"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;ApiServiceRequest&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;info&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Service creating"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="nc"&gt;ApiServiceCreate&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;createFactory&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getObject&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

    &lt;span class="n"&gt;map&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;create&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Keycloak Configuration
&lt;/h2&gt;

&lt;p&gt;The following configurations need to be set up in the Keycloak service. Log in to Keycloak at &lt;code&gt;http://localhost:8080&lt;/code&gt; in the browser with the admin username and password.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the Realm &lt;code&gt;klight-api-gateway&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;Keycloak supports multi-tenancy, enabling us to create application-specific configurations through realm setup. For the &lt;code&gt;Klight API Gateway&lt;/code&gt;, we maintain a dedicated realm, which holds the clients, users, groups, and roles.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on the "Keycloak master" realm in the left corner.&lt;/li&gt;
&lt;li&gt;Click the "Create realm" button.&lt;/li&gt;
&lt;li&gt;Enter the realm name as "klight-api-gateway."&lt;/li&gt;
&lt;li&gt;Enable the realm by toggling the "Enabled" option.&lt;/li&gt;
&lt;li&gt;Click the "Create" button.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Ensure that the "klight-api-gateway" realm is selected before configuring anything, as the default realm is set to "master."&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the Client &lt;code&gt;klight-api-gateway-admin-connect&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;Follow the steps below to create a new client. This client supports OpenID Connect, allowing the admin service to connect to the Keycloak server.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click "Clients" in the left sidebar menu.&lt;/li&gt;
&lt;li&gt;Click the "Create Client" button.&lt;/li&gt;
&lt;li&gt;Configure the following "General Settings":

&lt;ul&gt;
&lt;li&gt;Set the client type to "OpenID Connect"&lt;/li&gt;
&lt;li&gt;Enter the Client ID: &lt;code&gt;klight-api-gateway-admin-connect&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Enter the Client Name: &lt;code&gt;Klight API Gateway Admin Connect&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Click the "Next" button.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Configure the following "Capability Configuration":

&lt;ul&gt;
&lt;li&gt;Enable "Client Authentication"&lt;/li&gt;
&lt;li&gt;Select only the "Direct access grants" flow. This flow allows generating tokens using the "username" and "password" method, which is ideal for trusted clients such as mobile apps or backend services.&lt;/li&gt;
&lt;li&gt;Click the "Next" button.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Configure the "Login Settings":

&lt;ul&gt;
&lt;li&gt;No additional configuration is needed here for now.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Click the "Save" button.&lt;/li&gt;

&lt;li&gt;After the client is created, select the "Credentials" tab.

&lt;ul&gt;
&lt;li&gt;Set the "Client Authenticator" to "Client ID and Secret"&lt;/li&gt;
&lt;li&gt;Click "Generate" to create a new "Client Secret."&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Select the "Roles" tab

&lt;ul&gt;
&lt;li&gt;Click the "Create Role" button&lt;/li&gt;
&lt;li&gt;Enter the role name &lt;code&gt;ROLE_ADMIN&lt;/code&gt;; this is the role name configured at the method level&lt;/li&gt;
&lt;li&gt;Click "Save" button&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: This client ID and client secret are configured in the &lt;code&gt;application-dev.yaml&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;We have to create the &lt;code&gt;ROLE_ADMIN&lt;/code&gt; in the client &lt;code&gt;Roles&lt;/code&gt; tab&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the User
&lt;/h3&gt;

&lt;p&gt;In this flow, we need to create a new user to authenticate the server.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click "Users" in the left sidebar menu.&lt;/li&gt;
&lt;li&gt;Click the "Create new user" button.&lt;/li&gt;
&lt;li&gt;Enable the "Email Verified" option.&lt;/li&gt;
&lt;li&gt;Enter the following "General" information:

&lt;ul&gt;
&lt;li&gt;Username: "api-admin"&lt;/li&gt;
&lt;li&gt;Email: "&lt;a href="mailto:my-user@test.com"&gt;my-user@test.com&lt;/a&gt;"&lt;/li&gt;
&lt;li&gt;First Name: "my"&lt;/li&gt;
&lt;li&gt;Last Name: "user"&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Click the "Create" button.&lt;/li&gt;

&lt;li&gt;Select the user &lt;code&gt;api-admin&lt;/code&gt; from the user list to configure its password and role&lt;/li&gt;

&lt;li&gt;Click the "Credentials" tab.

&lt;ul&gt;
&lt;li&gt;Click the "Set password" button.&lt;/li&gt;
&lt;li&gt;Enter "test" for both the "Password" and "Password Confirmation" fields.&lt;/li&gt;
&lt;li&gt;Set "Temporary" to Off.&lt;/li&gt;
&lt;li&gt;Click the "Save" button.&lt;/li&gt;
&lt;li&gt;Click the "Save password" button.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Click the "Role Mapping" tab

&lt;ul&gt;
&lt;li&gt;Click the "Assign Role" button&lt;/li&gt;
&lt;li&gt;Search the &lt;code&gt;ROLE_ADMIN&lt;/code&gt; in the search box&lt;/li&gt;
&lt;li&gt;Select the check box of &lt;code&gt;ROLE_ADMIN&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Click on the &lt;code&gt;Assign&lt;/code&gt; button&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Testing Endpoint
&lt;/h2&gt;

&lt;p&gt;Once the configuration is done, we can call the API to test authentication and authorization.&lt;/p&gt;

&lt;h3&gt;
  
  
  Get the access token
&lt;/h3&gt;

&lt;p&gt;Run the following command to get the access token from Keycloak&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl --location 'http://127.0.0.1:8080/realms/klight-api-gateway/protocol/openid-connect/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'grant_type=password' \
--data-urlencode 'client_id=klight-api-gateway-admin-connect' \
--data-urlencode 'client_secret=HDYMxWu2G2VWn5u59MJKwECIBd6VTO7E' \
--data-urlencode 'scope=openid' \
--data-urlencode 'username=api-admin' \
--data-urlencode 'password=test'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
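&lt;p&gt;Before calling the admin API, it can be useful to confirm which roles the token actually carries. The payload segment of a JWT is just base64url-encoded JSON, so it can be decoded locally; a sketch (the token below is fabricated for illustration, a real one comes from the curl above):&lt;/p&gt;

```shell
# Build a fabricated JWT-shaped token for illustration; in practice TOKEN
# would hold the access_token returned by Keycloak.
CLAIMS='{"preferred_username":"api-admin","realm_access":{"roles":["ROLE_ADMIN"]}}'
PAYLOAD=$(printf '%s' "$CLAIMS" | base64 | tr -d '=\n' | tr '/+' '_-')
TOKEN="header.${PAYLOAD}.signature"

# Cut out the middle (payload) segment of header.payload.signature.
SEGMENT=$(printf '%s' "$TOKEN" | cut -d '.' -f 2)

# Restore base64 padding and the standard alphabet, then decode.
while [ $(( ${#SEGMENT} % 4 )) -ne 0 ]; do SEGMENT="${SEGMENT}="; done
DECODED=$(printf '%s' "$SEGMENT" | tr '_-' '/+' | base64 -d)
echo "$DECODED"
```

&lt;p&gt;The &lt;code&gt;realm_access.roles&lt;/code&gt; list shown by the decode is what the &lt;code&gt;OpaqueTokenAuthenticationConverter&lt;/code&gt; maps into &lt;code&gt;GrantedAuthority&lt;/code&gt; entries.&lt;/p&gt;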

&lt;h3&gt;
  
  
  Test endpoint of admin service
&lt;/h3&gt;

&lt;p&gt;Run the following command to test the endpoint that creates the service&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl --location 'http://localhost:8082/admin/services' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer eyJhbGciOiJSUzI1NiI...' \
--header 'Cookie: JSESSIONID=7B6F7EB454EF0392D6EB21658A922900' \
--data '{
    "name": "Test Service",
    "host": "localhost",
    "path": "test-service",
    "protocol": "https"
}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Other Coding Pattern
&lt;/h2&gt;

&lt;p&gt;This source also has the following coding patterns implemented:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Single Responsibility for the business logic of &lt;code&gt;ApiService.java&lt;/code&gt;; this helps maintain readability and maintainability, and allows the business logic to be reused across different functionality, for example 

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ApiServiceCreate.java&lt;/code&gt; for creating the service&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ApiServiceUpdate.java&lt;/code&gt; for updating the service&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;code&gt;@EnableMongoAuditing&lt;/code&gt; enables auditing in &lt;code&gt;MongoDbConfig.java&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;CreatedDate, LastModifiedDate, CreatedBy, LastModifiedBy&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Added logging with the help of &lt;code&gt;Slf4j&lt;/code&gt; for debugging purposes&lt;/li&gt;

&lt;li&gt;Maintain unique error codes for exceptions; global exception handling is implemented in &lt;code&gt;ExceptionControllerAdvice.java&lt;/code&gt;
&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://github.com/developerhelperhub/klight-api-gateway-admin" rel="noopener noreferrer"&gt;Source Code in Github 'klight-api-gateway-admin'&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I am developing a project that provides APIs for creating services for the microservice-based &lt;a href="https://github.com/developerhelperhub/klight-api-gateway" rel="noopener noreferrer"&gt;Klight API gateway&lt;/a&gt;, which I built using the OpenResty web application. This API gateway delivers high performance compared to a Spring Boot API gateway because the OpenResty framework is built on top of the NGINX server and uses Lua, a language commonly used in game programming.&lt;/p&gt;

&lt;p&gt;You might wonder why the Kong API Gateway wasn't used. The reason is that the open-source version of Kong does not support OpenID Connect; that integration requires the paid edition. When building high-performance applications in a microservice architecture, it is crucial to choose an API gateway optimized for performance.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/developerhelperhub/klight-api-gateway/wiki" rel="noopener noreferrer"&gt;Klight API Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/users/developerhelperhub/projects/4" rel="noopener noreferrer"&gt;Klight Development Board&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://spring.io/guides/gs/accessing-data-mongodb" rel="noopener noreferrer"&gt;https://spring.io/guides/gs/accessing-data-mongodb&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://testcontainers.com/guides/testing-spring-boot-rest-api-using-testcontainers/" rel="noopener noreferrer"&gt;https://testcontainers.com/guides/testing-spring-boot-rest-api-using-testcontainers/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://howtodoinjava.com/spring-boot/oauth2-login-with-keycloak-and-spring-security/" rel="noopener noreferrer"&gt;https://howtodoinjava.com/spring-boot/oauth2-login-with-keycloak-and-spring-security/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://spring.io/blog/2024/11/24/bootiful-34-security" rel="noopener noreferrer"&gt;https://spring.io/blog/2024/11/24/bootiful-34-security&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.spring.io/spring-boot/reference/web/spring-security.html" rel="noopener noreferrer"&gt;https://docs.spring.io/spring-boot/reference/web/spring-security.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.okta.com/blog/2019/06/20/spring-preauthorize" rel="noopener noreferrer"&gt;https://developer.okta.com/blog/2019/06/20/spring-preauthorize&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://spring.io/blog/2024/11/24/bootiful-34-security" rel="noopener noreferrer"&gt;https://spring.io/blog/2024/11/24/bootiful-34-security&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.spring.io/spring-security/reference/reactive/oauth2/resource-server/bearer-tokens.html" rel="noopener noreferrer"&gt;https://docs.spring.io/spring-security/reference/reactive/oauth2/resource-server/bearer-tokens.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.spring.io/spring-data/mongodb/reference/data-commons/auditing.html" rel="noopener noreferrer"&gt;https://docs.spring.io/spring-data/mongodb/reference/data-commons/auditing.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>springsecurity</category>
      <category>keycloak</category>
      <category>architecture</category>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>Binoy</dc:creator>
      <pubDate>Wed, 01 Jan 2025 08:28:37 +0000</pubDate>
      <link>https://dev.to/binoy_59380e698d318/-44e1</link>
      <guid>https://dev.to/binoy_59380e698d318/-44e1</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/binoy_59380e698d318" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1815622%2F3254819c-4f96-4b39-bf95-077cd26cbf88.png" alt="binoy_59380e698d318"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/binoy_59380e698d318/high-availability-and-scalability-deployment-microservices-on-kubernetes-cluster-2p51" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;High Availability and Scalability deployment Microservices on Kubernetes cluster&lt;/h2&gt;
      &lt;h3&gt;Binoy ・ Sep 20 '24&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#kubernetes&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#terraform&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#architecture&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>microservices</category>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>Binoy</dc:creator>
      <pubDate>Sun, 29 Dec 2024 05:38:27 +0000</pubDate>
      <link>https://dev.to/binoy_59380e698d318/-47ef</link>
      <guid>https://dev.to/binoy_59380e698d318/-47ef</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/binoy_59380e698d318" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1815622%2F3254819c-4f96-4b39-bf95-077cd26cbf88.png" alt="binoy_59380e698d318"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/binoy_59380e698d318/high-availability-and-scalability-deployment-microservices-on-kubernetes-cluster-2p51" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;High Availability and Scalability deployment Microservices on Kubernetes cluster&lt;/h2&gt;
      &lt;h3&gt;Binoy ・ Sep 20 '24&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#kubernetes&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#terraform&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#architecture&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>kubernetes</category>
      <category>microservices</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>Binoy</dc:creator>
      <pubDate>Sat, 28 Dec 2024 12:06:32 +0000</pubDate>
      <link>https://dev.to/binoy_59380e698d318/-245a</link>
      <guid>https://dev.to/binoy_59380e698d318/-245a</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/binoy_59380e698d318" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1815622%2F3254819c-4f96-4b39-bf95-077cd26cbf88.png" alt="binoy_59380e698d318"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/binoy_59380e698d318/setup-efficient-cicd-pipeline-jenkins-to-build-binary-and-push-docker-image-on-kubernetes-cluster-4f8d" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Setup efficient CICD Pipeline Jenkins to build binary and push docker image - Kubernetes cluster&lt;/h2&gt;
      &lt;h3&gt;Binoy ・ Sep 5 '24&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#springboot&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#terraform&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#kubernetes&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#jenkins&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>jenkins</category>
      <category>docker</category>
    </item>
    <item>
      <title>Terraform - Kong API Gateway deployment in Kubernetes</title>
      <dc:creator>Binoy</dc:creator>
      <pubDate>Sat, 28 Sep 2024 17:47:48 +0000</pubDate>
      <link>https://dev.to/binoy_59380e698d318/terraform-kong-api-gateway-deployment-in-kubernetes-3513</link>
      <guid>https://dev.to/binoy_59380e698d318/terraform-kong-api-gateway-deployment-in-kubernetes-3513</guid>
      <description>&lt;p&gt;This section helps to basic understand how can we install the Kong API Gateway in the Kubernetes Cluster with help of Terraform&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3c39g9kimebb0hmk1il3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3c39g9kimebb0hmk1il3.png" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Objective
&lt;/h1&gt;

&lt;p&gt;In a microservice architecture, one of the key components is the API Gateway, which acts as a reverse proxy. Instead of clients communicating directly with multiple services, all requests are sent to a single server (the API Gateway), which routes them to the appropriate microservices based on routing policies configured in the reverse proxy server. One might wonder if a reverse proxy server like Nginx could replace an API Gateway. However, an API Gateway offers more advanced capabilities compared to a standard Nginx server. Below are some key differences:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Plugins for authentication (JWT, OAuth2, key authentication).&lt;/li&gt;
&lt;li&gt;Rate limiting and request throttling.&lt;/li&gt;
&lt;li&gt;Logging and monitoring (integration with Prometheus, Grafana).&lt;/li&gt;
&lt;li&gt;Service discovery and health checks.&lt;/li&gt;
&lt;li&gt;Supports running in various environments, including Kubernetes and Docker.&lt;/li&gt;
&lt;li&gt;Load balancing with round-robin, least-connections, etc.&lt;/li&gt;
&lt;li&gt;Integration with third-party services like Datadog, Zipkin, and Cassandra.&lt;/li&gt;
&lt;li&gt;Manage the version API&lt;/li&gt;
&lt;li&gt;Manage the deployment strategy like “green/blue”, “canary” deployments&lt;/li&gt;
&lt;li&gt;Comes with built-in API management capabilities and is easier to set up for managing APIs. You can use its admin API to configure routes, services, and plugins dynamically.&lt;/li&gt;
&lt;li&gt;Built with scaling in mind, it easily integrates with tools like PostgreSQL and Cassandra for storing API configuration and scaling horizontally across many nodes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setup local environment to build DevOps resources
&lt;/h2&gt;

&lt;p&gt;I use Docker containers to set up work environments for multiple applications (&lt;a href="https://dev.to/binoy_59380e698d318/setup-linux-box-on-local-with-docker-container-3k8"&gt;Setup Environment&lt;/a&gt;). This approach ensures fully isolated and maintainable environments for application development, allowing us to easily start and terminate these environments. Below is the Docker command to create the environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; test-microservices-module-envornment-box &lt;span class="nt"&gt;-v&lt;/span&gt; ~/.kube/config:/work/.kube/config &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;KUBECONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/work/.kube/config &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;HOME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:/root/ &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PWD&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:/work &lt;span class="nt"&gt;-w&lt;/span&gt; /work &lt;span class="nt"&gt;--net&lt;/span&gt; host developerhelperhub/kub-terr-work-env-box
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The container contains Docker, Kubectl, Helm, Terraform, Kind, Git&lt;/p&gt;

&lt;h1&gt;
  
  
  Setup Kong on Kubernetes Cluster
&lt;/h1&gt;

&lt;p&gt;I have created the Terraform modules, which are available in the GitHub repository. You can download them and set up Kong on a Kubernetes cluster that runs locally in a Docker container.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Clone the repository&lt;/strong&gt; onto your local Linux machine to get started.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/developerhelperhub/kuberentes-help.git
&lt;span class="nb"&gt;cd &lt;/span&gt;kuberentes-help/terraform/sections/00008/terraform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;main.tf&lt;/code&gt; terraform script&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;module &lt;span class="s2"&gt;"microservices"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;source&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"git::https://github.com/developerhelperhub/microservices-terraform-module.git//microservices?ref=v1.2.0"&lt;/span&gt;
    kind_cluster_name &lt;span class="o"&gt;=&lt;/span&gt; var.kind_cluster_name
    kind_http_port    &lt;span class="o"&gt;=&lt;/span&gt; 80
    kind_https_port   &lt;span class="o"&gt;=&lt;/span&gt; 443
    kubernetes_namespace &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"microservices"&lt;/span&gt;
    kong_enable            &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;true
    &lt;/span&gt;kong_admin_domain_name &lt;span class="o"&gt;=&lt;/span&gt; var.kong_admin_domain_name
    kong_proxy_domain_name &lt;span class="o"&gt;=&lt;/span&gt; var.kong_proxy_domain_name
    kong_db_user           &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"mykong"&lt;/span&gt;
    kong_db_name           &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"mykongdb"&lt;/span&gt;
    kong_db_password       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"MyPassword2222@"&lt;/span&gt;
    kong_db_admin_password &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"MyPassword2222@"&lt;/span&gt;
    kong_persistence_size  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"5Gi"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;variables.tf&lt;/code&gt; terraform script&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#This is variable arguments while running the terraform scripts&lt;/span&gt;
variable &lt;span class="s2"&gt;"kind_cluster_name"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; string
    description &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Kind cluster name"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
variable &lt;span class="s2"&gt;"kong_admin_domain_name"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; string
    description &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Kong admin api domain name"&lt;/span&gt;
    default &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"admin.kong.myapp.com"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
variable &lt;span class="s2"&gt;"kong_proxy_domain_name"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; string
    description &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Kong proxy domain name"&lt;/span&gt;
    default &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"api.gateway.mes.app.com"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These Terraform scripts install and configure resources in the cluster:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create the Kubernetes cluster locally in a Docker container; the control-plane container will be named “microservices-development-cluster-control-plane”.&lt;/li&gt;
&lt;li&gt;Install the ingress controller and expose ports 80 and 443 to allow access to services from outside the cluster.&lt;/li&gt;
&lt;li&gt;Create a namespace called "microservices".&lt;/li&gt;
&lt;li&gt;Install Kong in the "microservices" namespace using a Helm chart.&lt;/li&gt;
&lt;li&gt;The Kong PostgreSQL username and password default to “mykong” and “MyPassword2222@”.&lt;/li&gt;
&lt;li&gt;Set up the Kong Admin API to create services, routes, etc.&lt;/li&gt;
&lt;li&gt;Set up the Kong Ingress resource to connect the Ingress controller with the Kong Admin API and Kong Proxy services.&lt;/li&gt;
&lt;li&gt;Disable Keycloak and the monitoring stack (Grafana and Prometheus).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Run the Terraform scripts to create the cluster (the cluster-creation script is under the kind folder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init
terraform plan  &lt;span class="nt"&gt;-var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"kind_cluster_name=microservices-development-cluster"&lt;/span&gt;
terraform apply  &lt;span class="nt"&gt;-var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"kind_cluster_name=microservices-development-cluster"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following commands verify the cluster and its services:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl cluster-info &lt;span class="c"&gt;#verify cluster info&lt;/span&gt;
kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide &lt;span class="c"&gt;#verify node&lt;/span&gt;

kubectl get namespace &lt;span class="c"&gt;#verify the microservices namespace&lt;/span&gt;
kubectl get &lt;span class="nt"&gt;-n&lt;/span&gt; microservices pod &lt;span class="c"&gt;#verify server is running&lt;/span&gt;
kubectl get &lt;span class="nt"&gt;-n&lt;/span&gt; microservices svc &lt;span class="c"&gt;#verify service&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In my experience, the Kong services take some time to start and become ready to use. Make sure all pods are in the Ready status before accessing the services. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; microservices get pod &lt;span class="nt"&gt;--watch&lt;/span&gt;
NAME                              READY   STATUS      RESTARTS   AGE
kong-kong-6bdd9944d-cp79n         2/2     Running     0          9m39s
kong-kong-init-migrations-fptth   0/1     Completed   0          9m39s
kong-postgresql-0                 1/1     Running     0          9m51s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use the following command to log in to PostgreSQL; you will be prompted for the password when the command runs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; microservices &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; pod/kong-postgresql-0 &lt;span class="nt"&gt;--&lt;/span&gt; psql &lt;span class="nt"&gt;-U&lt;/span&gt; mykong &lt;span class="nt"&gt;-d&lt;/span&gt; mykongdb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The Terraform state file should be kept secure and encrypted (using encryption at rest) because it contains sensitive information such as usernames, passwords, and Kubernetes cluster details.&lt;/p&gt;
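&lt;p&gt;One way to achieve this is to store the state in a remote backend with encryption at rest enabled. The snippet below is a minimal sketch, assuming an S3 bucket and a DynamoDB lock table that you create yourself; the bucket and table names are placeholders:&lt;/p&gt;

```hcl
# Minimal sketch of an encrypted remote state backend (names are placeholders).
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"            # pre-created S3 bucket
    key            = "microservices/kong/terraform.tfstate" # state object path
    region         = "us-east-1"
    encrypt        = true                                   # server-side encryption at rest
    dynamodb_table = "my-terraform-locks"                   # optional state locking
  }
}
```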

&lt;p&gt;Add our domains to the bottom of the &lt;code&gt;/etc/hosts&lt;/code&gt; file on your local machine. This configuration should not go inside our working Linux box “test-microservices-module-envornment-box”; it should be applied to your personal machine's &lt;code&gt;/etc/hosts&lt;/code&gt; file (you will need administrator access):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;127.0.0.1       api.gateway.mes.app.com
127.0.0.1       admin.kong.myapp.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the Admin API service; its response contains the Kong API Gateway details configured in the cluster.&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --location 'http://admin.kong.myapp.com/'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h1&gt;
  
  
  Run the microservices in Kubernetes cluster
&lt;/h1&gt;

&lt;p&gt;I have created two Spring Boot applications, available on Docker Hub, for testing the routing of requests to multiple microservices in the cluster. The microservices run on port 8080.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“developerhelperhub/mes-item-service:1.0.0.1-SNAPSHOT”&lt;/li&gt;
&lt;li&gt;“developerhelperhub/mes-order-service:1.0.0.1-SNAPSHOT”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Deploy the &lt;strong&gt;item service&lt;/strong&gt; in the cluster&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; microservices &lt;span class="nt"&gt;-f&lt;/span&gt; microservices/item-service/kube-deployment.yaml apply
kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; microservices &lt;span class="nt"&gt;-f&lt;/span&gt; microservices/item-service/kube-service.yaml apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy the &lt;strong&gt;order service&lt;/strong&gt; in the cluster&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; microservices &lt;span class="nt"&gt;-f&lt;/span&gt; microservices/order-service/kube-deployment.yaml apply
kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; microservices &lt;span class="nt"&gt;-f&lt;/span&gt; microservices/order-service/kube-service.yaml apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that the microservice pods are running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAME                                 READY   STATUS      RESTARTS   AGE
mes-item-service-544f54f9fd-2jkbs    1/1     Running     0          2m46s
mes-item-service-544f54f9fd-bm85t    1/1     Running     0          2m46s
mes-item-service-544f54f9fd-cddrh    1/1     Running     0          2m46s
mes-item-service-544f54f9fd-zdnjd    1/1     Running     0          2m46s
mes-order-service-5f7cf68dd9-79z48   1/1     Running     0          2m39s
mes-order-service-5f7cf68dd9-qx2dv   1/1     Running     0          2m39s
mes-order-service-5f7cf68dd9-w28nt   1/1     Running     0          2m39s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following command verifies that the services are running on the correct ports:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; microservices get svc
NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;                         AGE
kong-kong-admin                ClusterIP   10.96.215.110   &amp;lt;none&amp;gt;        8001/TCP                        24m
kong-kong-manager              NodePort    10.96.185.130   &amp;lt;none&amp;gt;        8002:32038/TCP,8445:31091/TCP   24m
kong-kong-proxy                ClusterIP   10.96.125.115   &amp;lt;none&amp;gt;        80/TCP,443/TCP                  24m
kong-kong-validation-webhook   ClusterIP   10.96.180.81    &amp;lt;none&amp;gt;        443/TCP                         24m
kong-postgresql                ClusterIP   10.96.129.217   &amp;lt;none&amp;gt;        5432/TCP                        24m
kong-postgresql-hl             ClusterIP   None            &amp;lt;none&amp;gt;        5432/TCP                        24m
mes-item-service               ClusterIP   10.96.241.39    &amp;lt;none&amp;gt;        8080/TCP                        3m32s
mes-order-service              ClusterIP   10.96.252.164   &amp;lt;none&amp;gt;        8080/TCP                        3m25s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Create Service and Route path in Kong API
&lt;/h1&gt;

&lt;p&gt;The Kong Admin API provides endpoints to create services and their route paths.&lt;/p&gt;

&lt;p&gt;Create the service for item-service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="s1"&gt;'http://admin.kong.myapp.com/services/'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--header&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/x-www-form-urlencoded'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--data-urlencode&lt;/span&gt; &lt;span class="s1"&gt;'name=mes-item-service'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--data-urlencode&lt;/span&gt; &lt;span class="s1"&gt;'url=http://mes-item-service.microservices.svc.cluster.local:8080'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the route path for item-service. All requests are routed to “item-service” when the path contains “/item-service”:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="s1"&gt;'http://admin.kong.myapp.com/routes/'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--header&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/x-www-form-urlencoded'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--data-urlencode&lt;/span&gt; &lt;span class="s1"&gt;'service.id=&amp;lt;service_id&amp;gt;'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--data-urlencode&lt;/span&gt; &lt;span class="s1"&gt;'paths%5B%5D=/item-service'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
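&lt;p&gt;The &lt;code&gt;paths%5B%5D&lt;/code&gt; in the route request is simply the URL-encoded form of &lt;code&gt;paths[]&lt;/code&gt;; Kong decodes the form body, so the field it receives is the array parameter &lt;code&gt;paths[]&lt;/code&gt;. A quick illustration of the decoding:&lt;/p&gt;

```shell
# 'paths%5B%5D' is the percent-encoded form of 'paths[]'; Kong decodes
# the form-urlencoded body, so it sees the array field paths[].
decoded=$(printf '%s' 'paths%5B%5D=/item-service' | sed 's/%5B/[/g; s/%5D/]/g')
echo "$decoded"
```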



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Replace &lt;code&gt;&amp;lt;service_id&amp;gt;&lt;/code&gt; with the actual service ID of item-service.&lt;/p&gt;
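&lt;p&gt;You can look up the service ID by fetching the service by name from the Admin API and extracting the &lt;code&gt;id&lt;/code&gt; field from the JSON response. A minimal sketch; the response below is an illustrative sample of the response shape, and the ID is a placeholder:&lt;/p&gt;

```shell
# In practice, fetch the service by name from the Admin API:
#   response=$(curl -s 'http://admin.kong.myapp.com/services/mes-item-service')
# Illustrative sample of the response shape (trimmed; the ID is a placeholder):
response='{"id":"2b47ba9b-761a-4fd8-a8c9-32ab3534c2a3","name":"mes-item-service"}'

# Extract the "id" field with sed (avoids a jq dependency):
service_id=$(printf '%s' "$response" | sed -n 's/.*"id":"\([^"]*\)".*/\1/p')
echo "$service_id"
```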

&lt;p&gt;Create the service of order-service&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="s1"&gt;'http://admin.kong.myapp.com/services/'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--header&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/x-www-form-urlencoded'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--data-urlencode&lt;/span&gt; &lt;span class="s1"&gt;'name=mes-order-service'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--data-urlencode&lt;/span&gt; &lt;span class="s1"&gt;'url=http://mes-order-service.microservices.svc.cluster.local:8080'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the route path for order-service. All requests are routed to “order-service” when the path contains “/order-service”:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="s1"&gt;'http://admin.kong.myapp.com/routes/'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--header&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/x-www-form-urlencoded'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--data-urlencode&lt;/span&gt; &lt;span class="s1"&gt;'service.id=&amp;lt;service_id&amp;gt;'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--data-urlencode&lt;/span&gt; &lt;span class="s1"&gt;'paths%5B%5D=/order-service'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Replace &lt;code&gt;&amp;lt;service_id&amp;gt;&lt;/code&gt; with the actual service ID of order-service.&lt;/p&gt;

&lt;p&gt;Test the endpoints of the item service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#get the list of items&lt;/span&gt;
curl &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="s1"&gt;'http://api.gateway.mes.app.com/item-service/items'&lt;/span&gt;
&lt;span class="c"&gt;#get the service info&lt;/span&gt;
curl &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="s1"&gt;'http://api.gateway.mes.app.com/item-service/actuator/info'&lt;/span&gt;
&lt;span class="c"&gt;#get the helth info&lt;/span&gt;
curl &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="s1"&gt;'http://api.gateway.mes.app.com/item-service/actuator/health'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Test the endpoints of the order service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#get the list of items&lt;/span&gt;
curl &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="s1"&gt;'http://api.gateway.mes.app.com/order-service/items'&lt;/span&gt;
&lt;span class="c"&gt;#get the service info&lt;/span&gt;
curl &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="s1"&gt;'http://api.gateway.mes.app.com/order-service/actuator/info'&lt;/span&gt;
&lt;span class="c"&gt;#get the helth info&lt;/span&gt;
curl &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="s1"&gt;'http://api.gateway.mes.app.com/order-service/actuator/health'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Postman collections and environment files are available in the repository.&lt;/p&gt;

&lt;h1&gt;
  
  
  Reference
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/binoy_59380e698d318/high-availability-and-scalability-deployment-microservices-on-kubernetes-cluster-2p51"&gt;High Availability and Scalability deployment Microservices on Kubernetes cluster&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Kong/charts/blob/main/charts/kong/values.yaml" rel="noopener noreferrer"&gt;Helm Value&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Kong/kong/blob/master/kong.conf.default" rel="noopener noreferrer"&gt;Kong Config&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.konghq.com/gateway/3.8.x/production/deployment-topologies/traditional/" rel="noopener noreferrer"&gt;Kong Traditional DB&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.konghq.com/gateway/3.8.x/reference/configuration/#datastore-section" rel="noopener noreferrer"&gt;Kong Database Config&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.konghq.com/gateway/api/admin-oss/latest/" rel="noopener noreferrer"&gt;Admin API Docs OSS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.konghq.com/gateway/3.8.x/support/third-party/#data-stores" rel="noopener noreferrer"&gt;Third Party Support&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.konghq.com/gateway/latest/install/kubernetes/admin/" rel="noopener noreferrer"&gt;Install in Kubernetes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.konghq.com/gateway/latest/install/kubernetes/proxy/" rel="noopener noreferrer"&gt;DB Configuration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>terraform</category>
      <category>architecture</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Terraform - Keycloak install on Kubernetes cluster</title>
      <dc:creator>Binoy</dc:creator>
      <pubDate>Wed, 25 Sep 2024 18:13:56 +0000</pubDate>
      <link>https://dev.to/binoy_59380e698d318/terraform-keycloak-install-on-kubernetes-cluster-ojd</link>
      <guid>https://dev.to/binoy_59380e698d318/terraform-keycloak-install-on-kubernetes-cluster-ojd</guid>
      <description>&lt;p&gt;This section helps to basic understand how can we install the Keycloak in the Kubernetes Cluster with help of Terraform&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup local environment to build DevOps resources
&lt;/h2&gt;

&lt;p&gt;I use Docker containers to set up work environments for multiple applications (&lt;a href="https://dev.to/binoy_59380e698d318/setup-linux-box-on-local-with-docker-container-3k8"&gt;Setup Environment&lt;/a&gt;). This approach ensures fully isolated and maintainable environments for application development, allowing us to easily start and terminate these environments. Below is the Docker command to create the environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; test-microservices-module-envornment-box &lt;span class="nt"&gt;-v&lt;/span&gt; ~/.kube/config:/work/.kube/config &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;KUBECONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/work/.kube/config &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;HOME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:/root/ &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PWD&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:/work &lt;span class="nt"&gt;-w&lt;/span&gt; /work &lt;span class="nt"&gt;--net&lt;/span&gt; host developerhelperhub/kub-terr-work-env-box
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The container contains Docker, Kubectl, Helm, Terraform, Kind, Git&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup Keycloak on Kubernetes Cluster
&lt;/h2&gt;

&lt;p&gt;I have created the Terraform modules, which are available in the GitHub repository. You can download them and set up Keycloak on a Kubernetes cluster that runs locally in a Docker container.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Clone the repository&lt;/strong&gt; onto your local Linux machine to get started.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/developerhelperhub/kuberentes-help.git
&lt;span class="nb"&gt;cd &lt;/span&gt;kuberentes-help/terraform/sections/00007/terraform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;main.tf&lt;/code&gt; Terraform script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;module &lt;span class="s2"&gt;"microservices"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;source&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"git::https://github.com/developerhelperhub/microservices-terraform-module.git//microservices?ref=v1.1.0"&lt;/span&gt;
    kind_cluster_name &lt;span class="o"&gt;=&lt;/span&gt; var.kind_cluster_name
    kind_http_port    &lt;span class="o"&gt;=&lt;/span&gt; 80
    kind_https_port   &lt;span class="o"&gt;=&lt;/span&gt; 443
    kubernetes_namespace &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"microservices"&lt;/span&gt;
    keycloak_enable      &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;true
    &lt;/span&gt;keycloak_domain_name &lt;span class="o"&gt;=&lt;/span&gt; var.keycloak_domain_name
    keycloak_admin_user     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"admin"&lt;/span&gt;
    keycloak_admin_password &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"MyPassword2222@"&lt;/span&gt;
    keycloak_resources_requests_cpu    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"500m"&lt;/span&gt;
    keycloak_resources_requests_memory &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1024Mi"&lt;/span&gt;
    keycloak_resources_limit_cpu       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"500m"&lt;/span&gt;
    keycloak_resources_limit_memory    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1024Mi"&lt;/span&gt;
    keycloak_db_password               &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"MyPassword2222@"&lt;/span&gt;
    keycloak_db_admin_password         &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"MyPassword2222@"&lt;/span&gt;
    keycloak_autoscaling_min_replicas  &lt;span class="o"&gt;=&lt;/span&gt; 1
    keycloak_autoscaling_max_replicas  &lt;span class="o"&gt;=&lt;/span&gt; 1
    keycloak_persistence_size          &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"8Gi"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;variables.tf&lt;/code&gt; Terraform script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#This is variable arguments while running the terraform scripts&lt;/span&gt;
variable &lt;span class="s2"&gt;"kind_cluster_name"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; string
    description &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Kind cluster name"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
variable &lt;span class="s2"&gt;"keycloak_domain_name"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; string
    description &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Keycloak domain name"&lt;/span&gt;
    default &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"keycloak.myapp.com"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These Terraform scripts install and configure the following resources in the cluster:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create the Kubernetes cluster locally in a Docker container; the cluster container will be named “microservices-development-cluster-control-plane”.&lt;/li&gt;
&lt;li&gt;Install the ingress controller and expose ports 80 and 443 to allow access to services from outside the cluster.&lt;/li&gt;
&lt;li&gt;Create a namespace called "microservices".&lt;/li&gt;
&lt;li&gt;Install Keycloak in the "microservices" namespace using a Helm chart.&lt;/li&gt;
&lt;li&gt;Set the default Keycloak admin username and password to “admin” and “MyPassword2222@”.&lt;/li&gt;
&lt;li&gt;Set up the Keycloak Ingress resource to connect the Ingress controller with the Keycloak service.&lt;/li&gt;
&lt;li&gt;Configure the Keycloak container to run on port 80 and expose it through the Ingress controller.&lt;/li&gt;
&lt;li&gt;Disable Grafana and Prometheus monitoring.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Run the following Terraform commands to create the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init
terraform plan  &lt;span class="nt"&gt;-var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"kind_cluster_name=microservices-development-cluster"&lt;/span&gt;
terraform apply  &lt;span class="nt"&gt;-var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"kind_cluster_name=microservices-development-cluster"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
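
&lt;p&gt;Instead of repeating &lt;code&gt;-var&lt;/code&gt; flags, the variable values can be kept in a tfvars file (the filename &lt;code&gt;dev.tfvars&lt;/code&gt; is my own choice; any name passed to &lt;code&gt;-var-file&lt;/code&gt; works):&lt;/p&gt;

```shell
# Write the variable values to a tfvars file (filename is an assumption).
cat > dev.tfvars <<'EOF'
kind_cluster_name    = "microservices-development-cluster"
keycloak_domain_name = "keycloak.myapp.com"
EOF

# Then reference the file instead of individual -var flags:
# terraform plan  -var-file=dev.tfvars
# terraform apply -var-file=dev.tfvars
```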



&lt;p&gt;The following commands verify the cluster and the Keycloak service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl cluster-info &lt;span class="c"&gt;#verify cluster info&lt;/span&gt;
kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide &lt;span class="c"&gt;#verify node&lt;/span&gt;

kubectl get namespace &lt;span class="c"&gt;#verify the microservices namespace&lt;/span&gt;
kubectl get &lt;span class="nt"&gt;-n&lt;/span&gt; microservices pod &lt;span class="c"&gt;#verify keycloak server is running&lt;/span&gt;
kubectl get &lt;span class="nt"&gt;-n&lt;/span&gt; microservices svc &lt;span class="c"&gt;#verify keycloak service&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In my experience, the Keycloak service takes some time to start and become ready. Make sure all pods are in the Ready state before opening the service. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; microservices get pod &lt;span class="nt"&gt;--watch&lt;/span&gt;
NAME                    READY   STATUS    RESTARTS   AGE
keycloak-0              1/1     Running   0          6m11s
keycloak-postgresql-0   1/1     Running   0          7m6s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
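
&lt;p&gt;The readiness check above can also be scripted. The helper below is a sketch (not part of the repository); it parses the READY column of &lt;code&gt;kubectl get pod&lt;/code&gt; output and fails if any pod is not fully ready:&lt;/p&gt;

```shell
# all_ready: read `kubectl get pod` output on stdin and exit non-zero
# if any pod's READY column (e.g. 0/1) has fewer ready than total containers.
# The awk script skips the header line (NR > 1).
all_ready() {
  awk 'NR > 1 { split($2, r, "/"); if (r[1] != r[2]) exit 1 }'
}

# Usage:
#   kubectl -n microservices get pod | all_ready && echo "all pods ready"
```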



&lt;p&gt;The following command logs into Postgres; you will be prompted for the password when the command runs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; microservices &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; pod/keycloak-postgresql-0 &lt;span class="nt"&gt;--&lt;/span&gt; psql &lt;span class="nt"&gt;-U&lt;/span&gt; keycloak &lt;span class="nt"&gt;-d&lt;/span&gt; keycloakdb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The Terraform state file should be kept secure and encrypted (using encryption at rest) because it contains sensitive information such as usernames, passwords, and Kubernetes cluster details.&lt;br&gt;
Add the domain to the bottom of the &lt;code&gt;/etc/hosts&lt;/code&gt; file on your local machine. This entry should not go inside the working Linux box “test-microservices-module-envornment-box”; it must be added to your personal machine's &lt;code&gt;/etc/hosts&lt;/code&gt; file (you will need administrator access):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;127.0.0.1       keycloak.myapp.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
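
&lt;p&gt;If you script your setup, the hosts entry can be added idempotently. The helper below is a sketch (the function name is my own); pass the hosts file path explicitly so it can be tried on a scratch file first:&lt;/p&gt;

```shell
# add_host_entry FILE IP HOSTNAME: append the entry to FILE only if
# HOSTNAME is not already present, so repeated runs do not duplicate it.
add_host_entry() {
  file="$1"; ip="$2"; host="$3"
  grep -q "$host" "$file" 2>/dev/null || printf '%s\t%s\n' "$ip" "$host" >> "$file"
}

# Equivalent one-liner on your own machine (needs administrator access):
#   sudo sh -c 'grep -q keycloak.myapp.com /etc/hosts || echo "127.0.0.1 keycloak.myapp.com" >> /etc/hosts'
```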



&lt;p&gt;The Keycloak username is "admin" and the password is "MyPassword2222@". The URL is "&lt;a href="http://keycloak.myapp.com/" rel="noopener noreferrer"&gt;http://keycloak.myapp.com&lt;/a&gt;".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflppb01txwgk4aiv1sj6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflppb01txwgk4aiv1sj6.png" width="800" height="663"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/developerhelperhub/kuberentes-help/tree/main/kubenretes/tutorials/sections/0011" rel="noopener noreferrer"&gt;Helm Keyloak Install Example&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>terraform</category>
      <category>kubernetes</category>
    </item>
  </channel>
</rss>
