<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kubernetes Community Days Chennai</title>
    <description>The latest articles on DEV Community by Kubernetes Community Days Chennai (@kcdchennai).</description>
    <link>https://dev.to/kcdchennai</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F5401%2Fd20e8500-bc09-47b6-9bce-00f0f80cf008.png</url>
      <title>DEV Community: Kubernetes Community Days Chennai</title>
      <link>https://dev.to/kcdchennai</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kcdchennai"/>
    <language>en</language>
    <item>
      <title>Updates about KCD Chennai 2024</title>
      <dc:creator>KCD Chennai</dc:creator>
      <pubDate>Tue, 08 Oct 2024 06:13:25 +0000</pubDate>
      <link>https://dev.to/kcdchennai/updates-about-kcd-chennai-2024-18pb</link>
      <guid>https://dev.to/kcdchennai/updates-about-kcd-chennai-2024-18pb</guid>
      <description>&lt;p&gt;&lt;strong&gt;Hey KCD Chennai Community!&lt;/strong&gt; 👋 &lt;/p&gt;

&lt;p&gt;Many of you have been asking about KCD Chennai 2024. There have been some exciting developments in the KCD world this year! 🎉 &lt;/p&gt;

&lt;p&gt;Firstly, Cloud Native Computing Foundation (CNCF) has implemented a new guideline for KCD events within the same country/region. To ensure a vibrant ecosystem and avoid scheduling conflicts, KCDs need to be at least 2 months apart, and not held within 2 months of other major CNCF/LF events. 🥇 &lt;/p&gt;

&lt;p&gt;2024 has seen the incredible launch of brand-new KCDs in India: KCD Kerala, KCD Pune, and KCD Hyderabad! 🎈 &lt;/p&gt;

&lt;p&gt;Additionally, India is hosting its very first dedicated KubeCon event in New Delhi from December 10th-12th, 2024! This flagship event deserves the spotlight, so we've made the &lt;strong&gt;difficult decision to forgo KCD Chennai 2024.&lt;/strong&gt; ✨ &lt;/p&gt;

&lt;p&gt;This decision allows other KCD communities in India to flourish and grow, fostering collaboration and knowledge sharing – the core values of open source! 🦄 &lt;/p&gt;

&lt;p&gt;Open-source communities thrive on the principles of openness, collaboration, and cooperation. These values are fundamental to nurturing innovation, knowledge sharing, and community growth. CNCF, and hence the KCD Chennai community, exemplifies these principles, creating a space where other communities thrive as well. Together, we're stronger! 🤝 &lt;/p&gt;

&lt;p&gt;We'll keep you updated on KCD Chennai 2025 soon. &lt;strong&gt;Stay tuned!&lt;/strong&gt; 🔊 &lt;/p&gt;

</description>
      <category>kcdchennai</category>
      <category>cncf</category>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>Storing and Searching TeraByte Scale logs in SnappyFlow with Secondary Storage</title>
      <dc:creator>KCD Chennai</dc:creator>
      <pubDate>Wed, 19 Jul 2023 07:22:18 +0000</pubDate>
      <link>https://dev.to/kcdchennai/storing-and-searching-terabyte-scale-logs-in-snappyflow-with-secondary-storage-263h</link>
      <guid>https://dev.to/kcdchennai/storing-and-searching-terabyte-scale-logs-in-snappyflow-with-secondary-storage-263h</guid>
      <description>&lt;h2&gt;
  
  
  The premise
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Log management in modern organizations
&lt;/h3&gt;

&lt;p&gt;In most enterprises today, big and small, it is not uncommon to have a tech stack comprising 50 or more different technologies across applications and infrastructure, and this number is likely to grow as companies embrace microservices, multi/hybrid clouds, and containerization.&lt;/p&gt;

&lt;p&gt;All these individual components generate logs, and lots of them. These logs serve as invaluable sources of information, providing insights into the health of individual components, transaction details, timestamps, and other critical data. By analyzing these logs, SREs and DevOps engineers can gain a comprehensive understanding of their systems, diagnose issues promptly, and optimize performance. Development teams rely on these logs to understand and address issues before they affect customers and businesses.&lt;/p&gt;

&lt;p&gt;Each log entry represents a specific event that happened at a precise moment in time, allowing for accurate tracking and analysis of system behavior. For instance, when a fault occurs, logs enable developers to identify errors and look for related logs, system performance metrics, and application traces and drill down to the exact line of code to troubleshoot. &lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges in managing Terabyte and Petabyte scale logs
&lt;/h3&gt;

&lt;p&gt;As more logs are generated, this quickly becomes a “storage” and “search” problem. Although individual logs are tiny – just a few bytes – the cumulative volume of logs across your stack, multiplied over several days, can quickly reach terabytes or petabytes. Efficient search and storage mechanisms become crucial for developers and engineers handling this large log volume.&lt;/p&gt;
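To see how quickly tiny entries add up, here is a back-of-the-envelope estimate; the per-entry size and log rate below are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope log volume estimate. Both figures below are
# illustrative assumptions, not measurements from the article.
AVG_LOG_BYTES = 250        # size of a single log entry
LOGS_PER_SEC = 20_000      # cumulative rate across a ~50-component stack

bytes_per_day = AVG_LOG_BYTES * LOGS_PER_SEC * 86_400  # 86,400 seconds per day
tb_per_day = bytes_per_day / 1e12
print(f"~{tb_per_day:.2f} TB/day, ~{tb_per_day * 30:.1f} TB over 30 days")
# ~0.43 TB/day, ~13.0 TB over 30 days
```

Even at these modest assumed rates, a month of retention already lands in double-digit terabytes.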

&lt;p&gt;Log retention defines how long logs are stored and in turn determines the total log volume. Factors such as security, regulatory compliance, and cost have to be taken into account to arrive at an optimal log retention period. Striking a balance between cost-effectiveness and fulfilling operational, analytical, and regulatory needs is key to optimizing log storage.&lt;/p&gt;

&lt;p&gt;However, retaining logs for extended periods, spanning months or years, introduces complications. The common approach of compressing and storing logs in cost-effective solutions like AWS Glacier hinders real-time log retrieval and search capabilities. While suitable for auditing, this method limits developers' ability to efficiently analyze and troubleshoot logs in a timely manner.&lt;/p&gt;

&lt;p&gt;To overcome these limitations, engineers require a solution that allows quick access to archived logs without sacrificing real-time search functionality. This ensures developers can effectively analyze logs, even in long-term retention scenarios, enabling timely analysis and troubleshooting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing SnappyFlow's Secondary storage feature
&lt;/h2&gt;

&lt;p&gt;SnappyFlow provides an elegant solution to ingest, store and search large volumes of logs for extended periods of time using what we call the “Secondary Storage” feature. Secondary Storage allows massive streams of logs to be ingested and stored in a cost-efficient manner without losing the ability to easily search logs. &lt;/p&gt;

&lt;h3&gt;
  
  
  So, is there a Primary Storage?
&lt;/h3&gt;

&lt;p&gt;Yes. By default, all logs sent to SnappyFlow are stored in “Primary Storage”. Think of Primary Storage as a fast, responsive storage system capable of handling a large volume of searches at lightning-fast speeds. This is typically fast SSD-backed storage and is, as expected, expensive. &lt;/p&gt;

&lt;h3&gt;
  
  
  How does Secondary Storage work?
&lt;/h3&gt;

&lt;p&gt;Different log sources can be configured to send logs to Primary Storage, Secondary Storage, or both.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8f6zTWhk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/73rqi92rn4jr6gsoygof.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8f6zTWhk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/73rqi92rn4jr6gsoygof.png" alt="Image description" width="400" height="260"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is available under &lt;strong&gt;Log Management &amp;gt; Manage Logs&lt;/strong&gt;. In the screenshot below, you can see a list of rules for the project apmmanager-opensearch. Note that in this example, you are looking at project-level rules. Similar views are available at Application and Profile levels.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Aj1JmUp8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6k3c1c9ur3uryag1adwl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Aj1JmUp8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6k3c1c9ur3uryag1adwl.png" alt="Image description" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;
Project level view of Secondary Storage Rules for the project apmmanager-opensearch





&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ySZQYbt7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9qz3dpsvqqcnm0bjwcb5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ySZQYbt7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9qz3dpsvqqcnm0bjwcb5.png" alt="Image description" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;
Application level view of Secondary Storage Rules for the project apmmanager-k8



&lt;p&gt;&lt;br&gt;&lt;br&gt;
The default rules send all logs to both Primary and Secondary Storage, with retention periods of 7 days and 30 days respectively. New rules can be added using the Add Rule button, and it takes a couple of minutes for these rules to become active. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pNrUecy5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ys6924ahtv2wuewtts6j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pNrUecy5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ys6924ahtv2wuewtts6j.png" alt="Image description" width="800" height="203"&gt;&lt;/a&gt;&lt;/p&gt;
Once the rules are applied, these can be viewed under Applied Rules





&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g-SUzHx---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6nnbrf9623zgu0lmdf2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g-SUzHx---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6nnbrf9623zgu0lmdf2o.png" alt="Image description" width="620" height="786"&gt;&lt;/a&gt;&lt;/p&gt;
Adding a new secondary storage rule for server logs



&lt;h2&gt;
  
  
  Searching logs in Secondary Storage
&lt;/h2&gt;

&lt;p&gt;Search for logs in Secondary Storage is available under the respective applications. To access it, go to any application and select &lt;strong&gt;Log Management &amp;gt; Secondary Storage&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;On the Secondary Storage page, live search is available for data from the last 30 minutes. Logs can be filtered by log type or with simple search strings. The Search History tab lets you create search jobs that run in the background. Once a search job completes, its results can be accessed instantly at any time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZOHKVUK4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xhpeengf5twe8hydy6va.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZOHKVUK4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xhpeengf5twe8hydy6va.png" alt="Image description" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;
Live search and search history for secondary storage logs



&lt;h3&gt;
  
  
  Limitations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Logs in secondary storage can be searched in real time only for the last 30 minutes; for older data, search jobs must be set up, after which the indexed results can be accessed instantly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is not possible to create dashboards from logs in secondary storage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Secondary storage logs are not part of the usual log workflows, e.g., trace-to-log.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  An illustration of the benefit of using secondary storage for log management
&lt;/h2&gt;

&lt;p&gt;Secondary storage can compress logs to as little as 1/40th of their original size, which provides significant cost benefits. Consider a medium-sized SnappyFlow deployment with an average daily ingest volume of 1 TB and a retention of 15 days. At any given time, 15 TB of primary storage is required simply to hold this data. If we were to use Secondary Storage to offload, say, 60% of all logs, we would need to store only 400 GB of new logs per day in primary storage, which works out to 6 TB. &lt;/p&gt;

&lt;p&gt;At the time of writing, the cost of EBS storage on AWS is&lt;/p&gt;

&lt;p&gt;15 TB, GP3 - $2001/mo&lt;/p&gt;

&lt;p&gt;6 TB, GP3 - $800/mo&lt;/p&gt;

&lt;p&gt;Here, there is a straightforward reduction in monthly costs of $1200 simply by routing 60% of logs to Secondary storage. Do note that there will be an additional cost of storing data in Secondary Storage but this is significantly lower as we will be using an object-based storage service like S3.&lt;/p&gt;

&lt;p&gt;With a compression factor of 40x and a log retention period of 60 days, total log volume in secondary storage will be (1 TB/day * 60% * 60 days) / 40 = 0.9 TB&lt;/p&gt;

&lt;p&gt;The S3 storage cost is just ~$20/month for ~1 TB of compressed logs. &lt;/p&gt;
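The storage figures above can be reproduced with a short calculation using the article's numbers (1 TB/day ingest, 15-day primary retention, 60% of logs offloaded, 40x compression, 60-day secondary retention):

```python
# Reproducing the article's storage arithmetic.
daily_ingest_tb = 1.0            # average daily ingest volume (TB)
primary_retention_days = 15
secondary_share = 0.60           # fraction of logs routed to Secondary Storage
compression_factor = 40          # compression in Secondary Storage
secondary_retention_days = 60

# Primary storage needed without and with Secondary Storage offload
primary_all_tb = daily_ingest_tb * primary_retention_days                              # 15 TB
primary_offload_tb = daily_ingest_tb * (1 - secondary_share) * primary_retention_days  # 6 TB

# Compressed volume accumulated in Secondary Storage over its retention window
secondary_tb = (daily_ingest_tb * secondary_share * secondary_retention_days) / compression_factor  # 0.9 TB

print(f"{primary_all_tb:g} TB vs {primary_offload_tb:g} TB primary; {secondary_tb:g} TB secondary")
```

The 15 TB vs 6 TB primary footprint is what drives the ~$1200/month EBS saving quoted above; the 0.9 TB compressed residue is what lands in object storage.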

&lt;h3&gt;
  
  
  Explore Secondary Storage today!
&lt;/h3&gt;

&lt;p&gt;Secondary Storage features are available to all SaaS and Self-Hosted Turbo customers. Secondary Storage is the simplest way to control your log storage costs and stay compliant with long-term regulatory and security requirements. What’s more? This feature comes at no extra cost.&lt;/p&gt;

&lt;p&gt;To try SnappyFlow, start your &lt;a href="https://accounts.snappyflow.io/freetrial"&gt;14-day free trial&lt;/a&gt; today.&lt;/p&gt;

</description>
      <category>logs</category>
      <category>monitoring</category>
      <category>kubernetes</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Kubernetes Monitoring Simplified</title>
      <dc:creator>KCD Chennai</dc:creator>
      <pubDate>Wed, 19 Jul 2023 06:54:17 +0000</pubDate>
      <link>https://dev.to/kcdchennai/kubernetes-monitoring-simplified-3g4g</link>
      <guid>https://dev.to/kcdchennai/kubernetes-monitoring-simplified-3g4g</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Today, organizations are transitioning from monolithic architecture to microservice architecture, and when we speak of microservice architecture, there is no way to ignore Kubernetes (K8s). Kubernetes is the leading open-source container orchestration platform, and it holds a prominent position for good reasons: robustness, scalability, high availability, portability, multi-cloud support, self-healing, auto-scaling, declarative configuration, rollbacks, service discovery, and load balancing, among other features.&lt;/p&gt;

&lt;p&gt;We all know Kubernetes is a powerful tool, but the question here is, how do I get the most out of it? The answer is simple: by proactively monitoring your Kubernetes cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges in Monitoring Kubernetes Clusters
&lt;/h2&gt;

&lt;p&gt;Monitoring Kubernetes clusters can be a complicated task. Here are a few difficulties faced by SREs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Complexity:&lt;/strong&gt; Kubernetes operates in a multi-layer environment, requiring the collection and correlation of metrics from pods, nodes, containers, and applications to obtain an overall performance metric of the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security:&lt;/strong&gt; Monitoring tools collect sensitive data from the Kubernetes cluster, raising security concerns that need to be addressed to protect the cluster data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Flow:&lt;/strong&gt; As the Kubernetes cluster grows, whether it's on the cloud or on-premises, tracing the data flow between endpoints becomes increasingly difficult.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ephemerality:&lt;/strong&gt; Kubernetes scales up and down based on demand. Pods created during scale-up disappear when no longer required. To avoid data loss, monitoring tools must collect metrics from various resources such as deployments, daemon sets, and jobs. &lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of Monitoring Kubernetes Clusters
&lt;/h2&gt;

&lt;p&gt;Despite the challenges, monitoring a Kubernetes environment can lead to significant benefits and improvements:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enhanced Visibility:&lt;/strong&gt; Monitoring Kubernetes clusters gives you enhanced visibility into the overall health of your system and its components.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Proactive Issue Detection:&lt;/strong&gt; Monitoring Kubernetes clusters enables you to proactively detect issues such as application failures, abnormal behaviours, and performance degradation. This helps you prevent potential downtime and service disruptions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resource Utilization:&lt;/strong&gt; By tracking the resource usage of your containers, pods, and nodes, you can fine-tune resource allocation, enhance efficiency, reduce costs, and maximize the utilization of the infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance Optimization:&lt;/strong&gt; Monitoring Kubernetes enables you to identify slow response times and inefficient resource usage, and helps you optimize scaling for specific components and tune network settings to improve overall system performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Root Cause Analysis:&lt;/strong&gt; Monitoring Kubernetes helps you pinpoint the root cause of a problem, isolate problematic pods or nodes, and analyse relevant logs and metrics.&lt;/p&gt;

&lt;p&gt;Here comes the next question: how do I monitor my Kubernetes cluster? Monitoring Kubernetes is a challenging task, but this is where monitoring tools like SnappyFlow come in handy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Simplify Kubernetes Monitoring with SnappyFlow
&lt;/h2&gt;

&lt;p&gt;SnappyFlow is a full-stack application monitoring tool. Once your Kubernetes cluster is integrated with SnappyFlow, it starts collecting data from the cluster and enables efficient cluster monitoring. SnappyFlow monitors various aspects of the cluster to help you make informed decisions, and its alerting system notifies you of any deviation in the cluster through your preferred communication channel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h0W8siCJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ata5xcsx48w0vjw4jx28.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h0W8siCJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ata5xcsx48w0vjw4jx28.png" alt="Image description" width="800" height="243"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What can you monitor with SnappyFlow?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--M2-Rk4d3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/71b3kieb23byus3426zp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M2-Rk4d3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/71b3kieb23byus3426zp.png" alt="Image description" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resource Utilization&lt;/strong&gt;: SnappyFlow monitors CPU, memory, and disk usage of nodes, pods, and containers to identify resource bottlenecks. This helps to ensure efficient utilization of the resources and prevents overloading.&lt;/p&gt;
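As a rough sketch of the idea (illustrative only; the data layout and thresholds are assumptions, not SnappyFlow internals), bottleneck detection amounts to comparing per-node utilization against a threshold:

```python
# Illustrative-only sketch of bottleneck detection from resource metrics.
# The metric names, node data, and threshold are assumptions for this example.
def find_bottlenecks(nodes, threshold=0.85):
    """Return names of nodes whose CPU or memory utilization exceeds threshold."""
    flagged = []
    for name, m in nodes.items():
        cpu = m["cpu_used"] / m["cpu_total"]
        mem = m["mem_used"] / m["mem_total"]
        if cpu > threshold or mem > threshold:
            flagged.append(name)
    return flagged

nodes = {
    "node-a": {"cpu_used": 3.6, "cpu_total": 4.0, "mem_used": 9.0,  "mem_total": 16.0},
    "node-b": {"cpu_used": 1.2, "cpu_total": 4.0, "mem_used": 15.0, "mem_total": 16.0},
    "node-c": {"cpu_used": 0.8, "cpu_total": 4.0, "mem_used": 4.0,  "mem_total": 16.0},
}
print(find_bottlenecks(nodes))  # ['node-a', 'node-b']
```

A real monitoring pipeline would pull these numbers from the kubelet and metrics APIs rather than a hardcoded dict, but the comparison logic is the same.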

&lt;p&gt;&lt;strong&gt;Pod Health&lt;/strong&gt;: SnappyFlow tracks the status and health of individual pods, including their readiness and liveness probes, and monitors pod failures, crashes, and restarts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cluster Capacity&lt;/strong&gt;: SnappyFlow keeps an eye on cluster capacity to avoid resource exhaustion. It monitors the number of nodes, available resources, DaemonSets, Deployments, ReplicaSets, StatefulSets, and the ability to schedule new pods. It also monitors how many critical and warning events are occurring in the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network Performance&lt;/strong&gt;: SnappyFlow monitors network traffic, latency, and throughput at node level.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Persistent Volumes&lt;/strong&gt;: SnappyFlow monitors the status and capacity of persistent volumes (PVs) and their associated claims (PVCs), ensuring that storage resources are available and accessible as required. It also monitors read and write operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service Discovery and Load Balancing&lt;/strong&gt;: SnappyFlow monitors the health and availability of Kubernetes services and their endpoints, and tracks load balancing across pods to ensure even distribution of traffic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes API Server&lt;/strong&gt;: SnappyFlow keeps an eye on the API server's performance, latency, and response times, and monitors for any errors, throttling, or potential bottlenecks in API communication.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Logging and Event Monitoring&lt;/strong&gt;: SnappyFlow sets up centralized logging and monitoring to capture container logs, Kubernetes events, and system-level metrics, and checks for issues related to image pulling, container startup, or resource allocation. This enables quick troubleshooting and identification of issues.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L3CdR29q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d5nqelabiejobvdqtki3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L3CdR29q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d5nqelabiejobvdqtki3.png" alt="Image description" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;
Node Summary



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ndXdXsMC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ct75gza4h6ah8mg20k7h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ndXdXsMC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ct75gza4h6ah8mg20k7h.png" alt="Image description" width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;
Cluster Resource Utilization



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DCzM0UKM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mzok2d5ygy3huvk03mdu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DCzM0UKM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mzok2d5ygy3huvk03mdu.png" alt="Image description" width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;
Kubelet operation details



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In summary, SnappyFlow offers a simplified and effective solution for monitoring Kubernetes clusters. By leveraging SnappyFlow, you can ensure the health and performance of your Kubernetes cluster while reducing the complexity and effort involved in monitoring.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>observability</category>
      <category>monitoring</category>
      <category>productivity</category>
    </item>
    <item>
      <title>GCP Cloud Function 2nd Gen CLI For A Compatible Ingress Cloud Shell In Variant Web App</title>
      <dc:creator>Anshuman Mishra</dc:creator>
      <pubDate>Tue, 20 Jun 2023 07:52:52 +0000</pubDate>
      <link>https://dev.to/kcdchennai/gcp-cloud-function-2nd-gen-cli-for-a-compatible-ingress-cloud-shell-in-variant-web-app-4hj</link>
      <guid>https://dev.to/kcdchennai/gcp-cloud-function-2nd-gen-cli-for-a-compatible-ingress-cloud-shell-in-variant-web-app-4hj</guid>
      <description>&lt;p&gt;1.&lt;br&gt;
GCP Region/Zones Extraneous CLI For Downward Function With Parallel Kubernetes Pod:&lt;br&gt;
&lt;code&gt;gcloud services enable \&lt;br&gt;
artifactregistry.googleapis.com \&lt;br&gt;
pubsub.googleapis.com&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
export REGION=us-east4&lt;br&gt;
gcloud config set compute/region $REGION&lt;br&gt;
2.&lt;br&gt;
Now,Let us populate Cloud Storage Javascript function With Ingress Audit Logs-&lt;/p&gt;

&lt;p&gt;Python CLI:&lt;br&gt;
`gcloud services enable \&lt;br&gt;
artifactregistry.googleapis.com&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node.js 2
`&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;import os&lt;br&gt;
color = os.environ.get('COLOR')  # Artifact Registry JSON Function With Ingress Python Function Only For One HTTP Port Node Instance&lt;br&gt;
def hello_world(request):&lt;br&gt;
    return f'&amp;lt;h1&amp;gt;Hello World!&amp;lt;/h1&amp;gt;'&lt;/p&gt;

&lt;p&gt;Kotlin/Java 11/Javascript CLI:&lt;/p&gt;

&lt;p&gt;`gcloud services enable \&lt;br&gt;
artifactregistry.googleapis.com \&lt;br&gt;
pubsub.googleapis.com&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node.js 3
(Cold Restarts Node From Public Subscription Javascript Topic To Kotlin CLI)
`&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Squeezed IntelliJ IDEA Ultimate IDE Coroutine From GCP &lt;br&gt;
const qj = require('@google-cloud/functions-framework');&lt;br&gt;
qj.http('req/res',(req,res)=&amp;gt;{&lt;br&gt;
  res.status(200).send('HTTP function (2nd gen) has been called!');&lt;br&gt;
});&lt;br&gt;
data class tp(val qw:Int,var qw2:Int)&lt;/p&gt;

&lt;p&gt;fun main()&lt;br&gt;
{&lt;br&gt;
  IntelliJ IDEA Kotlin Function 1 (Extraneous CLI For Multiple Warm Instance In Similar Methods For Cold Node Run Analysis For AI/ML,Web App)&lt;/p&gt;

&lt;p&gt;val tq = qj.set()  - Here We have to find out if Google Cloud Run function can be originated for similar Cold restarted nodes function(Point 1 Described Above) &lt;/p&gt;

&lt;p&gt;}&lt;/p&gt;

</description>
      <category>cloudfunction</category>
    </item>
    <item>
      <title>Cloud Monitoring API For Enabled API And IAM Resources In GCP K8S Cloud Shell Region</title>
      <dc:creator>Anshuman Mishra</dc:creator>
      <pubDate>Sat, 17 Jun 2023 06:30:47 +0000</pubDate>
      <link>https://dev.to/kcdchennai/cloud-monitoring-api-for-enabled-api-and-iam-resources-in-gcp-k8s-cloud-shell-region-10dh</link>
      <guid>https://dev.to/kcdchennai/cloud-monitoring-api-for-enabled-api-and-iam-resources-in-gcp-k8s-cloud-shell-region-10dh</guid>
      <description>&lt;p&gt;Step 1:&lt;br&gt;
       Initializing GCP VM Instance SSH Node For Distributed YAML Function&lt;/p&gt;

&lt;p&gt;Apache Server 1  - Metrics 1 API Cloud Monitoring&lt;/p&gt;

&lt;h2&gt;
  
  
  1st Cloud Backports API In Ingress Mode Analytics
&lt;/h2&gt;

&lt;p&gt;Compatibility In GCP Cloud Storage Bucket And Its API/IAM For Retrieving Metrics:&lt;br&gt;
                     &lt;a href="https://dev.toURL%201%20Query"&gt;Cloud Monitoring API&lt;/a&gt;&lt;br&gt;
                                             Workbench GUI 1:&lt;br&gt;
curl -sS0 &lt;a href="https://www.googleapis.com/GoogleCloudPlatform/(IAM/API"&gt;https://www.googleapis.com/GoogleCloudPlatform/(IAM/API&lt;/a&gt; 1).yaml&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                 [Cloud Monitoring API](URL 2 Query)

                                          Workbench GUI 2:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;curl -sS0 -l &lt;a href="https://www.googleapis.com/GoogleCloudPlatform/(IAM/API2).yaml"&gt;https://www.googleapis.com/GoogleCloudPlatform/(IAM/API2).yaml&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-l indicates list of all CLI and GCP privileges that can be scaled for downward Pub/Sub topics allocated in a Cloud Shell&lt;br&gt;
Step 2:&lt;br&gt;
 Query :&lt;br&gt;
Select cpu.utilization,api_privileges from GCP_Project_ID1,GCP_Project_ID2 where (Backport_Nodes MIN_BY  Cloud Monitoring API) Group By GUI1,Apache Server 1&lt;br&gt;
                      Group By GUI1,Apache Server 2(Depends upon frequency of GCP K8S Cloud Shell)&lt;/p&gt;

&lt;p&gt;In the above query, we have to find out the exact amount of CPU utilization for simultaneous APIs that can be enabled for GUI 1 for both project IDs&lt;/p&gt;

</description>
      <category>cloud</category>
    </item>
    <item>
      <title>CI/CD Pipeline Dual Helm Github Repo With Similar Canary Deployment</title>
      <dc:creator>Anshuman Mishra</dc:creator>
      <pubDate>Tue, 13 Jun 2023 06:06:30 +0000</pubDate>
      <link>https://dev.to/kcdchennai/cicd-pipeline-dual-helm-github-repo-with-similar-canary-deployment-53lm</link>
      <guid>https://dev.to/kcdchennai/cicd-pipeline-dual-helm-github-repo-with-similar-canary-deployment-53lm</guid>
      <description>&lt;p&gt;Continuous Integration/Continuous Delivery VM Instance Pipeline:&lt;/p&gt;

&lt;p&gt;$ gcloud config list project&lt;br&gt;
[core]&lt;br&gt;
GCP 2nd Gen Cloud Function Inflated With One Pub/Sub CLI Run For Squeezed Cloud Shell&lt;/p&gt;

&lt;p&gt;gcloud  pubsub(-Module 1) subscriptions create --topic myTopic mySubscription&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        1.Inflated Pub/Sub index.js - 

              fun sleep(){
                      Pub/Sub Module 1.()

                                    - Module 2.() -Function 2(Here,Let us now identify if similar Cloud Function can be run in ingress way for another docker container enabled randomly to a subsequent Pub/Sub object) 

                       }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Java 11 Open JDK CLI: public class PipelineOptions implements Serializable&lt;br&gt;
public String Module 2(){-Now let us redirect for one atomic Cloud Function to test its Dataflow API for another Similar Docker Container for a browser&lt;br&gt;
void aq()&lt;br&gt;
}&lt;br&gt;
{&lt;br&gt;
PipelineOptions xd = new PipelineOptions(new FileInputStream());&lt;br&gt;
xd.Module 2() Function 2;&lt;br&gt;
String xd = "";&lt;/p&gt;

&lt;p&gt;--Subsequent Module Function's Cloud Storage Bucket&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    gsutil ls gs://Storage_Bucket/().txt
                              1.CI/CD Pipeline API Analytics In Ingress Mode For Its Function In Storage Bucket 1 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;}&lt;/p&gt;

&lt;p&gt;Conclusion:&lt;/p&gt;

&lt;p&gt;Whenever we run a GCP Kubernetes pod for one Storage Bucket, the FileInputStream class is managed for its Java 11/Kotlin CLI for one ingress Dataflow Docker Container&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>node</category>
    </item>
    <item>
      <title>Google Cloud Dataprep Initialization For One Data Schema and AI</title>
      <dc:creator>Anshuman Mishra</dc:creator>
      <pubDate>Wed, 07 Jun 2023 06:51:08 +0000</pubDate>
      <link>https://dev.to/kcdchennai/google-cloud-dataprep-initialization-for-one-data-schema-and-ai-1n70</link>
      <guid>https://dev.to/kcdchennai/google-cloud-dataprep-initialization-for-one-data-schema-and-ai-1n70</guid>
      <description>&lt;p&gt;us-east1&lt;br&gt;
Adding zones for regional and multi-regional data in this lake.&lt;/p&gt;

&lt;p&gt;Labels 1:&lt;br&gt;
We have to find the exact Regular Expression for a data with its ingress VM where Lake data can be scaled for one API&lt;/p&gt;

&lt;p&gt;Redirection Of Worker Node For One Named Zone:&lt;/p&gt;

&lt;p&gt;gsutil create --zone name=""&lt;br&gt;
               type= "raw"&lt;br&gt;
                --region= "us-central1"&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Now inserting data for raw zone needs exact Java/JSON API that can be run only at these region&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;open class sw:xd{&lt;br&gt;
 fun gy(){&lt;br&gt;
  println("h")&lt;br&gt;
}&lt;br&gt;
    override fun toString(): String { - One atomic Kotlin's toString() function in one Dataplex Lake&lt;br&gt;
        return super.toString()&lt;br&gt;
    }&lt;br&gt;
}&lt;br&gt;
data class hu(val gq:Int)&lt;/p&gt;

&lt;p&gt;fun main(){&lt;br&gt;
val dw = sw().gy().equals("h")&lt;br&gt;
    val dw2 = sw().gy()&lt;br&gt;
    if(dw is String){&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    val ki = ""?:"x"
    println(ki)
}-
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;2.Now let us consider a Kotlin CLI from IntelliJ IDEA IDE to GCP Dataprep with one data class&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   --worker-node = 1
     data class (Dataprep Instance 1 for similar equals method)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;{&lt;br&gt;
  "document":{&lt;br&gt;
    "type":"PLAIN_TEXT",&lt;br&gt;
    "content":"With approximately 8.2 million people residing in Boston, the capital city of Massachusetts is one of the largest in the United States."&lt;br&gt;
  },&lt;br&gt;
  "encodingType":"UTF8"&lt;br&gt;
}- fun toString(){ val rq = ""?:"document"}&lt;br&gt;
(IntelliJ IDEA Ultimate IDE JSON Function)&lt;/p&gt;

&lt;p&gt;Google Cloud Dataprep Regex Pattern Schema To Entity/Sentiment Analysis CLI In A Dataplex:&lt;/p&gt;

&lt;p&gt;(Labels 1 Described Above)-&lt;/p&gt;

&lt;p&gt;gsutil create --zone name=""&lt;br&gt;
               type= "raw"&lt;br&gt;
                --region= "us-central1"&lt;/p&gt;

&lt;h2&gt;
  
  
  Now let us connect Dataplex raw zone with downward curl CLI for one enabled Cloud Shell described below:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;curl -X -h "&lt;a href="https://www.googleapis.com/Project_1/().json"&gt;https://www.googleapis.com/Project_1/().json&lt;/a&gt;&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; Kotlin Function CLI 1 -   googleapis.com/(Dataprep Instance1)

     interface qu{
      val gq = ()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;}&lt;br&gt;
         data class ()(val aq:Int)                        &lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;fun main(){&lt;br&gt;
                 val qwp = arrayOf("","")&lt;br&gt;&lt;br&gt;
                  (Now We have to declare a GCP JSON Sentiment AI curl CLI here and interpret whether one Array object can be run for ingress Cloud Shell Dataplex Lake) &lt;br&gt;
                    val qwp2 = ""?:data class(-Testing One GCP CLI AI object)&lt;br&gt;
                      }&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;     Java/JSON CLI 1 -   googleapis.com/(Dataprep Instance2)
              import java.util.*;
             public class (){
               static public void main(String[] arg){
                ArrayList zs = new ArrayList();
                zs.add(Dataprep Instance2);-Here we have to find whether public ArrayList object can be injected with downward CLI for a Cloud Shell Lake for a browser region
                            }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>json</category>
    </item>
    <item>
      <title>AWS Project using SHELL SCRIPTING for DevOps</title>
      <dc:creator>Poonam Pawar</dc:creator>
      <pubDate>Wed, 31 May 2023 10:00:31 +0000</pubDate>
      <link>https://dev.to/kcdchennai/ws-project-using-shell-scripting-for-devops-115m</link>
      <guid>https://dev.to/kcdchennai/ws-project-using-shell-scripting-for-devops-115m</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In real-world DevOps scenarios, the &lt;strong&gt;AWS Resource Tracker&lt;/strong&gt; script is widely used to provide an overview of the AWS resources being utilised within an environment.&lt;/p&gt;

&lt;p&gt;It aims to help organisations monitor and manage their AWS resources effectively. The script utilises the AWS Command Line Interface (&lt;strong&gt;CLI&lt;/strong&gt;) to fetch information about different AWS services, such as &lt;strong&gt;S3 buckets, EC2 instances, Lambda functions, and IAM users&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What does this project do?
&lt;/h2&gt;

&lt;p&gt;By running the AWS Resource Tracker script, users can quickly obtain a list of S3 buckets, EC2 instances, Lambda functions, and IAM users associated with their AWS account.&lt;/p&gt;

&lt;p&gt;This information can be valuable for various purposes, including auditing, inventory management, resource optimisation, and security assessment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build the project
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Create EC2 Instance
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;. Go to your &lt;strong&gt;AWS&lt;/strong&gt; account and log in, then search for &lt;strong&gt;EC2&lt;/strong&gt; Instances in the search bar. Alternatively, click the services button in the top-left corner of your dashboard and search from there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F45bp44kldk9i1yhnl8dk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F45bp44kldk9i1yhnl8dk.png" alt="Imageaws"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then click on the &lt;strong&gt;Launch Instance&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;. Give it a &lt;strong&gt;name&lt;/strong&gt; of your choice, choose &lt;strong&gt;ubuntu&lt;/strong&gt; as the machine image, and select your &lt;strong&gt;key-pair&lt;/strong&gt; (or create one if you don't have any). Leave the instance type as &lt;strong&gt;t2.micro&lt;/strong&gt;, i.e. free tier, and click the &lt;strong&gt;Launch Instance&lt;/strong&gt; button again.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fresa6qttug4lqtl7l48r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fresa6qttug4lqtl7l48r.png" alt="Image aws"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you can see your &lt;strong&gt;instance&lt;/strong&gt; up and &lt;strong&gt;running&lt;/strong&gt; like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12bnduj2ggqlebdjn8lg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12bnduj2ggqlebdjn8lg.png" alt="Image aws"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt;. Now click on the instance &lt;strong&gt;id&lt;/strong&gt;, which will show you detailed information about the running instance; copy the public &lt;strong&gt;IP address&lt;/strong&gt; from there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F750rec8gy31u2876xc5a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F750rec8gy31u2876xc5a.png" alt="Image aws"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Connect to the Instance&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4&lt;/strong&gt;. Now open up your terminal and run the below command.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ssh -i /Users/poonampawar/Downloads/my-key-pair.pem ubuntu@ip_add&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We have to go into the directory where you downloaded your key-pair; in my case, it is the &lt;code&gt;/Downloads&lt;/code&gt; folder.&lt;/p&gt;

&lt;p&gt;Check yours and adjust the path accordingly. And don't forget to replace &lt;code&gt;ip_add&lt;/code&gt; with the IP address you copied. This will log you into the virtual machine we created in AWS.&lt;/p&gt;
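
&lt;p&gt;If &lt;code&gt;ssh&lt;/code&gt; refuses the key with an &lt;em&gt;UNPROTECTED PRIVATE KEY FILE&lt;/em&gt; warning, restrict its permissions first. A small sketch (the default path is the one from this article; override &lt;code&gt;KEY&lt;/code&gt; with yours):&lt;/p&gt;

```shell
#!/bin/sh
# ssh refuses private keys that are readable by the group or others,
# so the key file must be owner-read-only before connecting.
KEY="${KEY:-/Users/poonampawar/Downloads/my-key-pair.pem}"
[ -f "$KEY" ] && chmod 400 "$KEY"
```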

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3t67paiubn1xyvyxafm5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3t67paiubn1xyvyxafm5.png" alt="Image aws"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the Script
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Step 5&lt;/strong&gt;. Now create a shell script file named &lt;code&gt;aws_resource_tracker.sh&lt;/code&gt; and copy and paste the below script.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

#########################
# Author: Your Name
# Date: 28/05/23
# Version: v1
#
# This script will report the AWS resource usage
########################

set -x

# AWS S3
# AWS EC2
# AWS Lambda
# AWS IAM Users

# list s3 buckets
echo "Print list of s3 buckets"
aws s3 ls

# list EC2 Instances
echo "Print list of ec2 instances"
aws ec2 describe-instances | jq '.Reservations[].Instances[].InstanceId'

# list lambda
echo "Print list of lambda functions"
aws lambda list-functions

# list IAM Users
echo "Print list of iam users"
aws iam list-users


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1q4n733hq6o8ybgsxc4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1q4n733hq6o8ybgsxc4.png" alt="Image aws"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It is a good convention to always provide your details (i.e. metadata) at the top, as it makes it easier for other developers to contribute if they want to.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The command &lt;code&gt;set -x&lt;/code&gt; enables debugging: the shell prints each command before running it, followed by its output.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The commands &lt;code&gt;aws s3 ls&lt;/code&gt;, &lt;code&gt;aws lambda list-functions&lt;/code&gt; and &lt;code&gt;aws iam list-users&lt;/code&gt; print the information for the corresponding resources.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
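
&lt;p&gt;To see what &lt;code&gt;set -x&lt;/code&gt; tracing looks like without touching AWS at all, here is a minimal sketch; the traced lines (prefixed with &lt;code&gt;+&lt;/code&gt;) go to stderr:&lt;/p&gt;

```shell
#!/bin/sh
# set -x makes the shell print each command (prefixed with '+') before running it.
set -x
echo "Print list of s3 buckets"
# Turn tracing back off for any quieter sections of the script.
set +x
```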

&lt;p&gt;The command&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws ec2 describe-instances | jq '.Reservations[].Instances[].InstanceId'&lt;/code&gt; filters the &lt;code&gt;JSON&lt;/code&gt; output and prints the ID of every EC2 instance in your &lt;code&gt;aws&lt;/code&gt; account.&lt;/p&gt;

&lt;h3&gt;
  
  
  Test the script
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Step 6&lt;/strong&gt;. Make the script executable with &lt;code&gt;chmod +x aws_resource_tracker.sh&lt;/code&gt;, then run the below command to see the output.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;./aws_resource_tracker.sh&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The output will look like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsk0ke7hl253msylpx9m1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsk0ke7hl253msylpx9m1.png" alt="Image o/p"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Apply CronJob
&lt;/h3&gt;

&lt;p&gt;To use cron jobs with your script, you can schedule it to run at specific intervals using the cron syntax. Here's an example of how you can modify your script to use cron jobs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Open a terminal and run the following command to edit the &lt;br&gt;
crontab file:&lt;br&gt;
&lt;code&gt;crontab -e&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If prompted, choose your preferred text editor.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6328ifon40392btyjxb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6328ifon40392btyjxb.png" alt="Image o/p"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add a new line to the crontab file to schedule your 
script. For example, to run the script every day at 9 AM, 
you can add the following line:
&lt;code&gt;0 9 * * * /path/to/your/script.sh&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Modify the path &lt;code&gt;/path/to/your/script.sh&lt;/code&gt; to the actual &lt;br&gt;
  path where your script is located.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4vc06fc6kjsfxxo6tv52.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4vc06fc6kjsfxxo6tv52.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Save the crontab file and exit the text editor.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The above cron expression (0 9 * * *) represents the schedule: minute (0), hour (9), any day of the month, any month, and any day of the week. You can customize the schedule based on your requirements using the cron syntax.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;By adding this line to the crontab file, your script will be executed automatically according to the defined schedule. The output of the script is mailed to the local user by default (if a mail agent is configured), or you can redirect the output to a file if needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make sure that the script is executable &lt;code&gt;(chmod +x /path/to/your/script.sh)&lt;/code&gt; and that the necessary environment variables and AWS CLI configurations are set up correctly for the script to run successfully within the cron environment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
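
&lt;p&gt;cron's mail delivery depends on a working mail agent, so a common alternative is to log each run yourself. A minimal sketch (the log path is hypothetical; adjust it to your setup):&lt;/p&gt;

```shell
#!/bin/sh
# Hypothetical log location -- change to wherever you keep reports.
LOG=/tmp/aws_resource_tracker.log

# Group the run's output and append both stdout and stderr to the log,
# so every scheduled execution leaves a timestamped trace.
{
  echo "AWS resource report, run at $(date)"
} >> "$LOG" 2>&1
```

&lt;p&gt;In the crontab entry this becomes &lt;code&gt;0 9 * * * /path/to/your/script.sh &amp;gt;&amp;gt; /tmp/aws_resource_tracker.log 2&amp;gt;&amp;amp;1&lt;/code&gt;.&lt;/p&gt;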

&lt;h2&gt;
  
  
  Use Cases
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitoring and auditing&lt;/strong&gt;: By running this script periodically using cron jobs, you can monitor and audit your AWS resources. It provides insights into the status and details of different resources, such as S3 buckets, EC2 instances, Lambda functions, and IAM users.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resource inventory&lt;/strong&gt;: The script helps in maintaining an up-to-date inventory of your AWS resources. It lists the S3 buckets, EC2 instances, Lambda functions, and IAM users, allowing you to have a clear understanding of what resources exist in your environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Troubleshooting&lt;/strong&gt;: In case of any issues or incidents, this script can be used to quickly gather information about the relevant AWS resources. For example, if there is an issue with an EC2 instance, you can run the script to get the instance ID and other details for further investigation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automation and reporting&lt;/strong&gt;: The script can be integrated into an automated pipeline or workflow to generate regular reports about AWS resource usage. This information can be valuable for tracking costs, resource utilization, and compliance requirements.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability and efficiency&lt;/strong&gt;: In larger environments with numerous AWS resources, manually retrieving information about each resource can be time-consuming and error-prone. By using this script, you can automate the process and retrieve resource details in a consistent and efficient manner.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall, this script simplifies the process of gathering information about AWS resources, enhances visibility into your infrastructure, and supports effective management and monitoring of your DevOps environment.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Github link: &lt;a href="https://github.com/Poonam1607/shell-scripting-projects" rel="noopener noreferrer"&gt;https://github.com/Poonam1607/shell-scripting-projects&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Resource: For better understanding and visual learning you can check out this tutorial - &lt;a href="https://youtu.be/gx5E47R9fGk" rel="noopener noreferrer"&gt;https://youtu.be/gx5E47R9fGk&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;This project is purely based on my learnings. You may encounter errors when running it in your setup. If you find any issue with it, feel free to reach out to me.&lt;/p&gt;

&lt;p&gt;Thank you🖤!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>shell</category>
      <category>devops</category>
      <category>kcdchennai</category>
    </item>
    <item>
      <title>Kubernetes Cluster Maintenance</title>
      <dc:creator>Poonam Pawar</dc:creator>
      <pubDate>Tue, 30 May 2023 10:16:13 +0000</pubDate>
      <link>https://dev.to/kcdchennai/kubernetes-cluster-maintenance-58k8</link>
      <guid>https://dev.to/kcdchennai/kubernetes-cluster-maintenance-58k8</guid>
      <description>&lt;h2&gt;
  
  
  Introduction✍️
&lt;/h2&gt;

&lt;p&gt;Till now we have done a lot of things!!🥹&lt;/p&gt;

&lt;p&gt;Recap👇&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes Installation &amp;amp; Configurations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes Networking, Workloads, Services&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes Storage &amp;amp; Security&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;kudos to you!👏🫡 This is literally a lot...!😮‍💨&lt;/p&gt;

&lt;p&gt;After doing all these workloads your cluster is tired now🤕. You have to give the energy back and make it faster. It is very important to keep your &lt;strong&gt;cluster healthy&lt;/strong&gt;😄 and fine.&lt;/p&gt;

&lt;p&gt;So it's high time to understand the Kubernetes cluster maintenance stuff now.&lt;/p&gt;

&lt;p&gt;In our today's learning, we will be covering cluster upgradation, backing up and &lt;strong&gt;restoring&lt;/strong&gt; the data and &lt;strong&gt;scaling&lt;/strong&gt; our Kubernetes cluster. So, let's get started!🚀&lt;/p&gt;

&lt;h2&gt;
  
  
  Cluster Upgradation♿
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Let's say you are running your application in the Kubernetes cluster having &lt;strong&gt;master&lt;/strong&gt; node and &lt;strong&gt;worker&lt;/strong&gt; nodes. Pods and replicas are up and running.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now you want to &lt;strong&gt;upgrade&lt;/strong&gt; your nodes. Just as everyone wants to keep themselves updated, so do the nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When a node is in upgradation mode, it generally goes down and is no longer in use. So you cannot put your master and worker nodes into the upgrade state together.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;So firstly, you will upgrade the master node. While it's upgrading, your worker nodes cannot deploy new pods or make any modifications. Only the pods that are already running will be available for the users to access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The users, though, are not going to be impacted, as they still have the application up and running.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When the master node is done with the upgradation and is up and running again, we can upgrade our worker nodes. But here too, we cannot take all the worker nodes down at once, as this may impact the users who are using the applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If we have three Worker nodes in our cluster, they will go one after the other in the upgrade state. When &lt;code&gt;node01&lt;/code&gt; goes down, the pods and replicas running in that node will shift to the other working nodes for a while ie in &lt;code&gt;node02&lt;/code&gt; and &lt;code&gt;node03&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then &lt;code&gt;node02&lt;/code&gt; will go down after &lt;code&gt;node01&lt;/code&gt; is upgraded and available again for the users. The pod distribution of &lt;code&gt;node02&lt;/code&gt; will go to &lt;code&gt;node01&lt;/code&gt; and &lt;code&gt;node03&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;And the same procedure will follow up to upgrade &lt;code&gt;node03&lt;/code&gt;. This is how we upgrade our cluster in Kubernetes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There is another way to upgrade the cluster: you can deploy a worker node with the updated version into your cluster, shift the workloads of the older node to the new one, and then delete the older node. This is how you can achieve the upgradation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now let's do this practically. First, the master node:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubeadm&lt;/code&gt;, which is a tool for managing clusters, has an upgrade command that helps in upgrading them. Run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubeadm upgrade plan&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The above command shows detailed information on the &lt;strong&gt;upgrade plan&lt;/strong&gt;: which component versions are available and whether your system needs an upgrade.&lt;/p&gt;

&lt;p&gt;Then run the &lt;code&gt;drain&lt;/code&gt; command to make the node &lt;code&gt;un-schedulable&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F348wg85q9mp4x4wj39on.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F348wg85q9mp4x4wj39on.png" alt="Image drain"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now install all the packages needed for &lt;code&gt;kubelet&lt;/code&gt;, as it is a must for running the &lt;code&gt;controlplane&lt;/code&gt; node:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyeghfdhh2g0m0t94boe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyeghfdhh2g0m0t94boe.png" alt="Image cp"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, Run the below command to upgrade the version:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;apt-get upgrade -y kubeadm=1.12.0-00&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now to upgrade the cluster, run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubeadm upgrade apply v1.12.0&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;It will pull the necessary images and upgrade the cluster components.&lt;/p&gt;

&lt;p&gt;Now restart the kubelet so the changes take effect:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;systemctl restart kubelet&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now it's time to upgrade the worker node one at a time. Follow these commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# First move all the workloads of node-1 to the others

$ kubectl drain node-1
# this terminates all the pods from the node &amp;amp; reschedules them on others

$ apt-get upgrade -y kubeadm=1.12.0-00
$ apt-get upgrade -y kubelet=1.12.0-00

$ kubeadm upgrade node config --kubelet-version v1.12.0
# upgrade the node config for the new kubelet version

$ systemctl restart kubelet

# as we marked the node un-schedulable above, we need to make it schedulable again
$ kubectl uncordon node-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqn3m3ysu8s1m3vec7ugh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqn3m3ysu8s1m3vec7ugh.png" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3066gimwfgwbiiw1s3gs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3066gimwfgwbiiw1s3gs.png" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvudtuw6hsufcc2diojsk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvudtuw6hsufcc2diojsk.png" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femzf1fjaighld5a2d4fp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femzf1fjaighld5a2d4fp.png" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxzshycmrtwdbmne8r5j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxzshycmrtwdbmne8r5j.png" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;kubectl&lt;/code&gt; command will not work on the worker node; it only works on the master node. That's why, after applying all the commands, you come back to the &lt;code&gt;controlplane&lt;/code&gt; and make the node available again for scheduling.&lt;/p&gt;

&lt;h2&gt;
  
  
  Backup &amp;amp; Restore Methods🛗
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We have till now deployed many numbers of applications in the Kubernetes cluster using &lt;code&gt;pods&lt;/code&gt;, &lt;code&gt;deployments&lt;/code&gt; and &lt;code&gt;services&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;So there are many files that are important to back up, like the &lt;code&gt;ETCD&lt;/code&gt; cluster, where all the information about the cluster is stored.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Persistent volumes&lt;/strong&gt; storage is where we store the pod's data as we learned above.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can store all these files in a source code repository like &lt;strong&gt;GitHub&lt;/strong&gt;, which is a good practice. That way, even if you lose your whole cluster, you can still deploy it again.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A better approach to back up your file is to query the &lt;code&gt;kube-api&lt;/code&gt; server using the &lt;code&gt;kubectl&lt;/code&gt; or by accessing the API server directly and saving all resource configurations for all objects created on the cluster as a copy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can also choose to back up the &lt;code&gt;ETCD&lt;/code&gt; server itself instead of the files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Like the screenshots on your phone, here you take snapshots of the database by using the &lt;code&gt;etcdctl&lt;/code&gt; utility's snapshot save command.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;ETCDCTL_API=3 etcdctl snapshot save &amp;lt;name&amp;gt;&lt;/code&gt;&lt;/p&gt;
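
&lt;p&gt;On a kubeadm-built cluster the command usually also needs etcd's TLS material. A sketch using the kubeadm default certificate paths (adjust the paths and the backup location for your cluster):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;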

&lt;p&gt;Now, if you want to restore the snapshot: first, you have to stop the kube-apiserver, as the restore process requires restarting the &lt;code&gt;ETCD&lt;/code&gt; cluster and the &lt;code&gt;kube-api&lt;/code&gt; server&lt;/p&gt;

&lt;p&gt;&lt;code&gt;service kube-apiserver stop&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then run the &lt;code&gt;restore&lt;/code&gt; command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ETCDCTL_API=3 etcdctl snapshot restore &amp;lt;name&amp;gt; --data-dir &amp;lt;path&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now restart the services which we stopped earlier&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$systemctl daemon-reload
$service etcd restart
$service kube-apiserver start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Scaling Clusters📶
&lt;/h2&gt;

&lt;p&gt;We have done the scaling of pods in the Kubernetes cluster very well. Now what if you want to scale your cluster? Let's see how it can be done.&lt;/p&gt;

&lt;p&gt;Worker nodes are added to or removed from the cluster so that capacity matches the workload.&lt;/p&gt;

&lt;p&gt;Kubernetes provides several tools and methods for scaling a cluster, like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manual Scaling🫥&lt;/li&gt;
&lt;li&gt;Horizontal Pod Autoscaler (HPA)▶️&lt;/li&gt;
&lt;li&gt;Cluster Autoscaler↗️&lt;/li&gt;
&lt;li&gt;Vertical Pod Autoscaler (VPA)⏫&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's have a look at each one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Manual Scaling&lt;/strong&gt; - As the name suggests, we have to scale manually using the &lt;code&gt;kubectl&lt;/code&gt; command, or, if you're using a cloud provider, by increasing or decreasing the number of worker nodes yourself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Horizontal Pod Autoscaler (HPA)&lt;/strong&gt; - It automatically scales the number of replicas of a deployment or a replica set based on the observed CPU utilisation or other custom metrics.&lt;/p&gt;

&lt;p&gt;When writing the definition file, you must specify the target usage of CPU or memory.&lt;/p&gt;

&lt;p&gt;To use &lt;code&gt;utilisation-based&lt;/code&gt; resource scaling, define a metric like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; type: Resource
 resource:
   name: cpu
   target:
     type: Utilization
     averageUtilization: 60
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
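&lt;p&gt;For context, the metric fragment above lives inside a full HorizontalPodAutoscaler object. A minimal sketch, assuming a Deployment named &lt;code&gt;httpd-frontend&lt;/code&gt; exists and its pods declare CPU requests (the HPA computes utilisation against the requests):&lt;/p&gt;

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: httpd-frontend-hpa     # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: httpd-frontend       # the Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60
```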



&lt;p&gt;This is also known as "&lt;strong&gt;scaling out&lt;/strong&gt;". It involves adding more replicas of a pod to a deployment or replica set to handle the increased load.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cluster Autoscaler&lt;/strong&gt; - Based on the pending pods and the available resources in the cluster, it automatically scales the number of worker nodes in a cluster.&lt;/p&gt;

&lt;p&gt;Read more about this scaler in detail &lt;a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="noopener noreferrer"&gt;here&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vertical Pod Autoscaler (VPA)&lt;/strong&gt; - It automatically adjusts the resource requests and limits of the containers in a pod based on the observed resource usage.&lt;/p&gt;

&lt;p&gt;It is also known as "&lt;strong&gt;scaling up&lt;/strong&gt;," which involves increasing the CPU, memory, or other resources allocated to a single pod.&lt;/p&gt;




&lt;p&gt;Thank you!🖤&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>maintenance</category>
      <category>devops</category>
      <category>kcdchennai</category>
    </item>
    <item>
      <title>Simply Deploying Kubernetes Workloads</title>
      <dc:creator>Poonam Pawar</dc:creator>
      <pubDate>Mon, 29 May 2023 06:00:29 +0000</pubDate>
      <link>https://dev.to/kcdchennai/simply-deploying-kubernetes-workloads-loi</link>
      <guid>https://dev.to/kcdchennai/simply-deploying-kubernetes-workloads-loi</guid>
      <description>&lt;h2&gt;
  
  
  Introduction✍️
&lt;/h2&gt;

&lt;p&gt;Before moving on to our main topics, let's recall the sub-topics that are crucial for the next ones, so that you can grasp the context much more clearly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Every other Relationship👀
&lt;/h2&gt;

&lt;p&gt;First, let's talk about a relationship we have to keep in mind from now on.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;CONTAINER -&amp;gt; POD -&amp;gt; NODE -&amp;gt; CLUSTER&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Feh-ZmBB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f5gu9bwa8of93b2d4rqq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Feh-ZmBB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f5gu9bwa8of93b2d4rqq.png" alt="Imagepods" width="794" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Are you thinking of the same as what I am thinking?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Dabbe pe dabba, uske upper phir se ek dabba...🤪&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;No?? Ok ok! Forgive me:)&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(translation: don't mind it, please. Thank you!)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So basically you got the idea: containers are encapsulated into pods, pods are placed on nodes, and a group of nodes forms a cluster.&lt;/p&gt;

&lt;p&gt;And we have already covered their workings in the previous articles.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pre-requisites👉
&lt;/h2&gt;

&lt;p&gt;So a must-have pre-requisite topic is &lt;strong&gt;ReplicaSets&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Now let's get serious and start the conversation on the &lt;code&gt;replicaset&lt;/code&gt;, aka &lt;code&gt;rs&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  ReplicaSets🗂️
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--85NzyhVQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5j65fjmtomsae9yds6hy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--85NzyhVQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5j65fjmtomsae9yds6hy.png" alt="Image RS" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Continuing the imagination from the previous blog of your application deployment...&lt;/p&gt;

&lt;p&gt;You have your application running in a pod. Suddenly your application's traffic grows; you didn't prepare your app for such numbers, and it crashes.&lt;/p&gt;

&lt;p&gt;Or imagine another scenario where you have to update your app version from &lt;code&gt;1.0&lt;/code&gt; to &lt;code&gt;2.0&lt;/code&gt;, i.e. &lt;code&gt;v1&lt;/code&gt; to &lt;code&gt;v2&lt;/code&gt;, and in doing so your app stops running and users fail to access the application.&lt;/p&gt;

&lt;p&gt;In case of application failure, you need another &lt;strong&gt;instance&lt;/strong&gt; of your application running at the same time to save you from the crash, so that users do not lose access to the application.&lt;/p&gt;

&lt;p&gt;This is where the &lt;strong&gt;replication controller&lt;/strong&gt; (now superseded by the &lt;strong&gt;ReplicaSet&lt;/strong&gt;) comes in as a savior. A ReplicaSet takes care of running multiple instances of a single pod in the k8s cluster.&lt;/p&gt;

&lt;p&gt;It helps us to automatically bring up the new pod when the existing ones fail to run.&lt;/p&gt;

&lt;p&gt;You can set the replica count to one or to hundreds; it's totally up to you.&lt;/p&gt;

&lt;p&gt;It also helps us to balance the load in our k8s cluster. It maintains the &lt;strong&gt;load balance&lt;/strong&gt; when demand increases by creating instances of the pod on other nodes too.&lt;/p&gt;

&lt;p&gt;So, it helps us to scale our application when the demand increases.&lt;/p&gt;

&lt;p&gt;A simple ReplicaSet YAML file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: nginx
  template:
    metadata:
      labels:
        tier: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl create -f /root/replicaset-demo.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To scale &lt;code&gt;up&lt;/code&gt; or &lt;code&gt;down&lt;/code&gt; your replicas, you can use &lt;code&gt;kubectl scale&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl scale rs replicaset-demo --replicas=5&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To check the replicasets use &lt;code&gt;get&lt;/code&gt; command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get rs&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will show you all the previous and newly created replicas by you in the system.&lt;/p&gt;

&lt;p&gt;This is how you can create replicas of your pods.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deployments🏗️
&lt;/h2&gt;

&lt;p&gt;Deployments sit at the top of the hierarchy for deploying our applications to production.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TWYvzcMp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4cqi4qg2n2bkgqq4dxge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TWYvzcMp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4cqi4qg2n2bkgqq4dxge.png" alt="Image Deploy" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you have many instances running in the k8s cluster and want to update the version of your application, rolling updates replace them one after the other, instead of taking everything down at once and bringing it all back up, for obvious reasons.&lt;/p&gt;

&lt;p&gt;These rolling updates and rollbacks are handled by the k8s Deployment.&lt;/p&gt;

&lt;p&gt;As we know, each pod runs a single instance of our application, each container is encapsulated in a pod, and such pods are deployed using ReplicaSets.&lt;/p&gt;

&lt;p&gt;Then comes the Deployment, with all the capabilities needed to upgrade the whole production environment.&lt;/p&gt;

&lt;p&gt;A simple Deployment YAML file, &lt;code&gt;deployment-definition-httpd.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      name: httpd-frontend
  template:
    metadata:
      labels:
        name: httpd-frontend
    spec:
      containers:
      - name: httpd-frontend
        image: httpd:2.4-alpine
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl create -f deployment-definition-httpd.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To check the deployments you have created use &lt;code&gt;get&lt;/code&gt; command, deployment aka &lt;code&gt;deploy&lt;/code&gt; :&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get deploy&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can also check for the specific one by giving its name:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get deploy httpd-frontend&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To get more detailed information regarding the deployment, run the &lt;code&gt;describe&lt;/code&gt; command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl describe deploy httpd-frontend&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To check all the workloads you have created till now, use &lt;code&gt;get all&lt;/code&gt; commands:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get all&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3-nYZMuQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cpleobjonnik5c4mg5wx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3-nYZMuQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cpleobjonnik5c4mg5wx.png" alt="Image get" width="538" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is how you can deploy your pods.&lt;/p&gt;
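&lt;p&gt;The rolling updates and rollbacks mentioned earlier are driven with the &lt;code&gt;kubectl rollout&lt;/code&gt; family of commands. A sketch against the &lt;code&gt;httpd-frontend&lt;/code&gt; Deployment above (the newer image tag is just an example; these commands need a running cluster):&lt;/p&gt;

```shell
# Change the image to trigger a rolling update (example tag)
kubectl set image deployment/httpd-frontend httpd-frontend=httpd:2.4.58-alpine
# Watch the rollout progress
kubectl rollout status deployment/httpd-frontend
# Inspect the revision history
kubectl rollout history deployment/httpd-frontend
# Roll back to the previous revision if something goes wrong
kubectl rollout undo deployment/httpd-frontend
```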

&lt;h2&gt;
  
  
  StatefulSets🏗️📑
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Some topics will repeat themselves but for the sake of connecting them to the next one, it is necessary to do so. So bear with me.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The use of Deployment is to deploy all the pods together to make sure every pod is up and running, all these things we have seen above.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;But what if you have many servers in the form of pods that you want to run in a specific order? A Deployment will not help you here, because it specifies no ordering for running pods in the k8s cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For that, you need a &lt;strong&gt;StatefulSet&lt;/strong&gt;. It ensures the pods run in the sequential order you want: the first pod is deployed and must be in a running state before the second one is deployed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;StatefulSets are similar to Deployments; they create pods based on templates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They can scale &lt;code&gt;up&lt;/code&gt; and scale &lt;code&gt;down&lt;/code&gt; as per the requirements. And can perform &lt;strong&gt;rolling updates&lt;/strong&gt; and &lt;strong&gt;rollbacks&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Say you want to deploy the &lt;code&gt;master-node&lt;/code&gt; first, then bring &lt;code&gt;worker-node-1&lt;/code&gt; fully up and running, and only after that start &lt;code&gt;worker-node-2&lt;/code&gt; in the k8s cluster. A StatefulSet can help you achieve this.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In a Deployment, when a pod goes down and a new pod comes up, it gets a different pod name and a different &lt;strong&gt;IP&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;But in a StatefulSet, when a pod goes down and a new pod comes up, it keeps the same name that was specifically defined for that pod.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;So StatefulSets maintain an identity for each of their pods, which helps maintain the order of deployment of your pods.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you want to use a StatefulSet just for the stable pod identity and not for the sequential deployment order, you can switch off the ordering with a small change in the YAML file (setting &lt;code&gt;podManagementPolicy: Parallel&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is not mandatory to use a StatefulSet, though; it depends entirely on your application's needs. If your servers require a startup order, or you need a stable naming convention for your pods, then it is the right choice.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can create a StatefulSet YAML file just like the Deployment file, with some changes, the main one being &lt;code&gt;kind&lt;/code&gt; set to StatefulSet. Take a look:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql-h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl create -f statefulset-definition.yml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To scale up or down use the &lt;code&gt;scale&lt;/code&gt; command with the numbers you wanted to scale:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl scale statefulset mysql --replicas=5&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This is how you can work with StatefulSets.&lt;/p&gt;
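&lt;p&gt;One note on the file above: &lt;code&gt;serviceName: mysql-h&lt;/code&gt; refers to a headless Service, which you must create yourself; it is what gives each StatefulSet pod a stable DNS name (e.g. &lt;code&gt;mysql-0.mysql-h&lt;/code&gt;). A minimal sketch:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-h
spec:
  clusterIP: None    # headless: no virtual IP, DNS resolves to the pod IPs
  selector:
    app: mysql
  ports:
  - port: 3306
```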

&lt;h2&gt;
  
  
  DaemonSets🤖
&lt;/h2&gt;

&lt;p&gt;Till now, we have deployed ReplicaSets as per demand, so the required number of pods is always up and running.&lt;/p&gt;

&lt;p&gt;DaemonSets are like ReplicaSets; they help you deploy multiple instances of a pod.&lt;/p&gt;

&lt;p&gt;So what's the difference?&lt;/p&gt;

&lt;p&gt;It runs one &lt;strong&gt;copy&lt;/strong&gt; of your pod on each node of your cluster.&lt;/p&gt;

&lt;p&gt;Whenever you add a new node, it makes sure that a replica of the pod is automatically added to that node. And removed automatically when the node is removed.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The main difference is that a DaemonSet makes sure one copy of the pod is always present on every node in the k8s cluster, whereas a ReplicaSet runs the number of replicas of the pod that you defined in the YAML file.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;DaemonSets are also used to deploy monitoring agents in the form of pods.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Creating a DaemonSet file is very similar to the ReplicaSet file we created above. You just have to make some changes and you are done.&lt;/p&gt;

&lt;p&gt;The main difference is, of course, the &lt;code&gt;kind&lt;/code&gt;, i.e. ReplicaSet becomes DaemonSet.&lt;/p&gt;

&lt;p&gt;Take a look; &lt;code&gt;fluentd.yaml&lt;/code&gt; is the name of the file with the content below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - image: registry.k8s.io/fluentd-elasticsearch:1.20
        name: fluentd-elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f fluentd.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To list the DaemonSets across all the namespaces you have created, run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get daemonsets --all-namespaces&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zLQaWVO_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v78do0ps4it79sd66o5o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zLQaWVO_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v78do0ps4it79sd66o5o.png" alt="Image daemon" width="761" height="139"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is how you can create DaemonSets.&lt;/p&gt;

&lt;h2&gt;
  
  
  JOBS🧑‍💻
&lt;/h2&gt;

&lt;p&gt;As we all know, Kubernetes ensures that our application is always &lt;strong&gt;up&lt;/strong&gt; and &lt;strong&gt;running&lt;/strong&gt; no matter what.&lt;/p&gt;

&lt;p&gt;Let's say our application performs a task. When the task completes successfully it shows a success message, and then the pod comes back into a running state again, because that's the nature of k8s.&lt;/p&gt;

&lt;p&gt;A Job in k8s performs the work in your application and, when it succeeds, shows the completed status and then stops, as it is no longer needed.&lt;/p&gt;

&lt;p&gt;But why do we need JOBS?&lt;/p&gt;

&lt;p&gt;Let's assume you use your camera to click some pictures. This camera work runs in your application, k8s runs your app in a pod, and after use you disable the camera. But k8s starts it running again, as it always keeps the application up and running, and you are unaware of this pod running.&lt;/p&gt;

&lt;p&gt;Then isn't it a dangerous task?&lt;/p&gt;

&lt;p&gt;That's why we need Jobs: so that once a specific task is completed, it does not go into a running state again.&lt;/p&gt;

&lt;p&gt;So a task you may not want to run continuously, then using a Job would be appropriate. Once the task is complete, the Job can be terminated, and the pod will not start again unless a new Job is created.&lt;/p&gt;

&lt;p&gt;This happens because the spec field &lt;code&gt;restartPolicy&lt;/code&gt; is set to &lt;code&gt;Always&lt;/code&gt; by default.&lt;/p&gt;

&lt;p&gt;So when creating the Job YAML file, we set it to &lt;code&gt;Never&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Run the command to create a job definition file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create job throw-dice-job --image=kodekloud/throw-dice --dry-run=client -o yaml &amp;gt; throw-dice-job.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use the following YAML file to create the job:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: batch/v1
kind: Job
metadata:
  name: throw-dice-job
spec:
  backoffLimit: 15 # This is so the job does not quit before it succeeds.
  template:
    spec:
      containers:
      - name: throw-dice
        image: kodekloud/throw-dice
      restartPolicy: Never
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above file, we have created a Job that throws the dice until it gets a &lt;code&gt;six&lt;/code&gt;, with up to 15 attempts allowed.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f throw-dice-job.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This is how you can create your own job file.&lt;/p&gt;

&lt;h3&gt;
  
  
  CronJob🥸
&lt;/h3&gt;

&lt;p&gt;A CronJob is a JOB that performs its task on a given &lt;strong&gt;time period&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You can &lt;strong&gt;schedule&lt;/strong&gt; your job to do a &lt;strong&gt;task&lt;/strong&gt; at a specific &lt;strong&gt;time&lt;/strong&gt; you want.&lt;/p&gt;

&lt;p&gt;It supports complex scheduling, with the ability to specify the minute, hour, day of the month, month, and day of the week.&lt;/p&gt;
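&lt;p&gt;The five fields of the schedule string read like a standard cron expression. For example, "every day at 21:30" breaks down as:&lt;/p&gt;

```
 ┌──────── minute (0–59)
 │  ┌───── hour (0–23)
 │  │  ┌── day of the month (1–31)
 │  │  │  ┌─ month (1–12)
 │  │  │  │  ┌─ day of the week (0–6, Sunday = 0)
 30 21 *  *  *      # runs every day at 21:30
```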

&lt;p&gt;It creates Jobs that run on a repeating schedule.&lt;/p&gt;

&lt;p&gt;It can be used to run parallel processing tasks by specifying the number of pods to be created.&lt;/p&gt;

&lt;p&gt;Example,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Updating the phone at midnight to avoid interruptions while using the phone.🤳&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Email scheduling: you schedule an email using a CronJob so that the task is performed periodically.🧑‍💻&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let us now schedule a job to run at 21:30 hours every day.&lt;/p&gt;

&lt;p&gt;Create a CronJob for this.&lt;/p&gt;

&lt;p&gt;Use the following YAML file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: batch/v1
kind: CronJob
metadata:
  name: throw-dice-cron-job
spec:
  schedule: "30 21 * * *"
  jobTemplate:
    spec:
      completions: 3
      parallelism: 3
      backoffLimit: 25 # This is so the job does not quit before it succeeds.
      template:
        spec:
          containers:
          - name: throw-dice
            image: kodekloud/throw-dice
          restartPolicy: Never
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is how you can use the CronJob file.&lt;/p&gt;

&lt;p&gt;Every example I have given above is just for the sake of explanation. Don't take that seriously.&lt;/p&gt;

&lt;p&gt;Now we have deployed every workload of the Kubernetes cluster. It takes practice to get fluent at creating replicas, deployments, jobs, etc. So keep practicing.&lt;/p&gt;




&lt;p&gt;Thank you!🖤&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>deployments</category>
      <category>devops</category>
      <category>kcdchennai</category>
    </item>
    <item>
      <title>Vertex AI And GCP Cloud Dataprep VM Instantiation For One JSON Function In A Cloud Shell</title>
      <dc:creator>Anshuman Mishra</dc:creator>
      <pubDate>Mon, 29 May 2023 04:59:38 +0000</pubDate>
      <link>https://dev.to/kcdchennai/vertex-ai-and-gcp-cloud-dataprep-vm-instantiation-for-one-json-function-in-a-cloud-shell-2j7f</link>
      <guid>https://dev.to/kcdchennai/vertex-ai-and-gcp-cloud-dataprep-vm-instantiation-for-one-json-function-in-a-cloud-shell-2j7f</guid>
      <description>&lt;p&gt;GCP JSON CLI For One Virtual Machine Instance:&lt;/p&gt;

&lt;p&gt;--worker-nodes n1-standard-1&lt;br&gt;
                 n2-standard-2&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                 gsutil cp gs://(
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;VM Node 1 In Cloud Shell Browser)&lt;br&gt;
)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;--worker-node:1&lt;br&gt;
   n1-standard-1(JavaScript Atomic Function Scalability):&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                    index.js: const functions = require('@google-cloud/functions-framework');
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;functions.cloudEvent('eventStorage', (cloudevent) =&amp;gt; {&lt;br&gt;
console.log('A new event in your Cloud Storage bucket has been logged!');&lt;br&gt;
console.log(cloudevent);&lt;br&gt;
});&lt;br&gt;
-Now We have to detect the exact function for Cloud Storage Bucket CLI for one cloud event where its downward AI/ML API can be run in ingress Cloud Shell &lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
    <item>
      <title>Kubernetes Networking</title>
      <dc:creator>Poonam Pawar</dc:creator>
      <pubDate>Sat, 27 May 2023 09:18:32 +0000</pubDate>
      <link>https://dev.to/kcdchennai/kubernetes-networking-18mk</link>
      <guid>https://dev.to/kcdchennai/kubernetes-networking-18mk</guid>
      <description>&lt;h2&gt;
  
  
  Introduction✍️
&lt;/h2&gt;

&lt;p&gt;Networking in k8s is a crucial topic to understand and a must for working with k8s. Here we will discuss networking specifically as it is used in k8s, so before starting this topic you should have knowledge of basic computer networking:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Switching&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Routing&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Gateways&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bridges&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  PODS📦
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Before jumping right into the Kubernetes networking stuff. First, let's recall the Pods concept because we will be using Pods in every sentence further.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Whatever we are doing, the main goal is to deploy our application in the form of containers on a worker node in the cluster which must be up and running.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kubernetes does not &lt;strong&gt;deploy&lt;/strong&gt; the containers directly on the nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The containers are encapsulated into an object known as Pods&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Pod is a single instance of an application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A Pod is the smallest object that you can create in Kubernetes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A simple Pod YAML file:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
      apiVersion: v1
      kind: Pod
      metadata:
        name: demo-pod
      spec:
        containers:
        - name: demo-container
          image: nginx
          ports:
          - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Once you have created the YAML file, run the &lt;code&gt;kubectl apply&lt;/code&gt; command with the file name to deploy it in the Kubernetes cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f demo-pod.yaml&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is how you can start creating PODs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Network Policies📋
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In Kubernetes, the rules for routing network traffic are set by network policies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A network policy is a set of rules that defines the communication between the pods in a cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is a powerful tool used for the security of network traffic in k8s clusters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Allowing traffic from one specific pod to another.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Restricting traffic to a specific set of ports and protocols.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Implementation is done through the NetworkPolicy API and enforced by the cluster's network plugin.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It can be applied to namespaces or individual pods.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A simple network policy yaml file:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: demo-network-policy
  spec:
    podSelector:
      matchLabels:
        app: my-app
    policyTypes:
    - Ingress
    ingress:
    - from:
      - podSelector:
          matchLabels:
            app: allowed-app
      ports:
      - protocol: TCP
        port: 80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;For the deployment of the Network Policy YAML file, use the &lt;code&gt;kubectl&lt;/code&gt; apply command:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f demo-network-policy.yaml&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is how you can define your own network policies.&lt;/li&gt;
&lt;/ul&gt;
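&lt;p&gt;The policy above controls only incoming traffic. Egress rules work the same way. A minimal sketch that also restricts outgoing traffic from the same pods to one backend port plus DNS (the labels are illustrative):&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: demo-egress-policy
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: backend       # illustrative label
    ports:
    - protocol: TCP
      port: 8080
  - ports:                   # allow DNS lookups anywhere
    - protocol: UDP
      port: 53
```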

&lt;h2&gt;
  
  
  Services🪖
&lt;/h2&gt;

&lt;p&gt;Let's say you deployed a pod with a web application running on it, and you want to access the application from outside the pod. How do you access it as an external user?&lt;/p&gt;

&lt;p&gt;The k8s cluster setup in your local machine has an IP address similar to your system IP but the pod has a different &lt;strong&gt;IP address&lt;/strong&gt; which is inside the node.&lt;/p&gt;

&lt;p&gt;As the pod and your system have different addresses, there is no way to access the application directly from the system.&lt;/p&gt;

&lt;p&gt;So we need something in between to fill the gap and give us access to the web application directly from our systems.&lt;/p&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kGeyCPj---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/imjqjst89wdsppr0d1y8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kGeyCPj---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/imjqjst89wdsppr0d1y8.png" alt="Image svc" width="650" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is where &lt;strong&gt;Kubernetes Services&lt;/strong&gt; comes into the picture.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;k8s Services enable communication between components inside and outside the application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It helps us to connect the application together with other applications and users.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is an object just like PODs, Replicas and Deployments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It listens to a port on the node and forwards the requests to the port where the application is running inside the pod.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A simple service yaml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Services Types📑:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;NodePort Service&lt;/li&gt;
&lt;li&gt;ClusterIP Service&lt;/li&gt;
&lt;li&gt;LoadBalancer Service&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;1) NodePort Service - It makes an internal POD accessible through a port on the Node.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--q0Li1KBg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zgo6sd54q06d2tvv04y9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q0Li1KBg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zgo6sd54q06d2tvv04y9.png" alt="Figure 1: Kubernetes NodePort service" width="738" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the port on the pod where the application listens is the targetPort&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the port on the service itself is simply called the port&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and the port on the node is the nodePort, through which we access the web server externally.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the valid range of nodePort values (by default) is 30000 to 32767&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;labels &amp;amp; selectors must be specified in the spec section when there are multiple pods on a node or multiple nodes in the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2) ClusterIP Service - It creates a virtual IP inside the cluster to enable communication between different services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DlykfcvK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dfi0hikbd6surfmbdh9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DlykfcvK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dfi0hikbd6surfmbdh9.png" alt="Figure2: Kubernetes ClusterIP service" width="613" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a Service is typically created per tier of your application, such as backend or frontend; it groups all the pods of that tier and provides a single interface for other pods to reach them.&lt;/li&gt;
&lt;/ul&gt;
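&lt;p&gt;As a sketch, a ClusterIP Service for a hypothetical backend tier might look like this (the name, selector and ports here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP    # the default type, so this line is optional
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 80         # port exposed inside the cluster
      targetPort: 8080 # port the backend pods listen on
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Frontend pods can then reach the backend tier at &lt;code&gt;http://backend&lt;/code&gt; without knowing any pod IPs.&lt;/p&gt;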

&lt;p&gt;3) LoadBalancer Service - It exposes the service on a publicly accessible IP address in a supported cloud provider.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZL5QP7w6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dqt6yy9o3sfijmblms2v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZL5QP7w6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dqt6yy9o3sfijmblms2v.png" alt="Figure3: Kubernetes LoadBalancer service" width="597" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A demo NodePort Service YAML file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: my-demo-service
spec:
  type: NodePort
  selector:
    run: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30007
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;now run,&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl create -f myservice.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This is how you create a Service of type NodePort. To work with &lt;code&gt;ClusterIP&lt;/code&gt; or &lt;code&gt;LoadBalancer&lt;/code&gt; instead, change the &lt;code&gt;type&lt;/code&gt; field in the &lt;code&gt;spec&lt;/code&gt; section, and adjust the name and ports accordingly.&lt;/p&gt;
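&lt;p&gt;To confirm the Service was created and try it out (assuming an nginx pod labelled &lt;code&gt;run: nginx&lt;/code&gt; is already running, and substituting your own node IP):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# list the service and check the assigned ports
kubectl get svc my-demo-service

# verify the service found matching pod endpoints
kubectl get endpoints my-demo-service

# reach the app through the node port from outside the cluster
curl http://&amp;lt;node-ip&amp;gt;:30007
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;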

&lt;h2&gt;
  
  
  CNI (Container Network Interface)🌐
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;CNI is a set of standards that defines how networking should be configured in container runtime environments such as Kubernetes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is a simple plugin-based architecture.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It defines how the plugin should be developed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Plugins are responsible for configuring the network interface.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CNI comes with a set of supported plugins already. Like &lt;strong&gt;bridge, VLAN, IPVLAN, MACVLAN&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker does not implement CNI. It has its own standard known as CNM, i.e. the Container Network Model.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You cannot tell Docker to use a CNI plugin directly when running a container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;But you can do it manually: create a Docker container without any network configuration&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;docker run --network=none nginx&lt;/code&gt;&lt;br&gt;
then invoke the bridge plugin by yourself.&lt;/p&gt;

&lt;h3&gt;
  
  
  CNI in k8s🕸️
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Kubernetes is responsible for creating container network namespaces&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Attaching those namespaces to the right network&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can see the network plugin configured for the kubelet by running the command below&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;ps -aux | grep kubelet&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The CNI plugin is configured in the kubelet service on each node in the cluster&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The CNI bin directory has all the supported CNI plugins as executables.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;ls /opt/cni/bin&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;k8s supports various CNI plugins like Calico, Weave Net, Flannel, DHCP and many more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can identify which plugin is currently used with the help of this command&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;ls /etc/cni/net.d&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the kubelet looks in this directory to determine which plugin to use.&lt;/li&gt;
&lt;/ul&gt;
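&lt;p&gt;For instance, a config file in &lt;code&gt;/etc/cni/net.d&lt;/code&gt; might look roughly like this (a hypothetical bridge-plugin configuration; the network name and subnet are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;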

&lt;h2&gt;
  
  
  DNS in k8s📡
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Kubernetes deploys a built-in DNS server by default when you set up a cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Let's say we have a three-node k8s cluster with pods and services deployed in it, each assigned a name and an IP address.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;All pods and services can reach each other using their IPs, and to make the web app available to external users we define services on top of them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Every pod is configured to use the cluster DNS server; you can see this by inspecting a pod's resolver configuration&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;cat /etc/resolv.conf&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The k8s DNS server keeps a record of created services, mapping each service name to its IP address, so anyone can reach a service by its name alone.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The DNS server originally shipped with Kubernetes was known as kube-dns; in later versions it is CoreDNS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The CoreDNS server is deployed as a POD in the kube-system namespace in the k8s cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To look for DNS Pods, run the command&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;kubectl get pods -n kube-system&lt;/code&gt;&lt;/p&gt;
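&lt;p&gt;As a quick sketch, you can resolve a service name from inside the cluster with a throwaway pod (the service name &lt;code&gt;demo-service&lt;/code&gt; is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# run a temporary busybox pod and look up a service by name
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never \
  -- nslookup demo-service

# services also get a fully qualified name of the form
#   &amp;lt;service&amp;gt;.&amp;lt;namespace&amp;gt;.svc.cluster.local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;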

&lt;h2&gt;
  
  
  Ingress🔵
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Let's say you have an application deployed in the k8s cluster, with a database pod and the services required for external access via NodePort. The replicas scale up and down with the application's demand.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You configured a DNS server so that anyone can access the web app by typing a name instead of an IP address every time,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;like &lt;code&gt;http://my-app.com:port&lt;/code&gt; instead of &lt;code&gt;http://&amp;lt;node-ip&amp;gt;:port&lt;/code&gt;; and if you do not want users to type the port number either, you add a proxy-server layer between the DNS server and your cluster, so anyone can access it as just&lt;br&gt;
&lt;code&gt;http://my-app.com&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now you want to add new features to your application. You develop the code and deploy the &lt;code&gt;webapp&lt;/code&gt; again, with new pods and services in the cluster, and once more set up the proxy server in between so the &lt;code&gt;webapp&lt;/code&gt; stays reachable under a single domain.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Maintaining all of this outside the cluster becomes tedious as your application scales; every time a new feature is added, you have to redo the layering.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This is where Ingress comes in. Ingress lets users access your application through a single URL.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ingress is just another k8s object defined inside the cluster&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You expose it once to make it accessible outside the cluster, either with NodePort or via a cloud provider such as GCP using a LoadBalancer service.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Ingress Controller🔹
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;To use ingress in a k8s cluster, an ingress controller must be deployed and configured.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The most commonly used ingress controllers are &lt;strong&gt;Nginx, Traefik, and Istio&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy the controller on the k8s cluster and configure it to route traffic to your services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The configuration involves defining &lt;strong&gt;URL&lt;/strong&gt; routes, configuring &lt;strong&gt;SSL&lt;/strong&gt; certificates, etc. This set of routing rules is called an &lt;strong&gt;Ingress Resource&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the controller is running, Ingress resources can be created and configured to route traffic to different services in the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They are created using definition files, like the ones we created earlier for Pods, Services and Deployments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An Ingress Controller is not deployed in a cluster by default. You have to set it up &lt;strong&gt;manually&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A simple &lt;strong&gt;ingress&lt;/strong&gt; yaml file for the sake of explanation:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: minimal-ingress
        annotations
          nginx.ingress.kubernetes.io/rewrite-target: /
      spec:
        ingressClassName: nginx-example
        rules:
        - http:
            paths:
            - path: /testpath
              pathType: Prefix
              backend:
                service:
                  name: test
                  port:
                    number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create a single ingress called "simple" that directs requests for foo.com/bar to the service svc1:8080, using the TLS secret "my-cert"&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      kubectl create ingress simple --rule="foo.com/bar=svc1:8080,tls=my-cert"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
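&lt;p&gt;You can then verify the rule was created as expected (the ingress and service names come from the example above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# list the ingress and its hosts
kubectl get ingress simple

# inspect the rules, backends and TLS configuration in detail
kubectl describe ingress simple
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;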



&lt;p&gt;This is how you can start working with Ingress in Kubernetes.&lt;/p&gt;




&lt;p&gt;Thank you!🖤&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>networking</category>
      <category>devops</category>
      <category>kcdchennai</category>
    </item>
  </channel>
</rss>
