<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: KCD Chennai</title>
    <description>The latest articles on DEV Community by KCD Chennai (@kcdchennai_user).</description>
    <link>https://dev.to/kcdchennai_user</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F829482%2Fa34b24c9-fd6c-4afc-95ee-c0cdee08ac5a.jpg</url>
      <title>DEV Community: KCD Chennai</title>
      <link>https://dev.to/kcdchennai_user</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kcdchennai_user"/>
    <language>en</language>
    <item>
      <title>Updates about KCD Chennai 2024</title>
      <dc:creator>KCD Chennai</dc:creator>
      <pubDate>Tue, 08 Oct 2024 06:13:25 +0000</pubDate>
      <link>https://dev.to/kcdchennai/updates-about-kcd-chennai-2024-18pb</link>
      <guid>https://dev.to/kcdchennai/updates-about-kcd-chennai-2024-18pb</guid>
      <description>&lt;p&gt;&lt;strong&gt;Hey KCD Chennai Community!&lt;/strong&gt; 👋 &lt;/p&gt;

&lt;p&gt;Many of you have been asking about KCD Chennai 2024. There have been some exciting developments in the KCD world this year! 🎉 &lt;/p&gt;

&lt;p&gt;Firstly, the Cloud Native Computing Foundation (CNCF) has introduced a new guideline for KCD events within the same country/region. To ensure a vibrant ecosystem and avoid scheduling conflicts, KCDs must be at least 2 months apart, and must not be held within 2 months of other major CNCF/LF events. 🥇 &lt;/p&gt;
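&lt;p&gt;The guideline above boils down to a simple date check. Here is a toy sketch of that check (the event dates below are hypothetical, and this is our illustration, not a CNCF tool):&lt;/p&gt;

```python
from datetime import date
from itertools import combinations

def schedule_ok(events: dict, min_gap_days: int = 60) -> bool:
    """True if every pair of events is at least min_gap_days apart."""
    return all(abs((a - b).days) >= min_gap_days
               for a, b in combinations(events.values(), 2))

# Hypothetical dates, chosen only to illustrate the ~2-month spacing rule
events = {
    "KCD Kerala": date(2024, 6, 15),
    "KCD Pune": date(2024, 9, 1),
    "KubeCon India": date(2024, 12, 10),
}
print(schedule_ok(events))  # True
```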

&lt;p&gt;2024 has seen the incredible launch of brand-new KCDs in India: KCD Kerala, KCD Pune, and KCD Hyderabad. 🎈 &lt;/p&gt;

&lt;p&gt;Additionally, India is hosting its very first dedicated KubeCon event in New Delhi from December 10th-12th, 2024! This flagship event deserves the spotlight, so we've made the &lt;strong&gt;difficult decision to forgo KCD Chennai 2024.&lt;/strong&gt; ✨ &lt;/p&gt;

&lt;p&gt;This decision allows other KCD communities in India to flourish and grow, fostering collaboration and knowledge sharing – the core values of open source! 🦄 &lt;/p&gt;

&lt;p&gt;Open-source communities thrive on the principles of openness, collaboration, and cooperation. These values are fundamental to nurturing innovation, knowledge sharing, and community growth. The CNCF, and hence the KCD Chennai community, exemplify these principles, creating a space where other communities thrive as well. Together, we're stronger! 🤝 &lt;/p&gt;

&lt;p&gt;We'll keep you updated on KCD Chennai 2025 soon. &lt;strong&gt;Stay tuned!&lt;/strong&gt; 🔊 &lt;/p&gt;

</description>
      <category>kcdchennai</category>
      <category>cncf</category>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>Storing and Searching Terabyte-Scale Logs in SnappyFlow with Secondary Storage</title>
      <dc:creator>KCD Chennai</dc:creator>
      <pubDate>Wed, 19 Jul 2023 07:22:18 +0000</pubDate>
      <link>https://dev.to/kcdchennai/storing-and-searching-terabyte-scale-logs-in-snappyflow-with-secondary-storage-263h</link>
      <guid>https://dev.to/kcdchennai/storing-and-searching-terabyte-scale-logs-in-snappyflow-with-secondary-storage-263h</guid>
      <description>&lt;h2&gt;
  
  
  The premise
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Log management in modern organizations
&lt;/h3&gt;

&lt;p&gt;In most enterprises today, big and small, it is not uncommon to have a tech stack comprising 50 or more different technologies across applications and infrastructure, and this number is likely to grow as companies embrace microservices, multi/hybrid clouds, and containerization.&lt;/p&gt;

&lt;p&gt;All these individual components generate logs, and lots of them. These logs serve as invaluable sources of information, providing insights into the health of individual components, transaction details, timestamps, and other critical data. By analyzing these logs, SREs and DevOps engineers can gain a comprehensive understanding of their systems, diagnose issues promptly, and optimize performance. Development teams rely on these logs to understand and address issues before they affect customers and businesses.&lt;/p&gt;

&lt;p&gt;Each log entry represents a specific event that happened at a precise moment in time, allowing for accurate tracking and analysis of system behavior. For instance, when a fault occurs, logs enable developers to identify errors and look for related logs, system performance metrics, and application traces and drill down to the exact line of code to troubleshoot. &lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges in managing Terabyte and Petabyte scale logs
&lt;/h3&gt;

&lt;p&gt;As more logs get generated, it quickly becomes a “storage” and “search” problem. Although individual logs are tiny, just a few bytes each, the cumulative volume of logs across your stack, multiplied over several days, can quickly reach terabytes or petabytes. Efficient search and storage mechanisms become crucial for developers and engineers handling this log volume.&lt;/p&gt;
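&lt;p&gt;To see how quickly a few bytes per log line compounds into terabytes, here is a back-of-the-envelope sketch; the component count, log rate, and line size below are assumptions chosen purely for illustration:&lt;/p&gt;

```python
# Illustrative scaling sketch (all figures are assumptions, not measurements)

def cumulative_log_volume_tb(components: int,
                             lines_per_sec_per_component: int,
                             avg_line_bytes: int,
                             retention_days: int) -> float:
    """Total retained log volume in decimal terabytes."""
    seconds = retention_days * 24 * 3600
    total_bytes = (components * lines_per_sec_per_component
                   * avg_line_bytes * seconds)
    return total_bytes / 1e12

# 50 components, 200 lines/s each, ~300 bytes per line, 30-day retention
print(round(cumulative_log_volume_tb(50, 200, 300, 30), 1))  # 7.8
```

Even these modest per-component rates add up to several terabytes over a single month of retention.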

&lt;p&gt;Log retention defines how long logs are stored and, in turn, determines the total log volume. Factors such as security, regulatory compliance, and cost have to be weighed to arrive at an optimal retention period. Striking a balance between cost-effectiveness and fulfilling operational, analytical, and regulatory needs is key to optimizing log storage.&lt;/p&gt;

&lt;p&gt;However, retaining logs for extended periods, spanning months or years, introduces complications. The common approach of compressing and storing logs in cost-effective solutions like AWS Glacier hinders real-time log retrieval and search capabilities. While suitable for auditing, this method limits developers' ability to efficiently analyze and troubleshoot logs in a timely manner.&lt;/p&gt;

&lt;p&gt;To overcome these limitations, engineers require a solution that allows quick access to archived logs without sacrificing real-time search functionality. This ensures developers can effectively analyze logs, even in long-term retention scenarios, enabling timely analysis and troubleshooting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing SnappyFlow's Secondary storage feature
&lt;/h2&gt;

&lt;p&gt;SnappyFlow provides an elegant solution to ingest, store and search large volumes of logs for extended periods of time using what we call the “Secondary Storage” feature. Secondary Storage allows massive streams of logs to be ingested and stored in a cost-efficient manner without losing the ability to easily search logs. &lt;/p&gt;

&lt;h3&gt;
  
  
  So, is there a Primary Storage?
&lt;/h3&gt;

&lt;p&gt;Yes. By default, all logs sent to SnappyFlow are stored in “Primary Storage”. Think of Primary Storage as a fast, responsive storage system capable of handling a large volume of searches at lightning speed. It is typically backed by fast SSDs and is, as expected, expensive. &lt;/p&gt;

&lt;h3&gt;
  
  
  How does Secondary Storage work?
&lt;/h3&gt;

&lt;p&gt;Different log sources can be configured to send logs to Primary Storage, Secondary Storage, or both.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8f6zTWhk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/73rqi92rn4jr6gsoygof.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8f6zTWhk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/73rqi92rn4jr6gsoygof.png" alt="Image description" width="400" height="260"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is available under &lt;strong&gt;Log Management &amp;gt; Manage Logs&lt;/strong&gt;. In the screenshot below, you can see a list of rules for the project apmmanager-opensearch. Note that in this example, you are looking at project-level rules. Similar views are available at Application and Profile levels.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Aj1JmUp8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6k3c1c9ur3uryag1adwl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Aj1JmUp8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6k3c1c9ur3uryag1adwl.png" alt="Image description" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;
Project level view of Secondary Storage Rules for the project apmmanager-opensearch





&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ySZQYbt7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9qz3dpsvqqcnm0bjwcb5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ySZQYbt7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9qz3dpsvqqcnm0bjwcb5.png" alt="Image description" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;
Application level view of Secondary Storage Rules for the project apmmanager-k8



&lt;p&gt;The default rules send all logs to both Primary and Secondary storage, with retention periods of 7 days and 30 days respectively. New rules can be added using the Add Rule button, and it takes a couple of minutes for new rules to become active. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pNrUecy5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ys6924ahtv2wuewtts6j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pNrUecy5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ys6924ahtv2wuewtts6j.png" alt="Image description" width="800" height="203"&gt;&lt;/a&gt;&lt;/p&gt;
Once the rules are applied, these can be viewed under Applied Rules





&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g-SUzHx---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6nnbrf9623zgu0lmdf2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g-SUzHx---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6nnbrf9623zgu0lmdf2o.png" alt="Image description" width="620" height="786"&gt;&lt;/a&gt;&lt;/p&gt;
Adding a new secondary storage rule for server logs



&lt;h2&gt;
  
  
  Searching logs in Secondary Storage
&lt;/h2&gt;

&lt;p&gt;Search for logs in Secondary Storage is available under the respective application. To access it, go to any application and select &lt;strong&gt;Log Management &amp;gt; Secondary Storage&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;On the Secondary Storage page, live search is available for data from the last 30 minutes. Logs can be filtered by log type or with simple search strings. The Search History tab lets you create search jobs that run in the background; once a search job completes, its results can be accessed instantly at any time.&lt;/p&gt;
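&lt;p&gt;The search-job workflow can be pictured with a small sketch. This is an illustrative pattern only, not SnappyFlow's actual API; the class and method names below are made up:&lt;/p&gt;

```python
# Toy model of the pattern: a query over archived logs runs as a background
# job, and its indexed results are cached for instant retrieval later.
import uuid

class SearchJobStore:
    def __init__(self, archived_logs):
        self.archived_logs = archived_logs
        self.results = {}  # job_id -> cached, indexed results

    def submit(self, query: str) -> str:
        """Run the search as a 'background' job and cache its results."""
        job_id = str(uuid.uuid4())
        self.results[job_id] = [l for l in self.archived_logs if query in l]
        return job_id

    def fetch(self, job_id: str):
        """Completed jobs return instantly from the cache."""
        return self.results[job_id]

store = SearchJobStore(["ERROR db timeout", "INFO started", "ERROR disk full"])
job = store.submit("ERROR")
print(store.fetch(job))  # ['ERROR db timeout', 'ERROR disk full']
```

The point of the pattern is that the expensive scan over archived data happens once, after which the results behave like live data.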

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZOHKVUK4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xhpeengf5twe8hydy6va.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZOHKVUK4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xhpeengf5twe8hydy6va.png" alt="Image description" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;
Live search and search history for secondary storage logs



&lt;h3&gt;
  
  
  Limitations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Real-time search covers only the last 30 minutes of logs; anything older must be queried through search jobs, whose indexed results can then be accessed instantly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is not possible to create dashboards out of logs in secondary storage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Secondary storage logs are not part of the usual log workflows, e.g. trace-to-log navigation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  An illustration of the benefit of using secondary storage for log management
&lt;/h2&gt;

&lt;p&gt;Secondary storage can compress logs to as little as 1/40th of their original size, which can provide significant cost benefits. Consider a medium-sized SnappyFlow deployment with an average daily ingest volume of 1 TB and a retention of 15 days. At any given time, 15 TB of primary storage is required simply to hold this data. If we route, say, 60% of all logs to Secondary storage, we need to add only 400 GB of logs to primary storage daily, which works out to 6 TB of primary storage. &lt;/p&gt;

&lt;p&gt;At the time of writing, the cost of EBS storage on AWS is&lt;/p&gt;

&lt;p&gt;15 TB, GP3 - $2001/mo&lt;/p&gt;

&lt;p&gt;6 TB, GP3 - $800/mo&lt;/p&gt;

&lt;p&gt;Here, there is a straightforward reduction in monthly cost of $1200, simply by routing 60% of logs to Secondary storage. Note that storing data in Secondary Storage carries an additional cost, but it is significantly lower because an object storage service like S3 is used.&lt;/p&gt;

&lt;p&gt;With a compression factor of 40x and a log retention period of 60 days, the total log volume in secondary storage works out to (1 TB/day * 60% * 60 days) / 40 = 0.9 TB.&lt;/p&gt;

&lt;p&gt;S3 storage costs just about $20/mo for ~1 TB of compressed logs. &lt;/p&gt;
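&lt;p&gt;The arithmetic above can be reproduced in a few lines. The figures are taken from the example deployment in this section; the prices quoted are the article's, not a live quote:&lt;/p&gt;

```python
# Reproducing the example: 1 TB/day ingest, 15-day primary retention,
# 60% of logs routed to secondary storage with 40x compression and
# 60-day retention. All figures come from the worked example above.
DAILY_INGEST_TB = 1.0
PRIMARY_RETENTION_DAYS = 15
SECONDARY_SHARE = 0.60        # fraction of logs moved to secondary
COMPRESSION = 40              # compression factor in secondary storage
SECONDARY_RETENTION_DAYS = 60

# Primary storage needed without and with the secondary offload
primary_all = DAILY_INGEST_TB * PRIMARY_RETENTION_DAYS
primary_offloaded = (DAILY_INGEST_TB * (1 - SECONDARY_SHARE)
                     * PRIMARY_RETENTION_DAYS)

# Compressed footprint of the offloaded logs in object storage
secondary_tb = (DAILY_INGEST_TB * SECONDARY_SHARE
                * SECONDARY_RETENTION_DAYS / COMPRESSION)

print(primary_all, primary_offloaded, round(secondary_tb, 2))  # 15.0 6.0 0.9
```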

&lt;h3&gt;
  
  
  Explore Secondary Storage today!
&lt;/h3&gt;

&lt;p&gt;Secondary Storage is available to all SaaS and Self-Hosted Turbo customers. It is the simplest way to control your log storage costs and stay compliant with long-term regulatory and security requirements. What’s more, this feature comes at no extra cost.&lt;/p&gt;

&lt;p&gt;To try SnappyFlow, start your &lt;a href="https://accounts.snappyflow.io/freetrial"&gt;14-day free trial&lt;/a&gt; today.&lt;/p&gt;

</description>
      <category>logs</category>
      <category>monitoring</category>
      <category>kubernetes</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Kubernetes Monitoring Simplified</title>
      <dc:creator>KCD Chennai</dc:creator>
      <pubDate>Wed, 19 Jul 2023 06:54:17 +0000</pubDate>
      <link>https://dev.to/kcdchennai/kubernetes-monitoring-simplified-3g4g</link>
      <guid>https://dev.to/kcdchennai/kubernetes-monitoring-simplified-3g4g</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Today, organizations are transitioning from monolithic to microservice architectures. When we speak of microservice architecture, there is no way to ignore Kubernetes (K8s), the leading open-source container orchestration platform. It holds this prominent position for good reasons: robustness, scalability, high availability, portability, multi-cloud support, self-healing, auto-scaling, declarative configuration, rollbacks, service discovery, and load balancing, among other features.&lt;/p&gt;

&lt;p&gt;We all know Kubernetes is a powerful tool, but the question here is, how do I get the most out of it? The answer is simple: by proactively monitoring your Kubernetes cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges in Monitoring Kubernetes Clusters
&lt;/h2&gt;

&lt;p&gt;Monitoring K8s clusters can be a complicated task. Here are a few difficulties faced by SREs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Complexity:&lt;/strong&gt; Kubernetes operates in a multi-layer environment, requiring the collection and correlation of metrics from pods, nodes, containers, and applications to obtain an overall performance metric of the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security:&lt;/strong&gt; Monitoring tools collect sensitive data from the Kubernetes cluster, raising security concerns that need to be addressed to protect the cluster data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Flow:&lt;/strong&gt; As the Kubernetes cluster grows, whether it's on the cloud or on-premises, tracing the data flow between endpoints becomes increasingly difficult.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ephemerality:&lt;/strong&gt; Kubernetes scales up and down based on demand. Pods created during scale-up disappear when no longer required. To avoid data loss, monitoring tools must collect metrics from various resources such as deployments, daemon sets, and jobs. &lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of Monitoring Kubernetes Clusters
&lt;/h2&gt;

&lt;p&gt;Despite the challenges, monitoring a K8s environment can lead to significant benefits and improvements:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enhanced Visibility:&lt;/strong&gt; Monitoring Kubernetes clusters gives you enhanced visibility into the overall health of your system and its components.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Proactive Issue Detection:&lt;/strong&gt; Monitoring Kubernetes clusters enables you to proactively detect issues such as application failures, abnormal behaviours, and performance degradation, helping you prevent potential downtime and service disruptions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resource Utilization:&lt;/strong&gt; By tracking the resource usage of your containers, pods, and nodes, you can fine-tune resource allocation, enhance efficiency, reduce costs, and maximize the utilization of the infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance Optimization:&lt;/strong&gt; Monitoring Kubernetes enables you to identify slow response times and inefficient resource usage, and helps you optimize component-specific scaling and network settings to improve overall system performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Root Cause Analysis:&lt;/strong&gt; Monitoring Kubernetes helps you pinpoint the root cause of an issue, isolate problematic pods or nodes, and analyse relevant logs and metrics.&lt;/p&gt;

&lt;p&gt;Here comes the next question: how do I monitor my Kubernetes cluster? Monitoring Kubernetes is a challenging task, but this is where monitoring tools like SnappyFlow come in handy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Simplify Kubernetes Monitoring with SnappyFlow
&lt;/h2&gt;

&lt;p&gt;SnappyFlow is a full-stack application monitoring tool. Once you integrate your Kubernetes cluster with SnappyFlow, it starts collecting data from the cluster and enables efficient cluster monitoring. SnappyFlow monitors various aspects of the cluster, helping you make informed decisions, and its alerting system notifies you of any deviation in the cluster through your preferred communication channel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h0W8siCJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ata5xcsx48w0vjw4jx28.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h0W8siCJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ata5xcsx48w0vjw4jx28.png" alt="Image description" width="800" height="243"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What can you monitor with SnappyFlow?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--M2-Rk4d3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/71b3kieb23byus3426zp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M2-Rk4d3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/71b3kieb23byus3426zp.png" alt="Image description" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resource Utilization&lt;/strong&gt;: SnappyFlow monitors CPU, memory, and disk usage of nodes, pods, and containers to identify resource bottlenecks. This helps ensure efficient utilization of resources and prevents overloading.&lt;/p&gt;
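&lt;p&gt;As a rough illustration of the kind of figure a resource-utilization view computes, here is a minimal sketch; the function, pod names, and numbers below are ours, not SnappyFlow's API:&lt;/p&gt;

```python
# Hypothetical sketch: node CPU utilization from requested vs. allocatable
# millicores, the basic figure behind a resource-utilization dashboard.

def cpu_utilization_pct(requested_millicores: int,
                        allocatable_millicores: int) -> float:
    """Percentage of a node's allocatable CPU claimed by pod requests."""
    return 100.0 * requested_millicores / allocatable_millicores

# Pod CPU requests on one node (millicores), against 4000m allocatable
pod_requests = {"api": 500, "worker": 1500, "cache": 250}
node_allocatable = 4000

used = sum(pod_requests.values())  # 2250m
print(f"{cpu_utilization_pct(used, node_allocatable):.1f}%")  # 56.2%
```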

&lt;p&gt;&lt;strong&gt;Pod Health&lt;/strong&gt;: SnappyFlow tracks the status and health of individual pods, including their readiness and liveness probes. It monitors the pod failures, crashes, and restarts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cluster Capacity&lt;/strong&gt;: SnappyFlow keeps an eye on cluster capacity to avoid resource exhaustion. It monitors the number of nodes, available resources, DaemonSets, Deployments, ReplicaSets, StatefulSets, and the ability to schedule new pods. It also tracks how many critical and warning events occur in the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network Performance&lt;/strong&gt;: SnappyFlow monitors network traffic, latency, and throughput at the node level.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Persistent Volumes&lt;/strong&gt;: SnappyFlow monitors the status and capacity of persistent volumes (PVs) and their associated claims (PVCs), ensuring that storage resources are available and accessible as required. It also monitors read and write operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service Discovery and Load Balancing&lt;/strong&gt;: SnappyFlow monitors the health and availability of Kubernetes services and their endpoints. It tracks load balancing across pods to ensure even distribution of traffic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes API Server&lt;/strong&gt;: SnappyFlow keeps an eye on the API server's performance, latency, and response times. It monitors for errors, throttling, or potential bottlenecks in API communication.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Logging and Event Monitoring&lt;/strong&gt;: SnappyFlow sets up centralized logging and monitoring to capture container logs, Kubernetes events, and system-level metrics. It checks for issues related to image pulling, container startup, or resource allocation, enabling quick troubleshooting and identification of issues.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L3CdR29q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d5nqelabiejobvdqtki3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L3CdR29q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d5nqelabiejobvdqtki3.png" alt="Image description" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;
Node Summary



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ndXdXsMC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ct75gza4h6ah8mg20k7h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ndXdXsMC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ct75gza4h6ah8mg20k7h.png" alt="Image description" width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;
Cluster Resource Utilization



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DCzM0UKM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mzok2d5ygy3huvk03mdu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DCzM0UKM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mzok2d5ygy3huvk03mdu.png" alt="Image description" width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;
Kubelet operation details



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In summary, SnappyFlow offers a simplified and effective solution for monitoring Kubernetes clusters. By leveraging SnappyFlow, you can ensure the health and performance of your Kubernetes cluster while reducing the complexity and effort involved in monitoring.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>observability</category>
      <category>monitoring</category>
      <category>productivity</category>
    </item>
    <item>
      <title>KCD Chennai 2023 Blogathon Contest 🎉</title>
      <dc:creator>KCD Chennai</dc:creator>
      <pubDate>Sat, 08 Apr 2023 11:21:12 +0000</pubDate>
      <link>https://dev.to/kcdchennai/kcd-chennai-2023-blogathon-contest-4jcg</link>
      <guid>https://dev.to/kcdchennai/kcd-chennai-2023-blogathon-contest-4jcg</guid>
      <description>&lt;h2&gt;
  
  
  Blogathon
&lt;/h2&gt;

&lt;p&gt;Either write something worth reading or do something worth writing. --&lt;strong&gt;Benjamin Franklin&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Blogging is a powerful means of sharing knowledge and promoting your thoughts and views. As of 2022, there are more than 572 million blogs on the internet (and this number is constantly growing). Over 7 million blog posts are published daily, and 77% of people regularly read blogs online. [&lt;em&gt;Source: firstsiteguide.com&lt;/em&gt;]&lt;/p&gt;

&lt;p&gt;Kubernetes Community Days (KCD) Chennai 2023 is hosting a Blogathon. Whether you are a seasoned blogger, an occasional writer, or someone searching for the right reason to start blogging, we invite you to participate! Unleash the writer in you.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do I Participate?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Register yourself &lt;a href="https://docs.google.com/forms/d/e/1FAIpQLSdjcqWzI2ck-wn3Ta1amAttHajlPVDFfExoTXLFzqlnKdaUmw/viewform?usp=sf_link"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Upon registration, you'll receive an email within a few hours containing the next steps to follow&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Important Dates
&lt;/h2&gt;

&lt;p&gt;17-April-2023 : Blogathon begins&lt;br&gt;
19-April-2023 : Workshop on Blogging Tips&lt;br&gt;
31-May-2023  : Blogging ends&lt;br&gt;
20-June-2023 : Know the winners&lt;/p&gt;

&lt;h2&gt;
  
  
  What topics can I write about?
&lt;/h2&gt;

&lt;p&gt;We invite technical blogs on a wide variety of topics. You may write about any topic you are passionate about, as long as the content is technical.&lt;/p&gt;

&lt;h2&gt;
  
  
  Publishing your blog
&lt;/h2&gt;

&lt;p&gt;When you create a new blog, under 'Create Post' choose '&lt;em&gt;Kubernetes Community Days Chennai&lt;/em&gt;' as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HAAEW7fd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qmjr6qvb5suu92n6bkrc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HAAEW7fd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qmjr6qvb5suu92n6bkrc.png" alt="Image description" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Rules
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Only blogs published between 17 April and 31 May will be considered. (Tip: keep publishing as many good-quality blogs as possible.)&lt;/li&gt;
&lt;li&gt;You may publish any number of blogs; there is no restriction.&lt;/li&gt;
&lt;li&gt;You may cross-post blogs you've already published on other blogging sites, e.g. Medium, Hashnode, WordPress.&lt;/li&gt;
&lt;li&gt;You may republish blogs you've already published on DEV under your personal account.&lt;/li&gt;
&lt;li&gt;Published blogs will be reviewed by our team. Blogs that are offensive, violate copyright, or contain plagiarism will be removed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Can I promote my blogs?
&lt;/h2&gt;

&lt;p&gt;Of course! Feel free to promote your blogs via social media, email, newsletters, WhatsApp, etc. This will help increase views and reactions!&lt;/p&gt;

&lt;h2&gt;
  
  
  Contact Us
&lt;/h2&gt;

&lt;p&gt;If you have further questions or need help, post them in the #kcd-chennai channel on &lt;a href="https://cloud-native.slack.com"&gt;https://cloud-native.slack.com&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloudnative</category>
      <category>cncf</category>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>Fostering Innovation with a Kubernetes Platform</title>
      <dc:creator>KCD Chennai</dc:creator>
      <pubDate>Thu, 26 May 2022 13:17:20 +0000</pubDate>
      <link>https://dev.to/kcdchennai/fostering-innovation-with-a-kubernetes-platform-39lj</link>
      <guid>https://dev.to/kcdchennai/fostering-innovation-with-a-kubernetes-platform-39lj</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Author:&lt;/strong&gt; Vishal Ghariwala, SUSE (Platinum sponsor, KCD Chennai 2022)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Hybrid and multi-cloud are now the established order in the tech world. According to SUSE’s recently commissioned Insight Avenue report, &lt;a href="https://links.imagerelay.com/cdn/3404/ql/9aa0622def504bbf8e09828636742b10/Final_Why-Todays-IT-Leaders-are-Choosing-Open-Research-report.pdf"&gt;Why Today’s IT Leaders are Choosing Open&lt;/a&gt;, more than 800 IT leaders believe the biggest benefits of a hybrid and multi-cloud approach are cost-effectiveness (45%), increased flexibility and agility (44%), and being able to take advantage of best-of-breed solutions (35%).&lt;/p&gt;

&lt;p&gt;Yet unlocking these benefits is difficult when many IT leaders haven’t adequately factored the prevalence of containers and Kubernetes into their multi-cloud strategy. As a result, while they will undoubtedly have differing Kubernetes distributions within their environment, they may lack a unified platform for managing, governing, and gaining visibility into those varied distributions.&lt;/p&gt;

&lt;p&gt;In the same way that Linux became the data center’s operating system, Kubernetes is now widely regarded as the operating system of the hybrid and multi-cloud – ultimately because Kubernetes makes it easier to manage software complexity.&lt;/p&gt;

&lt;p&gt;In the early days of Kubernetes, companies would have experimented with their own DIY Kubernetes stack to run their cloud native applications. However, as these enterprise applications became more complex, it became harder to manage them. Today, the cloud and container market has matured significantly, so we believe it’s time for enterprises to rethink their Kubernetes approach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What do IT leaders want from a Kubernetes platform?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;IT leaders love Kubernetes because it fosters innovation. It helps to significantly increase the agility and efficiency of their software development teams, enabling them to reduce the time and complexity associated with putting differentiated applications into production.&lt;/p&gt;

&lt;p&gt;According to our report, 42% of organisations currently run containers for production workloads – with a further 41% planning to do so in the next 12 months. 57% of organisations that are running containers for production workloads use Kubernetes.&lt;/p&gt;

&lt;p&gt;There has been an ongoing evolution in the build-vs-buy debate regarding a Kubernetes platform. Our survey found that 66% of IT leaders now prefer a commercially curated and supported distribution of an open-source Kubernetes platform over a homegrown one. This is a significant shift from just last year, when 87% of IT leaders still preferred a DIY approach. What has changed? Is the growing complexity of applications becoming too difficult for them to manage? Is it due to a shortage of Kubernetes skill sets? Is it due to implementation cost?&lt;/p&gt;

&lt;p&gt;What we do know is that IT leaders are embracing open source. When we asked IT leaders which factors they look for in a Kubernetes platform, the top three were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;100% open source (36%)&lt;/li&gt;
&lt;li&gt;support for multi-cluster and edge deployments (34%)&lt;/li&gt;
&lt;li&gt;ease of installation (34%).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Development and operations teams are pivotal to innovation. However, it is also quite well known that the priorities of these two teams can be diametrically opposed, for good reasons. Development teams want to focus on writing code and rolling out their applications quickly. Operations teams work hard to manage stability, security and control their computing environments. We call this the agility-stability paradox. How do we create a balance between the freedom to innovate and ease of management and governance?&lt;/p&gt;

&lt;p&gt;Kubernetes can help reconcile these competing priorities by decoupling application development from operational stability. As a result, development teams can build what they need, optimised for innovation, while still aligning with the continuous delivery and automation processes defined by the operations teams. To do this successfully, however, companies need to leverage an enterprise Kubernetes management platform. Such a platform must meet key requirements that foster innovation and collaboration between development and operations teams.&lt;/p&gt;

&lt;p&gt;The platform should provide development teams with a rich catalogue of services for building, deploying and scaling containerized applications: app packaging, CI/CD, logging, monitoring and service mesh. It should also empower operations teams to automate processes and apply a consistent set of operational, governance and security policies to their Kubernetes clusters, which may be running on any CNCF-certified Kubernetes distribution, in the data center, in the cloud or at the edge.&lt;/p&gt;

&lt;p&gt;Where are you on your Kubernetes adoption journey, and which factors are most important for you in a Kubernetes platform? How do you envision development and operations teams using a Kubernetes platform in your organisation? What priority do you place on the value of open source solutions for giving you the freedom to innovate everywhere?&lt;/p&gt;

&lt;h3&gt;About the author&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Vishal Ghariwala&lt;/strong&gt; is the Chief Technology Officer for SUSE for the APJ and Greater China regions. In this capacity, he engages with customer and partner executives across the region, and is responsible for growing SUSE’s mindshare by being the executive technical voice to the market, press, and analysts. He also supports the global Office of the CTO to assess relevant industry, market and technology trends and identify opportunities aligned with the company’s strategy.&lt;/p&gt;

&lt;p&gt;Prior to joining SUSE, Vishal was the Director for Cloud Native Applications at Red Hat where he led a team of senior technologists responsible for driving the growth and adoption of the Red Hat OpenShift, API Management, Integration and Business Automation portfolios across the Asia Pacific region.&lt;/p&gt;

&lt;p&gt;Vishal has over 20 years of experience in the IT industry and holds a Bachelor’s Degree in Electrical and Electronic Engineering from the Nanyang Technological University in Singapore.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Preparing for the new needs of edge computing</title>
      <dc:creator>KCD Chennai</dc:creator>
      <pubDate>Thu, 26 May 2022 13:03:25 +0000</pubDate>
      <link>https://dev.to/kcdchennai/preparing-for-the-new-needs-of-edge-computing-3069</link>
      <guid>https://dev.to/kcdchennai/preparing-for-the-new-needs-of-edge-computing-3069</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Author:&lt;/strong&gt; Vishal Ghariwala, SUSE (Platinum sponsor, KCD Chennai 2022)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The proliferation of distributed and data-intensive workloads is spurring rapid growth in edge computing. According to &lt;a href="https://www.gartner.com/smarterwithgartner/what-edge-computing-means-for-infrastructure-and-operations-leaders/"&gt;Gartner&lt;/a&gt;, around 10% of enterprise-generated data is created and processed outside a traditional centralised data center or cloud, with this figure expected to reach 75% by 2025. The adoption of intelligent edge applications is already transforming a range of industries through autonomous vehicles, industrial robotics, and industrial IoT devices, as the number of global IoT connections is set to reach 83 billion by 2024.&lt;/p&gt;

&lt;p&gt;In SUSE’s recent global survey of more than 800 IT leaders, 93% say they are excited or interested in the possibilities that edge computing presents, and 79% agree that COVID-19 has accelerated moves to edge computing. As part of our report: &lt;a href="https://www.suse.com/open/"&gt;Why Today’s IT Leaders are Choosing Open&lt;/a&gt;, these IT leaders told us the most compelling benefits of edge computing are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;reliability (45%)&lt;/li&gt;
&lt;li&gt;enhanced security and privacy (43%)&lt;/li&gt;
&lt;li&gt;enhancing the entire IT ecosystem (41%)&lt;/li&gt;
&lt;li&gt;fostering the use of innovative new IT services (40%)&lt;/li&gt;
&lt;li&gt;real-time insights (39%)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As edge computing becomes more ubiquitous, there are several questions that IT leaders will need to ask to reap its benefits:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Where should edge solutions replace or augment public clouds, which may be too far away to provide the required latency?&lt;/li&gt;
&lt;li&gt;How can I create consistency across traditional and edge environments in cloud-native development and deployment?&lt;/li&gt;
&lt;li&gt;Is my infrastructure platform future-ready for the rapid growth of both edge computing and hybrid and multi-cloud use cases?&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Success criteria for edge-related innovations&lt;/h2&gt;

&lt;p&gt;Innovation in edge computing is leading to digital transformation across industries with autonomous vehicles and equipment, industrial robotics, and IoT devices – each connecting operational functions and outputs with organisational management technologies to create an intelligent and interconnected network. There are three key factors undergirding the success of many of the modern edge-related innovations.&lt;/p&gt;

&lt;p&gt;The first factor pertains to defining how the edge infrastructure will intersect with existing on-premises and cloud infrastructure. This will largely depend on the use case and can be segmented into three logical tiers: the near edge, the far edge and the tiny edge. The near edge is closest to centralized services such as university compute facilities. The far edge is furthest from the data center and close to the edge device. The tiny edge is the edge device itself, which includes sensors and actuators. As an example, the rapid rise in remote working and remote learning over the past year has clearly shown the importance of good internet latency and bandwidth, especially for those living in rural areas. In such a scenario, service providers can deploy a near-edge infrastructure as close as possible to their customers.&lt;/p&gt;

&lt;p&gt;The second factor is about having a consistent way to develop, deploy and maintain your cloud native applications regardless of where they live, be it in the data center, in the cloud or at the edge. You should also be able to perform all the routine maintenance functions, like applying patches, updates, configuration changes and rollbacks, seamlessly. Kubernetes will be a central technology here, as it helps to abstract underlying heterogeneous infrastructures and provides common APIs to manage software complexity across these environments.&lt;/p&gt;

&lt;p&gt;The final factor is around building an infrastructure that is able to support the diverse set of edge use cases in addition to existing hybrid and multi-cloud applications. Compute resources at the edge are generally scarce and not always connected to the Internet. Hence, we need to leverage lightweight cloud-native technology stacks that are fit for resource-constrained environments and able to operate in remote locations. &lt;/p&gt;

&lt;h2&gt;SUSE edge solutions&lt;/h2&gt;

&lt;p&gt;SUSE provides Kubernetes-ready, open source edge solutions for full-lifecycle edge infrastructure management. These include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A secure and lightweight Linux operating system&lt;/li&gt;
&lt;li&gt;A secure and lightweight edge-ready Kubernetes distribution&lt;/li&gt;
&lt;li&gt;A distributed, software-defined storage platform for Kubernetes that can run anywhere&lt;/li&gt;
&lt;li&gt;GitOps tooling for continuous delivery of containerized applications at the edge&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With SUSE edge solutions, customers can orchestrate containerized workloads at the edge. The solutions employ a unified management layer to empower technology leaders to understand where and how their containerized applications are running across traditional, cloud and edge infrastructures. Finally, SUSE edge solutions are architected using lightweight technologies that are fit for resource-constrained and remote environments.&lt;/p&gt;

&lt;p&gt;Where are you on your edge computing adoption journey?&lt;br&gt;
Which edge computing use cases are relevant to your organization?&lt;br&gt;
How is edge computing influencing your cloud strategy?&lt;/p&gt;

&lt;h3&gt;About the author&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Vishal Ghariwala&lt;/strong&gt; is the Chief Technology Officer for SUSE for the APJ and Greater China regions. In this capacity, he engages with customer and partner executives across the region, and is responsible for growing SUSE’s mindshare by being the executive technical voice to the market, press, and analysts. He also supports the global Office of the CTO to assess relevant industry, market and technology trends and identify opportunities aligned with the company’s strategy.&lt;br&gt;
Prior to joining SUSE, Vishal was the Director for Cloud Native Applications at Red Hat where he led a team of senior technologists responsible for driving the growth and adoption of the Red Hat OpenShift, API Management, Integration and Business Automation portfolios across the Asia Pacific region.&lt;/p&gt;

&lt;p&gt;Vishal has over 20 years of experience in the IT industry and holds a Bachelor’s Degree in Electrical and Electronic Engineering from the Nanyang Technological University in Singapore.&lt;/p&gt;

</description>
      <category>edgecomputing</category>
      <category>cloud</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Top 7 reasons to adopt microservices architecture</title>
      <dc:creator>KCD Chennai</dc:creator>
      <pubDate>Sat, 21 May 2022 10:51:55 +0000</pubDate>
      <link>https://dev.to/kcdchennai/top-7-reasons-to-adopt-microservices-architecture-4oo1</link>
      <guid>https://dev.to/kcdchennai/top-7-reasons-to-adopt-microservices-architecture-4oo1</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Author:&lt;/strong&gt; Abhinav Dubey, Devtron Labs (Platinum sponsor, KCD Chennai 2022)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Microservices is an architectural style that structures an application as a collection of nearly independent, loosely coupled components that are developed and deployed independently, unlike the monolithic architecture. When software is designed, architects evaluate the different architectural styles suitable for the application, which developers then follow while building it. For a long time, monolithic architectures were the norm: applications were designed, developed and deployed as a single unit. Even today, many enterprises use monolithic architecture. &lt;/p&gt;

&lt;p&gt;With the continuous evolution of technology, the need to serve data on different platforms (mobile phones, tablets, smartwatches, etc.) has arisen, and it becomes difficult to manage the source code when more and more lines of programming are added to the same code base to cater to the different types of clients. Monolithic architecture has several constraints, some of which are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Serving the application on different gadgets at runtime with the same build&lt;/li&gt;
&lt;li&gt;Scaling the application&lt;/li&gt;
&lt;li&gt;Continuous testing of the application&lt;/li&gt;
&lt;li&gt;Deployment issues - as all components are to be deployed as a single unit&lt;/li&gt;
&lt;li&gt;Delay in delivery&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Microservices came into the picture to address these and many more challenges. Breaking the complete application down into smaller parts makes it easier to develop and release applications.&lt;/p&gt;

&lt;h2&gt;Top 7 reasons to adopt microservices&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Easy Development:&lt;/strong&gt; Since microservices are smaller units, they provide ease of development for technical teams. There’s no need to assign large teams to work on a single project, risking delays and errors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Scalability:&lt;/strong&gt; Smaller components are always easier to scale than one single monolith. Microservices enable organizations to scale up their applications at a far lower cost, with easy integration with third-party services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Infrastructure:&lt;/strong&gt; Microservices are typically cloud-based and thus provide agility for any organization that adopts them. Components and services can be spread across many servers and data centers, enabling innovation at a lower cost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Continuity:&lt;/strong&gt; Software can be broken down into component services, and each of these services can be deployed without compromising application integrity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Decentralized Data:&lt;/strong&gt; In microservices, all the components have their own data stores which help to maintain the agility of applications with minimal to zero impact on new releases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Technological Flexibility:&lt;/strong&gt; Adopting microservices enables organizations to cope with technological change, leaving them free to choose any tech stack for an individual service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Containerization:&lt;/strong&gt; Containerization is one of the most efficient ways of scaling applications. Containers provide a runtime environment for applications from development to testing and finally, deploying the microservices wrapped under it. Furthermore, for orchestrating containers in production, Kubernetes is one of the best and most adopted tools in the market.&lt;/p&gt;

&lt;p&gt;Organizations like Netflix, Microsoft, Google, Uber, Amazon, PayPal, Twitter, Delhivery, Bharatpe, etc. have adopted the microservices architecture and witnessed great results. Netflix has been able to serve 93.8 million users globally and stream more than 10 billion hours of movies and shows. Amazon has transitioned to continuous deployment, and its engineers are able to deploy code every 11.7 seconds. &lt;/p&gt;

&lt;p&gt;Now that you know the benefits of the microservices architecture, why don’t you get started and deploy your applications on Kubernetes in the easiest possible way? &lt;a href="https://github.com/devtron-labs/devtron"&gt;Devtron&lt;/a&gt;, on its first open-source anniversary, has announced that it will help the first &lt;strong&gt;100 organizations to accelerate 🚀 their journey of adopting Kubernetes FREE OF COST&lt;/strong&gt;. Check out this open source project and do sign up for &lt;a href="https://devtron.ai/support.html"&gt;#AdoptK8sWithDevtron&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Decreasing Carbon footprints using Kubernetes</title>
      <dc:creator>KCD Chennai</dc:creator>
      <pubDate>Sat, 21 May 2022 10:19:45 +0000</pubDate>
      <link>https://dev.to/kcdchennai/decreasing-carbon-footprints-using-kubernetes-4abo</link>
      <guid>https://dev.to/kcdchennai/decreasing-carbon-footprints-using-kubernetes-4abo</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Author:&lt;/strong&gt; Prakarsh &amp;amp; Abhinav Dubey, Devtron Labs (Platinum sponsor, KCD Chennai)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you are running microservices, there is a high probability that you're running them on a cloud platform like AWS, GCP, Azure, etc. The services of these cloud providers are powered by data centers, which typically comprise thousands of interconnected servers and consume a substantial amount of electrical energy. It is estimated that data centers will use anywhere between 3% and 13% of global electricity by 2030 and will be responsible for a similar share of carbon emissions. This post steps through a case study of how Kubernetes can be used to minimize the carbon footprint of your organization's infrastructure.&lt;/p&gt;

&lt;h2&gt;Carbon Footprint&lt;/h2&gt;

&lt;p&gt;What exactly is a carbon footprint? It is the total amount of greenhouse gases (carbon dioxide, methane, etc.) released into the atmosphere due to human activity. Sounds familiar, doesn’t it? We have all read about the different ways greenhouse gases get released into the atmosphere and the need to curb them. But why are we talking about this in a technical blog? And how can Kubernetes help reduce your organization's carbon footprint?&lt;/p&gt;

&lt;p&gt;In the era of technology, everything is hosted on cloud servers, which are backed by massive data centers. In other words, we can say data centers are the brain of the internet. Right from the servers to storage blocks, everything is present in these data centers. &lt;/p&gt;

&lt;p&gt;All these machines require energy to operate, i.e., electricity, irrespective of whether the source of generation is renewable or nonrenewable. According to a survey conducted by the &lt;a href="https://www.agci.org/"&gt;Aspen Global Change Institute&lt;/a&gt;, data centers are one of the direct contributors to climate change due to the release of greenhouse gases. According to an article by &lt;a href="https://energyinnovation.org/"&gt;Energy Innovation&lt;/a&gt;, a data center that contains thousands of IT devices can use around 100 megawatts (MW) of electricity.&lt;/p&gt;

&lt;p&gt;How can Kubernetes help reduce the carbon footprint? Before answering that question, let’s learn how to calculate the carbon footprint of a server created with a public cloud provider like AWS. In AWS, we create instances of different sizes, computation power, storage capacity, etc. as per our needs. Let’s calculate the carbon footprint emitted by running an instance of type m5.2xlarge in the ap-south-1 region for 24 hours, using the &lt;a href="https://engineering.teads.com/sustainability/carbon-footprint-estimator-for-aws-instances/?estimation=true&amp;amp;instance_id=2310&amp;amp;region_id=2238&amp;amp;compute_hours=24#calculator"&gt;Carbon Footprint Estimator for AWS instances&lt;/a&gt;. As you can see in the image below, after entering the values we can easily estimate the carbon footprint for the instance: &lt;strong&gt;1,245.7 gCO₂eq.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--o0YV5oYF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nj6zk4k62brm89alqe95.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o0YV5oYF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nj6zk4k62brm89alqe95.png" alt="Image description" width="790" height="476"&gt;&lt;/a&gt;&lt;/p&gt;
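&lt;p&gt;As a back-of-the-envelope illustration (not the estimator’s actual methodology), the figure above implies a combined rate of roughly 51.9 gCO₂eq per instance-hour for this instance type, which a short Python sketch can turn into a reusable estimate:&lt;/p&gt;

```python
# Hypothetical sketch: the per-hour rate is back-derived from the figure
# quoted above (1,245.7 gCO2eq per 24 h for one m5.2xlarge in ap-south-1),
# not taken from the estimator's underlying dataset.
GCO2EQ_PER_INSTANCE_HOUR = 1245.7 / 24  # ~51.9 gCO2eq per instance-hour

def estimate_footprint(instances: int, hours: float) -> float:
    """Estimated emissions (gCO2eq) for `instances` each running for `hours`."""
    return GCO2EQ_PER_INSTANCE_HOUR * instances * hours

print(round(estimate_footprint(1, 24), 1))  # one instance for one day
print(round(estimate_footprint(20, 24)))    # twenty instances for one day
```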

&lt;h2&gt;Factors considered during the calculation&lt;/h2&gt;

&lt;p&gt;There are primarily two factors that contribute to carbon emissions in the Compute Resource carbon footprint analysis, which are discussed below:&lt;/p&gt;

&lt;h3&gt;Carbon emissions related to running the instance, including the data center PUE&lt;/h3&gt;

&lt;p&gt;Carbon emissions from electricity consumed are the major carbon footprint source in the tech industry. The publicly available AWS EC2 Carbon Footprint Dataset is used to calculate carbon emissions based on how much wattage is used at various CPU consumption levels.&lt;/p&gt;

&lt;h3&gt;Carbon emissions related to manufacturing the underlying hardware&lt;/h3&gt;

&lt;p&gt;Carbon emissions from the manufacturing of hardware components are another major contributor when it comes to carbon footprint calculations within the scope of this study.&lt;/p&gt;

&lt;h3&gt;How K8s helps&lt;/h3&gt;

&lt;p&gt;Kubernetes is one of the most adopted technologies for running containerized workloads. As per the survey by &lt;a href="https://www.purestorage.com/content/dam/pdf/en/analyst-reports/ar-portworx-pure-storage-2021-kubernetes-adoption-survey.pdf"&gt;Portworx&lt;/a&gt;, 68% of companies have increased their usage of Kubernetes and IT automation, and adoption keeps rising day by day. To deploy any application on Kubernetes, we need to create a Kubernetes cluster comprising some number of master and worker nodes. These nodes are simply instances/servers on which the applications are deployed. The number of nodes increases as the load increases, which eventually contributes more to the carbon footprint. But with the help of Kubernetes autoscaling, we can lower the node count, i.e., reduce the number of instances created, to match requirements.&lt;/p&gt;

&lt;p&gt;Let’s try to understand it with a use case.&lt;/p&gt;

&lt;p&gt;A logistics company uses Kubernetes in production to deploy all its microservices. A single microservice requiring around 60 replicas can occupy around 20 instances of the m5.2xlarge type in the ap-south-1 region. If we calculate the carbon footprint of this single microservice over 24 hours, it comes to around &lt;strong&gt;24,914 gCO₂eq.&lt;/strong&gt; That is the footprint of just one microservice, and an organization can have thousands of microservices running 24*7 workloads. Now, if you are wondering how this can be reduced, you are at the right place.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XX3hrxUB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rlwonv4m4bryw11bsr1b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XX3hrxUB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rlwonv4m4bryw11bsr1b.png" alt="Image description" width="782" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The maximum traffic that a logistics company's first/last mile app experiences is during the daytime, when deliveries happen. Generally, from around 8:00 AM there’s a steep increase in traffic, which peaks in the afternoon hours. That's when the app has the largest number of pod replicas and nodes/instances. After 8:00 PM the traffic decreases gradually. During the off-hours, the replica count for each microservice drops from an average of 60 to 6, which requires only 2 nodes/instances. The microservice uses throughput-metric-based horizontal pod autoscaling with KEDA.&lt;/p&gt;
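&lt;p&gt;For intuition, Kubernetes’ Horizontal Pod Autoscaler computes its replica target with a simple ratio: desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). The sketch below is a simplification (it omits the HPA’s tolerance band, stabilization windows and min/max bounds, and the throughput numbers are hypothetical), showing how the replica count could move between 6 and 60:&lt;/p&gt;

```python
import math

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float) -> int:
    """Kubernetes' HPA scaling rule, simplified: omits the tolerance band,
    stabilization windows and min/max replica bounds."""
    return math.ceil(current_replicas * current_metric / target_metric)

# Hypothetical numbers: target throughput is 100 req/s per replica.
print(desired_replicas(6, 1000.0, 100.0))  # peak traffic -> scale out to 60
print(desired_replicas(60, 10.0, 100.0))   # off-peak -> scale in to 6
```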

&lt;p&gt;Now, if we talk about the carbon emissions of one microservice after deploying it on Kubernetes paired with horizontal pod autoscaling:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With autoscaling: 51.9 gCO₂eq x 2 (nodes) x 12 (hrs, 8pm to 8am non-peak) + 51.9 gCO₂eq x 20 (nodes) x 12 (hrs, 8am to 8pm peak traffic) = 13,701.6 gCO₂eq&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Without autoscaling: 51.9 gCO₂eq x 20 (nodes) x 24 (hrs) = 24,912 gCO₂eq&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That's a whopping &lt;strong&gt;11,211 gCO₂eq (45% reduction)&lt;/strong&gt; in carbon emissions &lt;em&gt;[1]&lt;/em&gt;, and this is just for one microservice in 24 hrs. This translates to &lt;strong&gt;4,092 KgCO₂eq per year!&lt;/strong&gt; To put this in perspective, a Boeing 747-400 releases about &lt;strong&gt;90 KgCO₂eq&lt;/strong&gt; per hour of flight when it covers around 1,000 km &lt;em&gt;[2]&lt;/em&gt;.&lt;/p&gt;
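&lt;p&gt;The arithmetic above can be reproduced directly; this short Python sketch uses the same 51.9 gCO₂eq per node-hour rate quoted in the text:&lt;/p&gt;

```python
# Reproduces the article's comparison using its per-node-hour rate.
RATE = 51.9          # gCO2eq per node-hour
PEAK_NODES = 20      # nodes needed during 8am-8pm peak traffic
OFF_NODES = 2        # nodes needed during 8pm-8am off-peak
HOURS = 12           # length of each traffic window

with_autoscaling = RATE * OFF_NODES * HOURS + RATE * PEAK_NODES * HOURS
without_autoscaling = RATE * PEAK_NODES * 24
saving = without_autoscaling - with_autoscaling

print(round(with_autoscaling, 1))     # 13701.6 gCO2eq per day
print(round(without_autoscaling, 1))  # 24912.0 gCO2eq per day
print(round(100 * saving / without_autoscaling))  # ~45% reduction
```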

&lt;p&gt;Now think of how much your organization could reduce per year by migrating thousands of microservices to Kubernetes paired with efficient autoscaling.&lt;/p&gt;

&lt;h2&gt;Migrating to Kubernetes?&lt;/h2&gt;

&lt;p&gt;Kubernetes is the best container orchestration technology out there in the cloud-native market, but its learning curve is still a challenge for organizations looking to migrate. There are many Kubernetes-specific tools in the marketplace, like ArgoCD and Flux for deployments, Prometheus and Grafana for monitoring, Argo Workflows for creating parallel jobs, etc. The Kubernetes space has many options, but they need to be configured separately, which again is complex.&lt;/p&gt;

&lt;p&gt;That's where the open source tool - &lt;a href="https://github.com/devtron-labs/devtron"&gt;Devtron&lt;/a&gt;, can help you. Devtron configures and integrates all these tools that you would have to otherwise configure separately and lets you manage everything from one slick Dashboard. Devtron is a software delivery workflow for Kubernetes-based applications. It is trusted by thousands of users and used by organizations like &lt;a href="https://www.delhivery.com"&gt;Delhivery&lt;/a&gt;, &lt;a href="https://www.bharatpe.com"&gt;Bharatpe&lt;/a&gt;, &lt;a href="https://www.livspace.com"&gt;Livspace&lt;/a&gt;, &lt;a href="https://www.moglix.com"&gt;Moglix&lt;/a&gt;, etc, with good user communities across the globe. And the best thing is, it gives you a no-code Kubernetes deployment experience which means there's no need to write Kubernetes YAML at all.&lt;/p&gt;

&lt;p&gt;Devtron is running an initiative &lt;a href="https://devtron.ai/support.html"&gt;#AdoptK8sWithDevtron&lt;/a&gt; where it offers expert assistance and help to the first 100 organizations that want to migrate their microservices to Kubernetes.&lt;/p&gt;

&lt;p&gt;References&lt;br&gt;
[1] When compared to micro-services which are not configured to scale efficiently.&lt;br&gt;
[2] &lt;a href="https://catsr.vse.gmu.edu/SYST460/LectureNotes_AviationEmissions.pdf"&gt;https://catsr.vse.gmu.edu/SYST460/LectureNotes_AviationEmissions.pdf&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>Packet flow in OpenShift SDN</title>
      <dc:creator>KCD Chennai</dc:creator>
      <pubDate>Fri, 20 May 2022 11:20:41 +0000</pubDate>
      <link>https://dev.to/kcdchennai/packet-flow-in-openshift-sdn-46oi</link>
      <guid>https://dev.to/kcdchennai/packet-flow-in-openshift-sdn-46oi</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Author:&lt;/strong&gt; Red Hat Inc. (Platinum Sponsor, KCD Chennai 2022)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;OpenShift Container Platform uses Software Defined Networking (SDN) as the default Container Network Interface (CNI) provider. OpenShift SDN sets up cluster networking that facilitates communication between pods. It does so by configuring an overlay network using Open vSwitch (OVS).&lt;/p&gt;

&lt;p&gt;In every node, an OVS bridge (br0) is created. Whenever any packet reaches the OVS bridge, the packet is checked against the flow rules in the flow-tables. After processing the packet through the flow-tables, all the actions corresponding to the matching flow rules are applied to the packet.&lt;/p&gt;
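&lt;p&gt;Conceptually, a flow table is an ordered set of prioritized match/action rules where the highest-priority matching rule wins. The Python sketch below illustrates that lookup behaviour; the rules and 10.128.x.x pod-subnet addresses are hypothetical, and this is not real OVS flow syntax:&lt;/p&gt;

```python
import ipaddress

# Hypothetical flow table: (priority, match fields, actions). The bridge
# evaluates a packet against the rules and applies the actions of the
# highest-priority rule that matches.
FLOW_TABLE = [
    (300, {"dst_cidr": "10.128.2.0/23"}, ["output:vxlan0"]),      # pods on a remote node
    (200, {"dst_cidr": "10.128.0.0/23"}, ["output:veth_local"]),  # pods on this node
    (0,   {},                            ["drop"]),               # default: no match
]

def apply_flow_table(packet_dst: str) -> list[str]:
    """Return the actions of the highest-priority rule matching the destination."""
    ip = ipaddress.ip_address(packet_dst)
    for _priority, fields, actions in sorted(FLOW_TABLE, key=lambda rule: -rule[0]):
        cidr = fields.get("dst_cidr")
        if cidr is None or ip in ipaddress.ip_network(cidr):
            return actions
    return ["drop"]

print(apply_flow_table("10.128.0.5"))  # local pod -> forwarded to its veth
print(apply_flow_table("10.128.2.9"))  # remote pod -> sent out via vxlan0
```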

&lt;p&gt;An internal interface (tun0) is configured on each node and is connected to the OVS bridge via a port. A route is added to the node’s routing table to send all traffic destined for the pod network CIDR to the tun0 interface. tun0 then passes the traffic to the OVS bridge, which processes the packet through the flow tables.&lt;/p&gt;

&lt;p&gt;OpenShift SDN uses Virtual Extensible LAN (VXLAN) tunnels to set up an overlay network between all the nodes in a cluster. The VXLAN tunnels are used to send traffic from one node to another, thus also enabling communication between pods on different nodes. In every node, a VXLAN interface (vxlan0) is configured and is also connected to the OVS bridge via a port. Whenever a packet needs to be forwarded from one node to another, the bridge (br0) on the first node sends the packet to its vxlan0 interface. The vxlan0 interface encapsulates the packet and sends it to the other node. When the packet is received by the vxlan0 interface on the other node, the encapsulation is first removed, and the packet is then sent to that node’s bridge (br0) for further processing.&lt;/p&gt;
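&lt;p&gt;For intuition, VXLAN (RFC 7348) wraps the original Ethernet frame in a UDP datagram (destination port 4789) whose 8-byte VXLAN header carries a 24-bit Virtual Network Identifier (VNI). A minimal Python sketch of the encapsulation/decapsulation step (it models only the VXLAN header plus inner frame, not the outer UDP/IP layers):&lt;/p&gt;

```python
import struct

VXLAN_UDP_PORT = 4789  # standard VXLAN destination port

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    # VXLAN header: flags byte 0x08 ("VNI present") + 24 reserved bits,
    # then the 24-bit VNI + 8 reserved bits.
    header = struct.pack("!II", 0x08 << 24, vni << 8)
    return header + inner_frame

def vxlan_decap(payload: bytes) -> tuple[int, bytes]:
    flags_word, vni_word = struct.unpack("!II", payload[:8])
    assert flags_word >> 24 == 0x08, "VNI flag not set"
    return vni_word >> 8, payload[8:]

# Node 1's vxlan0 wraps the frame; node 2's vxlan0 unwraps it.
wire = vxlan_encap(b"inner-ethernet-frame", vni=0x123456)
vni, frame = vxlan_decap(wire)
print(hex(vni), frame)
```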

&lt;p&gt;Every pod that is provisioned on a node is configured with a virtual ethernet pair, one interface in the pod’s network namespace and the other interface in the node’s network stack. The ethernet interface that is configured in the node’s network stack is connected to the OVS bridge via a port.&lt;/p&gt;

&lt;p&gt;A pod can have three different types of communication: 1) pod to pod communication, 2) pod to service communication, and 3) pod to external host communication. The packet flow from source to destination for each of these scenarios is described in the following sections.&lt;/p&gt;

&lt;h2&gt;Pod to pod communication&lt;/h2&gt;

&lt;p&gt;In pod to pod communication, a pod tries to communicate with another pod directly. For example, pod A wants to communicate with pod B. In this case, communicating directly means that pod A uses either pod B’s DNS name or pod B’s IP address. The two pods may be in the same node or in different nodes. In the following sections we will see how both scenarios are handled in OpenShift SDN.&lt;/p&gt;

&lt;h3&gt;In the same node&lt;/h3&gt;

&lt;p&gt;Fig. 1 shows how pod to pod communication takes place when both the pods belong to the same node.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrezhnm1x6zyk8wcbfls.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrezhnm1x6zyk8wcbfls.png" alt="Pod to pod communication in the same node"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fig. 1: Pod to pod communication in the same node.&lt;/p&gt;

&lt;p&gt;Pod A sends a packet through its own ethernet interface (eth0) with pod B’s address as the destination. From eth0, the packet is sent to the corresponding virtual ethernet (veth0) interface in the node’s network stack. Since veth0 is connected to the OVS bridge (br0), the packet is forwarded to it. When the bridge receives the packet, it is checked for matches in the flow-tables and the corresponding actions are applied. As the packet is destined for pod B in the same node, the bridge forwards the traffic to the virtual ethernet interface (veth1) in the node’s network stack corresponding to pod B’s ethernet interface (eth0).&lt;/p&gt;

&lt;h3&gt;
  
  
  In different nodes
&lt;/h3&gt;

&lt;p&gt;Fig. 2 shows how pod to pod communication takes place when the communicating pods belong to different nodes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4pesx6r5hfk281lh0m19.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4pesx6r5hfk281lh0m19.png" alt="Pod to pod communication in the different nodes."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fig. 2: Pod to pod communication in different nodes.&lt;/p&gt;

&lt;p&gt;Pod A sends a packet to its own ethernet interface (eth0) with pod B’s address as the destination. From eth0, the packet is sent to the corresponding virtual ethernet interface (veth0) in the node’s network stack. As veth0 is connected to the OVS bridge (br0), the packet is forwarded to the bridge. The actions of the matching flows in the flow-tables are executed and, because the packet is destined for a pod on a different node, the bridge forwards the traffic to the Virtual Extensible LAN interface (vxlan0) of the node.&lt;br&gt;
The vxlan0 interface encapsulates the packet and sends the traffic from node 1 to node 2 through the VXLAN tunnel. Once the packet is received by the vxlan0 interface on node 2, the encapsulation is removed. Then, the vxlan0 interface forwards the packet to the bridge (br0) of node 2. The packet is checked for matches in the flow-tables. As the packet is destined for the local pod B, the corresponding flows match, and the packet is forwarded to the virtual ethernet interface (veth0) in the node’s network stack corresponding to pod B’s ethernet interface (eth0).&lt;/p&gt;
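&lt;p&gt;To see the cross-node path, you can look for the VXLAN port and the flows that forward to it. Again, an illustrative sketch assuming the default br0 and vxlan0 names:&lt;/p&gt;

```shell
# Show the VXLAN tunnel port on the bridge (VXLAN itself runs over UDP 4789).
ovs-vsctl find interface type=vxlan

# Flows whose actions output to vxlan0 set the destination node's IP as the
# tunnel endpoint before forwarding (look for tun_dst in the actions).
ovs-ofctl -O OpenFlow13 dump-flows br0 | grep vxlan
```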

&lt;h2&gt;
  
  
  Pod to Pod communication via service
&lt;/h2&gt;

&lt;p&gt;In this scenario, a pod tries to communicate with a set of pods by using the DNS name or IP address of a service. For example, pod A wants to communicate with a set of pods represented by service S. Let’s say packets sent by pod A to service S get redirected to pod B. Pods A and B may be on the same node or on different nodes. In the following sections we will see how both scenarios are handled in OpenShift SDN.&lt;/p&gt;

&lt;h3&gt;
  
  
  In the same node
&lt;/h3&gt;

&lt;p&gt;Fig. 3 shows how pod to pod communication via service takes place when both the pods belong to the same node.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiyxcjwuv3phpcardyqu2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiyxcjwuv3phpcardyqu2.png" alt="Pod to service communication in the same node."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fig. 3: Pod to service communication in the same node.&lt;/p&gt;

&lt;p&gt;Pod A sends a packet to its own ethernet interface (eth0) with service S’s address as the destination. As eth0’s corresponding virtual ethernet interface (veth0) in the node’s network stack is connected to the OVS bridge (br0), the packet is forwarded to the bridge. The packet is checked for matches in the flow-tables and the corresponding actions are executed. As the packet is destined for service S, it is forwarded to the internal interface (tun0). There, the iptables rules are processed and a backend pod (pod B) is chosen for service S; the destination of the packet is rewritten from service S to pod B. Since the packet is now destined for a pod, tun0 forwards it back to the bridge (br0). The packet is matched against the flow rules in the flow-tables and is forwarded to the virtual ethernet interface (veth1) in the node’s network stack corresponding to pod B’s ethernet interface (eth0).&lt;/p&gt;
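&lt;p&gt;The iptables processing mentioned above can be observed on the node. A sketch, assuming a service named "my-service"; the hash suffix on the KUBE-SVC chain is made up for illustration:&lt;/p&gt;

```shell
# The service's cluster IP is matched in the KUBE-SERVICES chain of the
# nat table...
iptables -t nat -L KUBE-SERVICES -n | grep my-service

# ...which jumps to a per-service KUBE-SVC-* chain that load-balances across
# per-endpoint KUBE-SEP-* chains; each KUBE-SEP-* chain DNATs the packet to
# one backend pod IP.
iptables -t nat -L KUBE-SVC-ABCDEFGHIJKLMNOP -n
```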

&lt;h3&gt;
  
  
  In different nodes
&lt;/h3&gt;

&lt;p&gt;Fig. 4 shows how pod to pod communication via service takes place when the communicating pods belong to different nodes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9923rjxzw12b0hn2loon.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9923rjxzw12b0hn2loon.png" alt="Pod to service communication in different nodes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fig. 4: Pod to service communication in different nodes.&lt;/p&gt;

&lt;p&gt;Pod A sends a packet to its own ethernet interface (eth0) with service S’s address as the destination. As eth0’s corresponding virtual ethernet interface (veth0) in the node’s network stack is connected to the OVS bridge (br0), the packet is forwarded to the bridge. The packet is checked for matches in the flow-tables and the corresponding actions are executed. As the packet is destined for service S, it is forwarded to the internal interface (tun0). There, the iptables rules are processed and a backend pod (pod B) is chosen for service S; the destination of the packet is rewritten from service S to pod B. Since the packet is now destined for a pod, tun0 forwards it back to the br0 interface. As the packet is destined for pod B on a different node, the corresponding flow rules match and the bridge forwards the traffic to the Virtual Extensible LAN interface (vxlan0).&lt;/p&gt;

&lt;p&gt;The vxlan0 interface encapsulates the packet and sends the traffic from node 1 to node 2 through the VXLAN tunnel. Once the packet is received by the vxlan0 interface on node 2, the encapsulation is removed. Then, the vxlan0 interface forwards the packet to the bridge (br0) of node 2. The packet is checked for matches in the flow-tables. As the packet is destined for the local pod B, the corresponding flow rules match, and the packet is forwarded to the virtual ethernet interface (veth0) in the node’s network stack corresponding to pod B’s ethernet interface (eth0).&lt;/p&gt;

&lt;h2&gt;
  
  
  Pod to external host communication
&lt;/h2&gt;

&lt;p&gt;In this case, a pod tries to communicate with an external host. Traffic can be sent from a pod to an external host in different ways: 1) through special pods called Egress Router pods, 2) through the same node using the node IP, and 3) through an Egress IP (on either the same node or a different node). In the following sections we will see how each scenario is handled in OpenShift SDN.&lt;/p&gt;

&lt;h3&gt;
  
  
  Through Egress Router
&lt;/h3&gt;

&lt;p&gt;Fig. 5 shows how pod to external host communication takes place via an Egress Router pod.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6g3kfzusfr9t9s2zhafc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6g3kfzusfr9t9s2zhafc.png" alt="Pod to external host communication through Egress Router"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fig. 5: Pod to external host communication through Egress Router.&lt;/p&gt;

&lt;p&gt;The Egress Router pod is configured with an additional MACVLAN interface (macvlan0) in the node’s network stack. macvlan0 is assigned an IP address, and any traffic sent out through the interface is NAT-ed using that address.&lt;/p&gt;

&lt;p&gt;Pod A sends a packet with the Egress Router pod’s address as the destination. In this example the Egress Router pod is provisioned on a different node, but it can also be on the same node. In both cases, the packet flow from pod A to the Egress Router pod happens in the same way as explained above in the pod to pod communication section. Once the packet is received by the Egress Router pod, it sends the packet on to the external host (based on how the Egress Router pod is configured). The source address of the packet will be the IP address of the macvlan0 interface.&lt;/p&gt;
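&lt;p&gt;As a rough sketch of how such a pod is set up (names, addresses, and the file egress-router.yaml are examples; see the OpenShift egress router documentation for the full pod spec):&lt;/p&gt;

```shell
# The egress router pod spec carries:
#   - the annotation pod.network.openshift.io/assign-macvlan: "true",
#     which tells openshift-sdn to create the macvlan0 interface;
#   - EGRESS_SOURCE      - the IP address assigned to macvlan0;
#   - EGRESS_GATEWAY     - the gateway of the node's local subnet;
#   - EGRESS_DESTINATION - the external host traffic is forwarded to.
oc create -f egress-router.yaml

# Other pods then send traffic to this pod, typically through a service
# that fronts it.
oc get pod egress-router -o wide
```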

&lt;h3&gt;
  
  
  Through the same node
&lt;/h3&gt;

&lt;p&gt;Fig. 6 shows how pod to external host communication takes place through the same node using the node IP.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8ewf4pjss9377cfcgje.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8ewf4pjss9377cfcgje.png" alt="Pod to external host communication through the same node."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fig. 6: Pod to external host communication through the same node.&lt;/p&gt;

&lt;p&gt;Whenever a pod tries to send a packet to an external host, the default method is to send the packet from the same node. When pod A sends a packet destined for the external host, the packet is matched against the flow rules after it reaches the OVS bridge (br0). As the destination is an external host, the packet is sent to the internal interface (tun0) in the node’s network stack. The packet is then sent out through the eth0 interface after being NAT-ed using the node IP assigned to eth0.&lt;/p&gt;
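&lt;p&gt;The NAT step can be seen in the node’s POSTROUTING rules. Illustrative only; the exact chain names vary between openshift-sdn versions:&lt;/p&gt;

```shell
# Look for the masquerade rule that SNATs traffic from the pod network
# to the node IP before it leaves eth0.
iptables -t nat -S POSTROUTING | grep -i masq
```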

&lt;h3&gt;
  
  
  Through Egress IPs
&lt;/h3&gt;

&lt;p&gt;Fig. 7 shows how pod to external host communication takes place through an Egress IP.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw49fksfn1yijvcl6jvrx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw49fksfn1yijvcl6jvrx.png" alt="Pod to external host communication through Egress IP"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fig. 7: Pod to external host communication through Egress IP.&lt;/p&gt;

&lt;p&gt;As mentioned above, the default method of sending a packet destined for an external host is through the same node using the node IP. However, a namespace can be configured with Egress IP(s) for sending packets to external hosts. The Egress IPs are provisioned on the ethernet interfaces of the nodes in addition to the node IP. When a pod belonging to a namespace configured with Egress IP(s) sends a packet destined for an external host, the packet is first sent to the node on which the chosen Egress IP is provisioned. If the chosen Egress IP is provisioned on the same node, the flow is similar to sending the packet using the node IP, except that the NAT-ed packet has the chosen Egress IP as its source address. If the chosen Egress IP is provisioned on another node, the packet is sent from one node to the other through the VXLAN tunnel. Once the packet reaches the OVS bridge (br0) on the node with the chosen Egress IP, it is matched against the flow rules. The packet is then sent to the tun0 interface which, in turn, sends it to the eth0 interface to be forwarded to the external host. Before the packet is forwarded, it is NAT-ed using the chosen Egress IP.&lt;/p&gt;
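&lt;p&gt;For reference, manually assigning an Egress IP involves patching both the namespace and the chosen node. A sketch with example names and an example address:&lt;/p&gt;

```shell
# 1) Configure the namespace's NetNamespace with the Egress IP:
oc patch netnamespace project1 --type=merge -p '{"egressIPs": ["192.168.1.100"]}'

# 2) Provision that IP on a chosen node's HostSubnet, so it is added to the
#    node's primary interface alongside the node IP:
oc patch hostsubnet node1 --type=merge -p '{"egressIPs": ["192.168.1.100"]}'
```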

&lt;h2&gt;
  
  
  What’s next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;More details can be found here: &lt;a href="https://github.com/openshift/sdn" rel="noopener noreferrer"&gt;https://github.com/openshift/sdn&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Official Red Hat OpenShift documentation for OpenShift SDN: &lt;a href="https://docs.openshift.com/container-platform/4.10/networking/openshift_sdn/about-openshift-sdn.html" rel="noopener noreferrer"&gt;https://docs.openshift.com/container-platform/4.10/networking/openshift_sdn/about-openshift-sdn.html&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>openshift</category>
      <category>kubernetes</category>
      <category>sdn</category>
      <category>linux</category>
    </item>
    <item>
      <title>Difference between Helm2 and Helm3</title>
      <dc:creator>KCD Chennai</dc:creator>
      <pubDate>Fri, 20 May 2022 11:01:25 +0000</pubDate>
      <link>https://dev.to/kcdchennai/difference-between-helm2-and-helm3-52b6</link>
      <guid>https://dev.to/kcdchennai/difference-between-helm2-and-helm3-52b6</guid>
      <description>&lt;p&gt;&lt;strong&gt;&lt;em&gt;Author&lt;/em&gt;&lt;/strong&gt; : &lt;em&gt;KodeKloud (Platinum Sponsor, KCD Chennai 2022)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In this blog, we will see what the difference is between Helm 2 and Helm 3.&lt;/p&gt;

&lt;p&gt;As administrators had to manage more and more Kubernetes objects in their clusters to run their apps and infrastructure, a natural need for something to manage all of this complexity arose. So, the project called Helm appeared under the CNCF’s (Cloud Native Computing Foundation) umbrella. Since the initial launch in 2016, the project has matured and gotten better and better. The improvements were also made possible by the fact that Kubernetes itself was improving, so Helm had more tools at its disposal that it could leverage directly from Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  No More Tiller
&lt;/h2&gt;

&lt;p&gt;When Helm 2 was around, Kubernetes lacked features such as Role-Based Access Control and Custom Resource Definitions. To allow Helm to do its magic, an extra component, called Tiller, had to be installed in the Kubernetes cluster. So, whenever you wanted to install a Helm chart, you used the Helm (client) program installed on your local computer. This communicated with Tiller, which was running inside the cluster. Tiller, in turn, communicated with Kubernetes and took action to make whatever you requested happen. So, Tiller was the middleman, so to speak.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Rnaq5lWm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6r5q3ostxilp4vlipqzv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Rnaq5lWm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6r5q3ostxilp4vlipqzv.png" alt="Image description" width="602" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Besides the fact that an extra component sitting between you and Kubernetes adds complexity, there were also some security concerns. By default, Tiller was running in “God mode” or otherwise said, it had the privileges to do anything it wanted. This was good since it allowed it to make whatever changes necessary in your Kubernetes cluster to install your charts. But this was bad since it allowed any user with Tiller access to do whatever they wanted in the cluster.&lt;/p&gt;

&lt;p&gt;After cool stuff like Role-Based Access Control (RBAC) and Custom Resource Definitions appeared in Kubernetes, the need for Tiller decreased, so it was removed entirely in Helm 3. Now there’s nothing sitting between Helm and the cluster.&lt;br&gt;
That is, when you use the Helm program on your local computer, this connects directly to the cluster (Kubernetes API server) and starts to work its magic.&lt;/p&gt;

&lt;p&gt;Furthermore, with RBAC, security is much improved and any user can be limited in what they can do with Helm. Before, you had to set these limits in Tiller, and that was not the best option. But with RBAC built from the ground up to fine-tune user permissions in Kubernetes, it’s now straightforward to do. As far as Kubernetes is concerned, it doesn’t matter if the user is trying to make changes within the cluster with kubectl or with helm commands. The user requesting the changes has the same RBAC-allowed permissions whatever tool they use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three-way strategic merge patch
&lt;/h2&gt;

&lt;p&gt;Now, this is probably a more important change than what we discussed before with Tiller.&lt;/p&gt;

&lt;p&gt;The name might sound intimidating, but don’t worry, at the end of this section we’ll see it’s actually simple, but a very smart thing that can prove quite useful.&lt;/p&gt;

&lt;p&gt;Helm has something like a snapshot feature. Here’s an example:&lt;/p&gt;

&lt;p&gt;You can use a chart to install a full-blown WordPress website. This will create revision number 1 for this install. Then, if you change something, for example you upgrade to a newer chart to upgrade your WordPress install, you will arrive at revision number 2. These revisions can be considered something like snapshots: the exact state of a Kubernetes package at that moment in time. If there’s a need, you can roll back to revision number 1. This would get your package/app to the same state it was in when you first installed your chart.&lt;/p&gt;

&lt;p&gt;New revisions are created whenever important changes are made with the Helm command. For example, when we install a package, a revision is created. When we upgrade that package, a new revision appears. Even when we roll back, a new revision is created.&lt;/p&gt;
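&lt;p&gt;The revision flow looks roughly like this ("myapp" and the chart path are example names):&lt;/p&gt;

```shell
helm install myapp ./mychart    # creates revision 1
helm upgrade myapp ./mychart    # creates revision 2
helm history myapp              # lists all revisions and their status
helm rollback myapp 1           # creates revision 3, restoring revision 1's state
```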

&lt;p&gt;This is not like a typical backup/restore feature, in the sense that you do not get old data back. If you deleted a database in a persistent volume, this does not restore the persistent volume with the old data. What this does is bring all Kubernetes objects back to their old state, their old declarations, as they were at the time the revision was created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--26_yzHfb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ettzhwbzoflggps4arax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--26_yzHfb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ettzhwbzoflggps4arax.png" alt="Image description" width="602" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another way to think about this is that a rollback restores pretty much everything back to the way it was, EXCEPT for persistent data. Persistent data should be backed up with regular methods.&lt;/p&gt;

&lt;p&gt;Helm 2 was less sophisticated when it came to how it did such rollbacks. To understand what was lacking, the official documentation page gives us this example:&lt;/p&gt;

&lt;p&gt;You install a chart. This creates revision number 1 and it contains a deployment with 3 replicas. But someone, for some reason, brings down the number of replicas to 0, with a command like:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl scale --replicas=0 deployment/myapp&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now, this does not create a new revision, as it wasn’t a helm upgrade, install, or rollback command. It was just a “manual” change done without Helm.&lt;/p&gt;

&lt;p&gt;But we still have revision 1 available, with the original state. So, no problem, we think: we just roll back to the original:&lt;br&gt;
&lt;code&gt;helm rollback myapp&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;But Helm 2 compares the chart currently used with the chart we want to revert to. Since this is the original install, revision 1, and we want to roll back to revision 1, the current chart and the old chart are identical. We did not change any chart; we just manually edited a small Kubernetes object. Helm 2 concludes that nothing should be changed, so nothing happens. The replica count remains at 0.&lt;/p&gt;

&lt;p&gt;Helm 3, on the other hand, is more intelligent. It compares the chart we want to revert to, the chart currently in use, and also &lt;strong&gt;the live state (what our Kubernetes objects currently look like, their declarations in .yaml form)&lt;/strong&gt;. This is where that fancy &lt;strong&gt;“Three-way strategic merge patch”&lt;/strong&gt; name comes from. By also looking at the live state, it notices that the live replica count is 0 while the replica count in revision 1, which we want to revert to, is 3, so it makes the necessary changes to come back to the original state.&lt;br&gt;
Besides rollbacks, there are also things like upgrades to consider, where Helm 2 was also lacking. For example, say you install a chart, but then you make some changes to some of the Kubernetes objects it installed. It all works nicely until you perform an upgrade. Helm 2 looks only at the old chart and the new chart you want to upgrade to, so all your changes are lost since they don’t exist in either chart. But Helm 3, as mentioned, looks at the charts and also at the live state. It notices you added some stuff of your own, so it performs the upgrade while preserving anything you might have added.&lt;/p&gt;
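&lt;p&gt;The rollback scenario above can be sketched as the following command sequence (release and deployment names are examples):&lt;/p&gt;

```shell
kubectl scale --replicas=0 deployment/myapp   # manual change, no new Helm revision
helm history myapp                            # still shows only revision 1
helm rollback myapp 1                         # Helm 3 also compares the live state,
kubectl get deployment myapp                  # so the replica count returns to 3
```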

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BjhiAt2a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/17jfgh6sfdry25ijnh0e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BjhiAt2a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/17jfgh6sfdry25ijnh0e.png" alt="Image description" width="602" height="341"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;You can visit this &lt;a href="https://helm.sh/docs/faq/#improved-upgrade-strategy-3-way-strategic-merge-patches"&gt;link&lt;/a&gt; for a more in-depth explanation.&lt;/p&gt;

&lt;p&gt;What was mentioned above are probably the biggest changes in Helm 3. There are some other smaller changes, but they don’t really affect how you’ll work with the newer version of Helm, especially if you’re a new user.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check out the Helm for the Absolute Beginners course &lt;a href="https://bit.ly/3wD9ZR6"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Check out the Complete Kubernetes learning path &lt;a href="https://bit.ly/3PuNB52"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>helm</category>
    </item>
    <item>
      <title>Your Roadmap to Become a DevOps Engineer in 2022</title>
      <dc:creator>KCD Chennai</dc:creator>
      <pubDate>Fri, 20 May 2022 09:08:16 +0000</pubDate>
      <link>https://dev.to/kcdchennai/your-roadmap-to-become-a-devops-engineer-in-2022-1k0h</link>
      <guid>https://dev.to/kcdchennai/your-roadmap-to-become-a-devops-engineer-in-2022-1k0h</guid>
      <description>&lt;p&gt;&lt;strong&gt;&lt;em&gt;Author&lt;/em&gt;&lt;/strong&gt; : &lt;em&gt;KodeKloud (Platinum Sponsor, KCD Chennai 2022)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;DevOps Engineer has become an emotion and not just a job profile. People from all sorts of professions are willing to get their hands dirty and shift their careers toward DevOps engineering. Buzzwords aside, DevOps holds a unique position in the software field: while many technologies have evolved over time and vanished, this path does not seem likely to disappear any time soon. DevOps is here to stay, and we will witness a large number of companies embracing this approach sooner or later.&lt;/p&gt;

&lt;h2&gt;
  
  
  What exactly is DevOps?
&lt;/h2&gt;

&lt;p&gt;DevOps is a practice that emphasizes communication, collaboration, and learning between software developers and the IT professionals who manage production environments, while automating the deployment of software and infrastructure changes with utmost care. At its core, DevOps means combining development and operations into one unified team, so that a continuous process of learning, knowledge sharing, and shared responsibility happens seamlessly between the two.&lt;br&gt;
The idea of DevOps grew out of the Agile methodology and first gained attention in 2009.&lt;/p&gt;

&lt;h2&gt;
  
  
  The need for DevOps
&lt;/h2&gt;

&lt;p&gt;Many departments in companies are siloed and follow their own procedures. Especially in a software-powered organization, Devs often have no idea what is happening in Ops and vice versa, which creates confusion between teams and hurts overall company growth and individual productivity. The idea of DevOps is to bridge the gap between development and operations and support other departments, so that the workflow within the organization is smooth. This allows companies to fail early and learn early, and thereby quickly deliver software features and security updates. The ultimate goal of DevOps is to bring products to market faster, with more quality and reliability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Skills required to become a DevOps engineer:
&lt;/h2&gt;

&lt;p&gt;DevOps is a cultural phenomenon rather than an individual job role. It is more of a team sport and can’t be done alone. So, strictly speaking, there is no such thing as a “DevOps engineer”; firms have created this role for their own convenience.&lt;/p&gt;

&lt;h2&gt;
  
  
  So, is it about the tools?
&lt;/h2&gt;

&lt;p&gt;Nope:)&lt;br&gt;
But an understanding of DevOps tools like Docker, GitHub, Kubernetes, Terraform, Ansible, Puppet, etc. is still necessary, because that is what companies are looking for when hiring a DevOps engineer.&lt;/p&gt;

&lt;p&gt;But most of all, it is about learning the DevOps culture and mindset rather than tools. Most people also stress the automation aspect, but it is not all about automation. DevOps engineers should have a basic knowledge of scripting, programming, and frameworks. People coming to DevOps from other departments should understand what developers are trying to do in the development phase: how they manage the versions of their code, how they test, integrate, and deploy it to servers and, finally, how end users get the software to use. Once they understand how things are done theoretically and manually, without automation tools, the concepts become clearer and easier.&lt;/p&gt;

&lt;p&gt;In addition to this, Ops knowledge is also necessary to become good at DevOps. A DevOps engineer not only writes code and automates, but also has to know other related areas, for example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scripting and Linux basics&lt;/li&gt;
&lt;li&gt;Knowledge of different cloud providers&lt;/li&gt;
&lt;li&gt;Knowledge of how software development life cycle works (SDLC)&lt;/li&gt;
&lt;li&gt;Familiarity with source control and versioning and tools like GitHub &amp;amp; Bitbucket&lt;/li&gt;
&lt;li&gt;Experience with build tools&lt;/li&gt;
&lt;li&gt;Artifacts management tools like JFrog artifactory &amp;amp; Sonatype&lt;/li&gt;
&lt;li&gt;Infrastructure design and microservices&lt;/li&gt;
&lt;li&gt;Good communication skills&lt;/li&gt;
&lt;li&gt;Automation testing skills&lt;/li&gt;
&lt;li&gt;Understanding of infrastructure as code&lt;/li&gt;
&lt;li&gt;Troubleshooting skills&lt;/li&gt;
&lt;li&gt;Understanding the concepts of CI/CD and tools&lt;/li&gt;
&lt;li&gt;Knowledge of DevOps pipeline and how it works&lt;/li&gt;
&lt;li&gt;Knowing how systems scale - Horizontal scaling and vertical scaling&lt;/li&gt;
&lt;li&gt;Virtualization concepts&lt;/li&gt;
&lt;li&gt;Understanding of different DevOps success metrics like deployment frequency, lead time for changes, change failure rate, time to restore service, etc&lt;/li&gt;
&lt;li&gt;Containerization concepts and tools like Docker&lt;/li&gt;
&lt;li&gt;Container orchestration and tools like Kubernetes&lt;/li&gt;
&lt;li&gt;Software release cycle and management&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is CI, CD?
&lt;/h2&gt;

&lt;p&gt;Continuous integration (CI), as the name suggests, focuses on combining the work of individual developers into a shared repository or codebase. This can happen several times a day; the primary objective is to enable early detection of integration bugs and to allow for tighter cohesion and smoother development collaboration. The goal of CI is to quickly make sure a new code change from a developer is good and suitable for further use in the codebase.&lt;/p&gt;

&lt;p&gt;The aim of continuous delivery (CD) is to minimize the friction points that are inherent in the deployment phases. Typically, a team's implementation involves automating each of the steps to build deployments so that a safe code release can be done at any moment in time.&lt;/p&gt;

&lt;p&gt;Continuous delivery is the repetitive practice of building, testing, and delivering improvements to a software codebase with the help of automated tools. The key result of continuous delivery is code that is always in a deployable state.&lt;/p&gt;

&lt;p&gt;Many people confuse continuous delivery with continuous deployment, but the two are different in the DevOps space.&lt;/p&gt;

&lt;p&gt;Continuous deployment is a DevOps process with a much higher degree of automation, where a build and deployment occur automatically whenever a change is made to the code. Here, developer code changes are automatically detected and released to production instantly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where to start your DevOps journey?
&lt;/h2&gt;

&lt;p&gt;The problem is that companies very rarely hire freshers to work as DevOps engineers. That said, there is a huge skills gap in the industry: firms struggle to hire good DevOps candidates and often fail because of the scarcity of talent.&lt;br&gt;
Listed below are some resources and courses you can use to move into a DevOps career path:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read the book ‘&lt;a href="https://www.goodreads.com/book/show/17255186-the-phoenix-project"&gt;The Phoenix Project&lt;/a&gt;’&lt;/li&gt;
&lt;li&gt;Take this course &lt;a href="https://bit.ly/3wo9Sdf"&gt;DevOps pre-requisite course&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Watch this video by Rackspace that explains the &lt;a href="https://youtu.be/_I94-tJlovg"&gt;meaning of DevOps in simple English&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Read these interesting answers on Quora by experts on &lt;a href="https://www.google.com/search?biw=1866&amp;amp;bih=974&amp;amp;ei=VwVFXrb_Ep6X4-EP3PWTwAM&amp;amp;q=how+to+become+a+devops+engineer+quora&amp;amp;oq=become+a+devops+engineer&amp;amp;gs_l=psy-ab.1.0.0i71l8.0.0..88741301...0.5..0.0.0.......0......gws-wiz.GGBsaxxXu-w"&gt;becoming a good DevOps engineer&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Watch and learn side by side with this &lt;a href="https://bit.ly/3sGvYp2"&gt;Docker for beginners&lt;/a&gt; full free course by Mumshad (One of the top Udemy instructors)&lt;/li&gt;
&lt;li&gt;Follow these &lt;a href="https://techbeacon.com/devops/devops-100-top-leaders-practitioners-experts-follow"&gt;100 DevOps influencers on Twitter&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Join Developer community forums like &lt;a href="https://dev.to/"&gt;dev.to&lt;/a&gt;, &lt;a href="https://hashnode.com/"&gt;Hashnode&lt;/a&gt;, &lt;a href="https://dzone.com/"&gt;Dzone&lt;/a&gt;, &lt;a href="https://www.reddit.com/r/devops/"&gt;DevOps subreddit&lt;/a&gt;, &lt;a href="https://stackoverflow.com/"&gt;Stackoverflow&lt;/a&gt;, &lt;a href="https://devops.stackexchange.com/"&gt;DevOps StackExchange&lt;/a&gt;, &lt;a href="https://changelog.com/"&gt;Changelog&lt;/a&gt;, etc&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;DevOps is taking center stage and, as we have mentioned before, is becoming central to modern software development. DevOps engineers are among the highest-paid professionals in the world, and DevOps is one of the most in-demand tech jobs today. It is a good career path, and a proper plan and approach will get you a good job. Once you are in, keep learning: the DevOps space is always evolving, and new tools emerge every day.&lt;/p&gt;

&lt;p&gt;By the way, it can be difficult to get hired as a DevOps engineer without prior work experience or knowledge of the various tools and automation techniques. To help with that, we at KodeKloud have built &lt;strong&gt;KodeKloud Engineer&lt;/strong&gt;, where you gain free DevOps work experience by solving real DevOps problems and challenges, which can help you get hired for a DevOps role. Sign up for free &lt;a href="https://bit.ly/3wzTVj7"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Migrating Stateful Applications between Kubernetes Clusters using Crane</title>
      <dc:creator>KCD Chennai</dc:creator>
      <pubDate>Tue, 17 May 2022 15:43:05 +0000</pubDate>
      <link>https://dev.to/kcdchennai/migrating-stateful-applications-between-kubernetes-clusters-using-crane-3pe</link>
      <guid>https://dev.to/kcdchennai/migrating-stateful-applications-between-kubernetes-clusters-using-crane-3pe</guid>
      <description>&lt;p&gt;&lt;strong&gt;&lt;em&gt;Author :&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;John Matthews, Savitha Raghunathan , Red Hat (Platinum Sponsor, KCD Chennai 2022)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This post introduces &lt;a href="https://github.com/konveyor/crane"&gt;Crane&lt;/a&gt;, a tool under the &lt;a href="https://www.konveyor.io/"&gt;Konveyor&lt;/a&gt; community that helps application owners migrate Kubernetes workloads and their state between clusters.&lt;/p&gt;

&lt;p&gt;Migrating an application between Kubernetes clusters may be more nuanced than one would imagine.  In an ideal situation, this would be as simple as applying the YAML manifests to the new cluster and adjusting DNS records to redirect external traffic, yet often there is more that is needed.  Below are a few of the common concerns that need to be addressed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;YAML manifests - do we have the original YAML manifests stored in version control or accessible so we can reapply to the new cluster?&lt;/li&gt;
&lt;li&gt;Configuration drift - if we do have the YAML manifests, do we have confidence they are still accurate and represent the application as it’s running in the cluster? Perhaps the application has been running for a while, been modified, and we no longer have confidence we can reproduce it exactly as it’s currently running.&lt;/li&gt;
&lt;li&gt;State - we may need to address persisted state that has been generated inside of the cluster, either small elements of state such as generated certificates stored in a Secret, data stored in Custom Resources, or gigabytes of data in persistent volumes.
&lt;/li&gt;
&lt;li&gt;Customizations needed for a new environment - we may be migrating across cloud vendors or environments that require transformations to the applications so they can run in the new environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Crane helps users do more than handle a point-in-time migration of a workload; it is intended to help users adopt current best practices, such as onboarding to GitOps, by reconstructing redeployable YAML manifests from an inspection of the running application.  The project is the result of several years of experience performing large-scale production Kubernetes migrations and addressing the lessons learned.&lt;/p&gt;

&lt;p&gt;Crane follows the Unix philosophy of building small, sharply focused tools that can be assembled in powerful ways.  It is designed with transparency and ease-of-diagnostics in mind. It drives migration through a pipeline of non-destructive tasks that output results to disk so the operation can be easily audited and versioned without impacting live workloads. The tasks can be run repeatedly and will output consistent results given the same inputs, without side effects on the system at large.&lt;/p&gt;

&lt;p&gt;The process uses a few tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/konveyor/crane"&gt;crane&lt;/a&gt;: The command-line tool that migrates applications to the terminal.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/konveyor/crane-lib"&gt;crane-lib&lt;/a&gt;: The brains behind Crane functionality responsible for transforming resources.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/konveyor/crane-plugins"&gt;crane-plugins&lt;/a&gt;: Collection of plugins from the Konveyor community based on experience from performing Kube migrations.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/konveyor/crane-plugin-openshift"&gt;crane-plugin-openshift&lt;/a&gt;: An optional plugin specifically tailored to manage OpenShift migration workloads and an example of a repeatable best practice.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/backube/pvc-transfer"&gt;pvc-transfer&lt;/a&gt;: The library that powers the Persistent Volume migration ability, shared with the &lt;a href="https://volsync.readthedocs.io/en/stable/index.html"&gt;VolSync &lt;/a&gt; project.  State migration of Persistent Volumes is handled by rsync allowing storage migrations between different storage classes.
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/konveyor/crane-runner"&gt;crane-runner&lt;/a&gt;: A collection of resources showing how to leverage Tekton to build migration workflows with Crane&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  A Walkthrough Migration using Crane
&lt;/h2&gt;

&lt;p&gt;Crane works by exporting the application-specific resources from one cluster, transforming them into re-deployable manifests, and applying the transformed manifests to the destination cluster. We will be using &lt;a href="https://github.com/konveyor/crane-runner"&gt;crane-runner&lt;/a&gt; to migrate a stateful application.&lt;/p&gt;
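&lt;p&gt;At a high level, the export/transform/apply flow looks like the sketch below. This is illustrative only: flag names and output directories may differ between Crane versions, and the &lt;code&gt;guestbook&lt;/code&gt; namespace and &lt;code&gt;destination&lt;/code&gt; context are assumed names for this example, so consult the linked guide for the exact commands.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Export the application's resources from the source cluster to disk
crane export --namespace=guestbook --export-dir=export

# Generate transformations (plugins strip cluster-specific fields)
crane transform --export-dir=export --transform-dir=transform

# Render re-deployable manifests from the exports plus transformations
crane apply --export-dir=export --transform-dir=transform --output-dir=output

# Review the manifests on disk, then apply them to the destination cluster
kubectl --context destination apply -f output/
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Because each step writes its results to disk, the intermediate manifests can be reviewed, versioned, or checked into Git before anything touches the destination cluster.&lt;/p&gt;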

&lt;h3&gt;
  
  
  Requirements
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; A Linux VM with a minimum of 2 vCPUs and 8 GB of RAM. If using an Amazon EC2 instance, the recommended VM size is t2.large.&lt;/li&gt;
&lt;li&gt; Ensure the following tools are installed and available in the PATH:
&lt;a href="https://podman.io/getting-started/installation"&gt;Podman&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/"&gt;kubectl&lt;/a&gt;, &lt;a href="https://minikube.sigs.k8s.io/docs/start/"&gt;minikube&lt;/a&gt;, &lt;a href="https://kubectl.docs.kubernetes.io/installation/kustomize/binaries/"&gt;kustomize&lt;/a&gt;, &lt;a href="https://github.com/stedolan/jq"&gt;jq&lt;/a&gt;, &lt;a href="https://github.com/mikefarah/yq"&gt;yq&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Stateful Guestbook App Migration
&lt;/h3&gt;

&lt;p&gt;We will migrate a simple PHP Guestbook application with a Redis database that uses the underlying persistent storage. The diagram below shows the high-level operations performed by Crane as part of this migration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Lyt8gtfk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/in03glex3pqweon3ksl6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Lyt8gtfk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/in03glex3pqweon3ksl6.png" alt="Image description" width="624" height="245"&gt;&lt;/a&gt;&lt;br&gt;
Figure: Stateful App Migration using Crane&lt;/p&gt;

&lt;p&gt;Follow &lt;a href="https://github.com/konveyor/crane-runner/blob/main/examples/stateful-app-migration/README.md"&gt;this&lt;/a&gt; step-by-step guide to complete the migration. The example demonstrates how simple yet powerful Crane can be, and how easily it can be configured to perform complex operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s Next?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Check out more &lt;a href="https://github.com/konveyor/crane-runner/tree/main/examples#readme"&gt;examples &lt;/a&gt;using crane-runner.&lt;/li&gt;
&lt;li&gt;Participate in &lt;a href="https://docs.google.com/document/d/1DJDco4-ialwVoB2yAP54dhFxw_2xPPJ6z6fBp66Yphg/edit#heading=h.wtws91k8c1bo"&gt;Konveyor community meetings&lt;/a&gt; to learn more about the projects in the ecosystem.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  About Konveyor:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.konveyor.io/"&gt;Konveyor &lt;/a&gt;community is home for several open source projects that help with your Application Modernization &amp;amp; Migration journey. The community is actively working on expanding the tools to cater to various real world migration scenarios. If you would like to get involved with the community, reach out to us via &lt;a href="https://kubernetes.slack.com/archives/CR85S82A2"&gt;Kubernetes slack channel&lt;/a&gt;. You can also subscribe to our mailing list from the &lt;a href="https://www.konveyor.io/"&gt;Konveyor website&lt;/a&gt; to hear about what's happening in the community, the latest meetups, and read about new migration stories.&lt;/p&gt;

&lt;h2&gt;
  
  
  Acknowledgements:
&lt;/h2&gt;

&lt;p&gt;Huge shout out to &lt;a href="https://github.com/djzager"&gt;David Zager&lt;/a&gt; &amp;amp; &lt;a href="https://github.com/eriknelson"&gt;Erik Nelson&lt;/a&gt; for contributing to this blog post with their wonderful &lt;a href="https://github.com/konveyor/crane-runner/tree/main/examples#readme"&gt;crane-runner examples&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Authors:
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;John Matthews:&lt;/em&gt;&lt;br&gt;
John Matthews has a 20+ year background in software engineering specializing in Linux systems.  He's been focused primarily on Kubernetes since Kubernetes 1.6.  At Red Hat, John serves in a hybrid role of technical architect and people manager for solutions related to Kubernetes Migration and Data Protection.   He lives in Raleigh, NC with his wife, daughter, and a growing pack of Dobermans.&lt;br&gt;
LinkedIn:  &lt;a href="https://www.linkedin.com/in/johnwmatthews/"&gt;https://www.linkedin.com/in/johnwmatthews/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Savitha Raghunathan:&lt;/em&gt;&lt;br&gt;
Savitha is a Senior Software Engineer at Red Hat, working on Kubernetes/OpenShift Data Protection software solutions. She is an upstream Kubernetes contributor and passionate about mentoring individuals new to open source communities. She likes to read, travel, and crochet in her free time. &lt;br&gt;
LinkedIn: &lt;a href="https://www.linkedin.com/in/savitharaghunathan/"&gt;https://www.linkedin.com/in/savitharaghunathan/&lt;/a&gt; &lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
