<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Farshad Nickfetrat</title>
    <description>The latest articles on DEV Community by Farshad Nickfetrat (@farshad_nick).</description>
    <link>https://dev.to/farshad_nick</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2041912%2Fc55e5989-a8f6-4068-aaaa-2d16521ecda5.jpg</url>
      <title>DEV Community: Farshad Nickfetrat</title>
      <link>https://dev.to/farshad_nick</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/farshad_nick"/>
    <language>en</language>
    <item>
      <title>Policy Management in Kubernetes with Kyverno</title>
      <dc:creator>Farshad Nickfetrat</dc:creator>
      <pubDate>Sat, 08 Feb 2025 10:37:50 +0000</pubDate>
      <link>https://dev.to/farshad_nick/policy-management-in-kubernetes-with-kyverno-4k51</link>
      <guid>https://dev.to/farshad_nick/policy-management-in-kubernetes-with-kyverno-4k51</guid>
      <description>&lt;p&gt;Policy management in Kubernetes means setting rules to control how resources are used, who can access them, and how workloads behave. This helps improve security, compliance, and stability in a cluster.&lt;br&gt;
Why is Policy Management important?&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Security: Prevents unauthorized access and enforces best practices.
Compliance: Ensures the system follows company and legal rules.
Stability: Avoids resource misuse and keeps the cluster healthy.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Tools like Kyverno and OPA Gatekeeper help enforce policies automatically.&lt;/p&gt;

&lt;p&gt;Let’s get started. What is the scenario?&lt;/p&gt;

&lt;p&gt;We want every Pod to have an app label. If not, Kyverno should block it.&lt;br&gt;
1- First step: Install Kyverno&lt;/p&gt;

&lt;p&gt;You can install it via a manifest or Helm.&lt;/p&gt;

&lt;p&gt;Helm installation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm install kyverno kyverno/kyverno -n kyverno --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Manifest installation:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl create -f https://github.com/kyverno/kyverno/releases/download/v1.11.1/install.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;1-1 Install the Kyverno CLI&lt;/p&gt;

&lt;p&gt;Linux:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -LO https://github.com/kyverno/kyverno/releases/download/v1.12.0/kyverno-cli_v1.12.0_linux_x86_64.tar.gz
tar -xvf kyverno-cli_v1.12.0_linux_x86_64.tar.gz
sudo cp kyverno /usr/local/bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;macOS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install kyverno
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Arch Linux:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yay -S kyverno-git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2- Define a Policy&lt;/p&gt;

&lt;p&gt;The validationFailureAction field in Kyverno determines how the policy behaves when a resource violates the defined rules. There are two main modes, Audit and Enforce, described below.&lt;/p&gt;

&lt;p&gt;Recall the scenario: every Pod must have an app label; if not, Kyverno should block it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#policy.yml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-app-label
spec:
  validationFailureAction: Enforce
  rules:
  - name: check-for-app-label
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "All Pods must have the 'app' label."
      pattern:
        metadata:
          labels:
            app: "?*" 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
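&lt;p&gt;The pattern "?*" in the rule uses Kyverno’s wildcard syntax: ? matches exactly one character and * matches zero or more, so "?*" simply requires a non-empty label value. A rough sketch of this matching behavior (illustrative only, not Kyverno’s actual implementation):&lt;/p&gt;

```python
import re

# Rough sketch of Kyverno-style wildcard matching (illustrative only):
# '?' matches exactly one character, '*' matches zero or more characters.
def wildcard_to_regex(pattern: str) -> str:
    escaped = re.escape(pattern)
    return escaped.replace(r"\?", ".").replace(r"\*", ".*")

def matches(pattern: str, value: str) -> bool:
    return re.fullmatch(wildcard_to_regex(pattern), value) is not None

print(matches("?*", "frontend"))  # True: at least one character
print(matches("?*", ""))          # False: an empty value is rejected
```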





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f policy.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;validationFailureAction options:&lt;/p&gt;

&lt;p&gt;Audit (default): for testing new policies, monitoring violations, and gradual enforcement.&lt;/p&gt;

&lt;p&gt;Enforce: for strict security requirements, compliance enforcement, and critical policies.&lt;br&gt;
3- Create a Pod without a label&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: nginx
    image: nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f pod.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because the policy runs in Enforce mode, Kyverno rejects this Pod with the validation message defined in the policy.&lt;br&gt;
4- Verifying Your Policy&lt;/p&gt;

&lt;p&gt;We can test the policy offline with the Kyverno CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kyverno apply policy.yml --resource pod.yml 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;policy.yml → Your Kyverno policy (e.g., enforcing labels).
pod.yml → Your Kubernetes resource (e.g., a Pod you want to test).
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
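&lt;p&gt;For comparison, a Pod that satisfies the policy just needs the app label (a minimal sketch; the name and label value are placeholders):&lt;/p&gt;

```yaml
#pod-labeled.yml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  labels:
    app: test-pod
spec:
  containers:
  - name: nginx
    image: nginx
```

&lt;p&gt;Applying this manifest should pass the require-app-label policy.&lt;/p&gt;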

&lt;p&gt;OPA Gatekeeper is another policy management tool that I wrote an article about. You can access it through the link below:&lt;br&gt;
Enforcing Kubernetes Policies with Gatekeeper: A Practical Scenario for Denying NodeName in Pods&lt;br&gt;
Gatekeeper is a Kubernetes-native policy enforcement tool that integrates with the Open Policy Agent (OPA) to provide…&lt;/p&gt;

&lt;p&gt;About the author:&lt;br&gt;
Hi 👋, I’m Farshad Nick (Farshad Nickfetrat)&lt;br&gt;
A passionate DevOps engineer&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;📝 I regularly write articles on packops.dev and packops.ir
💬 Ask me about Devops , Cloud , Kubernetes , Linux
📫 How to reach me on my linkedin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>kyverno</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>packops</category>
    </item>
    <item>
      <title>Enhancing Kubernetes Networking: The Advantages of IPVS Over iptables</title>
      <dc:creator>Farshad Nickfetrat</dc:creator>
      <pubDate>Sat, 02 Nov 2024 08:48:37 +0000</pubDate>
      <link>https://dev.to/farshad_nick/enhancing-kubernetes-networking-the-advantages-of-ipvs-over-iptables-12jb</link>
      <guid>https://dev.to/farshad_nick/enhancing-kubernetes-networking-the-advantages-of-ipvs-over-iptables-12jb</guid>
      <description>&lt;p&gt;Hey there, If you’re diving into the world of container orchestration, you know that managing how your services talk to each other is crucial. Traditionally, Kubernetes has leaned on iptables for handling service load balancing. But guess what? There’s a cool kid in town: IPVS (IP Virtual Server). Let’s take a look at why you might want to consider IPVS over iptables for your Kubernetes setup.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Better Load Balancing&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;First off, IPVS is a superstar when it comes to distributing incoming traffic. It supports a bunch of scheduling options—like round-robin and least connections—so you can pick what works best for your app. Plus, it can keep sessions sticky, meaning users stick to the same backend server for their requests. This is super handy for apps that need to remember user state!&lt;br&gt;
IPVS-supported load-balancing algorithms&lt;/p&gt;

&lt;p&gt;When you’re using IPVS for load balancing in Kubernetes, you have some cool options for how traffic gets distributed to your backend servers. Here’s a quick rundown of the main scheduling algorithms you might encounter:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rr: round-robin

lc: least connection

dh: destination hashing

sh: source hashing

sed: shortest expected delay

nq: never queue
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
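&lt;p&gt;To build intuition, here is a toy sketch (purely illustrative, not the kernel implementation) of how the rr and lc schedulers pick a backend:&lt;/p&gt;

```python
from itertools import cycle

backends = ["pod-a", "pod-b", "pod-c"]

# rr (round-robin): hand out backends in a fixed rotation.
rr = cycle(backends)
picks = [next(rr) for _ in range(5)]
print(picks)  # ['pod-a', 'pod-b', 'pod-c', 'pod-a', 'pod-b']

# lc (least connection): pick the backend with the fewest active connections.
active_connections = {"pod-a": 4, "pod-b": 1, "pod-c": 2}
print(min(active_connections, key=active_connections.get))  # pod-b
```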

&lt;ol start="2"&gt;
&lt;li&gt;Performance That Rocks&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you’re running a high-traffic application, IPVS is the way to go. It handles way more connections than iptables with lower latency, which means faster response times for your users. With its efficient connection management, IPVS keeps things running smoothly, even when the traffic spikes.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Health Checks Like a Pro&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We all want our apps to be reliable, right? In IPVS mode, kube-proxy keeps the IPVS backend list in sync with your Pods’ health: when a backend Pod fails its readiness checks, it is taken out of the rotation, so your users don’t hit a dead end. This helps keep everything up and running without a hitch!&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Smart Resource Usage&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By spreading the traffic around efficiently, IPVS helps make the most of your resources. This means your pods won’t get overloaded while others are sitting around doing nothing. It leads to a more stable and efficient setup overall.&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;Easier to Configure&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s face it—network configurations can get messy. IPVS makes it easier to set things up. You can define virtual servers and their backend services in a straightforward way, making it simpler to tweak things as your application needs change.&lt;/p&gt;

&lt;ol start="6"&gt;
&lt;li&gt;Smooth Integration with kube-proxy&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Kubernetes has built-in support for IPVS as a mode for kube-proxy, so you can take advantage of all its features without overhauling your setup. It’s like getting a performance boost without the hassle!&lt;/p&gt;
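&lt;p&gt;Enabling it is mostly a configuration change. A minimal sketch of the relevant part of a kube-proxy configuration (field names from the KubeProxyConfiguration API; the scheduler value is one of the algorithms listed above):&lt;/p&gt;

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # e.g. rr, lc, dh, sh, sed, nq
```

&lt;p&gt;Note that the IPVS kernel modules must be available on the nodes for IPVS mode to work.&lt;/p&gt;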

&lt;ol start="7"&gt;
&lt;li&gt;Better Debugging and Monitoring&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With IPVS, you get detailed metrics and stats about how your load balancing is performing. This means you can keep an eye on traffic patterns and server health, making it easier to spot issues before they become problems.&lt;/p&gt;

&lt;p&gt;Wrapping It Up&lt;/p&gt;

&lt;p&gt;While iptables has been a solid tool for networking in Kubernetes, IPVS brings a lot to the table that can seriously enhance your app’s performance and reliability. As you scale up your Kubernetes deployments, switching to IPVS for load balancing is a smart move that can lead to better resource management and happier users. So why not give it a shot? Your Kubernetes setup might just thank you!&lt;/p&gt;

&lt;p&gt;I'm going to show you how IPVS works by walking through a comprehensive scenario in the next article.&lt;/p&gt;

&lt;p&gt;Soooo, stay tuned!&lt;/p&gt;

&lt;p&gt;Learn Kubernetes by Example &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/farshadnick/Mastering-Kubernetes/" rel="noopener noreferrer"&gt;https://github.com/farshadnick/Mastering-Kubernetes/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Don’t Forget to Give me a Star :)&lt;/p&gt;

&lt;p&gt;About Author :&lt;br&gt;
Hi 👋, I’m Farshad Nick (Farshad nickfetrat)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;📝 I regularly write articles on packops.dev and packops.ir

💬 Ask me about Devops , Cloud , Kubernetes , Linux

📫 How to reach me on my linkedin

Here is my Github repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>ipvs</category>
      <category>iptables</category>
      <category>kubernetes</category>
      <category>farshadnick</category>
    </item>
    <item>
      <title>iptables vs nftables: What’s New in Linux Firewalling?</title>
      <dc:creator>Farshad Nickfetrat</dc:creator>
      <pubDate>Wed, 09 Oct 2024 11:10:36 +0000</pubDate>
      <link>https://dev.to/farshad_nick/iptables-vs-nftables-whats-new-in-linux-firewalling-4a36</link>
      <guid>https://dev.to/farshad_nick/iptables-vs-nftables-whats-new-in-linux-firewalling-4a36</guid>
      <description>&lt;p&gt;When it comes to managing firewall rules on Linux, iptables has been the go-to tool for years. But now, there’s a new sheriff in town: nftables. It’s more efficient, more flexible, and it’s slowly becoming the default for modern Linux distributions. If you're wondering what the fuss is all about, let's dive into the differences between these two and see how they compare in real-world scenarios.&lt;/p&gt;

&lt;p&gt;A Quick Overview&lt;/p&gt;

&lt;p&gt;iptables has been around since the early 2000s. It’s tried and true, but it’s also starting to show its age, especially when dealing with large, complex firewall configurations. Enter nftables, a more modern alternative introduced in 2014. nftables was designed to address many of the limitations of iptables, bringing better performance and more flexible rule management.&lt;/p&gt;

&lt;p&gt;At a high level:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iptables: Solid, but separate tools for IPv4, IPv6, and ARP filtering, and it can get messy with large rule sets.

nftables: Unified syntax for all protocols (IPv4, IPv6, ARP, and more) and supports more efficient handling of complex rules.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqkddclpqgeoxevj5owve.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqkddclpqgeoxevj5owve.png" alt="Image description" width="800" height="491"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let’s compare them in action!&lt;/p&gt;




&lt;h3&gt;
  
  
  Basic Syntax: Blocking Traffic on Port 22 (SSH)
&lt;/h3&gt;

&lt;p&gt;Say you want to block incoming SSH connections on port 22. Here’s how you’d do it in each:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;iptables:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
iptables &lt;span class="nt"&gt;-A&lt;/span&gt; INPUT &lt;span class="nt"&gt;-p&lt;/span&gt; tcp &lt;span class="nt"&gt;--dport&lt;/span&gt; 22 &lt;span class="nt"&gt;-j&lt;/span&gt; DROP

&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Pretty straightforward! This command adds a rule to block incoming TCP traffic on port 22.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;nftables:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
nft add rule inet filter input tcp dport 22 drop

&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;In nftables, it’s just as simple, but notice the keyword inet. It’s a unified table for both IPv4 and IPv6 traffic, so you don’t need separate rules like you would with iptables.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Allowing Traffic from a Specific IP Range
&lt;/h3&gt;

&lt;p&gt;Next, let’s allow traffic from a certain subnet, like 192.168.1.0/24.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;iptables:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
iptables &lt;span class="nt"&gt;-A&lt;/span&gt; INPUT &lt;span class="nt"&gt;-s&lt;/span&gt; 192.168.1.0/24 &lt;span class="nt"&gt;-j&lt;/span&gt; ACCEPT

&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Here, we append a rule to allow traffic from the subnet.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;nftables:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
nft add rule inet filter input ip saddr 192.168.1.0/24 accept

&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Notice that nftables uses ip saddr (source address), which is more descriptive and works for both IPv4 and IPv6.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Dealing with NAT (Network Address Translation)
&lt;/h3&gt;

&lt;p&gt;If you’re setting up source NAT (SNAT) for outbound traffic, the commands look a bit different.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;iptables:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
iptables &lt;span class="nt"&gt;-t&lt;/span&gt; nat &lt;span class="nt"&gt;-A&lt;/span&gt; POSTROUTING &lt;span class="nt"&gt;-o&lt;/span&gt; eth0 &lt;span class="nt"&gt;-j&lt;/span&gt; SNAT &lt;span class="nt"&gt;--to-source&lt;/span&gt; 203.0.113.5

&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This is your classic iptables rule for source NAT on the eth0 interface.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;nftables:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
nft add rule ip nat postrouting oif &lt;span class="s2"&gt;"eth0"&lt;/span&gt; snat to 203.0.113.5

&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;nftables syntax is cleaner here, using oif for the output interface and snat for source NAT. No need for -t nat because nftables handles everything within the same framework.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Handling Multiple IPs or Ports with Ease
&lt;/h3&gt;

&lt;p&gt;Here’s where nftables really starts to shine. Let’s say you want to block traffic from multiple IP addresses.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;iptables:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
iptables &lt;span class="nt"&gt;-A&lt;/span&gt; INPUT &lt;span class="nt"&gt;-s&lt;/span&gt; 192.168.1.10 &lt;span class="nt"&gt;-j&lt;/span&gt; DROP

iptables &lt;span class="nt"&gt;-A&lt;/span&gt; INPUT &lt;span class="nt"&gt;-s&lt;/span&gt; 192.168.1.20 &lt;span class="nt"&gt;-j&lt;/span&gt; DROP

&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Each IP requires a separate rule in iptables. Imagine if you had 100 IPs to block. Your rule set would get really long!&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;nftables:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
nft add &lt;span class="nb"&gt;set &lt;/span&gt;inet filter blocked_ips &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="nb"&gt;type &lt;/span&gt;ipv4_addr&lt;span class="se"&gt;\;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;

nft add element inet filter blocked_ips &lt;span class="o"&gt;{&lt;/span&gt; 192.168.1.10, 192.168.1.20 &lt;span class="o"&gt;}&lt;/span&gt;

nft add rule inet filter input ip saddr @blocked_ips drop

&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;nftables lets you create a set of blocked IPs and apply the rule to all of them in one go. Way more efficient, right?&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Logging Packets for Debugging
&lt;/h3&gt;

&lt;p&gt;When debugging network traffic, logging is super helpful. Here’s how you log traffic on port 80 (HTTP).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;iptables:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
iptables &lt;span class="nt"&gt;-A&lt;/span&gt; INPUT &lt;span class="nt"&gt;-p&lt;/span&gt; tcp &lt;span class="nt"&gt;--dport&lt;/span&gt; 80 &lt;span class="nt"&gt;-j&lt;/span&gt; LOG &lt;span class="nt"&gt;--log-prefix&lt;/span&gt; &lt;span class="s2"&gt;"HTTP Traffic: "&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;nftables:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
nft add rule inet filter input tcp dport 80 log prefix &lt;span class="s2"&gt;"HTTP Traffic: "&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;nftables has the same functionality but with slightly cleaner syntax. Plus, nftables offers more advanced logging options, like counters and limits, making it easier to control log volume.&lt;/p&gt;




&lt;h3&gt;
  
  
  Efficiency &amp;amp; Performance
&lt;/h3&gt;

&lt;p&gt;If you're dealing with a large number of firewall rules or complex traffic filtering, nftables blows iptables out of the water in terms of efficiency. nftables is designed to handle maps, sets, and stateful traffic with less CPU usage, so you’ll notice better performance, especially in high-traffic environments.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;iptables processes rules in a linear fashion, so as your rule set grows, performance can drop.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;nftables uses optimized data structures (like sets and maps) to handle rules more efficiently.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Atomic Rule Changes
&lt;/h3&gt;

&lt;p&gt;One of nftables’ killer features is the ability to make atomic rule updates. This means you can load a whole new set of rules without any downtime or partial rule application.&lt;/p&gt;

&lt;p&gt;With iptables, you have to update rules one by one, which could lead to mistakes or even security gaps if you’re not careful.&lt;/p&gt;

&lt;p&gt;With nftables, you can write your rules into a file and apply them all at once:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
nft &lt;span class="nt"&gt;-f&lt;/span&gt; /etc/nftables.conf

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
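&lt;p&gt;A minimal sketch of what such a file might look like, reusing the inet filter rules from the earlier examples (table and chain names are just the conventional ones):&lt;/p&gt;

```
#!/usr/sbin/nft -f
# /etc/nftables.conf: loaded atomically with 'nft -f'
flush ruleset

table inet filter {
  chain input {
    type filter hook input priority 0; policy accept;
    ip saddr 192.168.1.0/24 accept
    tcp dport 22 drop
  }
}
```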



&lt;p&gt;This way, your firewall rules are always consistent, and you avoid the risk of misconfigurations during updates.&lt;/p&gt;




&lt;h3&gt;
  
  
  Backward Compatibility
&lt;/h3&gt;

&lt;p&gt;The good news is, nftables can support iptables rules through a compatibility layer. So, if you’ve been using iptables for years and don’t want to completely rewrite your firewall rules, you can transition to nftables gradually. Just keep in mind that as Linux continues to evolve, nftables will be the default, so it's worth learning it now.&lt;/p&gt;




&lt;h3&gt;
  
  
  Conclusion: Should You Switch to nftables?
&lt;/h3&gt;

&lt;p&gt;If you’re managing a modern Linux system and want better performance, flexibility, and a cleaner syntax, nftables is definitely the way to go. It handles complex rule sets more efficiently, offers atomic rule updates, and provides a unified interface for IPv4, IPv6, and other protocols. Plus, it’s the future of Linux firewall management.&lt;/p&gt;

&lt;p&gt;That said, iptables still works, and if you’ve got a simple setup or a legacy system, it’s perfectly fine to keep using it. But if you’re scaling up or managing complex environments, making the switch to nftables will save you a lot of time and headaches.&lt;/p&gt;




&lt;p&gt;So, what do you think? Ready to give nftables a shot?&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Why must a Kubernetes cluster have an odd number of nodes</title>
      <dc:creator>Farshad Nickfetrat</dc:creator>
      <pubDate>Sun, 22 Sep 2024 10:41:27 +0000</pubDate>
      <link>https://dev.to/farshad_nick/why-must-a-kubernetes-cluster-have-an-odd-number-of-nodes-5co6</link>
      <guid>https://dev.to/farshad_nick/why-must-a-kubernetes-cluster-have-an-odd-number-of-nodes-5co6</guid>
      <description>&lt;p&gt;If you’ve spent any time setting up or managing Kubernetes, you might have come across the recommendation that clusters should have an odd number of nodes. But why is that? Let's break it down in simple terms.&lt;br&gt;
It's All About Leader Election&lt;/p&gt;

&lt;p&gt;Kubernetes relies on etcd, and etcd uses RAFT, a consensus algorithm.&lt;br&gt;
What is RAFT consensus?&lt;/p&gt;

&lt;p&gt;RAFT is a consensus algorithm used to ensure multiple computers (or nodes) agree on shared data, even if some nodes fail. It's designed to be easier to understand than other algorithms like Paxos.&lt;/p&gt;

&lt;p&gt;RAFT ensures that distributed systems like etcd (used by Kubernetes) can agree on a single leader and maintain consistency, even when some nodes fail.&lt;/p&gt;

&lt;p&gt;Imagine a group of people trying to agree on a decision (like which movie to watch). RAFT works by choosing one person as the leader, who suggests a movie (or decision). The others (followers) can agree with the leader or ask for changes. If the leader goes away (fails), the group elects a new leader. As long as a majority agree, the group can keep making decisions, even if some people (nodes) aren't available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4apj3gs8z1w4mpoxhh9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4apj3gs8z1w4mpoxhh9w.png" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Take a look at these examples : &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;4-Node System&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Total Nodes: 4&lt;br&gt;
Quorum Required: 3&lt;br&gt;
Allowed Failed Nodes: 1&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Quorum Required: To maintain consensus in this 4-node system, a majority (3 nodes) must be operational.&lt;br&gt;
    Allowed Failed Nodes: This system can tolerate the failure of only 1 node. If 2 nodes go down, the system loses quorum and cannot make decisions.&lt;br&gt;
    Scenario: If 3 nodes are up and 1 is down, the system can still function. If 2 nodes go down, the system cannot process any new transactions until at least 3 nodes are up again.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;9-Node System&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Total Nodes: 9&lt;br&gt;
Quorum Required: 5&lt;br&gt;
Allowed Failed Nodes: 4&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Quorum Required: In this 9-node system, at least 5 nodes (more than half) must be up and running to reach a consensus.&lt;/p&gt;

&lt;p&gt;Allowed Failed Nodes: This system can tolerate up to 4 node failures while still maintaining quorum.&lt;br&gt;
Scenario: If 4 nodes fail, the remaining 5 nodes can continue to operate and maintain consensus. However, if a 5th node fails, the system loses quorum and can no longer process updates or transactions.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;10-Node System&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Total Nodes: 10&lt;br&gt;
Quorum Required: 6&lt;br&gt;
Allowed Failed Nodes: 4&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Quorum Required: In a 10-node system, at least 6 nodes must be operational to maintain consensus.&lt;br&gt;
Allowed Failed Nodes: This system can tolerate the failure of up to 4 nodes. If 5 nodes go down, the system loses quorum.&lt;/p&gt;

&lt;p&gt;Scenario: If 6 nodes are operational, the system can process transactions and make decisions. However, if 5 nodes are down, the system becomes inoperable because there are not enough nodes to reach quorum.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;11-Node System&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Total Nodes: 11&lt;br&gt;
Quorum Required: 6&lt;br&gt;
Allowed Failed Nodes: 5&lt;br&gt;
Explanation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Quorum Required: For an 11-node system, a minimum of 6 nodes need to be up to form a quorum.
Allowed Failed Nodes: This system can tolerate up to 5 node failures. If 6 nodes fail, quorum is lost.
Scenario: With 6 nodes operational, the system can still reach consensus and process transactions. However, if a 6th node fails (leaving only 5 up), the system is effectively halted since it no longer has a quorum to make decisions.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
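&lt;p&gt;All four examples follow the same arithmetic: quorum = floor(n/2) + 1, and the cluster tolerates n minus quorum failures. A quick sketch:&lt;/p&gt;

```python
# Quorum math behind the examples above: a majority of n nodes must agree.
def quorum(n: int) -> int:
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    return n - quorum(n)

for n in (4, 9, 10, 11):
    print(f"{n} nodes: quorum {quorum(n)}, tolerates {tolerated_failures(n)} failures")
```

&lt;p&gt;Note how 10 nodes tolerate the same 4 failures as 9 nodes: an even node count adds cost without adding failure tolerance, which is one reason odd counts are preferred.&lt;/p&gt;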

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F32uev41443r53ds6s2w9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F32uev41443r53ds6s2w9.png" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwnc285islbl8j9v5k4v5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwnc285islbl8j9v5k4v5.png" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Conclusion&lt;/p&gt;

&lt;p&gt;By increasing the number of control plane nodes, you raise the failure tolerance of the cluster. However, it's crucial to have an odd number of nodes to simplify quorum (majority) calculations and avoid split-brain scenarios. This ensures the cluster can make decisions efficiently and remain stable even during failures.&lt;/p&gt;

&lt;p&gt;About Author :&lt;br&gt;
Hi 👋, I’m Farshad Nick (Farshad nickfetrat)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;📝 I regularly write articles on packops.dev and packops.ir
💬 Ask me about Devops , Cloud , Kubernetes , Linux
📫 How to reach me on my linkedin
Here is my Github repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>kubernetes</category>
      <category>farshadnick</category>
      <category>cka</category>
      <category>raft</category>
    </item>
    <item>
      <title>Understanding Pod Topology Spread Constraints in Kubernetes</title>
      <dc:creator>Farshad Nickfetrat</dc:creator>
      <pubDate>Thu, 19 Sep 2024 15:49:23 +0000</pubDate>
      <link>https://dev.to/farshad_nick/understanding-pod-topology-spread-constraints-in-kubernetes-5e8e</link>
      <guid>https://dev.to/farshad_nick/understanding-pod-topology-spread-constraints-in-kubernetes-5e8e</guid>
      <description>&lt;p&gt;When you're running a Kubernetes cluster, it's critical to ensure your Pods are evenly distributed across different parts of your infrastructure, especially for high availability and fault tolerance. You don’t want all your Pods landing on a single node or in one availability zone. If that resource fails, your application could go down. This is where Pod Topology Spread Constraints come into play.&lt;/p&gt;

&lt;p&gt;In this guide, we'll walk through what Pod Topology Spread Constraints are, how they work, and we'll explore a real-world example with a maxSkew of 2.&lt;br&gt;
What Are Pod Topology Spread Constraints?&lt;/p&gt;

&lt;p&gt;Pod Topology Spread Constraints allow you to control how Pods are distributed across various topology domains within your cluster. A topology domain could be anything from nodes to zones or regions, depending on your cluster setup. These constraints help ensure that Pods are not overly concentrated in a single domain, which would expose you to higher risk in case of a failure.&lt;/p&gt;

&lt;p&gt;Essentially, it’s a way of telling Kubernetes: "Don't put all my Pods in one place. Spread them out!" By doing so, you improve the resiliency of your application.&lt;br&gt;
Key Parameters Explained&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;topologyKey: This parameter defines what domain you want your Pods to be spread across. It could be zones (topology.kubernetes.io/zone), nodes, or even custom labels, depending on your infrastructure needs.

maxSkew: This parameter controls the allowed imbalance between topology domains. For instance, if you set maxSkew: 2, the difference in the number of Pods between any two domains should not be more than 2. It gives you some flexibility in distribution while still ensuring a reasonable balance.

whenUnsatisfiable: Defines what Kubernetes should do when it can’t satisfy the Pod distribution rules. There are two options: DoNotSchedule (the default), which leaves the Pod Pending rather than violate the constraint, and ScheduleAnyway, which schedules the Pod anyway but asks the scheduler to prioritize reducing the skew.

labelSelector: Specifies which Pods the constraint applies to. Usually, you'll want this to match specific labels like app: my-app so that only a subset of Pods is affected.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Practical Example:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;apiVersion: apps/v1&lt;br&gt;
kind: Deployment&lt;br&gt;
metadata:&lt;br&gt;
  name: web-app-deployment&lt;br&gt;
spec:&lt;br&gt;
  replicas: 12&lt;br&gt;
  selector:&lt;br&gt;
    matchLabels:&lt;br&gt;
      app: web-app&lt;br&gt;
  template:&lt;br&gt;
    metadata:&lt;br&gt;
      labels:&lt;br&gt;
        app: web-app&lt;br&gt;
    spec:&lt;br&gt;
      containers:&lt;br&gt;
      - name: web-app&lt;br&gt;
        image: nginx:1.21.1&lt;br&gt;
      topologySpreadConstraints:&lt;br&gt;
      - maxSkew: 2&lt;br&gt;
        topologyKey: topology.kubernetes.io/zone&lt;br&gt;
        whenUnsatisfiable: DoNotSchedule&lt;br&gt;
        labelSelector:&lt;br&gt;
          matchLabels:&lt;br&gt;
            app: web-app &lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;What’s Going On Here?&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;maxSkew: 2: This ensures that the difference in the number of Pods between any two zones will be at most 2. For example, if there are 5 Pods in zone-a, the other zones can have anywhere from 3 to 5 Pods.

topology.kubernetes.io/zone: This tells Kubernetes to distribute the Pods across different availability zones (e.g., zone-a, zone-b, zone-c).

whenUnsatisfiable: DoNotSchedule: If Kubernetes can’t place a Pod without exceeding the skew limit, it leaves that Pod Pending instead of scheduling it. This prevents overloading any one zone at the expense of others.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
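&lt;p&gt;The same mechanism works at node granularity: only the topologyKey changes. A variant sketch (kubernetes.io/hostname is the built-in per-node label; here whenUnsatisfiable is relaxed to ScheduleAnyway so placement is best-effort):&lt;/p&gt;

```yaml
# Spread across individual nodes instead of zones (drop-in replacement
# for the topologySpreadConstraints section of the Deployment above)
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname   # per-node spreading
  whenUnsatisfiable: ScheduleAnyway     # best-effort: schedule even if skew is exceeded
  labelSelector:
    matchLabels:
      app: web-app
```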

&lt;p&gt;How Kubernetes Distributes Pods&lt;/p&gt;

&lt;p&gt;Let’s assume your cluster has three zones with enough capacity to run your Pods. Given 12 replicas and a maxSkew of 2, Kubernetes will attempt to distribute the Pods evenly. An ideal distribution would look something like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;zone-a: 4 Pods

zone-b: 4 Pods

zone-c: 4 Pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A distribution like the following, however, would violate the constraint, because the skew between zone-a and zone-c is 6 − 2 = 4:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;zone-a: 6 Pods

zone-b: 4 Pods

zone-c: 2 Pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flxvg5yrd75js6quu8fnj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flxvg5yrd75js6quu8fnj.png" alt="Image description" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The even 4-4-4 split is a perfectly balanced scenario. But with maxSkew: 2, Kubernetes has some leeway if things aren't perfect. For example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;zone-a: 5 Pods

zone-b: 4 Pods

zone-c: 3 Pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
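&lt;p&gt;The rule Kubernetes applies here can be sketched in a few lines of Python: the skew of a distribution is the Pod count in the fullest domain minus the count in the emptiest one (a simplified model; the real scheduler evaluates this per Pod at scheduling time):&lt;/p&gt;

```python
def skew(pods_per_zone):
    # Skew = Pods in the fullest zone minus Pods in the emptiest zone.
    return max(pods_per_zone.values()) - min(pods_per_zone.values())

def violates(pods_per_zone, max_skew):
    # True when the spread breaks the maxSkew constraint.
    return skew(pods_per_zone) > max_skew

print(violates({"zone-a": 5, "zone-b": 4, "zone-c": 3}, 2))  # False: skew 2 is allowed
print(violates({"zone-a": 4, "zone-b": 4, "zone-c": 4}, 2))  # False: perfectly balanced
```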

&lt;p&gt;Here, the difference between any two zones doesn’t exceed 2 Pods, so the constraint is still respected.&lt;br&gt;
Handling Failures or Imbalances&lt;/p&gt;

&lt;p&gt;Let’s say zone-a has no available capacity. If Kubernetes simply split the 12 Pods across the remaining zones, the distribution would look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;zone-a: 0 Pods

zone-b: 6 Pods

zone-c: 6 Pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This scenario breaks the maxSkew: 2 rule because the difference between zone-a (0 Pods) and zone-b/zone-c (6 Pods) is 6, which exceeds the allowed imbalance. Since we set whenUnsatisfiable: DoNotSchedule, Kubernetes will refuse to schedule the Pods that would push zone-b or zone-c too far ahead of zone-a; those Pods stay Pending until capacity becomes available in zone-a. This way the constraint is respected, and your application avoids overloading a single zone.&lt;br&gt;
Why Use Pod Topology Spread Constraints?&lt;/p&gt;
Why Use Pod Topology Spread Constraints?&lt;/p&gt;

&lt;p&gt;Here are a few reasons you should consider using Pod Topology Spread Constraints:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Improved Availability: By spreading Pods across zones or nodes, you reduce the risk of downtime if one part of your infrastructure fails.

Fault Tolerance: Even in the event of a failure in one zone or node, your application can continue running in other zones.

Custom Control with maxSkew: You can fine-tune the balance between flexibility and strictness. A smaller maxSkew ensures tighter distribution control, while a larger value gives Kubernetes more room to optimize placement.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Conclusion&lt;/p&gt;

&lt;p&gt;Pod Topology Spread Constraints are a powerful way to control how your Pods are distributed in a Kubernetes cluster. By spreading your Pods across zones or nodes and setting a reasonable maxSkew, you can ensure higher availability and fault tolerance.&lt;/p&gt;

&lt;p&gt;In our example, a maxSkew of 2 allowed for a slight imbalance while still ensuring that no zone had an overloaded number of Pods. This approach ensures that your application stays resilient, even in less-than-perfect infrastructure conditions.&lt;/p&gt;

&lt;p&gt;Improve your Kubernetes knowledge with my CKA Git repo: &lt;/p&gt;

&lt;p&gt;Don't Forget to Give me a Star :) &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/farshadnick/Mastering-Kubernetes/" rel="noopener noreferrer"&gt;https://github.com/farshadnick/Mastering-Kubernetes/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;About Author :&lt;br&gt;
Hi 👋, I’m Farshad Nick (Farshad nickfetrat)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;📝 I regularly write articles on packops.dev and packops.ir

💬 Ask me about Devops , Cloud , Kubernetes , Linux

📫 How to reach me on my linkedin

Here is my Github repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>podtopologyspreadconstraints</category>
      <category>kubernetes</category>
      <category>cka</category>
      <category>farshadnick</category>
    </item>
    <item>
      <title>Kubescape : Comprehensive Kubernetes Security from Development to Runtime</title>
      <dc:creator>Farshad Nickfetrat</dc:creator>
      <pubDate>Mon, 16 Sep 2024 10:48:35 +0000</pubDate>
      <link>https://dev.to/farshad_nick/kubescape-comprehensive-kubernetes-security-from-development-to-runtime-2k80</link>
      <guid>https://dev.to/farshad_nick/kubescape-comprehensive-kubernetes-security-from-development-to-runtime-2k80</guid>
      <description>&lt;p&gt;Kubernetes is amazing for managing containers, but keeping it secure can be tricky. That's where Kubescape comes in—a super handy, open-source security tool for Kubernetes clusters. It helps you lock down your system from development all the way through runtime, making sure your cluster stays secure at every stage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06cca1lsbzy7gpnvlwtm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06cca1lsbzy7gpnvlwtm.png" alt="Image description" width="344" height="298"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s the quick rundown:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cluster Hardening&lt;/strong&gt; : Kubescape checks your cluster’s setup and flags potential vulnerabilities, following industry standards like the CIS benchmarks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Posture Management&lt;/strong&gt; : It continuously monitors your cluster’s security posture, letting you know if anything needs attention.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Runtime Security&lt;/strong&gt; : Kubescape also keeps an eye on things when your system is live, catching any weird behavior or misconfigurations that could lead to security issues.&lt;/p&gt;

&lt;p&gt;It’s perfect for developers and security teams who want to integrate security checks early in the development process and keep monitoring once the cluster is up and running. Plus, since it’s open-source, it’s flexible, accessible, and free!&lt;/p&gt;

&lt;p&gt;In short, Kubescape is like having a security guard for your Kubernetes cluster, from start to finish. Easy to use, reliable, and it makes sure your cluster stays safe.&lt;br&gt;
Installation &lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl -s https://raw.githubusercontent.com/kubescape/kubescape/master/install.sh | /bin/bash&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Let’s take a look at some examples:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scan a running Kubernetes cluster:&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;kubescape scan&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scan NSA framework&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Scan a running Kubernetes cluster with the NSA framework:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubescape scan framework nsa&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scan MITRE framework&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Scan a running Kubernetes cluster with the MITRE ATT&amp;amp;CK® framework:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubescape scan framework mitre&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scan specific namespaces:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubescape scan --include-namespaces development,staging,production&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scan local YAML files&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubescape scan /path/to/directory-or-file&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Scan a Git repository&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scan Kubernetes manifest files from a Git repository:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubescape scan https://github.com/kubescape/kubescape&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskyy3gprjefluzxq612a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskyy3gprjefluzxq612a.png" alt="Image description" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fipvi3ens1yzd4zxuyfo0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fipvi3ens1yzd4zxuyfo0.png" alt="Image description" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1s7ki9rsytfhjv3ehx3v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1s7ki9rsytfhjv3ehx3v.png" alt="Image description" width="800" height="116"&gt;&lt;/a&gt;&lt;/p&gt;
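&lt;p&gt;Because Kubescape can scan plain manifests without touching a live cluster, it also slots neatly into CI. A minimal GitHub Actions sketch (the workflow layout here is an assumption, and check kubescape scan --help before relying on the --severity-threshold flag):&lt;/p&gt;

```yaml
# .github/workflows/kubescape.yml — scan the repo's manifests on every push (sketch)
name: kubescape-scan
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install and run Kubescape
        run: |
          curl -s https://raw.githubusercontent.com/kubescape/kubescape/master/install.sh | /bin/bash
          kubescape scan . --severity-threshold high   # fail the job on high-severity findings
```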

&lt;p&gt;Conclusion&lt;/p&gt;

&lt;p&gt;Kubescape offers a powerful and user-friendly way to safeguard your Kubernetes clusters from development to runtime. With features like compliance auditing, hardening recommendations, and continuous monitoring, it fills a crucial need in Kubernetes security. For teams looking to integrate security seamlessly across their workflows, Kubescape is an essential tool in their DevSecOps pipeline.&lt;/p&gt;

&lt;p&gt;About Author :&lt;br&gt;
Hi 👋, I’m Farshad Nick (Farshad nickfetrat)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;📝 I regularly write articles on packops.dev and packops.ir
💬 Ask me about Devops , Cloud , Kubernetes , Linux
📫 How to reach me on my linkedin
Here is my Github repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>kubernetes</category>
      <category>kubernetessecurity</category>
      <category>cks</category>
      <category>farshadnick</category>
    </item>
    <item>
      <title>Cleaning Up Kubernetes: A Guide to Finding Unused Resources with Kor</title>
      <dc:creator>Farshad Nickfetrat</dc:creator>
      <pubDate>Tue, 10 Sep 2024 12:16:07 +0000</pubDate>
      <link>https://dev.to/farshad_nick/cleaning-up-kubernetes-a-guide-to-finding-unused-resources-with-kor-3p82</link>
      <guid>https://dev.to/farshad_nick/cleaning-up-kubernetes-a-guide-to-finding-unused-resources-with-kor-3p82</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fah3mbb50d22y9h5oavs6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fah3mbb50d22y9h5oavs6.jpg" alt="Image description" width="800" height="464"&gt;&lt;/a&gt;&lt;br&gt;
If you’ve been running Kubernetes for a while, you probably know how messy things can get. When applications come and go, they often leave behind unused or forgotten resources. These “orphans” don’t serve any purpose but still sit there, wasting space and potentially even costing you money. That’s where Kor - Kubernetes Orphaned Resources Finder comes in!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Exactly Is Kor?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1fr4xfeq7xamhz8mw4vk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1fr4xfeq7xamhz8mw4vk.png" alt="Image description" width="578" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kor is a tool designed to help you find and clean up those orphaned resources in your Kubernetes cluster. Think of it like a janitor that keeps your Kubernetes environment tidy by identifying and removing stuff you no longer need — like old PVCs (Persistent Volume Claims), ConfigMaps, Secrets, or even abandoned Services.&lt;/p&gt;

&lt;p&gt;If you're running a busy Kubernetes cluster with tons of deployments, chances are you've got a bunch of orphaned resources sitting around. And here’s the kicker: they can actually slow down your cluster or eat up valuable resources. Kor solves that problem by tracking down those leftovers and helping you decide what to keep or toss.&lt;br&gt;
Why You Need Kor&lt;/p&gt;

&lt;p&gt;Managing Kubernetes resources can get tricky, especially when things aren’t cleaned up properly. Here’s why Kor can make your life easier:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Efficient Resource Management: Orphaned resources can clog up your cluster. Kor helps you find and get rid of them, freeing up space and resources for the stuff that actually matters.

Cost Savings: Depending on your cloud provider, orphaned resources can cost you. Things like unused volumes or IP addresses can add up in cost over time. Kor can help you avoid these hidden charges by cleaning up the mess.

Cluster Hygiene: Keeping a cluster clean and organized is key to maintaining performance. With Kor, you reduce the chances of having an overly cluttered cluster that’s harder to manage and debug.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;How Kor Works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kor operates by scanning your Kubernetes cluster and identifying any resources that no longer have any dependent objects. For example, if a PVC is sitting there without a Pod or Deployment using it, Kor will flag it as orphaned. Similarly, ConfigMaps or Secrets that no longer have references can be identified.&lt;/p&gt;
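&lt;p&gt;Conceptually, the check is simple set arithmetic: collect every name the workloads still reference, and anything outside that set is a candidate orphan. A toy sketch of the idea for ConfigMaps (not Kor’s actual code; the names are made up):&lt;/p&gt;

```python
def orphaned_configmaps(all_configmaps, referenced):
    # A ConfigMap is a candidate orphan if no workload references it.
    return sorted(set(all_configmaps) - set(referenced))

print(orphaned_configmaps(
    ["app-config", "old-config", "feature-flags"],
    ["app-config", "feature-flags"],   # names mounted or env-referenced by Pods
))  # → ['old-config']
```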

&lt;p&gt;Kor provides a report of these resources and gives you options to remove them. It’s simple, lightweight, and efficient!&lt;br&gt;
What Resources Can Kor Find?&lt;/p&gt;

&lt;p&gt;Kor is designed to find a variety of orphaned resources, including:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Persistent Volume Claims (PVCs): Old storage volumes that are no longer being used.

ConfigMaps: Configuration files that are no longer tied to active services.

Secrets: Credentials and keys left behind after services or pods are deleted.

Services: Networking endpoints that aren’t in use anymore.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;How to Get Started with Kor&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kor is easy to set up and integrate with your Kubernetes environment. Once you’ve installed it, you can run it as a CronJob or a one-time scan, depending on how frequently you want to clean up.&lt;/p&gt;

&lt;p&gt;You’ll get a detailed list of orphaned resources, and you can either manually review them or set up automated cleanups to make sure your cluster stays neat and tidy.&lt;/p&gt;
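&lt;p&gt;The scheduled-scan setup mentioned above can be sketched as a Kubernetes CronJob. Note that the image name, the args, and the ServiceAccount are placeholders to adapt — Kor needs read access to the resources it inspects, so check the Kor docs for the published image and a suitable RBAC setup:&lt;/p&gt;

```yaml
# Hypothetical nightly Kor scan — image and ServiceAccount names are placeholders
apiVersion: batch/v1
kind: CronJob
metadata:
  name: kor-scan
spec:
  schedule: "0 1 * * *"             # every night at 01:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: kor   # needs read permissions on the scanned resources
          containers:
          - name: kor
            image: yonahd/kor       # placeholder: use the image published by the project
            args: ["all", "-n", "default", "--show-reason"]
          restartPolicy: OnFailure
```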

&lt;p&gt;&lt;strong&gt;Installation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For macOS users : &lt;/p&gt;

&lt;p&gt;&lt;code&gt;brew install kor&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Install the binary to your $GOBIN or $GOPATH/bin:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;go install github.com/yonahd/kor@latest&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Kubectl plugin &lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl krew install kor&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Kor provides various subcommands to identify and list unused resources. The available commands are:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;all - Gets all unused resources for the specified namespace or all namespaces.

configmap - Gets unused ConfigMaps for the specified namespace or all namespaces.

secret - Gets unused Secrets for the specified namespace or all namespaces.

service - Gets unused Services for the specified namespace or all namespaces.

serviceaccount - Gets unused ServiceAccounts for the specified namespace or all namespaces.

deployment - Gets unused Deployments for the specified namespace or all namespaces.

statefulset - Gets unused StatefulSets for the specified namespace or all namespaces.

role - Gets unused Roles for the specified namespace or all namespaces.

clusterrole - Gets unused ClusterRoles for the specified namespace or all namespaces (namespace refers to RoleBinding).

hpa - Gets unused HPAs for the specified namespace or all namespaces.

pod - Gets unused Pods for the specified namespace or all namespaces.

pvc - Gets unused PVCs for the specified namespace or all namespaces.

pv - Gets unused PVs in the cluster (non namespaced resource).

storageclass - Gets unused StorageClasses in the cluster (non namespaced resource).

ingress - Gets unused Ingresses for the specified namespace or all namespaces.

pdb - Gets unused PDBs for the specified namespace or all namespaces.

crd - Gets unused CRDs in the cluster (non namespaced resource).

job - Gets unused jobs for the specified namespace or all namespaces.

replicaset - Gets unused replicaSets for the specified namespace or all namespaces.

daemonset - Gets unused DaemonSets for the specified namespace or all namespaces.

finalizer - Gets unused pending deletion resources for the specified namespace or all namespaces.

networkpolicy - Gets unused NetworkPolicies for the specified namespace or all namespaces.

exporter - Export Prometheus metrics.

version - Print kor version information.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this scenario I want to list all unused resources in the default namespace:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kor all -n default --show-reason&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;And here is the result&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4zbvpx4a7gxzxfta6tu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4zbvpx4a7gxzxfta6tu.png" alt="Image description" width="800" height="484"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;About Author :&lt;br&gt;
Hi 👋, I’m Farshad Nick (Farshad nickfetrat)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;📝 I regularly write articles on packops.dev and packops.ir

💬 Ask me about Devops , Cloud , Kubernetes , Linux

📫 How to reach me on my linkedin

Here is my Github repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
    </item>
    <item>
      <title>Centralized Package Caching for Linux: Mastering Apt-Cacher for Faster Updates</title>
      <dc:creator>Farshad Nickfetrat</dc:creator>
      <pubDate>Sun, 08 Sep 2024 08:51:22 +0000</pubDate>
      <link>https://dev.to/farshad_nick/apt-repository-with-apt-cacher-2pb2</link>
      <guid>https://dev.to/farshad_nick/apt-repository-with-apt-cacher-2pb2</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F111dpprn92xqfeao9j2m.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F111dpprn92xqfeao9j2m.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Introduction&lt;/p&gt;

&lt;p&gt;Apt-Cacher NG is a lifesaver for situations where you don’t want to give your Ubuntu servers direct internet access just to update packages.&lt;/p&gt;

&lt;p&gt;Apt-Cacher is a caching proxy for Debian-based distributions that keeps a local cache of Debian-based mirrors as well as those of other Linux distributions. Whenever a package is pulled from the official repositories, the cache server stores it, so when any other local machine wants to install the same package, it is served from the local cache instead. This helps eliminate the bottleneck of slow internet connections.&lt;/p&gt;

&lt;p&gt;Apt-Cacher NG has been designed from scratch as a replacement for apt-cacher, with a focus on maximizing throughput with low system resource requirements. It can also be used as a replacement for apt-proxy and approx with no need to modify clients’ sources.list files.&lt;/p&gt;

&lt;p&gt;In this article we will implement Apt-Cacher NG.&lt;br&gt;
Configuration&lt;/p&gt;

&lt;p&gt;Install with Package&lt;/p&gt;

&lt;p&gt;1- You can install apt-cacher-ng easily by running the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;apt-get install apt-cacher-ng&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;2- Enable apt-cacher-ng at startup:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;systemctl enable apt-cacher-ng&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Install with Docker&lt;/p&gt;

&lt;p&gt;1- Update the apt repositories and install apt-transport-https, which lets us add new repositories easily:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get update&lt;br&gt;
sudo apt-get install \&lt;br&gt;
   apt-transport-https \&lt;br&gt;
   ca-certificates \&lt;br&gt;
   curl \&lt;br&gt;
   gnupg \&lt;br&gt;
   lsb-release&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
2- Add Docker’s official GPG key&lt;br&gt;
&lt;code&gt;&lt;br&gt;
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;3- Add Docker Repository&lt;/p&gt;

&lt;p&gt;&lt;code&gt;echo \&lt;br&gt;
"deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \&lt;br&gt;
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list &amp;gt; /dev/null&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;4- Install Docker CE&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get update&lt;br&gt;
sudo apt-get install docker-ce docker-ce-cli containerd.io&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;5- Install docker-compose to bring the apt-cacher-ng container up (1.29.2 below is an example release; pick the latest from the docker/compose releases page), then make it executable:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose&lt;br&gt;
sudo chmod +x /usr/local/bin/docker-compose&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;6- Create a docker-compose.yml and paste in the following:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;---&lt;br&gt;
version: '3'&lt;br&gt;
services:&lt;br&gt;
  apt-cacher-ng:&lt;br&gt;
    image: sameersbn/apt-cacher-ng&lt;br&gt;
    container_name: apt-cacher-ng&lt;br&gt;
    ports:&lt;br&gt;
    - "3142:3142"&lt;br&gt;
    volumes:&lt;br&gt;
    - apt-cacher-ng:/var/cache/apt-cacher-ng&lt;br&gt;
    restart: always&lt;br&gt;
volumes:&lt;br&gt;
  apt-cacher-ng:&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;7- Bring the docker-compose stack up with:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose up -d&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;8- You can access apt-cacher-ng by entering your machine’s IP on port 3142 (e.g. 192.168.110.200:3142)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsrugoqbolwe9hci37eap.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsrugoqbolwe9hci37eap.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;8–1 As you can see, from this page we can view statistics (how many packages have been cached) by clicking on the statistics report and configuration page&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymxdhvo9wilx7nn5s405.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymxdhvo9wilx7nn5s405.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Client Side Config&lt;br&gt;
We have two options for configuring clients to pull packages from Apt-Cacher NG&lt;/p&gt;

&lt;p&gt;1- Send all APT repository requests through the proxy by creating the file /etc/apt/apt.conf.d/02proxy and putting the following in it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Acquire::http { Proxy "http://192.168.110.200:3142"; };
# 192.168.110.200 is our apt-cacher-ng ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;OR&lt;/p&gt;

&lt;p&gt;2- Prefix your APT repository URLs with the cacher’s URL:PORT, like:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;deb http://192.168.110.200:3142/ftp.debian.org/debian stable main contrib non-free&lt;br&gt;
deb-src http://192.168.110.200:3142/ftp.debian.org/debian stable main contrib non-free&lt;br&gt;
deb http://192.168.110.200:3142/HTTPS///get.docker.com/ubuntu docker main&lt;/code&gt;&lt;/p&gt;
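&lt;p&gt;The rewrite in option 2 is mechanical, so you can script it. A sketch with sed on a demo file (point it at your real sources.list once you’re happy with the output; 192.168.110.200:3142 is the example cacher from above):&lt;/p&gt;

```shell
# Demo: prefix each mirror URL with the cacher's address
printf 'deb http://ftp.debian.org/debian stable main contrib non-free\n' > /tmp/sources.list.demo
sed -E 's|http://|http://192.168.110.200:3142/|' /tmp/sources.list.demo
# → deb http://192.168.110.200:3142/ftp.debian.org/debian stable main contrib non-free
```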

&lt;p&gt;Conclusion&lt;/p&gt;

&lt;p&gt;By simply using Apt-Cacher NG you can cache packages, speed up package downloads, and avoid exposing your servers to the internet.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
