<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: timtsoitt</title>
    <description>The latest articles on DEV Community by timtsoitt (@timtsoitt).</description>
    <link>https://dev.to/timtsoitt</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F832445%2F6a9673c5-f307-44da-a073-30d05adc6c07.JPG</url>
      <title>DEV Community: timtsoitt</title>
      <link>https://dev.to/timtsoitt</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/timtsoitt"/>
    <language>en</language>
    <item>
      <title>How to create local wildcard domains in macOS using dnsmasq?</title>
      <dc:creator>timtsoitt</dc:creator>
      <pubDate>Sun, 19 Mar 2023 12:25:45 +0000</pubDate>
      <link>https://dev.to/timtsoitt/how-to-resolve-local-wildcard-domains-in-macos-h5e</link>
      <guid>https://dev.to/timtsoitt/how-to-resolve-local-wildcard-domains-in-macos-h5e</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Recently I was looking for tutorials on setting up local wildcard domains in macOS for development purposes. Most tutorials were similar, yet I just could not resolve the local domains. If you are having the same issue, you should take a look at your DNS server settings.&lt;/p&gt;

&lt;h2&gt;
  
  
  Preparation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Install dnsmasq
&lt;/h3&gt;

&lt;p&gt;Dnsmasq is a DNS server that lets you create DNS records locally. It also forwards other queries to upstream servers, so public domains can still be resolved.&lt;/p&gt;

&lt;p&gt;By default, dnsmasq looks for upstream servers in &lt;code&gt;/etc/resolv.conf&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

brew install dnsmasq


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Verify your homebrew prefix
&lt;/h3&gt;

&lt;p&gt;Homebrew installs dnsmasq in a different location depending on your Mac's chip (Intel or Apple Silicon). You can check the prefix by running &lt;code&gt;brew --prefix&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;Since I am using an M1 MacBook, the installation path is &lt;code&gt;/opt/homebrew&lt;/code&gt;. If you are using an Intel MacBook, watch the path settings below and adjust them accordingly.&lt;/p&gt;
&lt;h3&gt;
  
  
  Modify the dnsmasq.conf file
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

vi $(brew --prefix)/etc/dnsmasq.conf


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Go to the end of the file and uncomment the line &lt;code&gt;conf-dir=/opt/homebrew/etc/dnsmasq.d/,*.conf&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;Now dnsmasq will look for local DNS records in &lt;code&gt;/opt/homebrew/etc/dnsmasq.d/&lt;/code&gt;.&lt;/p&gt;
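Before going further, you can ask dnsmasq to validate the edited configuration with its built-in syntax check (a small sketch; it assumes the Homebrew prefix discussed above):

```shell
# Syntax-check the configuration without starting the daemon
dnsmasq --test -C $(brew --prefix)/etc/dnsmasq.conf
```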

&lt;h3&gt;
  
  
  Create a DNS record
&lt;/h3&gt;

&lt;p&gt;I will use &lt;strong&gt;test&lt;/strong&gt; as my local domain name, and I want dnsmasq to resolve any &lt;code&gt;test&lt;/code&gt; domain query to &lt;code&gt;127.0.0.1&lt;/code&gt;. Dnsmasq resolves subdomains as well, such as &lt;code&gt;a.test&lt;/code&gt; and &lt;code&gt;b.c.test&lt;/code&gt;, which is very convenient.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

bash -c "echo 'address=/test/127.0.0.1' &amp;gt; $(brew --prefix)/etc/dnsmasq.d/test.conf"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The name of the conf file does not matter; having one file per domain just makes management easier.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup DNS servers
&lt;/h3&gt;

&lt;p&gt;To make your MacBook send DNS queries to dnsmasq, click the Wi-Fi icon in the menu bar -&amp;gt; "Wi-Fi Settings" -&amp;gt; "Details" -&amp;gt; go to the "DNS" section. &lt;/p&gt;

&lt;p&gt;Note down your existing DNS servers and remove all of them. Then add "127.0.0.1" as the topmost entry so that DNS queries are sent to dnsmasq first. &lt;/p&gt;

&lt;p&gt;This is very important: if a query is sent to another DNS server first, that server may respond that the record does not exist, which is exactly why you were unable to resolve local domains.&lt;/p&gt;

&lt;p&gt;Lastly, add back your existing DNS servers below it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskkiaetgcxiwk2u768za.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskkiaetgcxiwk2u768za.png" alt="The "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now if you take a look at the &lt;code&gt;/etc/resolv.conf&lt;/code&gt; file, you can see that it is actually generated from the settings in the "DNS" section.&lt;/p&gt;
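If you prefer the command line over the settings UI, the same DNS change can be made with macOS's built-in networksetup utility. A sketch only: "Wi-Fi" and 8.8.8.8 below stand in for your actual network service name and existing DNS server.

```shell
# Find the exact name of your network service (e.g. "Wi-Fi")
networksetup -listallnetworkservices

# Put 127.0.0.1 first so dnsmasq is queried before your existing servers
networksetup -setdnsservers Wi-Fi 127.0.0.1 8.8.8.8

# Verify the resulting order
networksetup -getdnsservers Wi-Fi
```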

&lt;h3&gt;
  
  
  (Re)start dnsmasq
&lt;/h3&gt;

&lt;p&gt;To apply your changes, you need to restart dnsmasq.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# brew services start dnsmasq
brew services restart dnsmasq

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
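You can then verify the wildcard resolution by pointing dig at dnsmasq directly (a quick sketch using the test domain created earlier; each query should return 127.0.0.1):

```shell
dig +short test @127.0.0.1
dig +short a.test @127.0.0.1
dig +short b.c.test @127.0.0.1
```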
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;It is a short article. Thanks for reading!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5svc4x5o41w8zys6bz8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5svc4x5o41w8zys6bz8.png" alt="Dig result"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>dns</category>
      <category>macos</category>
    </item>
    <item>
      <title>Argo CD and Sealed Secrets are a perfect match</title>
      <dc:creator>timtsoitt</dc:creator>
      <pubDate>Sat, 11 Jun 2022 08:51:10 +0000</pubDate>
      <link>https://dev.to/timtsoitt/argo-cd-and-sealed-secrets-is-a-perfect-match-1dbf</link>
      <guid>https://dev.to/timtsoitt/argo-cd-and-sealed-secrets-is-a-perfect-match-1dbf</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;Sealed Secrets&lt;/code&gt; works perfectly well with &lt;code&gt;Argo CD&lt;/code&gt;. It is one of the best secret management approaches you should consider.&lt;/p&gt;

&lt;p&gt;The same idea applies whether you use Helm, Kustomize, or other GitOps tooling.&lt;/p&gt;




&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;There is no gold standard for secret management in Argo CD. Instead, Argo CD provides a list of solutions &lt;a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/secret-management/#secret-management"&gt;here&lt;/a&gt; and leaves the decision to you. &lt;/p&gt;

&lt;p&gt;Although I opted for Sealed Secrets, I also want to show you the decision-making process that led me to this conclusion. &lt;/p&gt;

&lt;p&gt;To start with, we need to think of two questions.&lt;/p&gt;




&lt;h3&gt;
  
  
  Where to store your secret?
&lt;/h3&gt;

&lt;p&gt;Although many solutions are available, there are essentially just two places to store secrets: a secret management platform or a Git repository. &lt;/p&gt;

&lt;h4&gt;
  
  
  Secret management platform approach
&lt;/h4&gt;

&lt;p&gt;I prefer not to use secret management platforms.&lt;/p&gt;

&lt;p&gt;No matter which platform you use, it won't be K8S native, because the platform lives outside your cluster. You always need to install plugins in your Kubernetes clusters to read the secrets from the platform.&lt;/p&gt;

&lt;p&gt;If you opt for this approach, you have to manage the platform, the plugins, and the integration between everything. It introduces &lt;strong&gt;too much complexity&lt;/strong&gt;, and complexity means more errors, bugs, and human mistakes...&lt;/p&gt;

&lt;h4&gt;
  
  
  Git repository approach
&lt;/h4&gt;

&lt;p&gt;You can upload encrypted secrets together with your Kubernetes manifests to your repository. The secrets are then decrypted when Argo CD syncs your application. Everything is bundled together, which is good for management. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Sealed Secrets is one of the solutions based on this approach.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  How to ingest your secret?
&lt;/h3&gt;

&lt;p&gt;Now that we have settled on the Git repository approach, let us review the available solutions.&lt;/p&gt;

&lt;h4&gt;
  
  
  Helm Secrets
&lt;/h4&gt;

&lt;p&gt;For most people who use Helm, Helm Secrets is a popular choice: users encrypt &lt;code&gt;values files&lt;/code&gt;, and Argo CD decrypts these files accordingly. &lt;/p&gt;

&lt;p&gt;However, it is too complicated for me.&lt;/p&gt;

&lt;p&gt;First of all, integrating Helm Secrets with ArgoCD requires certain modifications, as described in their &lt;a href="https://github.com/jkroepke/helm-secrets/wiki/ArgoCD-Integration"&gt;tutorial&lt;/a&gt;. Again, this kind of complexity always leads to issues.&lt;/p&gt;

&lt;p&gt;Moreover, there is no simple way to prevent someone from committing unencrypted values files, since they are normal YAML files. &lt;/p&gt;

&lt;p&gt;Lastly, ArgoCD allows us to specify multiple values files, so you end up with a mix of plain and encrypted values files. How can you tell that no unintended value overriding happens? Ideally, we want to avoid decrypting secret files, but now we may have to.&lt;/p&gt;

&lt;h4&gt;
  
  
  Sealed Secrets
&lt;/h4&gt;

&lt;p&gt;Sealed Secrets is a general-purpose secret management solution; you can use it to encrypt secrets even without Argo CD.&lt;/p&gt;

&lt;p&gt;It is simple to use:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When you install Sealed Secrets in your cluster, it creates a controller that internally manages an RSA certificate. &lt;/li&gt;
&lt;li&gt;Then you use the &lt;code&gt;kubeseal&lt;/code&gt; utility, a CLI tool provided by Sealed Secrets, to encrypt Secret resources into SealedSecret resources. &lt;/li&gt;
&lt;li&gt;Whenever you apply SealedSecret resources, the controller decrypts them back into Secret resources.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sample usage
echo -n bar | kubectl create secret generic mysecret --dry-run=client --from-file=foo=/dev/stdin &amp;gt; mysecret.yaml
kubeseal --format yaml -f mysecret.yaml &amp;gt; mysealedsecret.yaml
kubectl create -f mysealedsecret.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's discuss the benefits of using Sealed Secrets.&lt;/p&gt;

&lt;p&gt;Firstly, &lt;strong&gt;everything is K8S native&lt;/strong&gt;. During development, you can simply reference Secret resources. When you are ready to commit your code, you just run a one-off kubeseal command to encrypt them. &lt;/p&gt;

&lt;p&gt;Secondly, committing plain Secret resources can be blocked by setting up a &lt;a href="https://github.com/k8s-at-home/sops-pre-commit"&gt;pre-commit hook&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Thirdly, it avoids the value-overriding issue: secrets remain secrets.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You can &lt;a href="https://github.com/bitnami-labs/sealed-secrets/blob/main/docs/bring-your-own-certificates.md"&gt;bring your own certificates&lt;/a&gt;. I find this quite useful because it allows you to share the same secret across multiple clusters.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  How to use Sealed Secrets with Argo CD?
&lt;/h2&gt;

&lt;p&gt;If you are familiar with Argo CD, you know that it only allows you to set some values or values files. Actually, this restriction can be easily bypassed by creating a &lt;strong&gt;proxy chart&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Inside the Chart.yaml, you specify the target chart in the dependencies section. Then you put all the SealedSecret resources in the templates folder.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Example proxy chart
├── Chart.yaml
├── templates
│   └── sealed-secret.yaml
└── values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
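For illustration, a proxy chart's Chart.yaml could look like the following. A sketch only: the chart names, version numbers, and repository URL are placeholders, not from the original article.

```yaml
apiVersion: v2
name: my-app-proxy              # placeholder proxy chart name
version: 0.1.0
dependencies:
  - name: my-app                # the target chart, pulled in as a sub-chart
    version: 1.2.3
    repository: https://example.com/base-charts   # placeholder base charts repository
```

Per Helm's sub-chart conventions, values intended for the target chart go under the dependency's name (here `my-app:`) in the proxy chart's values files.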






&lt;p&gt;In Argo CD, values files need to be stored in the same repository as the Helm charts. This violates the best practice explained in the Argo CD &lt;a href="https://argo-cd.readthedocs.io/en/stable/user-guide/best_practices/#separating-config-vs-source-code-repositories"&gt;documentation&lt;/a&gt; itself.&lt;/p&gt;

&lt;p&gt;So ideally, you will set up at least two repositories, which makes using proxy charts a natural move.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Base charts repository&lt;/strong&gt;&lt;br&gt;
All your target charts are stored here, and your proxy charts use them as sub-charts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Proxy charts repository&lt;/strong&gt;&lt;br&gt;
This is where Argo CD sync with your proxy charts, which is representing the desired application state.&lt;/p&gt;



&lt;p&gt;The proxy chart is a very useful technique. If you want to customise a chart but modifying the target chart directly is not an option (maybe it is a public chart), you can use this technique to tailor-make your own.&lt;/p&gt;

&lt;p&gt;There are many more tricks you can do with this pattern. Say you want to share the same secret across multiple deployments; you can do something like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Maybe you want to reuse the same database credential for deployments in all regions?
├── Chart.yaml
├── templates
│   └── sealed-secret.yaml
├── region-a-values.yaml
└── region-b-values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Exercise
&lt;/h2&gt;

&lt;p&gt;I have prepared a minimal setup for you to try out my approach.&lt;/p&gt;

&lt;p&gt;You can check out my repository &lt;a href="https://github.com/timtsoitt/argocd-proxy-charts"&gt;https://github.com/timtsoitt/argocd-proxy-charts&lt;/a&gt; and follow the instructions.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;While there is no one-size-fits-all solution, keeping things simple is always a best practice. Rather than introducing many dependencies and complexities, we see great value in using Sealed Secrets with Argo CD.&lt;/p&gt;

&lt;p&gt;If you have any ideas or questions, feel free to comment here. Also, please spend a little time to like, bookmark, and share this article. Thank you.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>security</category>
      <category>argocd</category>
    </item>
    <item>
      <title>My approach to manage enterprise private CA with AWS</title>
      <dc:creator>timtsoitt</dc:creator>
      <pubDate>Thu, 02 Jun 2022 14:33:04 +0000</pubDate>
      <link>https://dev.to/timtsoitt/my-approach-to-manage-enterprise-private-ca-with-aws-47b0</link>
      <guid>https://dev.to/timtsoitt/my-approach-to-manage-enterprise-private-ca-with-aws-47b0</guid>
      <description>&lt;h2&gt;
  
  
  Story
&lt;/h2&gt;

&lt;p&gt;A long time ago, the company I was working for did not have any private CA. When people deployed internal applications, they created self-signed certificates themselves. Every time I accessed these applications, my browser showed a "Your connection is not private" warning. It was annoying, and indeed also a security risk. As a DevSecOps engineer, it was time for me to step up and address this issue. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We need a private CA to issue certificates for internal applications.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h7LI3RKg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k51n84317crwrnrm2qc0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h7LI3RKg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k51n84317crwrnrm2qc0.png" alt="A very famous warning." width="801" height="547"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;Our operation is based on AWS. Luckily, AWS has a service called &lt;strong&gt;ACM Private CA&lt;/strong&gt;, which allows us to create private CA hierarchies, including root and subordinate CAs.&lt;/p&gt;

&lt;p&gt;The idea is simple. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a root CA using &lt;strong&gt;ACM Private CA&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;Install the root CA certificate on every employee's computer so it is trusted.&lt;/li&gt;
&lt;li&gt;Create a subordinate CA for each department using &lt;strong&gt;ACM Private CA&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Use the root CA to sign these subordinate CAs. &lt;/li&gt;
&lt;li&gt;Share these subordinate CAs to different AWS accounts through AWS &lt;strong&gt;Resource Access Manager (RAM)&lt;/strong&gt;, where these AWS accounts are managed by a particular department.&lt;/li&gt;
&lt;li&gt;Each department issues private certificates with its own subordinate CA using AWS &lt;strong&gt;Certificate Manager (ACM)&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Everyone gets rid of the warning.&lt;/li&gt;
&lt;/ol&gt;
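For teams that prefer scripting over the console, the core of this flow can also be sketched with the AWS CLI. A rough sketch only: the JSON config file, the CA ARN, and the account IDs below are placeholders.

```shell
# Step 1 equivalent: create a root CA
# ca-config.json (placeholder file) holds the key algorithm, signing algorithm, and subject
aws acm-pca create-certificate-authority \
  --certificate-authority-type ROOT \
  --certificate-authority-configuration file://ca-config.json

# Step 5 equivalent: share a subordinate CA with another account via AWS RAM
# (the CA ARN and both account IDs are placeholders)
aws ram create-resource-share \
  --name dept-a-subordinate-ca \
  --resource-arns arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/EXAMPLE \
  --principals 444455556666
```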

&lt;h2&gt;
  
  
  Tutorial
&lt;/h2&gt;

&lt;p&gt;Log in to your AWS account and search for &lt;strong&gt;AWS ACM&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1
&lt;/h3&gt;

&lt;p&gt;There are two options for creating a private CA. For the root CA, choose the first one; for subordinate CAs, choose the second.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MHohGSjW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2yagm0f7rpw3xfbci6y1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MHohGSjW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2yagm0f7rpw3xfbci6y1.png" alt="Step 1" width="880" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2
&lt;/h3&gt;

&lt;p&gt;Fill in the blanks as you like.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PYyLqw81--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fs81pfyh1atmbhoa3e1w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PYyLqw81--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fs81pfyh1atmbhoa3e1w.png" alt="Step 2" width="880" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3
&lt;/h3&gt;

&lt;p&gt;Choose RSA 4096 for the root CA and RSA 2048 for subordinate CAs. You can also explore the other options.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IGFLaSPi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jse8zpiv7oj26sjlro22.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IGFLaSPi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jse8zpiv7oj26sjlro22.png" alt="Step 3" width="880" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4
&lt;/h3&gt;

&lt;p&gt;These are advanced features you can explore. For the purpose of this tutorial, we skip them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tLL9GcDt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/15bpog8c7yjigz0zt6st.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tLL9GcDt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/15bpog8c7yjigz0zt6st.png" alt="Step 4" width="880" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5
&lt;/h3&gt;

&lt;p&gt;It is always good practice to tag your resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--auD6jlJO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ba4jp0odhowx1m6qzluu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--auD6jlJO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ba4jp0odhowx1m6qzluu.png" alt="Step 5" width="880" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6
&lt;/h3&gt;

&lt;p&gt;Allow AWS to auto-renew your private CAs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O4WbzJTZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ez7s4zawvx92kwkaltb1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O4WbzJTZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ez7s4zawvx92kwkaltb1.png" alt="Step 6" width="880" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 7
&lt;/h3&gt;

&lt;p&gt;Check the checkbox once you have reviewed all the information.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mMwqKqq2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9jl9o99geva02an5281o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mMwqKqq2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9jl9o99geva02an5281o.png" alt="Step 7" width="880" height="677"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 8
&lt;/h3&gt;

&lt;p&gt;For your subordinate CAs, remember to sign them with your root CA.&lt;/p&gt;

&lt;p&gt;Click the install CA certificate button. Then choose the &lt;strong&gt;ACM private CA&lt;/strong&gt; option.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IxE3WbXM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vv41dnd2qx0fi975mcdp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IxE3WbXM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vv41dnd2qx0fi975mcdp.png" alt="Click the install CA certificate button" width="880" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select your root CA as the Parent private CA.&lt;/p&gt;

&lt;p&gt;Note that you will need to adjust the &lt;strong&gt;Path length&lt;/strong&gt; to your needs. Since a subordinate CA can be the parent of another subordinate CA, the path length sets the number of subordinate CA certificates that may exist in the chain below it.&lt;/p&gt;

&lt;p&gt;For example, if you don't want people to use this subordinate CA to sign another subordinate CA, choose 0 as the value.&lt;/p&gt;

&lt;p&gt;You can find the definition in AWS documentation.&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/acm-pca/latest/userguide/PcaTerms.html#terms-pathlength"&gt;https://docs.aws.amazon.com/acm-pca/latest/userguide/PcaTerms.html#terms-pathlength&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TMe4Nsuy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pngbmp5ilfa7h5dsi8bc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TMe4Nsuy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pngbmp5ilfa7h5dsi8bc.png" alt="Set the parameters" width="880" height="771"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 9
&lt;/h3&gt;

&lt;p&gt;Now you can share your subordinate CAs with different accounts using &lt;strong&gt;AWS RAM&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u6v_XuY8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ozpyw26jd4np9hnhh3gt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u6v_XuY8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ozpyw26jd4np9hnhh3gt.png" alt="Image description" width="880" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Keep the default setting.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ONXd3Xlf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7hxxe3iqw8b4oykadaji.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ONXd3Xlf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7hxxe3iqw8b4oykadaji.png" alt="Image description" width="880" height="234"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Set the AWS account ID. Then review and create.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GWwT8pML--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m2159yz3l3dcn2h348yf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GWwT8pML--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m2159yz3l3dcn2h348yf.png" alt="Image description" width="880" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 10
&lt;/h3&gt;

&lt;p&gt;In the destination account, you need to accept the shared subordinate CA.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Cnpg_47E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x4v6f3m1ew8l34ddkcdn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Cnpg_47E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x4v6f3m1ew8l34ddkcdn.png" alt="Image description" width="880" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 11
&lt;/h3&gt;

&lt;p&gt;Now if you go to &lt;strong&gt;AWS ACM&lt;/strong&gt; and request a private certificate, you will be able to use the shared subordinate CA to issue your certificate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pccHc2Re--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ii1t7gler51v5hlyglog.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pccHc2Re--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ii1t7gler51v5hlyglog.png" alt="Image description" width="880" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 12
&lt;/h3&gt;

&lt;p&gt;After you issue a certificate, you can export it if needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W8VlglEu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ex1zp2j20gri67kkauvl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W8VlglEu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ex1zp2j20gri67kkauvl.png" alt="Image description" width="880" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 13
&lt;/h3&gt;

&lt;p&gt;Lastly, do not forget to install the root CA certificate on employees' computers. There are multiple ways to do so depending on the OS, which you can look up online.&lt;/p&gt;

&lt;p&gt;Download your root CA certificate here.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Z8_Fa77I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c107ilmqqkragosbizqp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Z8_Fa77I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c107ilmqqkragosbizqp.png" alt="Image description" width="880" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;From a security perspective, now that we have a private CA, nobody expects to see the warning again. If someone does see it, we know the certificate in use was not properly issued, or it may even be a bogus website.&lt;/p&gt;

&lt;p&gt;From an operations perspective, everything is managed properly: we have one place to manage CAs and one place to manage certificates. No more messy self-signed certificates.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Crossplane is better than Terraform in K8S world</title>
      <dc:creator>timtsoitt</dc:creator>
      <pubDate>Sat, 28 May 2022 19:16:36 +0000</pubDate>
      <link>https://dev.to/timtsoitt/crossplane-is-better-than-terraform-in-k8s-world-g79</link>
      <guid>https://dev.to/timtsoitt/crossplane-is-better-than-terraform-in-k8s-world-g79</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;Crossplane is a solution you should consider when your infrastructure exists to serve your k8s applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Story
&lt;/h2&gt;

&lt;p&gt;I have been using Terraform for a very long time. It is simple to use and has huge community support. However, every solution has its shortcomings, and Terraform is not a great fit when you have to work with k8s. &lt;/p&gt;

&lt;p&gt;My AWS infrastructure is closely related to my K8S applications. Maybe I have to upload objects to S3, or store my data in RDS. If you are using AWS EKS, you should be very familiar with the &lt;strong&gt;IRSA&lt;/strong&gt; (IAM Roles for Service Accounts) feature: it grants AWS permissions to your service accounts. Your pods can then reference these service accounts and interact with AWS APIs.&lt;/p&gt;
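For reference, the IRSA wiring boils down to a single annotation on the service account. A minimal sketch; the name, namespace, and role ARN below are placeholders.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app                  # placeholder service account name
  namespace: default
  annotations:
    # eks.amazonaws.com/role-arn is the IRSA annotation key; the ARN is a placeholder
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-app-role
```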

&lt;p&gt;So IAM roles live on the AWS side, and service accounts live on the K8S side; you have to create both somehow.&lt;/p&gt;

&lt;p&gt;I could use Terraform to create both IAM roles and service accounts, but that is operationally unfriendly: service accounts are far from the only k8s resources that need to be applied. How about using Terraform to deploy all the AWS infrastructure and the k8s manifests too? IMO, DO NOT DO IT. Argo CD, Flux CD, or any GitOps tool is much better at this than Terraform. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Never solve a small problem by bringing another big trouble.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now back to my question: how should I integrate IAM roles with service accounts? How about doing the opposite and using k8s to provision the IAM roles? &lt;/p&gt;

&lt;p&gt;And then Crossplane caught my eye. Simply speaking, Crossplane is a k8s-style Terraform. &lt;/p&gt;

&lt;h2&gt;
  
  
  Tutorial
&lt;/h2&gt;

&lt;p&gt;Now I am going to show you how to use Crossplane.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisite
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;You need admin privileges in your AWS account.&lt;/li&gt;
&lt;li&gt;A k8s cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  1. Define your variables
&lt;/h3&gt;

&lt;p&gt;Specify any value you like.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EKS_CLUSTER_NAME=""
AWS_REGION=""
AWS_IAM_ROLE_NAME="${EKS_CLUSTER_NAME}-crossplane-controller"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Install Crossplane using helm charts
&lt;/h3&gt;

&lt;p&gt;Everyone loves helm charts :)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add crossplane https://charts.crossplane.io/master/
helm install --create-namespace --namespace crossplane-system crossplane crossplane/crossplane --version 1.9.0-rc.0.9.g243f1f47 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Install AWS provider
&lt;/h3&gt;

&lt;p&gt;Crossplane can provision infrastructure on many platforms. Say you want to deploy to Azure instead; then you would install the Azure provider.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: pkg.crossplane.io/v1alpha1
kind: ControllerConfig
metadata:
  name: aws-config
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::${AWS_ACCOUNT_ID}:role/${AWS_IAM_ROLE_NAME}
spec:
  podSecurityContext:
    fsGroup: 2000
---
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws
spec:
  package: crossplane/provider-aws:v0.27.0
  controllerConfigRef:
    name: aws-config
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Create IAM role for IRSA
&lt;/h3&gt;

&lt;p&gt;Again, it is a chicken-or-egg problem. The Crossplane controller needs permissions to provision AWS resources, so we have to provision one IAM role manually, just this once.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SERVICE_ACCOUNT_NAME=$(kubectl get providers.pkg.crossplane.io provider-aws -o jsonpath="{.status.currentRevision}")
OIDC_PROVIDER=$(aws eks describe-cluster --name $EKS_CLUSTER_NAME --region $AWS_REGION --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")

read -r -d '' TRUST_RELATIONSHIP &amp;lt;&amp;lt;EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:crossplane-system:${SERVICE_ACCOUNT_NAME}"
        }
      }
    }
  ]
}
EOF
echo "${TRUST_RELATIONSHIP}" &amp;gt; trust.json

aws iam create-role \
    --role-name "${AWS_IAM_ROLE_NAME}" \
    --assume-role-policy-document file://trust.json \
    --description "IAM role for Crossplane provider-aws"

aws iam attach-role-policy --role-name "${AWS_IAM_ROLE_NAME}" --policy-arn=arn:aws:iam::aws:policy/AdministratorAccess

rm trust.json

cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: aws.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: aws-provider
spec:
  credentials:
    source: InjectedIdentity
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5. Try it out
&lt;/h3&gt;

&lt;p&gt;Let's try to deploy something. Apply the following manifests with &lt;code&gt;kubectl apply&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: iam.aws.crossplane.io/v1beta1
kind: Role
metadata:
  name: crossplane-sample-role
spec:
  deletionPolicy: Delete
  forProvider:
    description: "A role created by Crossplane"
    assumeRolePolicyDocument: |
        {
          "Version": "2012-10-17",
          "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Service": "ec2.amazonaws.com"
                },
                "Action": "sts:AssumeRole"
              }
          ]
        }
  providerConfigRef:
    name: aws-provider

---
apiVersion: iam.aws.crossplane.io/v1beta1
kind: Policy
metadata:
  name: crossplane-sample-policy
spec:
  deletionPolicy: Delete
  forProvider:
    name: crossplane-sample-policy
    description: A policy created by Crossplane
    document: |
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
                "eks:DescribeCluster"
            ],
            "Resource": "*"
          }
        ]
      }
  providerConfigRef:
    name: aws-provider
---
apiVersion: iam.aws.crossplane.io/v1beta1
kind: RolePolicyAttachment
metadata:
  name: crossplane-sample-role-policy-attachment
spec:
  deletionPolicy: Delete
  forProvider:
    roleNameRef:
      name: crossplane-sample-role
    policyArnRef: 
      name: crossplane-sample-policy
  providerConfigRef:
    name: aws-provider
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  6. Cleanup
&lt;/h3&gt;

&lt;p&gt;It is always good practice to clean up.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm uninstall crossplane --namespace crossplane-system
kubectl delete ns crossplane-system
kubectl get crd -o name | grep crossplane.io | xargs kubectl delete

aws iam detach-role-policy --role-name ${AWS_IAM_ROLE_NAME} --policy-arn=arn:aws:iam::aws:policy/AdministratorAccess
aws iam delete-role --role-name "${AWS_IAM_ROLE_NAME}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Discussion
&lt;/h2&gt;

&lt;p&gt;Imagine you need to deploy an application that needs an ALB and an RDS instance. Now you can package everything as k8s manifests. No Terraform is involved, and there is much less management overhead. &lt;/p&gt;

&lt;p&gt;Taking a step further, you can use Crossplane to replace Terraform for provisioning any infrastructure. Crossplane lets you write its own equivalent of Terraform modules, which are called &lt;strong&gt;Configurations&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;So do I still recommend Terraform? Yes, I do. Developers might not be comfortable with k8s, while writing Terraform feels like writing a simple program to them. Terraform also has a large community behind it, so new features land quicker and bugs get fixed sooner.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In short, if you are going to deploy infrastructure that exists only for your k8s deployments, e.g. load balancers or IAM roles for IRSA, consider Crossplane. &lt;/p&gt;

&lt;p&gt;If you are going to deploy shared infrastructure, or infrastructure that is not relevant to your k8s deployments, e.g. networks or EC2 bastions, use Terraform.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>crossplane</category>
      <category>terraform</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>One technique to save your AWS EKS IP addresses 10x</title>
      <dc:creator>timtsoitt</dc:creator>
      <pubDate>Fri, 20 May 2022 21:01:01 +0000</pubDate>
      <link>https://dev.to/timtsoitt/one-technique-to-save-your-aws-eks-ip-addresses-10x-2ocn</link>
      <guid>https://dev.to/timtsoitt/one-technique-to-save-your-aws-eks-ip-addresses-10x-2ocn</guid>
      <description>&lt;h2&gt;
  
  
  Story
&lt;/h2&gt;

&lt;p&gt;When I was researching how to design AWS EKS clusters from the ground up, networking was one of my considerations. One question was how many IP addresses I would need in the long run. While that has no simple answer, I can tell you how to save IP addresses.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;The technique I am going to introduce is based on AWS CNI plugin (Amazon VPC Container Network Interface plugin for Kubernetes), which is also the default CNI of your EKS cluster.&lt;/p&gt;

&lt;p&gt;From the official documentation: &lt;em&gt;Using this plugin allows Kubernetes pods to have the same IP address inside the pod as they do on the VPC network.&lt;/em&gt; This is why your pods can communicate with compute resources outside your cluster, or even in other VPCs.&lt;/p&gt;

&lt;p&gt;And let me roughly explain the real-world scenario:&lt;/p&gt;

&lt;p&gt;Each node group provisions a group of EC2 instances. Each instance has multiple ENIs, and each ENI holds multiple IP addresses. The AWS CNI plugin manages these IP addresses and associates them with your EKS pods. &lt;/p&gt;
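&lt;p&gt;You can inspect these per-instance-type limits yourself. A sketch using the AWS CLI (the instance type is an arbitrary example, and the command requires AWS credentials):&lt;/p&gt;

```shell
# Show how many ENIs an m5.large supports and how many IPv4 addresses fit on each ENI.
aws ec2 describe-instance-types \
    --instance-types m5.large \
    --query "InstanceTypes[0].NetworkInfo.[MaximumNetworkInterfaces,Ipv4AddressesPerInterface]"
```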

&lt;h2&gt;
  
  
  EKS features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Assign address prefixes to your ENI
&lt;/h3&gt;

&lt;p&gt;By default, the number of IP addresses available to assign to pods is based on the number of IP addresses assigned per elastic network interface (ENI) and the number of ENIs attached to your Amazon EC2 node, i.e. the AWS CNI plugin assigns individual &lt;strong&gt;IP addresses&lt;/strong&gt; to ENIs.&lt;/p&gt;

&lt;p&gt;EKS offers you another option: let the AWS CNI plugin assign &lt;strong&gt;/28 IP address prefixes&lt;/strong&gt; to ENIs. &lt;/p&gt;

&lt;p&gt;Let me show you a simple example.&lt;/p&gt;

&lt;p&gt;Consider an ENI that has ten slots:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you use the default mode, you can assign 10 IP addresses to it.&lt;/li&gt;
&lt;li&gt;If you use the prefixes mode, you can assign 10 /28 prefix blocks to it, i.e. 10 * 16 IP addresses = 160 IP addresses.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you find that your instances are not fully utilized, you can optimize the pod-to-instance ratio so that you create fewer instances.&lt;/p&gt;
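&lt;p&gt;For reference, prefix mode is a setting on the VPC CNI itself. A sketch of how it is typically enabled, per the AWS documentation (verify the prerequisites for your CNI version; it needs a recent VPC CNI and Nitro-based instances):&lt;/p&gt;

```shell
# Switch the aws-node daemonset (the VPC CNI) to /28 prefix assignment.
kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true
```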




&lt;p&gt;If that does not sound useful to you, prefix mode has an additional advantage: it takes fewer API calls to configure the network interfaces and IP addresses necessary for pod connectivity.&lt;/p&gt;

&lt;p&gt;When scaling rapidly, API throttling might happen, and your EKS cluster can get stuck in a loop when it scales too quickly.&lt;/p&gt;

&lt;p&gt;Your pods need IP addresses, so the AWS VPC CNI calls APIs to allocate them. However, API throttling prevents the CNI from allocating the IP addresses, so the AWS VPC CNI keeps retrying the API calls...&lt;/p&gt;

&lt;p&gt;In the default mode, you assign a single IP address at a time. Now you can assign 16 IP addresses at a time. While not 100% accurate, you can roughly assume it saves 16x the API calls.&lt;/p&gt;

&lt;h3&gt;
  
  
  CNI custom networking
&lt;/h3&gt;

&lt;p&gt;By default, when new network interfaces are allocated for pods, the AWS CNI plugin uses the security groups and subnet of the node's primary network interface, i.e. &lt;strong&gt;both pods and nodes are created in the same subnet&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Normally, we use the RFC 1918 CIDR ranges (10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16) to allocate our IP addresses. &lt;br&gt;
However, as many of you might not know, &lt;strong&gt;EKS supports additional IPv4 CIDR blocks in the 100.64.0.0/10 and 198.19.0.0/16 ranges&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The trick is simple. You can create your nodes in the RFC 1918 CIDR ranges and then create your pods in the 100.64.0.0/10 and 198.19.0.0/16 ranges. You now have many more IP addresses to allocate.&lt;/p&gt;
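&lt;p&gt;Custom networking is configured through ENIConfig objects, one per availability zone. A sketch with placeholder subnet and security group IDs (see the AWS documentation in the Reading section for the full prerequisites):&lt;/p&gt;

```yaml
# First tell the CNI to honour ENIConfigs (run once):
#   kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-east-1a             # matched against the node's availability zone
spec:
  subnet: subnet-0123abcd      # placeholder: a subnet carved from the 100.64.0.0/10 secondary CIDR
  securityGroups:
    - sg-0123abcd              # placeholder security group ID
```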




&lt;p&gt;Please be reminded that 100.64.0.0/10 and 198.19.0.0/16 are not common IP ranges. Proper routing needs to be configured if you want the pods created in 100.64.0.0/10 and 198.19.0.0/16 to have &lt;strong&gt;outbound&lt;/strong&gt; connections to the RFC 1918 CIDR ranges through VPC peering or transit gateway peering.&lt;/p&gt;

&lt;p&gt;Why do I emphasize outbound connections? Here is another trick: you can place load balancers in the RFC 1918 IP ranges and forward traffic to pods in the non-RFC 1918 ranges. This way your clients only need a route to your load balancers, without needing to know anything about your pods. &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Depending on your use case, you can enable either or both of the &lt;strong&gt;Assign address prefixes to your ENI&lt;/strong&gt; and &lt;strong&gt;CNI custom networking&lt;/strong&gt; features. &lt;/p&gt;

&lt;p&gt;I also encourage you to understand what the AWS VPC CNI plugin does if you are not already familiar with it. That knowledge can help you design EKS clusters in a better way.&lt;/p&gt;

&lt;p&gt;If you are interested in using these features, the AWS documentation has a very detailed explanation, which I have linked below. You should also check out the considerations and prerequisites in the documentation. &lt;/p&gt;

&lt;h2&gt;
  
  
  Reading
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/cni-increase-ip-addresses.html"&gt;cni-increase-ip-addresses&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/cni-custom-network.html"&gt;cni-custom-network&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.eksworkshop.com/beginner/160_advanced-networking/secondary_cidr/"&gt;Using additional IPv4 IP ranges&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt"&gt;eni-max-pods&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eks</category>
      <category>kubernetes</category>
      <category>networking</category>
    </item>
    <item>
      <title>Python 101 - annoying UnboundLocalError</title>
      <dc:creator>timtsoitt</dc:creator>
      <pubDate>Sun, 10 Apr 2022 14:40:03 +0000</pubDate>
      <link>https://dev.to/timtsoitt/python-annoying-unboundlocalerror-4e51</link>
      <guid>https://dev.to/timtsoitt/python-annoying-unboundlocalerror-4e51</guid>
      <description>&lt;p&gt;UnboundLocalError is an error you must have encountered when you learn Python. Let us try to understand this error.&lt;/p&gt;

&lt;h2&gt;
  
  
  Starting Point
&lt;/h2&gt;

&lt;p&gt;This code snippet runs fine. Although we do not assign the variable x inside inner_function(), Python finds x in the enclosing scope, which is simple_function().&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def simple_function():
    x = 10

    def inner_function():
        return x

    return inner_function()


print(simple_function()) # 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we try assigning to the variable x. This time we face an &lt;strong&gt;UnboundLocalError&lt;/strong&gt;. Why is that?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Unable to run
def simple_function():
    x = 10

    def inner_function():
        x = x + 10
        return x

    return inner_function()


print(simple_function()) # UnboundLocalError: local variable 'x' referenced before assignment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When we assign to a variable anywhere in a function, that variable is treated as local to that scope, shadowing any variable of the same name in an outer scope. In our example, Python ignores the variable x in simple_function.&lt;/p&gt;

&lt;p&gt;When Python evaluates the expression &lt;code&gt;x + 10&lt;/code&gt;, it tries to reference the variable x before adding 10 to it. However, x has not been assigned yet in the local scope, i.e. the inner_function block, so Python raises the UnboundLocalError.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;If you want to reference variable x from the simple_function scope, you need to use the &lt;strong&gt;nonlocal&lt;/strong&gt; keyword.&lt;/p&gt;

&lt;p&gt;The nonlocal keyword declares that a variable is neither global nor &lt;strong&gt;local&lt;/strong&gt; to the function. It instructs Python to look for the variable in the enclosing scope, the simple_function block in our case.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def simple_function():
    x = 10

    def inner_function():
        nonlocal x
        x = x + 10
        return x

    return inner_function()


print(simple_function()) # 20
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
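&lt;p&gt;For completeness: assignment to a module-level variable inside a function hits the same rule. There, the fix is the &lt;strong&gt;global&lt;/strong&gt; keyword instead:&lt;/p&gt;

```python
x = 10  # module-level variable

def bump():
    global x       # without this line, "x = x + 10" raises UnboundLocalError
    x = x + 10
    return x

print(bump())  # 20
```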



</description>
    </item>
    <item>
      <title>Python 101 - what happens when you instantiate a class?</title>
      <dc:creator>timtsoitt</dc:creator>
      <pubDate>Sat, 09 Apr 2022 14:18:52 +0000</pubDate>
      <link>https://dev.to/timtsoitt/python-101-what-happens-when-you-instantiate-a-class-55ba</link>
      <guid>https://dev.to/timtsoitt/python-101-what-happens-when-you-instantiate-a-class-55ba</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Python is a simple language, and many people pick it as their first language to learn. While happily coding in Python, have you ever been curious about how a class is instantiated?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR: Learn how Python instantiates a class, from the conceptual level down to reading the source code.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Starting Point
&lt;/h2&gt;

&lt;p&gt;Let's take a look at this code snippet. We define a function that returns an integer, and then we call the function to get the value.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def my_method():
    return 1

a = my_method()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now take a look at this code snippet. We define a class, and then we instantiate the class.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class MyClass:
    def __init__(self):
        pass


myclass = MyClass()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inside the function, we need a &lt;strong&gt;return&lt;/strong&gt; statement to get the value back. But for the class, there is no return statement, so why do we get an instance of the class? There must be something hidden.&lt;/p&gt;

&lt;h3&gt;
  
  
  The hidden base class
&lt;/h3&gt;

&lt;p&gt;Python is an OOP language, i.e. we can use inheritance.&lt;/p&gt;

&lt;p&gt;In Python, we implement inheritance like this, where the derived class MyClass inherits from the base class SuperClass.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class SuperClass():
   pass

class MyClass(SuperClass):
   pass
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now I want you to know that in Python 3 every class has a default base class, which is called &lt;strong&gt;object&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;If you have been using Python for a while, you might have seen people define a class in this way. This is because Python 2 does not automatically apply the class &lt;strong&gt;object&lt;/strong&gt; as the base class.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class MyClass(object):
   pass
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can use the dir function to verify it. dir returns the names of all attributes and methods of the specified object, without their values.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; dir(object)
['__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__']

&amp;gt;&amp;gt;&amp;gt; dir(MyClass)
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can confirm that the class MyClass inherits everything from the class object.&lt;/p&gt;

&lt;p&gt;You can also use &lt;code&gt;issubclass(MyClass, object)&lt;/code&gt; to verify it.&lt;/p&gt;
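&lt;p&gt;Both checks fit in a tiny snippet:&lt;/p&gt;

```python
class MyClass:
    pass

# Every Python 3 class implicitly derives from object.
print(issubclass(MyClass, object))    # True
print(isinstance(MyClass(), object))  # True
```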




&lt;p&gt;From the result of dir, there is one method we need to be aware of: __new__. The __new__ method is responsible for creating an instance of the class by allocating memory and initialising the necessary fields. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now we know the hidden return statement is inside __new__ method.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Conceptual representation. 
class object:
    def __new__(cls):
        instance = create_and_initialise(cls)
        return instance
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;OK. You should have spotted two questions. I will answer them one by one.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Where is the cls.__init__ method called? &lt;/li&gt;
&lt;li&gt;What do I mean by conceptual?&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Introduction of metaclass
&lt;/h3&gt;

&lt;p&gt;I mentioned that every class has a hidden base class, object.&lt;/p&gt;

&lt;p&gt;Now I also want you to know every class has a hidden &lt;strong&gt;metaclass&lt;/strong&gt;, which is called &lt;strong&gt;type&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Generally speaking, a metaclass is the class of a class; it defines the behaviour of its class instances. Before you can create an instance from a class, Python needs to create the class itself first, as an instance of the metaclass. &lt;/p&gt;

&lt;p&gt;The metaclass defines how the class creates instances of itself in its __call__ method, which triggers the __new__ method and then the __init__ method of the class.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Conceptual representation. 
class type:
    def __call__(cls, *args, **kwargs):
        instance = cls.__new__(cls)
        cls.__init__(cls, *args, **kwargs)
        return instance

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Now we have the full picture of how __init__ method is triggered.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deep dive into Python implementation source code
&lt;/h2&gt;

&lt;p&gt;We know Python is an interpreted programming language: it needs an interpreter to translate your Python code to bytecode and run it. &lt;/p&gt;

&lt;p&gt;The interpreter is itself a program, so we should ask how the Python interpreter is developed. The answer is CPython.&lt;/p&gt;

&lt;p&gt;CPython is the official implementation of the interpreter. As the name implies, it is written in the C language. When you download Python from the official page, you are using the CPython-based interpreter.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;There are other Python implementations, such as Jython (Java based implementation), PyPy (Python based implementation).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  How is CPython related to our topic?
&lt;/h3&gt;

&lt;p&gt;CPython does not only translate your Python source code; it also includes the Python standard library. &lt;/p&gt;

&lt;p&gt;Say you use the print function in Python: do you notice that you never need to import any library, while in C you need to include the standard library? This is because the Python interpreter (CPython) does it for you.&lt;/p&gt;
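&lt;p&gt;You can observe this with the builtins module, whose names CPython makes available in every namespace without an explicit import:&lt;/p&gt;

```python
import builtins

# print is simply a name in the builtins namespace, injected by the interpreter.
print(builtins.print is print)   # True
print("print" in dir(builtins))  # True
```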

&lt;p&gt;The implementations of the class &lt;em&gt;object&lt;/em&gt; and the class &lt;em&gt;type&lt;/em&gt; are part of CPython. They are not written in Python directly, which is why I described them above as conceptual, Pythonic representations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here is the most exciting part: we are going to look into the CPython source code.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The CPython implementation of __new__ method
&lt;/h3&gt;

&lt;p&gt;Let us visit &lt;a href="https://github.com/python/cpython"&gt;CPython Github repository&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;At the time of writing this article, the latest version is Python 3.11.0 alpha 7.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--81ZE2LV8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g49t5azbat2pb71qcxfn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--81ZE2LV8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g49t5azbat2pb71qcxfn.png" alt="Image description" width="880" height="688"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;The first file we need to look at is the &lt;code&gt;/Include/object.h&lt;/code&gt; file. It defines a struct called &lt;strong&gt;_object&lt;/strong&gt;. Every class instance is an &lt;strong&gt;_object&lt;/strong&gt; in the C implementation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;struct _object {
    _PyObject_HEAD_EXTRA
    Py_ssize_t ob_refcnt;
    PyTypeObject *ob_type;
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are three fields:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;_PyObject_HEAD_EXTRA&lt;/strong&gt;&lt;br&gt;
It is for debug usage. We can skip it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Py_ssize_t ob_refcnt&lt;/strong&gt;&lt;br&gt;
Storing reference counter for garbage collection management.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;PyTypeObject *ob_type&lt;/strong&gt;&lt;br&gt;
The type of the object, i.e. the class of the object.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;In the &lt;code&gt;/Include/pytypedefs.h&lt;/code&gt; file, we can see that &lt;strong&gt;PyObject&lt;/strong&gt; is defined as an alias for the struct &lt;strong&gt;_object&lt;/strong&gt;. The rest of the CPython source code references &lt;strong&gt;PyObject&lt;/strong&gt; instead of &lt;strong&gt;_object&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;typedef struct _object PyObject;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;In the &lt;code&gt;/Objects/object.c&lt;/code&gt; file, we can find the actual implementation behind the __new__ method. &lt;/p&gt;

&lt;p&gt;It returns a PyObject, i.e. an instance of a class.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PyObject *
_PyObject_New(PyTypeObject *tp)
{
    PyObject *op = (PyObject *) PyObject_Malloc(_PyObject_SIZE(tp));
    if (op == NULL) {
        return PyErr_NoMemory();
    }
    _PyObject_Init(op, tp);
    return op;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;In &lt;code&gt;/Objects/call.c&lt;/code&gt; file, we can find the actual implementation of the __call__ method.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PyObject *
_PyObject_Call(PyThreadState *tstate, PyObject *callable,
               PyObject *args, PyObject *kwargs)
{
    ternaryfunc call;
    PyObject *result;

    /* PyObject_Call() must not be called with an exception set,
       because it can clear it (directly or indirectly) and so the
       caller loses its exception */
    assert(!_PyErr_Occurred(tstate));
    assert(PyTuple_Check(args));
    assert(kwargs == NULL || PyDict_Check(kwargs));

    vectorcallfunc vector_func = _PyVectorcall_Function(callable);
    if (vector_func != NULL) {
        return _PyVectorcall_Call(tstate, vector_func, callable, args, kwargs);
    }
    else {
        call = Py_TYPE(callable)-&amp;gt;tp_call;
        if (call == NULL) {
            _PyErr_Format(tstate, PyExc_TypeError,
                          "'%.200s' object is not callable",
                          Py_TYPE(callable)-&amp;gt;tp_name);
            return NULL;
        }

        if (_Py_EnterRecursiveCall(tstate, " while calling a Python object")) {
            return NULL;
        }

        result = (*call)(callable, args, kwargs);

        _Py_LeaveRecursiveCall(tstate);

        return _Py_CheckFunctionResult(tstate, callable, result, NULL);
    }
} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;The line &lt;code&gt;result = (*call)(callable, args, kwargs);&lt;/code&gt; is actually calling another function called &lt;code&gt;type_call&lt;/code&gt;, which is defined in &lt;code&gt;/Objects/typeobject.c&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;static PyObject *
type_call(PyTypeObject *type, PyObject *args, PyObject *kwds)
{
    PyObject *obj;
    PyThreadState *tstate = _PyThreadState_GET();

#ifdef Py_DEBUG
    /* type_call() must not be called with an exception set,
       because it can clear it (directly or indirectly) and so the
       caller loses its exception */
    assert(!_PyErr_Occurred(tstate));
#endif

    /* Special case: type(x) should return Py_TYPE(x) */
    /* We only want type itself to accept the one-argument form (#27157) */
    if (type == &amp;amp;PyType_Type) {
        assert(args != NULL &amp;amp;&amp;amp; PyTuple_Check(args));
        assert(kwds == NULL || PyDict_Check(kwds));
        Py_ssize_t nargs = PyTuple_GET_SIZE(args);

        if (nargs == 1 &amp;amp;&amp;amp; (kwds == NULL || !PyDict_GET_SIZE(kwds))) {
            obj = (PyObject *) Py_TYPE(PyTuple_GET_ITEM(args, 0));
            Py_INCREF(obj);
            return obj;
        }

        /* SF bug 475327 -- if that didn't trigger, we need 3
           arguments. But PyArg_ParseTuple in type_new may give
           a msg saying type() needs exactly 3. */
        if (nargs != 3) {
            PyErr_SetString(PyExc_TypeError,
                            "type() takes 1 or 3 arguments");
            return NULL;
        }
    }

    if (type-&amp;gt;tp_new == NULL) {
        _PyErr_Format(tstate, PyExc_TypeError,
                      "cannot create '%s' instances", type-&amp;gt;tp_name);
        return NULL;
    }

    obj = type-&amp;gt;tp_new(type, args, kwds);
    obj = _Py_CheckFunctionResult(tstate, (PyObject*)type, obj, NULL);
    if (obj == NULL)
        return NULL;

    /* If the returned object is not an instance of type,
       it won't be initialized. */
    if (!PyObject_TypeCheck(obj, type))
        return obj;

    type = Py_TYPE(obj);
    if (type-&amp;gt;tp_init != NULL) {
        int res = type-&amp;gt;tp_init(obj, args, kwds);
        if (res &amp;lt; 0) {
            assert(_PyErr_Occurred(tstate));
            Py_DECREF(obj);
            obj = NULL;
        }
        else {
            assert(!_PyErr_Occurred(tstate));
        }
    }
    return obj;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's translate this function into Pythonic terms:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Trigger the __new__ method to get an instance of the class.&lt;/li&gt;
&lt;li&gt;Check whether the returned object is an instance of the class. &lt;/li&gt;
&lt;li&gt;If not, return the object immediately. If so, call the __init__ method and then return the instance.&lt;/li&gt;
&lt;/ol&gt;
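&lt;p&gt;To make this concrete, here is a rough Python sketch of the same sequence. This is a simplified illustration only, not the actual CPython code; the real logic is the C function above.&lt;/p&gt;

```python
# A simplified sketch of what type.__call__ does, in pure Python.
# Illustration only -- the real implementation is the C function above.
def call(cls, *args, **kwargs):
    # 1. Call __new__ to get an instance of the class.
    obj = cls.__new__(cls, *args, **kwargs)
    # 2. If the returned object is not an instance of the class,
    #    return it immediately without calling __init__.
    if not isinstance(obj, cls):
        return obj
    # 3. Otherwise, call __init__ and then return the instance.
    type(obj).__init__(obj, *args, **kwargs)
    return obj
```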

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Reading the CPython source code is not a trivial task, and there are many details I have not covered. Still, I hope this article helps you learn more about Python.&lt;/p&gt;

&lt;p&gt;If you like my article, please give me some reactions as encouragement. Thank you :)&lt;/p&gt;

&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://realpython.com/cpython-source-code-guide/"&gt;CPython source code guide&lt;/a&gt;&lt;br&gt;
&lt;a href="https://eli.thegreenplace.net/2012/04/16/python-object-creation-sequence"&gt;Python object creation sequence&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
    </item>
    <item>
      <title>Use Cluster API to provision Kubernetes clusters anywhere!</title>
      <dc:creator>timtsoitt</dc:creator>
      <pubDate>Sat, 26 Mar 2022 15:13:36 +0000</pubDate>
      <link>https://dev.to/timtsoitt/use-cluster-api-to-provision-kubernetes-clusters-22c4</link>
      <guid>https://dev.to/timtsoitt/use-cluster-api-to-provision-kubernetes-clusters-22c4</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;Learn how to use Cluster API to provision multiple EKS clusters.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Cluster API?
&lt;/h2&gt;

&lt;p&gt;Provisioning Kubernetes clusters is never an easy task. When there are 1000+ clusters, you definitely want a standardised approach to ease your life. If you have this concern, you have come to the right place! Cluster API is what you need!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Some of you might know tools like kOps or Kubespray. You can think of Cluster API as an alternative to them, but more powerful!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;According to the official page, "Cluster API is a Kubernetes sub-project focused on &lt;strong&gt;providing declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters.&lt;/strong&gt;"&lt;/p&gt;

&lt;p&gt;Here are some highlighted points of Cluster API:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pure YAML-based.&lt;/strong&gt;
Kubernetes style. Super handy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supports all mainstream infrastructure providers.&lt;/strong&gt;
Provision your Kubernetes clusters in cloud and on-premise environments from the same place.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supports managed Kubernetes services.&lt;/strong&gt;
AWS EKS, Azure AKS and GCP GKE are all supported.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bring your own infrastructure.&lt;/strong&gt;
Reuse your existing infrastructure and focus on provisioning Kubernetes clusters.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is awesome, right? &lt;/p&gt;

&lt;p&gt;To demonstrate Cluster API, I am going to show you how to use it to create AWS EKS clusters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Concept
&lt;/h3&gt;

&lt;p&gt;Before you continue, there are some concepts you need to understand first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure Provider&lt;/strong&gt;&lt;br&gt;
The Cluster API project defines a set of APIs in the form of Custom Resource Definitions (CRDs). Each provider implements these APIs to specify how to provision its infrastructure.&lt;/p&gt;

&lt;p&gt;You can use &lt;code&gt;clusterctl config repositories&lt;/code&gt; command to get a list of supported providers and their repository configuration.&lt;/p&gt;

&lt;p&gt;For AWS, its implementation is &lt;strong&gt;Cluster API Provider AWS (CAPA)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Management Cluster&lt;/strong&gt;&lt;br&gt;
A Kubernetes cluster that manages the lifecycle of Workload Clusters. In this cluster, you use Cluster API to further provision Workload Clusters in any infrastructure provider.&lt;/p&gt;

&lt;p&gt;Creating a Management Cluster is a "chicken or the egg" problem. There are two methods available:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Find an existing Kubernetes cluster and transform it into a Management Cluster.&lt;/li&gt;
&lt;li&gt;The "Bootstrap &amp;amp; Pivot" method. Bootstrap a Kubernetes cluster as a temporary Management Cluster, then use Cluster API inside the temporary Management Cluster to provision a target Management Cluster. Finally, move all the Cluster API resources from the temporary Management Cluster to the target Management Cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To show you the comprehensive usage of Cluster API, I am going to use the "Bootstrap &amp;amp; Pivot" method, in which you can also learn how to migrate a Management Cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workload Cluster&lt;/strong&gt;&lt;br&gt;
A Kubernetes cluster whose lifecycle is managed by a Management Cluster.&lt;/p&gt;
&lt;h2&gt;
  
  
  Prerequisite
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;Obviously, there are some tools you need to install first:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;kubectl - Everyone should know it so well :)&lt;/li&gt;
&lt;li&gt;Kind - A tool for running local Kubernetes clusters using Docker container "nodes".&lt;/li&gt;
&lt;li&gt;clusterctl - CLI tool of Cluster API (CAPI)&lt;/li&gt;
&lt;li&gt;clusterawsadm - CLI tool of Cluster API Provider AWS (CAPA)
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install kubectl clusterctl kind

wget -O /usr/local/bin/clusterawsadm https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/download/v1.3.0/clusterawsadm-darwin-amd64
chmod +x /usr/local/bin/clusterawsadm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Setup your AWS credential
&lt;/h3&gt;

&lt;p&gt;I won't talk much about it as there are multiple ways to set up your credentials.&lt;/p&gt;

&lt;p&gt;However, there are two things to keep in mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Your credential must have administrative permissions.&lt;/strong&gt; Creating an EKS cluster involves creating IAM resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use a permanent credential.&lt;/strong&gt;
Your credential will be stored inside the Management Cluster, which will use it to monitor the status of Workload Clusters.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Implementation
&lt;/h2&gt;

&lt;p&gt;All the code used in this article can be found in my &lt;a href="https://github.com/timtsoitt/cluster-api-demo"&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Bootstrap a temporary Management Cluster
&lt;/h3&gt;

&lt;p&gt;We will use Kind to create the temporary Management Cluster.&lt;/p&gt;

&lt;p&gt;Run these commands to create the cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind create cluster --name kind
kubectl config use-context kind-kind
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;You can use alternative distributions, such as minikube, microk8s, k3s or k0s. It is up to your preference; the concept is pretty much the same.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Create AWS IAM resources
&lt;/h3&gt;

&lt;p&gt;Create or update an AWS CloudFormation stack for bootstrapping Kubernetes Cluster API and Kubernetes AWS Identity and Access Management (IAM) permissions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export AWS_REGION=

clusterawsadm bootstrap iam create-cloudformation-stack --region ${AWS_REGION}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  (Optional) Create AWS network resources
&lt;/h3&gt;

&lt;p&gt;EKS is based on several network infrastructure elements.&lt;/p&gt;

&lt;p&gt;You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Let CAPA provision the network infrastructure for you&lt;/strong&gt;. Good for people who want to experiment with Cluster API.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bring your own network infrastructure&lt;/strong&gt;. Basically you have to create a VPC, NAT gateways, Internet gateways, route tables and subnets.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No worries, I will cover both approaches in this tutorial.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install CAPI and CAPA to your Management Cluster
&lt;/h3&gt;

&lt;p&gt;Use &lt;code&gt;clusterctl init&lt;/code&gt; to install core Cluster API resources, along with CAPA resources, in your Kind cluster.&lt;/p&gt;

&lt;p&gt;Since managed node group creation is an experimental feature, you need to enable it before installation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export AWS_REGION=

# Allow CAPA to create EKS managed node groups
export EXP_MACHINE_POOL=true
export EKSEnableIAM=true

# Create the base64 encoded AWS credentials using clusterawsadm.
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile --region ${AWS_REGION})

clusterctl init --infrastructure aws
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the Management Cluster will use &lt;code&gt;AWS_B64ENCODED_CREDENTIALS&lt;/code&gt; to provision EKS resources.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To enable experimental features in an existing Management Cluster, please visit &lt;a href="https://cluster-api.sigs.k8s.io/tasks/experimental-features/experimental-features.html#enabling-experimental-features-on-existing-management-clusters"&gt;here&lt;/a&gt; for the instructions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Security Trick
&lt;/h4&gt;

&lt;p&gt;Since the credential is already loaded into the CAPA controller, you can delete it by using &lt;code&gt;clusterawsadm controller zero-credentials&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;When your CAPA controller is restarted, or your Management Cluster is migrated to a new Kubernetes cluster, you need to update the credentials using these commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
clusterawsadm controller update-credentials
clusterawsadm controller rollout-controller
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create the target Management Cluster in AWS EKS
&lt;/h3&gt;

&lt;p&gt;You need to keep two things in mind.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Choose an &lt;strong&gt;EKS supported&lt;/strong&gt; Kubernetes version. &lt;br&gt;
As of this writing, the latest version that EKS supports is 1.21.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Name your EKS cluster based on &lt;strong&gt;EKS naming rules&lt;/strong&gt;. &lt;br&gt;
You can use any of the following characters: the set of Unicode letters, digits, hyphens and underscores.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I have prepared two yaml files for two options respectively:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;I want Cluster API to manage everything for me&lt;/strong&gt; -&amp;gt; use &lt;code&gt;managed-management-cluster.yaml&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;I want to bring my own infrastructure&lt;/strong&gt; -&amp;gt; use &lt;code&gt;byoi-management-cluster.yaml&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note that you may need to modify some fields in the YAML file, such as the AWS account ID, VPC ID and subnet IDs.&lt;/p&gt;

&lt;p&gt;Run &lt;code&gt;kubectl apply -f byoi-management-cluster.yaml&lt;/code&gt; to provision your Management Cluster.&lt;/p&gt;




&lt;p&gt;Now we need to get the kubeconfig file to access our newly created Management Cluster.&lt;/p&gt;

&lt;p&gt;Cluster API provides a native method to export the kubeconfig file. However, there is a minor &lt;a href="https://stackoverflow.com/questions/71318743/kubectl-versions-error-exec-plugin-is-configured-to-use-api-version-client-auth"&gt;incompatibility issue&lt;/a&gt;. We can use sed to solve it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export CLUSTER_API_CLUSTER_NAME=byoi-mgnt-cl

kubectl --namespace=capi get secret ${CLUSTER_API_CLUSTER_NAME}-user-kubeconfig \
   -o jsonpath={.data.value} | base64 --decode \
   &amp;gt; ${CLUSTER_API_CLUSTER_NAME}.kubeconfig

sed -i '' -e 's/v1alpha1/v1beta1/' ${CLUSTER_API_CLUSTER_NAME}.kubeconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Pivot
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;If you created your Management Cluster directly in your desired target Kubernetes cluster, you can stop here.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you are using a temporary Management Cluster, now is the time to move the Cluster API resources from the temporary bootstrap cluster to the target EKS cluster.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Indeed, the Cluster API resources we are moving are the target Management Cluster itself.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Remember to enable experimental features
export EXP_MACHINE_POOL=true
export EKSEnableIAM=true

# Before you move the resources, you need to install Cluster API
# in the target Management Cluster first.
clusterctl init --infrastructure aws --kubeconfig ${CLUSTER_API_CLUSTER_NAME}.kubeconfig 

clusterctl move -n capi --to-kubeconfig=${CLUSTER_API_CLUSTER_NAME}.kubeconfig 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Clusterctl move
&lt;/h4&gt;

&lt;p&gt;If you create Cluster API managed clusters across multiple namespaces, you need to run the &lt;code&gt;clusterctl move&lt;/code&gt; command for each namespace.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;clusterctl move -n ${OTHER_NAMESPACE} --to-kubeconfig=${CLUSTER_API_CLUSTER_NAME}.kubeconfig 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  (Bonus) Create Workload Clusters
&lt;/h3&gt;

&lt;p&gt;Remember that the goal of using Cluster API is &lt;strong&gt;provisioning, upgrading, and operating multiple Kubernetes clusters&lt;/strong&gt;?&lt;/p&gt;

&lt;p&gt;In my repo, you can see there is a file called &lt;code&gt;byoi-workload-cluster.yaml&lt;/code&gt;. Now try to provision an EKS Workload Cluster from the EKS Management Cluster by running &lt;code&gt;kubectl apply -f byoi-workload-cluster.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Notice that the EKS version of the Workload Cluster is v1.20. After you provision the Workload Cluster, you can change the version to v1.21 and then apply the file again. Upgrading your EKS cluster is that simple!&lt;/p&gt;

&lt;h3&gt;
  
  
  Clean up resources
&lt;/h3&gt;

&lt;p&gt;To delete the temporary Management Cluster, use &lt;code&gt;kind delete cluster --name kind&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To delete a Cluster API-provisioned cluster, use &lt;code&gt;kubectl delete cluster {cluster name}&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To delete EKS managed node groups, use &lt;code&gt;kubectl delete machinepools.cluster.x-k8s.io {node group name}&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Congratulations! You have taken your first step toward Cluster API. Feel free to do more cool stuff, such as provisioning clusters with other providers.&lt;/p&gt;

&lt;p&gt;If you like my article, please give me some reactions, or leave me a message below. Thank you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reading materials
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://cluster-api.sigs.k8s.io/user/concepts.html"&gt;CAPI - Concepts&lt;/a&gt;&lt;br&gt;
&lt;a href="https://cluster-api-aws.sigs.k8s.io/topics/eks/enabling.html#machine-pools"&gt;CAPA - Enabling EKS managed Machine Pools&lt;/a&gt;&lt;br&gt;
&lt;a href="https://cluster-api-aws.sigs.k8s.io/crd/index.html"&gt;CAPI - Reference&lt;/a&gt;&lt;br&gt;
&lt;a href="https://kube.academy/courses/cluster-api"&gt;CAPI free course&lt;/a&gt;&lt;/p&gt;

</description>
      <category>clusterapi</category>
      <category>kubernetes</category>
      <category>aws</category>
      <category>eks</category>
    </item>
    <item>
      <title>Firewalls cannot save you from DDoS attacks!</title>
      <dc:creator>timtsoitt</dc:creator>
      <pubDate>Sat, 19 Mar 2022 18:45:59 +0000</pubDate>
      <link>https://dev.to/timtsoitt/firewall-cannot-save-you-from-ddos-attack-2i63</link>
      <guid>https://dev.to/timtsoitt/firewall-cannot-save-you-from-ddos-attack-2i63</guid>
      <description>&lt;p&gt;You might argue that my headline is incorrect as WAFs can provide some level of DDoS protection. The reason I write this headline is that there are many people somehow treat firewall as panacea. &lt;/p&gt;

&lt;h2&gt;
  
  
  What is DDoS?
&lt;/h2&gt;

&lt;p&gt;To start with, you have to understand that there are two types of DDoS attacks: volumetric DDoS attacks and non-volumetric DDoS attacks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Non-volumetric DDoS attacks
&lt;/h3&gt;

&lt;p&gt;If someone says a firewall can help protect you from DDoS attacks, they are actually referring to non-volumetric DDoS attacks. Basically, attackers send specially crafted packets that cause your applications to fail to respond to legitimate requests, or even crash.&lt;/p&gt;

&lt;p&gt;One famous attack is the low &amp;amp; slow attack. When the server responds to the requests from the DDoS attack, the attackers manipulate the connection by transferring data at the slowest possible rate, staying slightly under the connection timeout to avoid a connection reset. Since there is a connection limit, once these manipulated connections occupy all the available connections, legitimate requests can never establish a connection to the server.&lt;/p&gt;

&lt;p&gt;These kinds of non-volumetric DDoS attacks usually come with signatures, such as the same source IP or the same payload. Whenever a Web Application Firewall (WAF) identifies these signatures, it can block the malicious requests from reaching your server.&lt;/p&gt;

&lt;h3&gt;
  
  
  Volumetric DDoS attacks
&lt;/h3&gt;

&lt;p&gt;As the name implies, volumetric means the attacker is trying to send overwhelming traffic to your server. The objective of this attack is to exhaust your network bandwidth. There is no way to stop volumetric DDoS attacks. The golden rule is to have more available bandwidth than the attackers are capable of consuming.&lt;/p&gt;

&lt;p&gt;In reality, you won't have that much bandwidth available because you have to pay for it. While a big corporation may own 10 Gbps of bandwidth, the biggest DDoS attacks nowadays are at the terabit level.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Think of a trillion people (the DDoS attack) surrounding your house (your server). Although they don't have the key to enter your house (the firewall blocks the connections), your family members (normal users) also have trouble entering your house because the road is super crowded now (bandwidth exhausted). You can't force these people to leave because you don't own the road. (The bandwidth is owned by the ISP; you just pay for the right to use it.)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Where is the myth coming from?
&lt;/h2&gt;

&lt;p&gt;To answer this question, I think of several possible reasons.&lt;/p&gt;

&lt;h3&gt;
  
  
  The word "firewall" is too overloaded
&lt;/h3&gt;

&lt;p&gt;There are many types of firewalls: hardware firewalls, host firewalls (Windows Defender, iptables, ufw), and so on.&lt;/p&gt;

&lt;p&gt;People sometimes use the words WAF and firewall interchangeably. However, to laymen, firewall most probably means Windows Defender. It is common that the knowledge they perceive eventually becomes &lt;strong&gt;Windows Defender can mitigate DDoS attacks&lt;/strong&gt;. It sounds a bit funny, but the sad truth is that I have heard similar statements multiple times in the past.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use of inaccurate words
&lt;/h3&gt;

&lt;p&gt;Word choice should be accurate. However, that does not always hold in real-world conversations. People who are familiar with DDoS attacks often use the word &lt;strong&gt;mitigate&lt;/strong&gt;, i.e. to mitigate DDoS attacks.&lt;/p&gt;

&lt;p&gt;I have heard many alternative words when people talk about mitigating DDoS attacks, such as &lt;strong&gt;stop&lt;/strong&gt; and &lt;strong&gt;prevent&lt;/strong&gt;. These alternatives are really confusing (and also incorrect). It is not surprising that when people often say or hear &lt;strong&gt;stop&lt;/strong&gt; DDoS attacks, they will really believe DDoS attacks can be stopped.&lt;/p&gt;

&lt;p&gt;And now the statement becomes &lt;strong&gt;Windows Defender can stop DDoS attacks&lt;/strong&gt;...&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In practice, there is no way to &lt;strong&gt;stop&lt;/strong&gt; DDoS attacks, not to mention &lt;strong&gt;prevent&lt;/strong&gt; them. Instead, DDoS protection focuses on minimizing the impact of DDoS attacks.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  How to mitigate DDoS attacks?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Short version&lt;/strong&gt;&lt;br&gt;
Cloudflare.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Long version&lt;/strong&gt;&lt;br&gt;
There is a term called Script Kiddle. It refers to some amateur hackers using existing softwares to hack people for fun. There are lots of hacking tools off the shelf to initiate DDoS attacks. If you are suffering from volumetric ddos attacks, the truth is it might be just a young kid running hacking tools behind.&lt;/p&gt;

&lt;p&gt;Initiating DDoS attacks is easy, and buying bandwidth is costly. Paying companies (such as Cloudflare) that specialize in DDoS protection is a much more viable and cheaper choice. Moreover, these DDoS protection services often come embedded with WAF capabilities. By subscribing to a DDoS protection service, you get protection from both non-volumetric and volumetric DDoS attacks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Please use these statements from now on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WAFs can be used to mitigate non-volumetric DDoS attacks.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;and&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Contact DDoS protection service providers if you need to mitigate DDoS attacks.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Readings
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://research.nccgroup.com/wp-content/uploads/2020/07/non-volumetric-distributed-denial-of-service-ddos.pdf"&gt;Introduction of DDoS&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.a10networks.com/blog/5-most-famous-ddos-attacks/%5D"&gt;Record high of DDOS&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Choose Cosign over Notary</title>
      <dc:creator>timtsoitt</dc:creator>
      <pubDate>Sat, 19 Mar 2022 15:20:32 +0000</pubDate>
      <link>https://dev.to/timtsoitt/choose-cosign-over-notary-2146</link>
      <guid>https://dev.to/timtsoitt/choose-cosign-over-notary-2146</guid>
      <description>&lt;h2&gt;
  
  
  Disclaimer
&lt;/h2&gt;

&lt;p&gt;This article is not trying to compare the pros &amp;amp; cons of Cosign and Notary. Rather, it is a sharing of why you should opt in to Cosign if you have technical needs like mine.&lt;/p&gt;

&lt;p&gt;I have also attached some links at the end of this article. I recommend reading them before you move on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;You might have heard of the SolarWinds hack, and you may know what a software supply chain attack is. The software in question can include libraries, Linux packages, plugins, etc.&lt;/p&gt;

&lt;p&gt;As a huge amount of software is distributed every single second worldwide, how we can trust that the software we download is safe has become a hot topic. One idea is to have someone we trust make a claim by attaching their signature to the software. Every time we want to download or update the software, we check whether the signature exists.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Software authors can sign the software to claim that it is an authorized release.&lt;/li&gt;
&lt;li&gt;Security auditors can sign the software to claim that it has passed their security audit.&lt;/li&gt;
&lt;li&gt;An in-house security team can sign the software to claim that it is authorized for use inside the company.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One major software type we are all familiar with is container images. Cosign and Notary are both mainstream solutions for signing container images.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;There are Notary/v1 and Notary/v2. Notary/v2 is not ready for general use so I won't talk much about it. In this article, when you see the term Notary, I am referring to Notary/v1.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Considerations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Able to sign any OCI artifacts
&lt;/h3&gt;

&lt;p&gt;In the past, when we said we published something to the registry, we always meant container images. That no longer applies. Nowadays, many modern registries are OCI compatible, so we are free to store any OCI artifact. More amazingly, any arbitrary file can be stored as an OCI artifact. Theoretically, you can even store a video in an OCI registry.&lt;/p&gt;

&lt;p&gt;If you are working on Kubernetes clusters, you should be familiar with Helm charts. And yes, you can store Helm charts in an OCI registry. This is vital because we can reuse the CI/CD mindset we apply to Docker images for Helm charts. It works like a charm when paired with GitOps.&lt;/p&gt;

&lt;p&gt;So now you care about the safety of your software supply chain, and it is natural that you want all container images to be signed. What about Helm charts? You probably want them to be signed too. (I hope no one thinks that an unauthorized release of Helm charts is trivial.)&lt;/p&gt;

&lt;p&gt;Now I can tell you that you should just go for Cosign and leave Notary alone. Why? &lt;strong&gt;Notary can only be used to sign container images, not arbitrary OCI artifacts.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In addition to Helm charts, you can store SBOMs and image scanning results as OCI artifacts alongside your container images. There are far more OCI use cases than you can imagine.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;OK. You might tell me that you do not use Helm charts at all. Then should you still consider Notary? Continue reading the following section.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Easy to use
&lt;/h3&gt;

&lt;p&gt;So now you want to sign your container images. Definitely you want to do it in an automated fashion (please agree with me). And I can tell you that Cosign is without doubt better than Notary.&lt;/p&gt;

&lt;p&gt;Cosign itself is a single binary. You can install it using package managers like Homebrew and apt. This also means you can use it in any CI/CD solution, like GitHub Actions or Azure Pipelines, by adding a step to install it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using Cosign is super simple.&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use Cosign to generate a keypair in your local environment.&lt;/li&gt;
&lt;li&gt;Store the generated private key inside your CI/CD platform.&lt;/li&gt;
&lt;li&gt;Use Cosign to sign your OCI artifacts.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cosign generate-key-pair
cosign sign --key cosign.key dlorenc/demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
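&lt;p&gt;For completeness, consumers can then verify the signature with the public key generated in step 1. A minimal sketch; &lt;code&gt;dlorenc/demo&lt;/code&gt; is the example image from above:&lt;/p&gt;

```shell
# Verify the signature on the image using the public key
# produced by `cosign generate-key-pair`.
cosign verify --key cosign.pub dlorenc/demo
```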






&lt;p&gt;On the contrary, Notary requires you to self-host a Notary service. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here are some issues you should be aware of.&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Learning Curve
&lt;/h4&gt;

&lt;p&gt;Notary is based on the TUF framework. TUF is powerful. Another word for powerful is complicated. You have to understand how TUF works to make proper use of Notary.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;After you have spent (so much) time understanding TUF well, are you ready to educate your users on what TUF is?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Setup complexity
&lt;/h4&gt;

&lt;p&gt;A Notary service consists of a Notary server, a Notary signer, a Notary client and a MySQL database. You also need to set up mTLS between the Notary server and the Notary signer. It might not be a big issue, but it is still an effort you should pay attention to.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Some people might argue that the setup is simple. However, be mindful that Cosign just requires you to install a binary.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Operational burden
&lt;/h4&gt;

&lt;p&gt;What are you going to do if your Notary service is unavailable? Your pipelines might break when they can't verify the signatures of the container images, and then all your downstream deployments screw up.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Some of you might be experienced in designing an HA solution and can take good care of the MySQL database in case of a region outage. However, I prefer a risk-avoidance strategy: not using Notary.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Network connectivity
&lt;/h4&gt;

&lt;p&gt;The Notary service is critical. Unless you are releasing container images to the public, you are most probably hosting Notary as an internal-facing service to reduce the attack surface.&lt;/p&gt;

&lt;p&gt;If you are using public workers in your CI/CD solution, you now have to deploy some self-hosted workers that can reach your Notary service.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Cosign does not have this issue.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Secret management
&lt;/h4&gt;

&lt;p&gt;You have to manage a set of secrets, such as TLS certificates and TUF keys. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For Cosign, you also need to manage the private key. It is obviously much simpler than Notary.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;Now you can think about it: does the advantage of using Notary outweigh the effort you spend on it?&lt;/p&gt;

&lt;p&gt;To me, the answer is &lt;strong&gt;NO&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Although I opt for Cosign, I do not think Notary is bad. As I have mentioned, TUF (Notary) is powerful. If your team has the resources to study TUF well and wants to secure the container image supply chain as much as possible, Notary is a good choice for you. And I do encourage people to learn what TUF is even if you do not use Notary.&lt;/p&gt;

&lt;p&gt;In the future, I will write another article to explain how you can use Cosign to secure helm chart K8S deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fun
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Status of Notary/v2
&lt;/h3&gt;

&lt;p&gt;There are other issues that come with Notary/v1, which is why the Notary/v2 project was proposed. Notary/v2 has released its first alpha version; the implementation is called &lt;a href="https://github.com/notaryproject/notation"&gt;Notation&lt;/a&gt;. Personally, I think the user experience of Notary/v2 is very similar to Cosign's.&lt;/p&gt;

&lt;h3&gt;
  
  
  Try Notary/v1
&lt;/h3&gt;

&lt;p&gt;Docker Hub hosts an official Notary server at &lt;a href="https://notary.docker.io"&gt;https://notary.docker.io&lt;/a&gt;, and you can verify all officially published images listed &lt;a href="https://hub.docker.com/search?q=&amp;amp;type=image&amp;amp;image_filter=official"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Verifying the official alpine image:&lt;br&gt;
&lt;code&gt;notary -s https://notary.docker.io -d ~/.docker/trust list docker.io/library/alpine&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Readings
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dlorenc.medium.com/oci-artifacts-explained-8f4a77945c13"&gt;What is OCI?&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.cloudsavvyit.com/14680/how-to-index-your-docker-images-dependencies-with-syft/"&gt;What is SBOMs?&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/push-oci-artifact.html"&gt;Publish helm charts to AWS ECR&lt;/a&gt;&lt;br&gt;
&lt;a href="https://theupdateframework.io/"&gt;What is TUF?&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.cloudsavvyit.com/14716/how-docker-image-signing-will-evolve-with-notary-v2/"&gt;Notary/v2 v.s Notary/v1&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dlorenc.medium.com/notary-v2-and-cosign-b816658f044d"&gt;Notary/v2 v.s. Cosign&lt;/a&gt;&lt;/p&gt;

</description>
      <category>notary</category>
      <category>cosign</category>
    </item>
  </channel>
</rss>
