<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Maël Valais</title>
    <description>The latest articles on DEV Community by Maël Valais (@maelvls).</description>
    <link>https://dev.to/maelvls</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F266058%2Ffaedb791-235c-4379-b186-26227845e692.jpeg</url>
      <title>DEV Community: Maël Valais</title>
      <link>https://dev.to/maelvls</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/maelvls"/>
    <language>en</language>
    <item>
      <title>Logging into Synology NAS with Personal Google Accounts</title>
      <dc:creator>Maël Valais</dc:creator>
      <pubDate>Wed, 17 Apr 2024 17:50:00 +0000</pubDate>
      <link>https://dev.to/maelvls/logging-into-synology-nas-with-personal-google-accounts-16go</link>
      <guid>https://dev.to/maelvls/logging-into-synology-nas-with-personal-google-accounts-16go</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;My parents and I share a DS923+. We mainly use it for storing photos. To save energy, the NAS is stopped at night and restarted in the morning. Restarting the NAS means that you need to log in to the UI at least once every day.&lt;/p&gt;

&lt;p&gt;Since my parents don't use the Synology often, they would first struggle to remember their username. Then they would forget their password, which meant I had to reset it for them, because Synology hasn't implemented a way to reset passwords over email:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftdzmhqx778d4dd6n7qsg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftdzmhqx778d4dd6n7qsg.jpeg" alt="Synology login screen where you are asked for your username but don't remember it" width="800" height="667"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hb92a6f6ncsl68nj0of.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hb92a6f6ncsl68nj0of.jpeg" alt="Synology login screen where you are asked to enter your password after having entered your username" width="800" height="667"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Over the past two years, my parents forgot their passwords a couple of times. This led me to look for an alternative way to log into the NAS... why not use their Google accounts through single sign-on?&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges with Local Accounts and SSO
&lt;/h2&gt;

&lt;p&gt;A couple of years ago, Synology introduced SSO (single sign-on). Since DSM 7.2, it supports generic OIDC providers and can log in local users (previously, SSO was only possible for LDAP users).&lt;/p&gt;

&lt;p&gt;Since my parents are always signed in to their Google accounts, I figured it would be possible to use OIDC with Google... except that it doesn't work with local accounts.&lt;/p&gt;

&lt;p&gt;Here is what I tried. First, I created an OIDC client for Google; the configuration looked like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdreq3mhk5ztdp4c9vl3v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdreq3mhk5ztdp4c9vl3v.png" alt="Credentials page in API and Services in GCP's Console" width="800" height="766"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, I configured my Synology to use Google's OIDC endpoint:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F61zbp5k6xvb0te23oydk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F61zbp5k6xvb0te23oydk.png" alt="Configuration of the SSO Client using Google OIDC in DSM" width="800" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The problem arose with the "Username claim". I want to log into my local account &lt;code&gt;mael.valais&lt;/code&gt;, but none of the claims in Google's ID tokens contain that username. Here is an example of a Google ID token:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"iss"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://accounts.google.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"aud"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1234987819200.apps.googleusercontent.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"sub"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"10769150350006150715113082367"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"email"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"jsmith@example.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"email_verified"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"true"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"iat"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1353601026&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"exp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1353604926&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
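If you want to check which claims an ID token actually carries, you can decode its payload yourself: a JWT's payload is the second dot-separated segment, base64url-encoded. Here is a minimal sketch; the token in it is a hypothetical sample built on the spot, not a real Google token.

```shell
# Build a hypothetical two-segment JWT just for demonstration purposes.
b64url() { printf '%s' "$1" | base64 | tr '+/' '-_' | tr -d '=\n'; }
TOKEN="$(b64url '{"alg":"none"}').$(b64url '{"email":"jsmith@example.com"}')."

# Extract the payload segment and undo the base64url alphabet.
payload=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
# Restore the base64 padding that base64url encoding strips.
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
claims=$(printf '%s' "$payload" | base64 -d)
echo "$claims"
```

Running this against a real Google ID token shows the claims listed above, and in particular that none of them contains a Synology username.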



&lt;h2&gt;
  
  
  Forking Dex to use it as an OIDC middleware for Google OIDC
&lt;/h2&gt;

&lt;p&gt;I figured I could use Dex to act as a middleware between Synology's OIDC client and Google's OIDC server. My goal was to "augment" Google's JWTs with Synology's usernames by looking up the user by email.&lt;/p&gt;

&lt;p&gt;Dex isn't as flexible as I would have hoped. To make it work, I had to fork it to change the internals of the Google OIDC connector.&lt;/p&gt;

&lt;p&gt;Fork: &lt;a href="https://github.com/maelvls/dex/tree/google-to-synology-sso" rel="noopener noreferrer"&gt;https://github.com/maelvls/dex/tree/google-to-synology-sso&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My fork builds on the one presented in &lt;a href="https://github.com/dexidp/dex/pull/2954" rel="noopener noreferrer"&gt;https://github.com/dexidp/dex/pull/2954&lt;/a&gt;, which introduces the &lt;code&gt;ExtendPayload&lt;/code&gt; interface. I slightly adjusted that interface to pass the original claims along, since I needed access to the email contained in the JWT provided by Google.&lt;/p&gt;

&lt;p&gt;With this fork, you will need to set three more environment variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;SYNO_PASSWD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;redacted
&lt;span class="nv"&gt;SYNO_USER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mael.valais
&lt;span class="nv"&gt;SYNO_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://127.0.0.1:5000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the OIDC flow with Google completes, and just before Dex issues its own JWT, the code I added looks up the Synology user by email and injects the &lt;code&gt;username&lt;/code&gt; claim. With this modified Dex, the JWT looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"at_hash"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"-j6HZYvzDaqkQB2KxIgSyw"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"aud"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"caddy"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"c_hash"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"8SK3tobDYgaI3cnDzkmi5g"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"email"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mael65@gmail.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"email_verified"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"exp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1713387587&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"iat"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1713301187&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"iss"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://login.mysynodomain.dev/dex"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Maël Valais"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"nonce"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"MFZFSkESL1XqdQmbvr0T43Kn7v0CzLap"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"sub"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ChUxMDAzNjk3OTQzNjg3MDAwOTk5MTISBmdvb2dsZQ"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"username"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mael.valais"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is the updated configuration in DSM:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84w3ryoo7jiufs3e5qt1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84w3ryoo7jiufs3e5qt1.png" alt="Configuration of the SSO Client using Dex in DSM" width="800" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Using the fork of Dex
&lt;/h2&gt;

&lt;p&gt;Create a file &lt;code&gt;dex.yaml&lt;/code&gt; on your NAS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;issuer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://login.mysynodomain.dev/dex&lt;/span&gt;

&lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sqlite3&lt;/span&gt;
  &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dex.sqlite&lt;/span&gt;

&lt;span class="na"&gt;web&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.0.0.0:5556&lt;/span&gt;

&lt;span class="na"&gt;logger&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;level&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;debug&lt;/span&gt;

&lt;span class="na"&gt;oauth2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;skipApprovalScreen&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
   &lt;span class="na"&gt;alwaysShowLoginScreen&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;

&lt;span class="na"&gt;staticClients&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;synology&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Synology'&lt;/span&gt;
  &lt;span class="na"&gt;redirectURIs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;https://mysynodomain.dev/'&lt;/span&gt;
  &lt;span class="na"&gt;secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;foo&lt;/span&gt; &lt;span class="c1"&gt;# Use openssl rand -hex 16 to generate this.&lt;/span&gt;

&lt;span class="na"&gt;connectors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;google&lt;/span&gt;
  &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;google&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Google&lt;/span&gt;
  &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;issuer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://accounts.google.com&lt;/span&gt;
    &lt;span class="na"&gt;clientID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$GOOGLE_CLIENT_ID&lt;/span&gt;
    &lt;span class="na"&gt;clientSecret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$GOOGLE_CLIENT_SECRET&lt;/span&gt;
    &lt;span class="na"&gt;redirectURI&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://login.mysynodomain.dev/dex/callback&lt;/span&gt;

&lt;span class="c1"&gt;# I have disabled email login.&lt;/span&gt;
&lt;span class="na"&gt;enablePasswordDB&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
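One precaution before starting the container (assuming your files live in &lt;code&gt;$HOME&lt;/code&gt; as in the docker run command that follows): Docker's bind mounts create a &lt;em&gt;directory&lt;/em&gt; at the source path if it does not exist yet, which would break SQLite. Creating an empty file first avoids that:

```shell
# If $HOME/dex.sqlite does not exist, "docker run -v" would create a
# directory with that name, and Dex's SQLite storage would fail to open.
# An empty file is enough; Dex initializes the database on first start.
touch "$HOME/dex.sqlite"
```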



&lt;p&gt;Finally, run Dex:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--name&lt;/span&gt; dex &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/dex.yaml:/dex.yaml &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/dex.sqlite:/dex.sqlite &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;SYNO_PASSWD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;redacted &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;SYNO_USER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mael.valais &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;SYNO_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://127.0.0.1:5000 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;GOOGLE_CLIENT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;207842732284-l7nhetlsvimmds80fa2knir8fundp3h4.apps.googleusercontent.com &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;GOOGLE_CLIENT_SECRET&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;redacted &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 5556:5556 &lt;span class="se"&gt;\&lt;/span&gt;
  ghcr.io/maelvls/dex:google-to-synology-sso-v2@sha256:252713d98c8369612994fbbed6f257d79dc35ff84b2cbb6952a11d63c57b64bb serve /dex.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
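Before pointing DSM at Dex, it is worth checking that Dex is up and serving the expected issuer. Per OIDC Discovery, the metadata URL is derived from the issuer configured in &lt;code&gt;dex.yaml&lt;/code&gt; (the hostname below is the placeholder used throughout this post):

```shell
# Derive the OIDC discovery URL from the issuer. OIDC clients validate
# that the "issuer" field in the returned JSON matches this value exactly.
issuer="https://login.mysynodomain.dev/dex"
discovery_url="${issuer%/}/.well-known/openid-configuration"
echo "$discovery_url"
```

Fetching that URL with &lt;code&gt;curl -fsS&lt;/code&gt; should return a JSON document listing the authorization, token, and JWKS endpoints.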



&lt;blockquote&gt;
&lt;p&gt;With this command, you will be using Dex images that I built:&lt;/p&gt;


&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ghcr.io/maelvls/dex
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;I wouldn't recommend using random Docker images from the internet, especially since this is about authentication. I might be a malicious actor trying to steal your Synology credentials! But if you still want to proceed, here is an image! Note that I am not monitoring the image for security vulnerabilities, and do not guarantee that it is secure. Use at your own risk!&lt;/p&gt;
&lt;/blockquote&gt;
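If you do decide to use the prebuilt image, pinning it by digest (as in the docker run command above) at least guarantees you get exactly the bytes that digest names, regardless of where the tag later points, since the &lt;code&gt;sha256&lt;/code&gt; in the reference is the content-addressed manifest digest:

```shell
# The part after "@" is the manifest digest; container runtimes verify
# the pulled content against it, so the image cannot be silently swapped.
image="ghcr.io/maelvls/dex:google-to-synology-sso-v2@sha256:252713d98c8369612994fbbed6f257d79dc35ff84b2cbb6952a11d63c57b64bb"
digest="${image##*@}"
echo "$digest"
```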

&lt;h2&gt;
  
  
  The Docker image
&lt;/h2&gt;

&lt;p&gt;The Docker image is available on GitHub Container Registry: &lt;a href="https://ghcr.io/maelvls/dex" rel="noopener noreferrer"&gt;https://ghcr.io/maelvls/dex&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rebuilding the image yourself and pushing it to your Synology NAS
&lt;/h3&gt;

&lt;p&gt;First, install &lt;code&gt;zig&lt;/code&gt; and &lt;code&gt;ko&lt;/code&gt;. They let you cross-compile Dex to &lt;code&gt;linux/amd64&lt;/code&gt; from macOS without Buildx (cross-compiling is required because Dex's SQLite library needs CGO):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;ko zig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Clone the fork:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/maelvls/dex &lt;span class="nt"&gt;--branch&lt;/span&gt; google-to-synology-sso
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, build the image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;CC&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"zig cc -target x86_64-linux"&lt;/span&gt; &lt;span class="nv"&gt;CXX&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"zig c++ -target x86_64-linux"&lt;/span&gt; &lt;span class="nv"&gt;CGO_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nv"&gt;KO_DOCKER_REPO&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ghcr.io/maelvls/dex &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nv"&gt;KO_DEFAULTBASEIMAGE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;alpine &lt;span class="se"&gt;\&lt;/span&gt;
  ko build ./cmd/dex &lt;span class="nt"&gt;--bare&lt;/span&gt; &lt;span class="nt"&gt;--tarball&lt;/span&gt; /tmp/out.tar &lt;span class="nt"&gt;--push&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, copy the image to your NAS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh yournas /usr/local/bin/docker load &amp;lt;/tmp/out.tar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  (Just so that I don't forget) Here is how I pushed &lt;code&gt;ghcr.io/maelvls/dex&lt;/code&gt; to GitHub Container Registry
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;google-to-synology-sso-v6
git tag &lt;span class="nv"&gt;$VERSION&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"Release &lt;/span&gt;&lt;span class="nv"&gt;$VERSION&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
git push maelvls &lt;span class="nv"&gt;$VERSION&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;CC&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"zig cc -target x86_64-linux"&lt;/span&gt; &lt;span class="nv"&gt;CXX&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"zig c++ -target x86_64-linux"&lt;/span&gt; &lt;span class="nv"&gt;CGO_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nv"&gt;KO_DOCKER_REPO&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ghcr.io/maelvls/dex &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nv"&gt;KO_DEFAULTBASEIMAGE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;alpine &lt;span class="se"&gt;\&lt;/span&gt;
  ko build ./cmd/dex &lt;span class="nt"&gt;--bare&lt;/span&gt; &lt;span class="nt"&gt;--push&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="nt"&gt;--tags&lt;/span&gt; &lt;span class="nv"&gt;$VERSION&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--image-annotation&lt;/span&gt; &lt;span class="s2"&gt;"org.opencontainers.image.created=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; +%Y-%m-%dT%H:%M:%SZ&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--image-annotation&lt;/span&gt; &lt;span class="s2"&gt;"org.opencontainers.image.url=https://maelvls.dev/synology-sso-with-personal-google-account/"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--image-annotation&lt;/span&gt; &lt;span class="s2"&gt;"org.opencontainers.image.source=https://github.com/maelvls/dex"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--image-annotation&lt;/span&gt; &lt;span class="s2"&gt;"org.opencontainers.image.version=&lt;/span&gt;&lt;span class="nv"&gt;$VERSION&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--image-annotation&lt;/span&gt; &lt;span class="s2"&gt;"org.opencontainers.image.revision=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;git rev-parse HEAD&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--image-annotation&lt;/span&gt; &lt;span class="s2"&gt;"org.opencontainers.image.vendor=Maël Valais"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--image-annotation&lt;/span&gt; &lt;span class="s2"&gt;"org.opencontainers.image.title=google-to-synology-sso"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--image-annotation&lt;/span&gt; &lt;span class="s2"&gt;"org.opencontainers.image.description=Fork of Dex to use Synology SSO with Google accounts"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--image-annotation&lt;/span&gt; &lt;span class="s2"&gt;"org.opencontainers.image.documentation=https://maelvls.dev/synology-sso-with-personal-google-account/"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--image-annotation&lt;/span&gt; &lt;span class="s2"&gt;"org.opencontainers.image.authors=Maël Valais &amp;lt;mael.valais@gmail.com&amp;gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--image-annotation&lt;/span&gt; &lt;span class="s2"&gt;"org.opencontainers.image.licenses=Apache-2.0"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--image-annotation&lt;/span&gt; &lt;span class="s2"&gt;"org.opencontainers.image.ref.name=google-to-synology-sso"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See the history below to know the image hashes.&lt;/p&gt;

&lt;h3&gt;
  
  
  History
&lt;/h3&gt;

&lt;h4&gt;
  
  
  June 21st, 2025: v5
&lt;/h4&gt;

&lt;p&gt;Release: &lt;a href="https://github.com/maelvls/google-to-synology-sso/releases/tag/google-to-synology-sso-v5" rel="noopener noreferrer"&gt;google-to-synology-sso-v5&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I somehow didn't realize that I was hardcoding the Synology URL. In this version, I've added &lt;code&gt;SYNO_URL&lt;/code&gt; (I thought I had already added it, but I hadn't!).&lt;/p&gt;

&lt;p&gt;I've also renamed the fork to google-to-synology-sso to help with discoverability.&lt;/p&gt;

&lt;p&gt;The image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ghcr.io/maelvls/dex:google-to-synology-sso-v5@sha256:e805a95be565268421ccdb2271dfc0d85ae12b6b53cf82c47b294d34891ff3d1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  June 14th, 2025: v4
&lt;/h4&gt;

&lt;p&gt;Release: &lt;a href="https://github.com/maelvls/google-to-synology-sso/releases/tag/google-to-synology-sso-v4" rel="noopener noreferrer"&gt;google-to-synology-sso-v4&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The dex container kept crashing due to i/o timeouts when Dex was trying to connect to the Synology API. I fixed that by adding a retry mechanism with exponential backoff, capped at 10 retries and at most 1 hour between retries.&lt;/p&gt;
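As an illustration of that backoff schedule (the actual implementation lives in the fork's Go code, and the exact delays there may differ), the delay doubles on each retry and is capped at one hour:

```shell
# Sketch of exponential backoff: delay doubles per attempt,
# capped at 3600 s (1 hour), for at most 10 attempts.
delay=1
for attempt in 1 2 3 4 5 6 7 8 9 10; do
  echo "attempt $attempt: wait ${delay}s"
  delay=$(( delay * 2 ))
  if [ "$delay" -gt 3600 ]; then delay=3600; fi
done
```

With a 1-second initial delay, 10 doublings never reach the cap; the cap matters only with a larger initial delay.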

&lt;p&gt;The image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ghcr.io/maelvls/dex:google-to-synology-sso-v4@sha256:f8bf15901c2b994337994c4f60c48c154437af656cbe85701cb8d1d7d94127ba
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  June 4th, 2025: v3
&lt;/h4&gt;

&lt;p&gt;Release: &lt;a href="https://github.com/maelvls/google-to-synology-sso/releases/tag/google-to-synology-sso-v3" rel="noopener noreferrer"&gt;google-to-synology-sso-v3&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Reduced the Synology SSO loading time from 10 seconds to 1 second. It was slow because I wasn't caching the Synology users: they were fetched every time someone logged into Synology. The image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ghcr.io/maelvls/dex:google-to-synology-sso-v3@sha256:d0d889e32400ef70529daef32e7a77bf9da021cbaff9954589db2204a5c49335
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  June 1st, 2025: v2
&lt;/h4&gt;

&lt;p&gt;Release: &lt;a href="https://github.com/maelvls/google-to-synology-sso/releases/tag/google-to-synology-sso-v2" rel="noopener noreferrer"&gt;google-to-synology-sso-v2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;google-to-synology-sso-v1&lt;/code&gt; tag was buggy: the &lt;code&gt;ExtendPayload&lt;/code&gt; function wasn't being called correctly. I've pushed &lt;code&gt;google-to-synology-sso-v2&lt;/code&gt; to fix that. Here is the new image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ghcr.io/maelvls/dex:google-to-synology-sso-v2@sha256:252713d98c8369612994fbbed6f257d79dc35ff84b2cbb6952a11d63c57b64bb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Apr 12th, 2024: v1
&lt;/h4&gt;

&lt;p&gt;Release: &lt;a href="https://github.com/maelvls/google-to-synology-sso/releases/tag/google-to-synology-sso-v1" rel="noopener noreferrer"&gt;google-to-synology-sso-v1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ghcr.io/maelvls/dex:google-to-synology-sso-v1@sha256:345c8fec6b222c308759f21864c6af3b16c373801fd5e0b7ad4b131a743d3b07
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With this method, my parents can log into the NAS with their Google account and no longer have to remember their Synology username and password.&lt;/p&gt;

&lt;p&gt;Although it works, I wish I didn't have to fork Dex to customize the claims it puts into the JWT payload. I came across a couple of design proposals aiming to make Dex more extensible, but none has been implemented yet.&lt;/p&gt;

&lt;p&gt;The login flow is much smoother now: click "Login with Google", select the Google account, and you're in! Just two screens:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5m71bux0bvi6vd1oyh0.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5m71bux0bvi6vd1oyh0.jpeg" alt="Synology login screen that shows a button that says Login with Google" width="800" height="667"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpuxj8zvuvicfbupxn6wn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpuxj8zvuvicfbupxn6wn.png" alt="Google screen allowing you to select a Google account" width="800" height="1059"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>mitmproxy hangs on TLS renegotiation: a debugging story</title>
      <dc:creator>Maël Valais</dc:creator>
      <pubDate>Sat, 25 Sep 2021 08:31:00 +0000</pubDate>
      <link>https://dev.to/maelvls/mitmproxy-hangs-on-tls-renegotiation-a-debugging-story-56n1</link>
      <guid>https://dev.to/maelvls/mitmproxy-hangs-on-tls-renegotiation-a-debugging-story-56n1</guid>
      <description>&lt;p&gt;I frequently use mitmproxy to inspect what HTTP requests and responses look like under the hood. Inspecting HTTP flows comes in handy with tools that tend to hide the actual JSON error messages.&lt;/p&gt;

&lt;p&gt;One tool I have been using a lot is &lt;a href="https://github.com/Venafi/vcert" rel="noopener noreferrer"&gt;&lt;code&gt;vcert&lt;/code&gt;&lt;/a&gt;. &lt;code&gt;vcert&lt;/code&gt; allows you to request X.509 certificates from Venafi Trust Protection Platform (TPP) as well as Venafi Cloud.&lt;/p&gt;

&lt;p&gt;In the following example, I pass an unknown client ID, &lt;code&gt;duh&lt;/code&gt;. &lt;code&gt;vcert&lt;/code&gt; only reports the HTTP status line, not the error details in the response body:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;vcert getcred &lt;span class="nt"&gt;--username&lt;/span&gt; foo &lt;span class="nt"&gt;--password&lt;/span&gt; bar &lt;span class="nt"&gt;--client-id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;duh &lt;span class="nt"&gt;--verbose&lt;/span&gt;
vCert: 2021/09/19 19:23:56 Getting credentials...
vCert: 2021/09/19 19:23:56 Got 400 Bad Request status &lt;span class="k"&gt;for &lt;/span&gt;POST https://venafi-tpp.platform-ops.jetstack.net/vedauth/authorize/oauth
vCert: 2021/09/19 19:24:05 unexpected status code on TPP Authorize. Status: 400 Bad Request
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;I am sure that the HTTP API returns more than just the status line &lt;code&gt;400 Bad Request&lt;/code&gt;!&lt;/p&gt;

&lt;p&gt;But when I try using mitmproxy, I get the following error:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ HTTPS_PROXY=:9090 vcert getcred --username foo --password bar --client-id=duh --verbose
vCert: 2021/09/19 19:29:20 Getting credentials...
vCert: 2021/09/19 19:29:50 Post "https://venafi-tpp.platform-ops.jetstack.net/vedauth/authorize/oauth": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmaelvls.dev%2Fmitmproxy-tls-renegotiation-debugging-story%2Fvcert-hanging.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmaelvls.dev%2Fmitmproxy-tls-renegotiation-debugging-story%2Fvcert-hanging.png" alt="vcert-hanging"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Looking at the "Detail" tab for the HTTP call recorded in mitmproxy, I can see that the HTTP request was acknowledged by the TPP server (see the "Request complete" time). For some reason, the response never arrives.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmaelvls.dev%2Fmitmproxy-tls-renegotiation-debugging-story%2Fmitmproxy-request-received.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmaelvls.dev%2Fmitmproxy-tls-renegotiation-debugging-story%2Fmitmproxy-request-received.png" alt="mitmproxy-request-received"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let us dig a bit deeper and compare the "with mitmproxy" and "without mitmproxy" flows using &lt;a href="https://www.wireshark.org/" rel="noopener noreferrer"&gt;Wireshark&lt;/a&gt;. Without mitmproxy, &lt;code&gt;vcert&lt;/code&gt; succeeds:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmaelvls.dev%2Fmitmproxy-tls-renegotiation-debugging-story%2Fvcert-without-mitmproxy-encrypted.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmaelvls.dev%2Fmitmproxy-tls-renegotiation-debugging-story%2Fvcert-without-mitmproxy-encrypted.png" alt="vcert-without-mitmproxy-encrypted"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But with mitmproxy, &lt;code&gt;vcert&lt;/code&gt; hangs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmaelvls.dev%2Fmitmproxy-tls-renegotiation-debugging-story%2Fvcert-with-mitmproxy-encrypted.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmaelvls.dev%2Fmitmproxy-tls-renegotiation-debugging-story%2Fvcert-with-mitmproxy-encrypted.png" alt="vcert-with-mitmproxy-encrypted"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The difference between the two flows seems to be the "Encrypted Handshake Message" that occurs right after the HTTP request has been sent.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note that you can see the above two PCAP traces by downloading &lt;a href="//vcert-with-and-without-mitmproxy-through-iap.pcapng"&gt;vcert-with-and-without-mitmproxy-through-iap.pcapng&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now, let us decrypt this TLS flow. Wireshark being a passive capture tool (unlike mitmproxy), it cannot decrypt the traffic on its own. Fortunately, mitmproxy is able to dump the master secrets (see the page &lt;a href="https://docs.mitmproxy.org/stable/howto-wireshark-tls/" rel="noopener noreferrer"&gt;Wireshark and SSL/TLS Master Secrets&lt;/a&gt;). After giving the master secrets to Wireshark, we can see the decrypted traffic:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmaelvls.dev%2Fmitmproxy-tls-renegotiation-debugging-story%2Fvcert-with-mitmproxy-decrypted.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmaelvls.dev%2Fmitmproxy-tls-renegotiation-debugging-story%2Fvcert-with-mitmproxy-decrypted.png" alt="vcert-with-mitmproxy-decrypted"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above screenshot confirms that the "Application Data" we could see in the previous two screenshots was in fact the HTTP request.&lt;/p&gt;

&lt;p&gt;We also notice that the mysterious "Encrypted Handshake Message" is a Hello Request, signifying a TLS renegotiation as described in &lt;a href="https://datatracker.ietf.org/doc/html/rfc5246#section-7.4.1.1" rel="noopener noreferrer"&gt;RFC 5246&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;As a side note, here is what the master secrets look like when mitmproxy records them (that's the file I passed to Wireshark):&lt;/p&gt;


&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Content of sslkeylogfile&lt;/span&gt;
CLIENT_RANDOM 5bdb7b7d88a325a08ee922a89b7b8acbbcda36f7d800c4e2f763b4689cfd870b 083f22af3099558997f784c47c4145dd9155c20f97fa5701bffe7003d80c7ad31801cfabb9be5838bf8f4f58e6b971f7
SERVER_HANDSHAKE_TRAFFIC_SECRET caa526a0b8e35551a2feee9de685e74547a7b0abc3ae5422a4bebe42d74132b4 7ac126e6a9b651ee9585a902248ac5bc5cdfa3a8989852f491b52ec013a9be170dce24076c305c5b9e7209c325a9f530
CLIENT_HANDSHAKE_TRAFFIC_SECRET caa526a0b8e35551a2feee9de685e74547a7b0abc3ae5422a4bebe42d74132b4 37fa20e94a385b79901a0615ef7c2de091e98757e85958deb9a242a3b0bed74e3be290d816555a3da816dd60d4f0c9f3
EXPORTER_SECRET caa526a0b8e35551a2feee9de685e74547a7b0abc3ae5422a4bebe42d74132b4 d436ff8840245be887e15d11000a98d971df4f1f0d825b7eb470a37fe2e9f7d1759967fe52e055cb9b5e74373e36f276
SERVER_TRAFFIC_SECRET_0 caa526a0b8e35551a2feee9de685e74547a7b0abc3ae5422a4bebe42d74132b4 615847eb91285a78981cf5072a9597e2ef4549c92d695818b9e05c270950b5ec88fcb62969b353b38ea2cd18597ee01d
CLIENT_TRAFFIC_SECRET_0 caa526a0b8e35551a2feee9de685e74547a7b0abc3ae5422a4bebe42d74132b4 4705201529b85b5cbdcbd74dfdf5cf2019ff9e520de551d2aaaa14962668ed3aa0a79416d1795d5eb53be93ddeb077de
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Each secret consists of 96 hexadecimal characters, which corresponds to 48 bytes.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It seems that mitmproxy does not support server-initiated TLS renegotiation. Looking at its source code, everything after the first handshake completes is treated as application data, so a subsequent Handshake Message makes mitmproxy "hang".&lt;/p&gt;

&lt;p&gt;Now, why would TLS renegotiation be necessary after the HTTP request is sent? The issue named &lt;a href="https://github.com/Venafi/vcert/issues/148" rel="noopener noreferrer"&gt;Venafi Issuer error when configuring cert-manager. "local error: tls: no renegotiation"&lt;/a&gt; gives us a clue: Microsoft IIS, which is the web server used by Venafi TPP, is capable of authenticating an incoming connection using a client certificate depending on the HTTP request's URL path. Looking at the IIS configuration, we can see that the &lt;code&gt;/vedauth&lt;/code&gt; endpoint has its "client certificate" settings configured to "Accept":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmaelvls.dev%2Fmitmproxy-tls-renegotiation-debugging-story%2Fssl-settings-accept.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmaelvls.dev%2Fmitmproxy-tls-renegotiation-debugging-story%2Fssl-settings-accept.png" alt="ssl-settings-accept"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let us see this behavior in action by creating a TLS tunnel using openssl. With the path &lt;code&gt;/vedsdk&lt;/code&gt;, we can see that openssl performs the initial TLS handshake, and no renegotiation seems to happen after sending the HTTP request:&lt;/p&gt;


&lt;div class="ltag_asciinema"&gt;
  
&lt;/div&gt;



&lt;p&gt;The HTTP request I made was:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;HTTP /vedsdk/ HTTP/1.1
Host: venafi-tpp.platform-ops.jetstack.net
Content-Length: 0
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;With the endpoint &lt;code&gt;/vedauth&lt;/code&gt;, which is the one mitmproxy hangs on, openssl performs the initial TLS handshake and sends the HTTP request, and then a TLS renegotiation is triggered:&lt;/p&gt;


&lt;div class="ltag_asciinema"&gt;
  
&lt;/div&gt;



&lt;p&gt;The request was:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;HTTP /vedauth/ HTTP/1.1
Host: venafi-tpp.platform-ops.jetstack.net
Content-Length: 0
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the first example (&lt;code&gt;/vedsdk&lt;/code&gt;), the HTTP response is sent immediately without any TLS renegotiation. In the second example (&lt;code&gt;/vedauth&lt;/code&gt;), a TLS renegotiation is performed right after the HTTP request is sent, and the HTTP response is sent over this new TLS session.&lt;/p&gt;

&lt;p&gt;The most likely explanation for this behavior is that it allows IIS to reject clients that present an invalid client certificate only on the paths for which client certificates are enabled, instead of refusing to serve all paths. I would need to learn a bit more about TLS, and about how IIS passes the client certificate's identity up to the HTTP layer, to know whether it should instead look for an &lt;code&gt;Authorization: Bearer foo&lt;/code&gt; HTTP header.&lt;/p&gt;

&lt;p&gt;The only workaround I have come up with is to set the client certificate option to "Ignore":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmaelvls.dev%2Fmitmproxy-tls-renegotiation-debugging-story%2Fssl-settings-ignore.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmaelvls.dev%2Fmitmproxy-tls-renegotiation-debugging-story%2Fssl-settings-ignore.png" alt="ssl-settings-ignore"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This time, vcert over mitmproxy works!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmaelvls.dev%2Fmitmproxy-tls-renegotiation-debugging-story%2Fvcert-working.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmaelvls.dev%2Fmitmproxy-tls-renegotiation-debugging-story%2Fvcert-working.png" alt="vcert-working"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One thing I would like to do in the future is to add support for TLS renegotiation to mitmproxy!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Understanding the Available condition of a Kubernetes deployment</title>
      <dc:creator>Maël Valais</dc:creator>
      <pubDate>Tue, 07 Jul 2020 14:33:00 +0000</pubDate>
      <link>https://dev.to/maelvls/understanding-the-available-condition-of-a-kubernetes-deployment-51li</link>
      <guid>https://dev.to/maelvls/understanding-the-available-condition-of-a-kubernetes-deployment-51li</guid>
      <description>&lt;p&gt;The various conditions that may appear in a deployment status are not documented in the &lt;a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/"&gt;API reference&lt;/a&gt;. For example, we can see what are the fields for &lt;a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#deploymentcondition-v1-apps"&gt;DeploymentConditions&lt;/a&gt;, but it lacks the description of what conditions can appear in this field.&lt;/p&gt;

&lt;p&gt;In this post, I dig into what the &lt;code&gt;Available&lt;/code&gt; condition type is about and how it is computed.&lt;/p&gt;

&lt;p&gt;Since the &lt;a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/"&gt;API reference&lt;/a&gt; does not contain any information about conditions, the only way to learn more is to dig into the Kubernetes codebase; quite deep into the code, &lt;a href="https://github.com/kubernetes/kubernetes/blob/3615291/pkg/apis/apps/types.go#L461-L473"&gt;we can read some comments&lt;/a&gt; about the three possible conditions for a deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// Available means the deployment is available, ie. at least the minimum available&lt;/span&gt;
&lt;span class="c"&gt;// replicas required are up and running for at least minReadySeconds.&lt;/span&gt;
&lt;span class="n"&gt;DeploymentAvailable&lt;/span&gt; &lt;span class="n"&gt;DeploymentConditionType&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Available"&lt;/span&gt;

&lt;span class="c"&gt;// Progressing means the deployment is progressing. Progress for a deployment is&lt;/span&gt;
&lt;span class="c"&gt;// considered when a new replica set is created or adopted, and when new pods scale&lt;/span&gt;
&lt;span class="c"&gt;// up or old pods scale down. Progress is not estimated for paused deployments or&lt;/span&gt;
&lt;span class="c"&gt;// when progressDeadlineSeconds is not specified.&lt;/span&gt;
&lt;span class="n"&gt;DeploymentProgressing&lt;/span&gt; &lt;span class="n"&gt;DeploymentConditionType&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Progressing"&lt;/span&gt;

&lt;span class="c"&gt;// ReplicaFailure is added in a deployment when one of its pods fails to be created&lt;/span&gt;
&lt;span class="c"&gt;// or deleted.&lt;/span&gt;
&lt;span class="n"&gt;DeploymentReplicaFailure&lt;/span&gt; &lt;span class="n"&gt;DeploymentConditionType&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"ReplicaFailure"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The description given to the &lt;code&gt;Available&lt;/code&gt; condition type is quite mysterious:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;At least the minimum available replicas required are up.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What does "minimum available replicas" mean? Is this minimum 1? I cannot see any &lt;code&gt;minAvailable&lt;/code&gt; field in the deployment spec, so my initial guess was that it would be 1.&lt;/p&gt;

&lt;p&gt;Before going further, let's look at the description attached to the reason &lt;a href="https://github.com/kubernetes/kubernetes/blob/3615291/pkg/controller/deployment/util/deployment_util.go#L96-L97"&gt;&lt;code&gt;MinimumReplicasAvailable&lt;/code&gt;&lt;/a&gt;. Apparently, it is the only reason ever used for the &lt;code&gt;Available&lt;/code&gt; condition type.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// MinimumReplicasAvailable is added in a deployment when it has its minimum&lt;/span&gt;
&lt;span class="c"&gt;// replicas required available.&lt;/span&gt;
&lt;span class="n"&gt;MinimumReplicasAvailable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"MinimumReplicasAvailable"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The description doesn't help either. Let's see what the &lt;a href="https://github.com/kubernetes/kubernetes/blob/3615291/pkg/controller/deployment/sync.go#L513-L516"&gt;deployment sync&lt;/a&gt; function does:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;availableReplicas&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;deploy&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MaxUnavailable&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;deployment&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="n"&gt;deployment&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Spec&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Replicas&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;minAvailability&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;deploy&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewCondition&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Available"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"True"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"MinimumReplicasAvailable"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"Deployment has minimum availability."&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;deploy&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SetCondition&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;minAvailability&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: in the real code, the max unavailable is on the right side of the inequality. I find it easier to reason about this inequality when the single value to the right is the desired replica number.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Ahhh, the actual logic behind &lt;code&gt;Available&lt;/code&gt;! If &lt;code&gt;maxUnavailable&lt;/code&gt; is 0, then it becomes obvious: "minimum availability" means that the number of available replicas is greater than or equal to the number of replicas in the spec. In general, the deployment has minimum availability if and only if the following inequality holds:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;available&lt;/th&gt;
&lt;th&gt;+&lt;/th&gt;
&lt;th&gt;acceptable unavailable&lt;/th&gt;
&lt;th&gt;≥&lt;/th&gt;
&lt;th&gt;desired&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1️⃣&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;2️⃣&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;3️⃣&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Let's take an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt; &lt;span class="c1"&gt;# 3️⃣ desired&lt;/span&gt;
  &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;rollingUpdate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;maxUnavailable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt; &lt;span class="c1"&gt;# 2️⃣ acceptable unavailable&lt;/span&gt;
&lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;availableReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8&lt;/span&gt; &lt;span class="c1"&gt;# 1️⃣ available&lt;/span&gt;
  &lt;span class="na"&gt;conditions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Available"&lt;/span&gt;
      &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;True"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the inequality holds which means this deployment has "minimum availability" (= &lt;code&gt;Available = True&lt;/code&gt;):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;code&gt;availableReplicas&lt;/code&gt;&lt;/th&gt;
&lt;th&gt;+&lt;/th&gt;
&lt;th&gt;&lt;code&gt;maxUnavailable&lt;/code&gt;&lt;/th&gt;
&lt;th&gt;≥&lt;/th&gt;
&lt;th&gt;&lt;code&gt;replicas&lt;/code&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
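
&lt;p&gt;The check above boils down to a one-line predicate. Here it is as a tiny Go helper (illustrative, not the controller's exact code):&lt;/p&gt;

```go
package main

// hasMinimumAvailability mirrors the inequality used by the deployment
// controller: a deployment is Available when the available replicas,
// plus the replicas we tolerate being unavailable, cover the desired
// replica count.
func hasMinimumAvailability(available, maxUnavailable, desired int32) bool {
	return available+maxUnavailable >= desired
}
```

&lt;p&gt;With the example above, &lt;code&gt;hasMinimumAvailability(8, 2, 10)&lt;/code&gt; returns true.&lt;/p&gt;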

&lt;h2&gt;
  
  
  Default value for &lt;code&gt;maxUnavailable&lt;/code&gt; is 25%
&lt;/h2&gt;

&lt;p&gt;Now, what happens when &lt;code&gt;maxUnavailable&lt;/code&gt; is not set? The official documentation &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable"&gt;maxUnavailable&lt;/a&gt; says:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;maxUnavailable&lt;/code&gt; is an optional field that specifies the maximum number of Pods that can be unavailable during the update process. The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%). The absolute number is calculated from percentage by rounding down. The value cannot be 0 if &lt;code&gt;maxSurge&lt;/code&gt; is 0. &lt;strong&gt;The default value is 25%&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For example, when this value is set to 30%, the old ReplicaSet can be scaled down to 70% of desired Pods immediately when the rolling update starts. Once new Pods are ready, old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of Pods available at all times during the update is at least 70% of the desired Pods.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's take an example with a deployment that has no &lt;code&gt;maxUnavailable&lt;/code&gt; field set, and imagine that 5 pods are unavailable due to a resource quota that only allows 5 pods to start:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
&lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;availableReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="na"&gt;unavailableReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="na"&gt;conditions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Available"&lt;/span&gt;
      &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;False"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This time, the inequality does not hold:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;code&gt;status.availableReplicas&lt;/code&gt;&lt;/th&gt;
&lt;th&gt;+&lt;/th&gt;
&lt;th&gt;&lt;code&gt;spec.strategy.rollingUpdate.maxUnavailable&lt;/code&gt;&lt;/th&gt;
&lt;th&gt;≱&lt;/th&gt;
&lt;th&gt;&lt;code&gt;spec.replicas&lt;/code&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;floor(25% * 10) = 2&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
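
&lt;p&gt;To make the rounding concrete, here is a small Go sketch of how a percentage &lt;code&gt;maxUnavailable&lt;/code&gt; resolves to an absolute number (the helper is mine; the real resolution lives in the controller code below):&lt;/p&gt;

```go
package main

import "math"

// maxUnavailableFromPercent applies a percentage maxUnavailable to the
// desired replica count, rounding down as the documentation describes:
// the default 25% of 10 replicas resolves to 2.
func maxUnavailableFromPercent(percent, desired int) int {
	// Multiply before dividing so the division is exact for small
	// integer inputs, then round down.
	return int(math.Floor(float64(percent*desired) / 100.0))
}
```

&lt;p&gt;For instance, &lt;code&gt;maxUnavailableFromPercent(25, 10)&lt;/code&gt; returns 2, and &lt;code&gt;maxUnavailableFromPercent(25, 7)&lt;/code&gt; returns 1.&lt;/p&gt;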

&lt;p&gt;Let's dig a bit more and see how &lt;a href="https://github.com/kubernetes/kubernetes/blob/3615291/pkg/controller/deployment/util/deployment_util.go#L434-L445"&gt;&lt;code&gt;MaxUnavailable&lt;/code&gt;&lt;/a&gt; is defined:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// MaxUnavailable returns the maximum unavailable pods a rolling deployment&lt;/span&gt;
&lt;span class="c"&gt;// can take.&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;MaxUnavailable&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;deployment&lt;/span&gt; &lt;span class="n"&gt;apps&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Deployment&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;IsRollingUpdate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;deployment&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="n"&gt;deployment&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Spec&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Replicas&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;maxUnavailable&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;ResolveFenceposts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;deployment&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Spec&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Strategy&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RollingUpdate&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MaxSurge&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;deployment&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Spec&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Strategy&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RollingUpdate&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MaxUnavailable&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;deployment&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Spec&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Replicas&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;maxUnavailable&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;deployment&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Spec&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Replicas&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;deployment&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Spec&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Replicas&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;maxUnavailable&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The core of the logic behind maxUnavailable is in &lt;a href="https://github.com/kubernetes/kubernetes/blob/3615291/pkg/controller/deployment/util/deployment_util.go#L874-L902"&gt;&lt;code&gt;ResolveFenceposts&lt;/code&gt;&lt;/a&gt; (note: I simplified the code a bit to make it more readable):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// ResolveFenceposts resolves both maxSurge and maxUnavailable. This needs to happen in one&lt;/span&gt;
&lt;span class="c"&gt;// step. For example:&lt;/span&gt;
&lt;span class="c"&gt;//&lt;/span&gt;
&lt;span class="c"&gt;// 2 desired, max unavailable 1%, surge 0% - should scale old(-1), then new(+1), then old(-1), then new(+1)&lt;/span&gt;
&lt;span class="c"&gt;// 1 desired, max unavailable 1%, surge 0% - should scale old(-1), then new(+1)&lt;/span&gt;
&lt;span class="c"&gt;// 2 desired, max unavailable 25%, surge 1% - should scale new(+1), then old(-1), then new(+1), then old(-1)&lt;/span&gt;
&lt;span class="c"&gt;// 1 desired, max unavailable 25%, surge 1% - should scale new(+1), then old(-1)&lt;/span&gt;
&lt;span class="c"&gt;// 2 desired, max unavailable 0%, surge 1% - should scale new(+1), then old(-1), then new(+1), then old(-1)&lt;/span&gt;
&lt;span class="c"&gt;// 1 desired, max unavailable 0%, surge 1% - should scale new(+1), then old(-1)&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;ResolveFenceposts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;maxSurge&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;maxUnavailable&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;instr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;IntOrString&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;desired&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;surge&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;       &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;instr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetValueFromIntOrPercent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;instr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ValueOrDefault&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;maxSurge&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;instr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;FromInt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt; &lt;span class="n"&gt;desired&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;unavailable&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;instr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetValueFromIntOrPercent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;instr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ValueOrDefault&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;maxUnavailable&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;instr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;FromInt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt; &lt;span class="n"&gt;desired&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;surge&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;unavailable&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;false&lt;/code&gt; boolean turns off the "round up" behavior, which means &lt;code&gt;0.5&lt;/code&gt; will be rounded down to &lt;code&gt;0&lt;/code&gt; instead of up to &lt;code&gt;1&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;maxUnavailable&lt;/code&gt; and &lt;code&gt;maxSurge&lt;/code&gt; values (the Kubernetes code calls them "fenceposts") are simply read from the deployment's spec. In the following example, the deployment will become &lt;code&gt;Available = True&lt;/code&gt; only if all 10 replicas are available:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
  &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;rollingUpdate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;maxUnavailable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0%&lt;/span&gt; &lt;span class="c1"&gt;# 🔰 10 * 0.0 = 0 replicas&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
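&lt;p&gt;To make the percentage resolution concrete, here is a minimal standalone sketch (not the actual Kubernetes code; &lt;code&gt;resolveMaxUnavailable&lt;/code&gt; is an invented name) of how a &lt;code&gt;maxUnavailable&lt;/code&gt; value such as &lt;code&gt;0%&lt;/code&gt; or &lt;code&gt;1&lt;/code&gt; resolves to an absolute replica count, with percentages rounded down:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
)

// resolveMaxUnavailable is an invented helper that mimics, in a simplified
// way, how a maxUnavailable value ("0%", "25%", or "1") is turned into an
// absolute replica count. Percentages are rounded down, as in the controller.
func resolveMaxUnavailable(value string, desired int) int {
	if strings.HasSuffix(value, "%") {
		pct, _ := strconv.Atoi(strings.TrimSuffix(value, "%"))
		return int(math.Floor(float64(desired) * float64(pct) / 100.0))
	}
	n, _ := strconv.Atoi(value)
	return n
}

func main() {
	fmt.Println(resolveMaxUnavailable("0%", 10))  // 0
	fmt.Println(resolveMaxUnavailable("25%", 10)) // 2: 2.5 is rounded down
	fmt.Println(resolveMaxUnavailable("1", 10))   // 1
}
```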



&lt;h2&gt;
  
  
  Hands-on example with the &lt;code&gt;Available&lt;/code&gt; condition
&lt;/h2&gt;

&lt;p&gt;Imagine that we have a namespace named &lt;code&gt;restricted&lt;/code&gt; whose resource quota only allows 200 MiB of memory requests, and each of our pods requests 50 MiB. The first 4 pods will be created successfully, but the fifth one will fail.&lt;/p&gt;

&lt;p&gt;Let us first apply the following manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespace&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;restricted&lt;/span&gt;
&lt;span class="s"&gt;--------&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ResourceQuota&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;restricted&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mem-cpu-demo&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;hard&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;requests.memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;200Mi&lt;/span&gt;
&lt;span class="s"&gt;--------&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;restricted&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;rollingUpdate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;maxUnavailable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="c1"&gt;# 🔰&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:alpine&lt;/span&gt;
          &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;50Mi"&lt;/span&gt; &lt;span class="c1"&gt;# the 5th pod will fail (on purpose)&lt;/span&gt;
          &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a few seconds, the &lt;code&gt;Available&lt;/code&gt; condition stabilizes to &lt;code&gt;False&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# kubectl -n restricted get deploy test -oyaml&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;rollingUpdate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;maxUnavailable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="c1"&gt;# 🔰&lt;/span&gt;
&lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;conditions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;lastTransitionTime&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2020-07-07T14:04:27Z"&lt;/span&gt;
      &lt;span class="na"&gt;lastUpdateTime&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2020-07-07T14:04:27Z"&lt;/span&gt;
      &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment does not have minimum availability.&lt;/span&gt;
      &lt;span class="na"&gt;reason&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MinimumReplicasUnavailable&lt;/span&gt;
      &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;False"&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Available&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;lastTransitionTime&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2020-07-07T14:04:27Z"&lt;/span&gt;
      &lt;span class="na"&gt;lastUpdateTime&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2020-07-07T14:04:27Z"&lt;/span&gt;
      &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;pods&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;"test-7df57bd99d-5qw47"&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;is&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;forbidden:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;exceeded&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;quota:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;mem-cpu-demo,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;requested:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;requests.memory=50Mi,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;used:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;requests.memory=200Mi,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;limited:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;requests.memory=200Mi'&lt;/span&gt;
      &lt;span class="na"&gt;reason&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;FailedCreate&lt;/span&gt;
      &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;True"&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ReplicaFailure&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;lastTransitionTime&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2020-07-07T14:04:27Z"&lt;/span&gt;
      &lt;span class="na"&gt;lastUpdateTime&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2020-07-07T14:04:38Z"&lt;/span&gt;
      &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ReplicaSet "test-7df57bd99d" is progressing.&lt;/span&gt;
      &lt;span class="na"&gt;reason&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ReplicaSetUpdated&lt;/span&gt;
      &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;True"&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Progressing&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;
  &lt;span class="na"&gt;availableReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;
  &lt;span class="na"&gt;readyReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;
  &lt;span class="na"&gt;unavailableReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;updatedReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are asking for at most 0 unavailable replicas, and there is 1 unavailable replica (due to the resource quota). Thus, the "minimum availability" inequality does not hold, which means the deployment has the condition &lt;code&gt;Available = False&lt;/code&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;code&gt;availableReplicas&lt;/code&gt;&lt;/th&gt;
&lt;th&gt;+&lt;/th&gt;
&lt;th&gt;&lt;code&gt;rollingUpdate.maxUnavailable&lt;/code&gt;&lt;/th&gt;
&lt;th&gt;≱&lt;/th&gt;
&lt;th&gt;&lt;code&gt;replicas&lt;/code&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
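&lt;p&gt;The inequality above can be sketched as a one-line check (a simplified illustration, not the controller's actual code; &lt;code&gt;deploymentAvailable&lt;/code&gt; is an invented name):&lt;/p&gt;

```go
package main

import "fmt"

// deploymentAvailable is an invented helper illustrating the "minimum
// availability" check: the Available condition is True when
// availableReplicas + maxUnavailable >= replicas.
func deploymentAvailable(availableReplicas, maxUnavailable, replicas int) bool {
	return availableReplicas+maxUnavailable >= replicas
}

func main() {
	// The situation above: 4 available, maxUnavailable 0, 5 desired.
	fmt.Println(deploymentAvailable(4, 0, 5)) // false
	// With maxUnavailable 1, the condition would hold.
	fmt.Println(deploymentAvailable(4, 1, 5)) // true
}
```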

&lt;p&gt;&lt;strong&gt;Update 9 July 2020:&lt;/strong&gt; added a paragraph on the default value for &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable"&gt;maxUnavailable&lt;/a&gt;, and fixed the yaml example where I had mixed &lt;code&gt;unavailableReplicas&lt;/code&gt; with &lt;code&gt;maxUnavailable&lt;/code&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Pull-through Docker registry on Kind clusters</title>
      <dc:creator>Maël Valais</dc:creator>
      <pubDate>Fri, 03 Jul 2020 13:13:00 +0000</pubDate>
      <link>https://dev.to/maelvls/pull-through-docker-registry-on-kind-clusters-cpo</link>
      <guid>https://dev.to/maelvls/pull-through-docker-registry-on-kind-clusters-cpo</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;em&gt;(August 2023)&lt;/em&gt; As mentioned by Wolfgang Schnerring &lt;a href="https://github.com/maelvls/maelvls.github.io/issues/7#issuecomment-688277166"&gt;in this comment&lt;/a&gt;, the method presented in this document is very limited: many images are still being pulled directly and it only proxies a single "upstream" registry. Nowadays, &lt;code&gt;docker-registry-proxy&lt;/code&gt; is a better alternative: &lt;a href="https://github.com/rpardini/docker-registry-proxy#kind-cluster"&gt;https://github.com/rpardini/docker-registry-proxy#kind-cluster&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;to create a local pull-through registry to speed up image pulling in a &lt;a href="https://kind.sigs.k8s.io/"&gt;Kind&lt;/a&gt; cluster, run:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; proxy &lt;span class="nt"&gt;--restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;always &lt;span class="nt"&gt;--net&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kind &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;REGISTRY_PROXY_REMOTEURL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://registry-1.docker.io registry:2
  kind create cluster &lt;span class="nt"&gt;--config&lt;/span&gt; /dev/stdin &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  containerdConfigPatches:
    - |-
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
        endpoint = ["http://proxy:5000"]
&lt;/span&gt;&lt;span class="no"&gt;  EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.docker.com/registry/configuration/#proxy"&gt;you can't&lt;/a&gt; use this pull-through proxy registry to push your own images (e.g. to &lt;a href="https://github.com/tilt-dev/kind-local"&gt;speed up Tilt builds&lt;/a&gt;), but you can create two registries (one for caching, the other for local images). See this section for more context; the lines are:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; proxy &lt;span class="nt"&gt;--restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;always &lt;span class="nt"&gt;--net&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kind &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;REGISTRY_PROXY_REMOTEURL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://registry-1.docker.io registry:2
  docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; registry &lt;span class="nt"&gt;--restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;always &lt;span class="nt"&gt;--net&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kind &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-p&lt;/span&gt; 5000:5000 registry:2
  kind create cluster &lt;span class="nt"&gt;--config&lt;/span&gt; /dev/stdin &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  containerdConfigPatches:
    - |-
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
        endpoint = ["http://proxy:5000"]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]
        endpoint = ["http://registry:5000"]
&lt;/span&gt;&lt;span class="no"&gt;  EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;if you often create and delete Kind clusters, a local registry serving as a proxy avoids redundant downloads&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;KIND_EXPERIMENTAL_DOCKER_NETWORK&lt;/code&gt; is useful but remember that the default network (&lt;code&gt;bridge&lt;/code&gt;) doesn't have DNS resolution for container hostnames&lt;/li&gt;
&lt;li&gt;the Docker default network (&lt;code&gt;bridge&lt;/code&gt;) &lt;a href="https://stackoverflow.com/questions/41400603/dockers-embedded-dns-on-the-default-bridged-network"&gt;has limitations&lt;/a&gt; as &lt;a href="https://docs.docker.com/network/bridge/#use-the-default-bridge-network"&gt;detailed by Docker&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;If you play with &lt;a href="https://cluster-api.sigs.k8s.io/"&gt;ClusterAPI&lt;/a&gt; with its &lt;a href="https://github.com/kubernetes-sigs/cluster-api/tree/master/test/infrastructure/docker"&gt;Docker provider&lt;/a&gt;, you might not be able to use a local registry due to the clusters being created on the default network, which means the "proxy" hostname won't be resolved (but we could work around that).&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;a href="https://kind.sigs.k8s.io/"&gt;Kind&lt;/a&gt; is an awesome tool that allows you to spin up local Kubernetes clusters locally in seconds. It is perfect for Kubernetes developers or anyone who wants to play with controllers.&lt;/p&gt;

&lt;p&gt;One thing I hate about Kind is that images are not cached between two Kind containers. Even worse: when deleting and re-creating a cluster, all the downloaded images disappear.&lt;/p&gt;

&lt;p&gt;In this post, I detail my discoveries around local registries and why the default Docker network is a trap.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Contents:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Kind has no image caching mechanism&lt;/li&gt;
&lt;li&gt;Creating a caching proxy registry&lt;/li&gt;
&lt;li&gt;Creating a Kind cluster that knows about this caching proxy registry&lt;/li&gt;
&lt;li&gt;Check that the caching proxy registry works&lt;/li&gt;
&lt;li&gt;Docker proxy vs. local registry&lt;/li&gt;
&lt;li&gt;Improving the ClusterAPI docker provider to use a given network&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Kind has no image caching mechanism
&lt;/h2&gt;

&lt;p&gt;Whenever I re-create a Kind cluster and try to install ClusterAPI, all the (quite heavy) images have to be re-downloaded. Just take a look at all the images that get re-downloaded:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# That's the cluster created using 'kind create cluster'&lt;/span&gt;
% docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; kind-control-plane crictl images
IMAGE                                                                      TAG      SIZE
quay.io/jetstack/cert-manager-cainjector                                   v0.11.0  11.1MB
quay.io/jetstack/cert-manager-controller                                   v0.11.0  14MB
quay.io/jetstack/cert-manager-webhook                                      v0.11.0  14.3MB
us.gcr.io/k8s-staging-capi-docker/capd-manager/capd-manager-amd64          dev      53.5MB
us.gcr.io/k8s-artifacts-prod/cluster-api/cluster-api-controller            v0.3.0   20.3MB
us.gcr.io/k8s-artifacts-prod/cluster-api/kubeadm-bootstrap-controller      v0.3.0   19.6MB
us.gcr.io/k8s-artifacts-prod/cluster-api/kubeadm-control-plane-controller  v0.3.0   21.1MB

&lt;span class="c"&gt;# I also use a ClusterAPI-created cluster (relying on CAPD):&lt;/span&gt;
% docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; capd-capd-control-plane-l4tx7 crictl images &lt;span class="nb"&gt;ls
&lt;/span&gt;docker.io/calico/cni                  v3.12.2             8b42391a46731       77.5MB
docker.io/calico/kube-controllers     v3.12.2             5ca01eb356b9a       23.1MB
docker.io/calico/node                 v3.12.2             4d501404ee9fa       89.7MB
docker.io/calico/pod2daemon-flexvol   v3.12.2             2abcc890ae54f       37.5MB
docker.io/metallb/controller          v0.9.3              4715cbeb69289       17.1MB
docker.io/metallb/speaker             v0.9.3              f241be9dae666       19.2MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's a total of 418 MB that get re-downloaded every time I restart both clusters!&lt;/p&gt;

&lt;p&gt;Unfortunately, there is no way to re-use the image cache built into your Docker engine (both on Linux and on macOS). One solution to this problem is to &lt;a href="https://kind.sigs.k8s.io/docs/user/local-registry/"&gt;spin up an intermediary Docker registry&lt;/a&gt; in a side container; as long as this container exists, all the images that have already been downloaded once can be served from cache.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a caching proxy registry
&lt;/h2&gt;

&lt;p&gt;We want to create a caching registry alongside a simple Kind cluster; let's start with the registry:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; proxy &lt;span class="nt"&gt;--restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;always &lt;span class="nt"&gt;--net&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kind &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;REGISTRY_PROXY_REMOTEURL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://registry-1.docker.io registry:2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Details:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--net kind&lt;/code&gt; is required because Kind creates its containers in a separate network; it does that because the "bridge" network has &lt;a href="https://docs.docker.com/network/bridge/#use-the-default-bridge-network"&gt;limitations&lt;/a&gt; and &lt;a href="https://docs.docker.com/config/containers/container-networking/#dns-services"&gt;doesn't allow you&lt;/a&gt; to use container names as DNS names:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;By default, a container inherits the DNS settings of the host, as defined in the &lt;code&gt;/etc/resolv.conf&lt;/code&gt; configuration file. Containers that use the default bridge network get a copy of this file, whereas containers that use a custom network use Docker’s embedded DNS server, which forwards external DNS lookups to the DNS servers configured on the host.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;which means that the container runtime (containerd) that runs our Kind cluster won't be able to resolve the address &lt;code&gt;proxy:5000&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;REGISTRY_PROXY_REMOTEURL&lt;/code&gt; is required because, by default, the registry does not forward requests: it simply looks for the image in &lt;code&gt;/var/lib/registry/docker/registry/v2/repositories&lt;/code&gt; and returns a 404 if it isn't there.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Using the &lt;a href="https://docs.docker.com/registry/configuration/#proxy"&gt;pull-through&lt;/a&gt; feature (I call it "caching proxy"), the registry will proxy all requests coming from all mirror prefixes and cache the blobs and manifests locally. To enable this feature, we set &lt;code&gt;REGISTRY_PROXY_REMOTEURL&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Another interesting bit about &lt;code&gt;REGISTRY_PROXY_REMOTEURL&lt;/code&gt;: this environment variable name is mapped from &lt;a href="https://docs.docker.com/registry/configuration/#proxy"&gt;the registry YAML config API&lt;/a&gt;. The variable&lt;/p&gt;


&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;REGISTRY_PROXY_REMOTEURL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://registry-1.docker.io
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;is equivalent to the following YAML config:&lt;/p&gt;


&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;proxy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;remoteurl&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://registry-1.docker.io&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;

&lt;p&gt;⚠️ The registry can't be both in normal mode ("local registry") and in caching proxy mode at the same time; see below.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a Kind cluster that knows about this caching proxy registry
&lt;/h2&gt;

&lt;p&gt;The second step is to create a Kind cluster and tell the container runtime to use a specific registry; here is the command to create it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kind create cluster &lt;span class="nt"&gt;--config&lt;/span&gt; /dev/stdin &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
  - |-
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["http://proxy:5000"]
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;code&gt;containerdConfigPatches&lt;/code&gt; is a way to semantically patch &lt;code&gt;/etc/containerd/config.toml&lt;/code&gt;. By default, this file looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; kind-control-plane &lt;span class="nb"&gt;cat&lt;/span&gt; /etc/containerd/config.toml
&lt;span class="o"&gt;[&lt;/span&gt;plugins]
  &lt;span class="o"&gt;[&lt;/span&gt;plugins.&lt;span class="s2"&gt;"io.containerd.grpc.v1.cri"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
    &lt;span class="o"&gt;[&lt;/span&gt;plugins.&lt;span class="s2"&gt;"io.containerd.grpc.v1.cri"&lt;/span&gt;.registry]
      &lt;span class="o"&gt;[&lt;/span&gt;plugins.&lt;span class="s2"&gt;"io.containerd.grpc.v1.cri"&lt;/span&gt;.registry.mirrors]
        &lt;span class="o"&gt;[&lt;/span&gt;plugins.&lt;span class="s2"&gt;"io.containerd.grpc.v1.cri"&lt;/span&gt;.registry.mirrors.&lt;span class="s2"&gt;"docker.io"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
          endpoint &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"https://registry-1.docker.io"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note 2:&lt;/strong&gt; the mirror prefix (&lt;code&gt;docker.io&lt;/code&gt;) can be omitted for images stored on Docker Hub; for other registries such as &lt;code&gt;gcr.io&lt;/code&gt;, the prefix is mandatory. Containerd first prepends &lt;code&gt;docker.io&lt;/code&gt; to any image name that has no registry prefix, then maps that prefix to a mirror entry to obtain the final registry address. Here is a table with some examples:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;image name&lt;/th&gt;
&lt;th&gt;"actual" image name&lt;/th&gt;
&lt;th&gt;registry address w.r.t. mirrors&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;alpine&lt;/td&gt;
&lt;td&gt;docker.io/alpine&lt;/td&gt;
&lt;td&gt;&lt;a href="https://registry-1.docker.io/v2/library/alpine/manifests/latest"&gt;https://registry-1.docker.io/v2/library/alpine/manifests/latest&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;gcr.io/istio-release/pilot&lt;/td&gt;
&lt;td&gt;gcr.io/istio-release/pilot&lt;/td&gt;
&lt;td&gt;&lt;a href="https://gcr.io/v2/istio-release/pilot/manifests/1.9.1"&gt;https://gcr.io/v2/istio-release/pilot/manifests/1.9.1&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;foo.org/something/someimage&lt;/td&gt;
&lt;td&gt;foo.org/something/someimage&lt;/td&gt;
&lt;td&gt;&lt;a href="https://foo.org/v2/something/someimage/manifests/latest"&gt;https://foo.org/v2/something/someimage/manifests/latest&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
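The normalization rule behind the table can be sketched as a tiny shell function. This is only an illustration of the rule, not containerd's actual code, and the function name is made up:

```shell
#!/bin/sh
# Illustration (not containerd's actual code) of how image names are
# normalized: names without a registry prefix are assumed to live on
# docker.io, and single-segment Docker Hub names get the implicit
# "library/" namespace.
normalize_image() {
  img="$1"
  case "$img" in
    # A registry prefix contains a dot or a port (e.g. gcr.io, foo.org:5000).
    *.*/*|*:*/*) echo "$img" ;;
    # Hub image with a user namespace, e.g. nicolaka/netshoot.
    */*) echo "docker.io/$img" ;;
    # Official Hub image, e.g. alpine.
    *) echo "docker.io/library/$img" ;;
  esac
}

normalize_image alpine                      # docker.io/library/alpine
normalize_image nicolaka/netshoot           # docker.io/nicolaka/netshoot
normalize_image gcr.io/istio-release/pilot  # gcr.io/istio-release/pilot
```

The mirror entry for the resulting prefix (`docker.io`, `gcr.io`, ...) then decides which endpoint is actually contacted.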

&lt;h2&gt;
  
  
  Check that the caching proxy registry works
&lt;/h2&gt;

&lt;p&gt;Let's check that the proxy registry works by running a pod and then looking at the proxy's cache:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% kubectl run foo &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nicolaka/netshoot
% docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; proxy &lt;span class="nb"&gt;ls&lt;/span&gt; /var/lib/registry/docker/registry/v2/repositories
nicolaka
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The registry logs also confirm that the pull went through the proxy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# docker logs proxy | tail
time="2020-07-26T14:52:44.2624761Z" level=info msg="Challenge established with upstream : {https registry-1.docker.io /v2/}" go.version=go1.11.2 http.request.host="proxy:5000" http.request.id=15e9ac86-7d79-4883-a8ce-861a7484887c http.request.method=HEAD http.request.remoteaddr="172.18.0.2:57588" http.request.uri="/v2/nicolaka/netshoot/manifests/latest" http.request.useragent="containerd/v1.4.0-beta.1-34-g49b0743c" vars.name="nicolaka/netshoot" vars.reference=latest
time="2020-07-26T14:52:45.4195817Z" level=info msg="Adding new scheduler entry for nicolaka/netshoot@sha256:04786602e5a9463f40da65aea06fe5a825425c7df53b307daa21f828cfe40bf8 with ttl=167h59m59.9999793s" go.version=go1.11.2 instance.id=ba959eb9-2fa3-47c0-beb7-91480c8a31ee service=registry version=v2.7.1
172.18.0.2 - - [26/Jul/2020:14:52:43 +0000] "HEAD /v2/nicolaka/netshoot/manifests/latest HTTP/1.1" 200 1999 "" "containerd/v1.4.0-beta.1-34-g49b0743c"
time="2020-07-26T14:52:45.4204299Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="proxy:5000" http.request.id=15e9ac86-7d79-4883-a8ce-861a7484887c http.request.method=HEAD http.request.remoteaddr="172.18.0.2:57588" http.request.uri="/v2/nicolaka/netshoot/manifests/latest" http.request.useragent="containerd/v1.4.0-beta.1-34-g49b0743c" http.response.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.response.duration=1.6697112s http.response.status=200 http.response.written=1999
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Docker proxy vs. local registry
&lt;/h2&gt;

&lt;p&gt;A bit later, I discovered that &lt;a href="https://docs.docker.com/registry/configuration/#proxy"&gt;you can't&lt;/a&gt; push to a proxy registry. &lt;a href="https://tilt.dev/"&gt;Tilt&lt;/a&gt; is a tool I use to ease the process of developing in a containerized environment (and it works best with Kubernetes); it &lt;a href="https://github.com/tilt-dev/kind-local"&gt;relies on a local registry&lt;/a&gt; in order to cache built images even when the Kind cluster is restarted.&lt;/p&gt;

&lt;p&gt;A registry can either be used as a "local registry" (which you can push images to) or as a pull-through proxy, but not both. So instead of configuring one single "proxy" registry, I configure two registries: one for local images, one for caching.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; proxy &lt;span class="nt"&gt;--restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;always &lt;span class="nt"&gt;--net&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kind &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;REGISTRY_PROXY_REMOTEURL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://registry-1.docker.io registry:2
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; registry &lt;span class="nt"&gt;--restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;always &lt;span class="nt"&gt;-p&lt;/span&gt; 5000:5000 &lt;span class="nt"&gt;--net&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kind registry:2
kind create cluster &lt;span class="nt"&gt;--config&lt;/span&gt; /dev/stdin &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
  - |-
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["http://proxy:5000"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]
      endpoint = ["http://registry:5000"]
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that we publish port 5000 (&lt;code&gt;-p 5000:5000&lt;/code&gt;) so that we can push images "from the host", e.g.:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% docker tag alpine localhost:5000/alpine
% docker push localhost:5000/alpine
The push refers to repository &lt;span class="o"&gt;[&lt;/span&gt;localhost:5000/alpine]
50644c29ef5a: Pushed
latest: digest: sha256:a15790640a6690aa1730c38cf0a440e2aa44aaca9b0e8931a9f2b0d7cc90fd65 size: 528

&lt;span class="c"&gt;# Let's see if this image is also available from the cluster:&lt;/span&gt;
% docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; kind-control-plane crictl pull localhost:5000/alpine
Image is up to &lt;span class="nb"&gt;date &lt;/span&gt;&lt;span class="k"&gt;for &lt;/span&gt;sha256:a24bb4013296f61e89ba57005a7b3e52274d8edd3ae2077d04395f806b63d83e
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you use Tilt, you might also want to tell Tilt that it can use the local registry. I find it a bit weird to have to set an annotation (hidden Tilt API?) but whatever. If you set this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kind get nodes | xargs &lt;span class="nt"&gt;-L1&lt;/span&gt; &lt;span class="nt"&gt;-I&lt;/span&gt;% kubectl annotate node % tilt.dev/registry&lt;span class="o"&gt;=&lt;/span&gt;localhost:5000 &lt;span class="nt"&gt;--overwrite&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;then Tilt will use &lt;code&gt;docker push localhost:5000/your-image&lt;/code&gt; (from your host, not from the cluster container) in order to speed things up. Note that there is a proposal (&lt;a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry"&gt;KEP 1755&lt;/a&gt;) that aims to standardize the discovery of local registries using a ConfigMap. Tilt already supports it, so you may use it!&lt;/p&gt;
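For reference, the discovery mechanism proposed by KEP 1755 is just a ConfigMap in the `kube-public` namespace. A minimal sketch for the `localhost:5000` registry above might look like this (the `help` URL is optional):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-registry-hosting
  namespace: kube-public
data:
  localRegistryHosting.v1: |
    host: "localhost:5000"
    help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
```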

&lt;h2&gt;
  
  
  Improving the ClusterAPI docker provider to use a given network
&lt;/h2&gt;

&lt;p&gt;When I play with ClusterAPI, I usually use the CAPD provider (ClusterAPI Provider Docker). This provider &lt;a href="https://github.com/kubernetes-sigs/cluster-api/blob/master/test/infrastructure/docker"&gt;is kept in-tree&lt;/a&gt; inside the cluster-api project.&lt;/p&gt;

&lt;p&gt;I want to use the caching mechanism presented above. But to do that, I need to make sure the clusters created by CAPD are not created on the default network (&lt;a href="https://sigs.k8s.io/cluster-api/test/infrastructure/docker/docker/kind_manager.go#L178"&gt;current implementation&lt;/a&gt; creates CAPD clusters on the default "bridge" network).&lt;/p&gt;

&lt;p&gt;I want to be able to customize the network on which the CAPD provider creates the containers that make up the cluster. Imagine that we could pass the network name as part of a &lt;a href="https://github.com/kubernetes-sigs/cluster-api/blob/6821939410c37743b45c36ec91d94c37dba1998e/test/e2e/data/infrastructure-docker/cluster-template.yaml#L26-L35"&gt;DockerMachineTemplate&lt;/a&gt; (the content of the &lt;code&gt;spec&lt;/code&gt; is defined in code &lt;a href="https://github.com/kubernetes-sigs/cluster-api/blob/2ac3728d26593f7c54520999477aad45934e1c59/test/infrastructure/docker/api/v1alpha3/dockermachine_types.go#L30-L55"&gt;here&lt;/a&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;infrastructure.cluster.x-k8s.io/v1alpha3&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DockerMachineTemplate&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;capd-control-plane&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;extraMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/run/docker.sock&lt;/span&gt;
          &lt;span class="na"&gt;hostPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/run/docker.sock&lt;/span&gt;

      &lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind&lt;/span&gt; &lt;span class="c1"&gt;# 🔰 This field does not exist yet.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Update 26 July 2020&lt;/strong&gt;: added a section about local registry vs. caching proxy. Reworked the whole post (less noise, more useful information).&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Using mitmproxy to understand what kubectl does under the hood</title>
      <dc:creator>Maël Valais</dc:creator>
      <pubDate>Wed, 01 Jul 2020 17:17:00 +0000</pubDate>
      <link>https://dev.to/maelvls/using-mitmproxy-to-understand-what-kubectl-does-under-the-hood-36om</link>
      <guid>https://dev.to/maelvls/using-mitmproxy-to-understand-what-kubectl-does-under-the-hood-36om</guid>
      <description>&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/30a0WrfaS2A"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;In Oct 2019, Ahmet Alp Balkan wrote &lt;a href="https://ahmet.im/blog/kubectl-man-in-the-middle"&gt;this blog post&lt;/a&gt; that explains how to use &lt;code&gt;mitmproxy&lt;/code&gt; to observe the requests made by &lt;code&gt;kubectl&lt;/code&gt;. But I couldn't use the tutorial for two reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I use &lt;code&gt;kind&lt;/code&gt; to create local clusters which means I hit the Go &lt;code&gt;net/http&lt;/code&gt; limitation (skips proxying for hosts &lt;code&gt;localhost&lt;/code&gt; and &lt;code&gt;127.0.0.1&lt;/code&gt;, see &lt;a href="https://maelvls.dev/go-ignores-proxy-localhost/"&gt;this blog post&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;I use client-cert authentication, which can't work with the method presented by Ahmet; his method only works for header-based authentication (e.g. tokens), not for client certs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the following, I detail how I managed to make all that work.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Let's use a separate kubeconfig file to avoid messing with ~/.kube/config.&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;KUBECONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;~/.kube/kind-kind

&lt;span class="c"&gt;# Now, let's create a cluster.&lt;/span&gt;
kind create cluster

&lt;span class="c"&gt;# Let's see what the content of $KUECONFIG is:&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="nv"&gt;$KUBECONFIG&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see that the kubeconfig uses a client cert as a way to authenticate to the apiserver.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;clusters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;cluster&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;certificate-authority-data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EWXlPVEE1TURJeU4xb1hEVE13TURZeU56QTVNREl5TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS25qCmhmRzBvenZVb05jMXY1STkvYm13dFBqb2QvK0RyczF4TFZOcWgxQjhFcTY1S3lnQjBSbDdQcTJueUhyRVRnTmsKZ2VPQ2RzRURObGpKOE12SC9GbE9nRkUvS0dqK2hwN2hwc0dHRExReWFUOExUY25JN0FNL2t3KzY5R0hidlN3Nwo2V0Y1VERTMDU1RkRqdnRveGdHcERycmZQZTk5bXN5Zmk3aWtteDk5MmRyMHFQd0xxanJpZHNkWU52MUZqU1Y1CndkRlFISGxBS2hBcmlUWmpQMnhNL3poOENBOFhndjF0UUxVVk5IS1hrSG5UYWlkeFY1MkduaUVaQmd0d2tSK3oKQ3hQempVZFAxQ1JRYzU0YmxDYW9FQVdTc0NYTUVPREhTSnowSi9CWHJCU2JaeGZIakd0Y0k0bEhBUmx2aURNawp3U3lOLy9qdE9tbWhDT29BTzBzQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFIMDBrYnZpaWNNT3IxdFJoTkQweVpGa3ZkT1AKUzFGOEZKK1BFd1o1WExUTVVyVG1yekVlZmgza21xWkxYUnlyK3c0Snk1a0grK1o3enBpdlp6Q3BGOEtwclJaWAp5N2Z6TkJOeWMrOHFKN3dCek0xZ21BdTRha3BlNVBYbkw3akZIcU9aUmNXTmZKcUpTci9BS05sK2c2SjAzV2pnCmJDc0NQNTNrZ1czSDZYMkRoS1JzZFRWQlc1UGVlek1YRVlON2VYbGpnWE4xcElMdEU1VE5oWVZIcTZaUDVCaDkKMmU2Y1U4bEFuNlhYQ0Ira2RsTkhudGp1cTlSL2lPQVJRaWpVSVlWSFdCT1NyWGp6TnhkTklEakRRd3lKUnViMApDVEhrcDg4eUlNaEV1Nk1OV1VCUU82ZnVqbURHbUJJbTlzdzJpSnI1UlArSUVUVnQyWFc0Q2dHT1cwVT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=&lt;/span&gt;
      &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://127.0.0.1:53744&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind-helix&lt;/span&gt;
&lt;span class="na"&gt;contexts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;cluster&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind-helix&lt;/span&gt;
      &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind-helix&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind-helix&lt;/span&gt;
&lt;span class="na"&gt;current-context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind-helix&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Config&lt;/span&gt;
&lt;span class="na"&gt;preferences&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
&lt;span class="na"&gt;users&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind-helix&lt;/span&gt;
    &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;client-certificate-data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJYTZXc25HT0RGZW93RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBMk1qa3dPVEF5TWpkYUZ3MHlNVEEyTWprd09UQXlNekJhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXRkbkxsb0V1cnFoSmRrMGgKME5VcXpFSHhUbmVHS040QTNtWDBobmFLcE1TT3hISlBQcTB6V0t4WEVxNTZPTkdhSkhvS0VTRUM2Yjh2MGcwTQo5dXVGOXJyQ1VORkpXMmdHcFArRG96TEpVS3RtN2F0WFZRYXNVQ2tjbFRtQ003aHlqOGZTTlRDd2lxT3RNOUFvCnFVVTJYd1k0a2xhL0RuMkRBc1V1VmNiUTdITDA1N0tFbjZvVzlFVy9mQ2hMVE5OTWhmelpkNzFISGs3MFNOZDMKTG5jTXNjNmp2eS9kN2MwbjM4ek45SjVzWVZOTjNsS1YvOU5MT2FaTEQ0bSsyVU5XV1FVazRuci8rZjlaZk1IMwphVlFibDZSWEZwaW1jYWg4UjRJTmhRNkhYbmVEbUI2dUl3RGdjQnhhQUtoNFVPVlZwODB1Uy9pYUVLV1VSWE1PCkg4YXBZUUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJaWtmbis5TkJQc2FNSzNVMU5pWmtBaWxsM1pWVXJDdzNsWgpIeGRGbm9MZGxZYmtPeFVEN25EK3lrNXhZWW4yaDc3WUU4NlU5czJ3aitkdFJTR2Ezc3p4WVJQbkt5MDN3L1M4CkhqZzhwSkZOZFhHWlNPUWJDQTBmT1BONXl5MlpYRlVWd0JwZ3VTMC9nTkNUUkRPUDl3QmNjQXhiSGRmSjMxQzQKa2hCemF4QXI5UEliMVBzUlhqS2ZSRnkvcllwYWRBYVhhYmZMOFRvbTJ4cGpLVDYreGxoL3lJb2tZbVhxSnlXRgp1SGFmWG1qUEdyRDFoZUo2UnR5U0xRM0dKUXQxWmVPK3BaYlJlRjcxU0ZRSjRQdFhtTHhGMjZQL3dlV3F0NWFJClhTK0ZnanRYbzZqUG1mWWRweDZ3bWp3UHRaZ1pRdXhSL2tsejVvQnErNnFaYUZDWXg4RT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=&lt;/span&gt;
      &lt;span class="na"&gt;client-key-data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdGRuTGxvRXVycWhKZGswaDBOVXF6RUh4VG5lR0tONEEzbVgwaG5hS3BNU094SEpQClBxMHpXS3hYRXE1Nk9OR2FKSG9LRVNFQzZiOHYwZzBNOXV1RjlyckNVTkZKVzJnR3BQK0RvekxKVUt0bTdhdFgKVlFhc1VDa2NsVG1DTTdoeWo4ZlNOVEN3aXFPdE05QW9xVVUyWHdZNGtsYS9EbjJEQXNVdVZjYlE3SEwwNTdLRQpuNm9XOUVXL2ZDaExUTk5NaGZ6WmQ3MUhIazcwU05kM0xuY01zYzZqdnkvZDdjMG4zOHpOOUo1c1lWTk4zbEtWCi85TkxPYVpMRDRtKzJVTldXUVVrNG5yLytmOVpmTUgzYVZRYmw2UlhGcGltY2FoOFI0SU5oUTZIWG5lRG1CNnUKSXdEZ2NCeGFBS2g0VU9WVnA4MHVTL2lhRUtXVVJYTU9IOGFwWVFJREFRQUJBb0lCQVFDVkNJOXZJeVB0QkFKZwpyOG44NmhhUEc2UDFtTU1jandUTFAyZHRJNDF3aDU0eHBUVUl1czJQNkgzYjA1NWJIbnhqVkprWGZLUjBpTGxhClBsUFhzU0l6R00vVGlCSEVsYmFNVnRPOVZndml6dllsNWZ4R3RKZFhncm5vR2g5NDM3c1Qxc0dSMGZ0OVE3TFkKK2NtNUgvMzFWcFhhYUxsZjJNRWI3aG1STnNWV1lXWm9MUHJ2QUJPemVmUnpKU0RONHU0M3M0eVpnNHJoL3IzWQo3YUhYOS9CSHpJRk93WTRNL1BRQzdYaFhDMXBIeWNPY2lSWVhFTkpuMTdQN2NOSkdoMnJ4dnc2OVQ1QW9rdXY5Ckcxd2lUNmVrVmZuaVYrSlVnL1lwcTNSRGxNdVFETDRZdCtGNG1zVGJtN3NFZk5yVVMzWGZFMFdmcjB2Wk1YN2sKMnE3dXkxNEJBb0dCQU5jb1BxVHNBa3N5YzR6UWtmQmNzb0Y1ekdJM3NQTUFGTHRSMGF6S2x1Q1V3VGMzYVk5dgpiTE05OTBVK3VpVzdtRHF0VHpZMVNVSzZGNW0yMzJoSW5hTnBCTUVJcDZHQ0I3dis5WWZIQURVSkljaVdodmJhCmpIY0M5Qjl4SG5uUTBydU5aNUErdWVtRE9EZkRKUllIVUhrMEh3MlJpTE1qRFhVRmhBanJBUGJ4QW9HQkFOaGUKL3BHb2FWOWtUREtNOWZwTTZUQkZwMFVtb2lSb3crVG9TWjhWeVBmMkl1VlNpa0dtaXUydHU3MzZCZkJtVzNtQgpGWmFRc01rNkMydExkK1NBQ0lSbktQTUN4czYzR05WNkJ0aXRIRFlBaGNtRCtvU3VpaFRrd3l1WXlpQml4aFovCms1UDY3UHFKVkQ2YnhkNDV6YlFsS2VNaUVtMUhnQUVGSEF0UFBqbHhBb0dBYzNnaHhwankwakNOV3ZGRW9WN2UKWGlaanpnSmRjTXlHVTlHaFdiNlFJbzh5OHROR1Q3aFkrZ2t6ZjNJZXJNbDA5V2kxcmo0Q3gxRGdBWnJuWXl3MQpqZEY2djY1SmFLQkVUbHlTb1AvbjJJN0NGc2pTUGdFa2lXcUlZYWR2MTZoK3NERS9kMlp5bUNQWU0vVURIa05tCnFPV1VGTkFhTVNtS3UxYnVlV3JGNWNFQ2dZQnJvNTVySWVnQjM2aVVnVkdoU24rN1Z2dG15RmhqV29jUnFvbHQKamUzamhWeEl6eTRlaU5hV2RSWnY1U0R0UGs2RmZMVWJxVEY1ZWRuU2I4SGVOOStFMXJrbFk1MDVteGJNcEo4aApUY1U2RERxQ1RKamxSdHR
FbDZXTVc3ODZLMGsyU2hORnk4LzJ0empreUtPLzhPdW5rZEZyd0RpQWl0QmdNWVdKCkRzdjYwUUtCZ1FDR2htYnZrODRaYm1sdERxYW9hOVR6YU5UT3FrN3dtb0tDby9FVEg5NEtNVVZMVmk3bHhpN1kKcXFQYUdiSm9aYkZGd0NvWW12Rk5jU21nYk9SNjdCVVBkdUEzMHo0VythV1pmQVkwZGpUR08yTC8zVVhoWHdHegpVUGpZTXZZQU5sdzFmZTVmZmk3UExJSGJvZXFhTmhhaDliRVJXdFNRbm95NnYzV2hlNTh6VkE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's run mitmproxy using the above client cert and key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mitmproxy &lt;span class="nt"&gt;-p&lt;/span&gt; 9000 &lt;span class="nt"&gt;--ssl-insecure&lt;/span&gt; &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;client_certs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;&lt;span class="o"&gt;(&lt;/span&gt;kubectl config view &lt;span class="nt"&gt;--minify&lt;/span&gt; &lt;span class="nt"&gt;--flatten&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;go-template&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{{(index (index .users 0).user "client-key-data")}}'&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; kubectl config view &lt;span class="nt"&gt;--minify&lt;/span&gt; &lt;span class="nt"&gt;--flatten&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;go-template&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{{(index (index .users 0).user "client-certificate-data")}}'&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's make sure that we have some DNS alias to 127.0.0.1 to work around &lt;a href="https://maelvls.dev/go-ignores-proxy-localhost/"&gt;the Go proxy limitation&lt;/a&gt; which skips proxying for &lt;code&gt;127.0.0.1&lt;/code&gt; and &lt;code&gt;localhost&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"^127.0.0.1.*local$"&lt;/span&gt; /etc/hosts &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;bash &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'echo "127.0.0.1 local" &amp;gt;&amp;gt; /etc/hosts'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, you can run &lt;code&gt;kubectl&lt;/code&gt; to see the logs of &lt;code&gt;kube-scheduler&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% &lt;span class="nv"&gt;HTTPS_PROXY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;:9000 kubectl &lt;span class="nt"&gt;--kubeconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="s2"&gt;"s|127.0.0.1|local|"&lt;/span&gt; &lt;span class="nv"&gt;$KUBECONFIG&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="nt"&gt;--insecure-skip-tls-verify&lt;/span&gt; logs &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;component&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kube-scheduler,tier&lt;span class="o"&gt;=&lt;/span&gt;control-plane
I0701 11:11:11.585249       1 registry.go:150] Registering EvenPodsSpread predicate and priority &lt;span class="k"&gt;function
&lt;/span&gt;I0701 11:11:11.585329       1 registry.go:150] Registering EvenPodsSpread predicate and priority &lt;span class="k"&gt;function
&lt;/span&gt;I0701 11:11:12.976581       1 serving.go:313] Generated self-signed cert &lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="nt"&gt;-memory&lt;/span&gt;
W0701 11:11:17.407371       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication &lt;span class="k"&gt;in &lt;/span&gt;kube-system.  Usually fixed by &lt;span class="s1"&gt;'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'&lt;/span&gt;
W0701 11:11:17.407465       1 authentication.go:297] Error looking up &lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="nt"&gt;-cluster&lt;/span&gt; authentication configuration: configmaps &lt;span class="s2"&gt;"extension-apiserver-authentication"&lt;/span&gt; is forbidden: User &lt;span class="s2"&gt;"system:kube-scheduler"&lt;/span&gt; cannot get resource &lt;span class="s2"&gt;"configmaps"&lt;/span&gt; &lt;span class="k"&gt;in &lt;/span&gt;API group &lt;span class="s2"&gt;""&lt;/span&gt; &lt;span class="k"&gt;in &lt;/span&gt;the namespace &lt;span class="s2"&gt;"kube-system"&lt;/span&gt;
W0701 11:11:17.407483       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0701 11:11:17.407513       1 authentication.go:299] To require authentication configuration lookup to succeed, &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;--authentication-tolerate-lookup-failure&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false
&lt;/span&gt;I0701 11:11:17.478349       1 registry.go:150] Registering EvenPodsSpread predicate and priority &lt;span class="k"&gt;function
&lt;/span&gt;I0701 11:11:17.478483       1 registry.go:150] Registering EvenPodsSpread predicate and priority &lt;span class="k"&gt;function
&lt;/span&gt;W0701 11:11:17.491616       1 authorization.go:47] Authorization is disabled
W0701 11:11:17.491729       1 authentication.go:40] Authentication is disabled
I0701 11:11:17.491883       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on &lt;span class="o"&gt;[&lt;/span&gt;::]:10251
I0701 11:11:17.500576       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0701 11:11:17.500678       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0701 11:11:17.500919       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0701 11:11:17.500931       1 shared_informer.go:223] Waiting &lt;span class="k"&gt;for &lt;/span&gt;caches to &lt;span class="nb"&gt;sync &lt;/span&gt;&lt;span class="k"&gt;for &lt;/span&gt;client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0701 11:11:17.718085       1 shared_informer.go:230] Caches are synced &lt;span class="k"&gt;for &lt;/span&gt;client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0701 11:11:17.801300       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
I0701 11:11:34.659176       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It works!! Here is what mitmproxy is showing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;18:17:42 GET  HTTPS   &lt;span class="nb"&gt;local&lt;/span&gt; /api/v1/namespaces/kube-system/pods/kube-scheduler-helix-control-plane      200 …plication/json  5.2k 119ms
18:17:42 GET  HTTPS   &lt;span class="nb"&gt;local&lt;/span&gt; /api/v1/namespaces/kube-system/pods/kube-scheduler-helix-control-plane/log  200      text/plain 2.46k 227ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
    <item>
      <title>Epic journey with statically and dynamically-linked libraries (.a, .so)</title>
      <dc:creator>Maël Valais</dc:creator>
      <pubDate>Sat, 30 May 2020 18:45:00 +0000</pubDate>
      <link>https://dev.to/maelvls/epic-journey-with-statically-and-dynamically-linked-libraries-a-so-1khn</link>
      <guid>https://dev.to/maelvls/epic-journey-with-statically-and-dynamically-linked-libraries-a-so-1khn</guid>
      <description>&lt;p&gt;Between May and June 2016, I worked with &lt;a href="https://github.com/polazarus/ocamlyices"&gt;ocamlyices2&lt;/a&gt;, an OCaml package that binds to the &lt;a href="https://github.com/SRI-CSL/yices2"&gt;Yices&lt;/a&gt; C++ library. Both projects (as well as many Linux projects) are built using the "Autotools" suite.&lt;/p&gt;

&lt;p&gt;The Autotools suite includes tools like &lt;a href="https://www.gnu.org/software/autoconf/"&gt;autoconf&lt;/a&gt;, &lt;a href="https://www.gnu.org/software/automake"&gt;automake&lt;/a&gt; and &lt;a href="https://www.gnu.org/software/libtool/"&gt;libtool&lt;/a&gt;. These tools generate a bunch of shell scripts and Makefiles using shell scripts and the &lt;a href="https://en.wikipedia.org/wiki/M4_(computer_language)"&gt;M4&lt;/a&gt; macro language. The user of your project ends up running two simple commands to build it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./configure
make
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
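The `./configure` script itself is generated by autoconf from a short `configure.ac`. A minimal sketch (with a hypothetical project name) looks like:

```m4
# configure.ac -- processed by autoconf into the ./configure script.
AC_INIT([hello-demo], [1.0])    # hypothetical project name and version
AM_INIT_AUTOMAKE([foreign])     # "foreign": don't require GNU files like NEWS
AC_PROG_CC                      # find a C compiler
AC_CONFIG_FILES([Makefile])     # generate Makefile from Makefile.in
AC_OUTPUT
```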



&lt;blockquote&gt;
&lt;p&gt;Why did I bother with this? As part of my PhD, I worked on a tool, &lt;a href="https://github.com/touist/touist"&gt;touist&lt;/a&gt;, which uses SMT solvers like Yices. The &lt;code&gt;touist&lt;/code&gt; CLI was written in OCaml (a popular language among academics, including my supervisor), which meant I had to jump through hoops to interoperate with C/C++ solvers like Yices.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My first challenge with ocamlyices2 was the fact that I needed a statically-linked &lt;code&gt;libyices.a&lt;/code&gt;. And since Yices depends on &lt;a href="https://gmplib.org/"&gt;GMP&lt;/a&gt;, I had to dive deep into Yices2's &lt;code&gt;configure.ac&lt;/code&gt; and find a way to select the static version of GMP.&lt;/p&gt;

&lt;p&gt;But building a static library &lt;code&gt;libyices.a&lt;/code&gt; was not enough: I also had to build it in PIC mode (position-independent code, enabled with &lt;code&gt;-fPIC&lt;/code&gt; in gcc). Position-independent code is required when you want to embed a static library into a dynamically-linked library. That's because OCaml requires both the &lt;code&gt;.so&lt;/code&gt; and &lt;code&gt;.a&lt;/code&gt; versions of the "stub" library (a stub library is a C library that wraps another C library using OCaml's C primitives). And naturally, Yices' build system had not been written to support building a static PIC &lt;code&gt;libyices.a&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I remember these days as an epic struggle against old build systems. This experience taught me everything about &lt;code&gt;autoconf&lt;/code&gt;, Makefiles, position-independent code, &lt;code&gt;gcc&lt;/code&gt;, &lt;code&gt;ldd&lt;/code&gt; and &lt;code&gt;libtool&lt;/code&gt;. And in this post, I want to share these discoveries and how they led me to contribute to the &lt;a href="https://github.com/polazarus/ocamlyices"&gt;ocamlyices2&lt;/a&gt; project.&lt;/p&gt;

&lt;p&gt;Here is a diagram showing the dependencies between libraries. Ocamlyices2 depends on Yices and Yices depends on GMP.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                            build:   dune
             touist         lang:    ocaml
                |           output:  touist binary (statically linked)
                |
                |
                |depends on
                |
                |
                |
                v
           ocamlyices2      build:   autoconf + make
                |           lang:    ocaml + C
                |           output:  libyices2_stubs.a
                |
                |depends on
                |
                |
                |
                v           build:   autoconf + hacky make
              yices         lang:    C++
                |           output:  libyices.a (with PIC enabled)
                |
                |
                |depends on
                |
                |
                |
                v           build:   autoconf + automake + libtool
               gmp          lang:    C, assembly
                            output:  libgmp.a (with PIC enabled)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;The most important takeaway from this diagram is that we need a PIC-enabled static library &lt;code&gt;libyices.a&lt;/code&gt; in order to build the "binding libraries": the shared library &lt;code&gt;yices2.cmxs&lt;/code&gt; and the static library &lt;code&gt;libyices2_stubs.a&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;And in order to get this PIC-enabled &lt;code&gt;libyices.a&lt;/code&gt;, I needed to make sure that the GMP library picked by the Yices &lt;code&gt;./configure&lt;/code&gt; would be static and PIC-enabled.&lt;/p&gt;
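&lt;p&gt;For illustration, here is roughly how such a PIC-enabled static &lt;code&gt;libgmp.a&lt;/code&gt; can be produced from the GMP sources (a hypothetical sketch: the version number and the &lt;code&gt;with-pic&lt;/code&gt; prefix are illustrative, not taken from my original setup):&lt;/p&gt;

```shell
# Hypothetical sketch: building a PIC-enabled static libgmp.a.
# --with-pic asks libtool to compile the static objects as
# position-independent code.
cd gmp-6.1.2
./configure --prefix="$PWD/../with-pic" --enable-static --disable-shared --with-pic
make -j4
make install
# The resulting with-pic/lib/libgmp.a can then be handed to Yices' ./configure.
```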

&lt;p&gt;In early 2016, the Yices build system was limited to building a shared library with no support for cross-compilation (a requirement to build on Windows) and no support for enforcing PIC in &lt;code&gt;libgmp.a&lt;/code&gt; and &lt;code&gt;libyices.a&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Yices2 &amp;amp; autoconf: an attempt at fixing a limited build system
&lt;/h2&gt;

&lt;p&gt;I remember the warmth that we had in May 2016. My daughter had turned three and we were still living in a tiny apartment since I was technically still a student.&lt;/p&gt;

&lt;p&gt;My first patch to the ocamlyices2 project took over a month of intense work. The stakes were high: I aimed to revamp the whole Yices2 build system. The original build system didn't allow developers to statically build &lt;code&gt;libyices.a&lt;/code&gt;. More generally, it was a pain to work with: most configuration happened at &lt;code&gt;./configure&lt;/code&gt; time, but a ton of things still had to be passed at make time (e.g., &lt;code&gt;make VAR=value&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;This change (&lt;a href="https://github.com/polazarus/ocamlyices2/commit/25f5eb15"&gt;25f5eb15&lt;/a&gt;) brought a ton of features. But the most important ones were:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Better &lt;code&gt;./configure &amp;amp;&amp;amp; make&lt;/code&gt; experience&lt;/strong&gt;. Instead of having some parts of the build configuration passed as Makefile variables, I moved everything to nice flags that you pass to &lt;code&gt;./configure&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Proper PIC support&lt;/strong&gt;. Sometimes, Yices' &lt;code&gt;./configure&lt;/code&gt; would pick up a &lt;code&gt;libgmp.a&lt;/code&gt; that would not be "PIC". So I wrote a new &lt;a href="https://en.wikipedia.org/wiki/M4_(computer_language)"&gt;M4&lt;/a&gt; macro, &lt;code&gt;CSL_CHECK_STATIC_GMP_HAS_PIC&lt;/code&gt;, that would do the PIC check using libtool. For example, the developer would be able to ask for a static library with PIC support:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   ./configure &lt;span class="nt"&gt;--with-pic&lt;/span&gt; &lt;span class="nt"&gt;--enable-static&lt;/span&gt; &lt;span class="nt"&gt;--disable-shared&lt;/span&gt; &lt;span class="nt"&gt;--with-pic-gmp&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$PWD&lt;/span&gt;/with-pic/libgmp.a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Static &lt;code&gt;libyices.a&lt;/code&gt;&lt;/strong&gt;. The original build system did not support producing a static &lt;code&gt;libyices.a&lt;/code&gt;; the revamped one does.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Using &lt;code&gt;libtool&lt;/code&gt;&lt;/strong&gt;. Instead of hand-crafted targets for building the static &amp;amp; dynamic libraries, &lt;code&gt;libtool&lt;/code&gt; allowed me to build both with very little effort (at the cost of slightly longer build times). I remember trying to fiddle with the Makefile to get PIC/non-PIC as well as dynamic/static &lt;code&gt;.o&lt;/code&gt; units. Using &lt;code&gt;libtool&lt;/code&gt; made it much easier to build the static and dynamic versions of &lt;code&gt;libyices&lt;/code&gt; (&lt;code&gt;libyices.a&lt;/code&gt; and &lt;code&gt;libyices.so&lt;/code&gt;) simultaneously.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-compilation&lt;/strong&gt;. I wanted to be able to release binaries for Windows users, which meant that I needed to cross-compile from a Cygwin environment to a Win32 executable.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I also fixed weird bugs like a failure on Alpine 3.5 due to a race condition in &lt;code&gt;ltmain.sh&lt;/code&gt;. Yes, you read that right: a race condition in a build system!&lt;/p&gt;

&lt;p&gt;I'm not sure, but &lt;a href="https://github.com/polazarus/ocamlyices2/commit/25f5eb15"&gt;25f5eb15&lt;/a&gt; might be my biggest commit ever:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight email"&gt;&lt;code&gt;&lt;span class="nt"&gt;Author&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="na"&gt; Maël Valais &amp;lt;mael.valais@gmail.com&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;Date&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="na"&gt;   Fri May 5 10:31:37 2017 +0200&lt;/span&gt;
&lt;span class="nt"&gt;Commit&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="na"&gt; 25f5eb15&lt;/span&gt;

    libyices: build system: moved all config from Makefile to ./configure

    What I tried to fix:
    * Impossible to produces static archive libyices.a for the mingw32 host.
    * So much configuration done in Makefile instead of ./configure.
    * The 'make build OPTION= MODE=' is kind of unexpected and not really standard.
    * As an end-user builder, it is hard to guess that my system-installed libgmp.a
      is not PIC. The build system should help me with that.
    * CFLAGS and CPPFLAGS cannot be overritten by the end-user builder at 'make'
      time. See 'Preset Output Variables' in the autoconf manual.
    * `smt2_commands.c`, `smt2_parser.c` and `smt2_term_stack.c` are all including
      `smt2_commands.h`, and in `smt2_commands.h` there is a definition of a type,
      so there is a clash of symbols at link time. Solution 1: use 'extern' in .h
      and definition in .c. Solution 2: typedef the enum.
    * The build system should be fully parallel-proof. Note that 'make -j' is a
      thread bomb; prefer using 'make -j4' if you have 4 processors.
    * Why removing libyices.a when using 'dist'? (rm -f $(distdir)/lib/libyices.a)
      libyices.a should be distributed to the end-users along with the shared lib.
    * Why does libgmp.a contain more than libgmp.so? I guess it is because some
      function are needed by the tests and the tests are never dynamically linked
      to the shared libyices.so... For example, the test `test_type_matching` calls
      a function 'delete_pstore' that is not exported. This means we cannot use the
      created libyices.so! We must instead use the object files directly.

    Features of the new build system:
    * It is now possible to select what you want to build using --enable-static
      and --enable-shared. By default, both are built. You can speed up the build
      by disabling one of the modes, for example --disable-shared.
    * Running ./configure will tell you if the libgmp.a found/given with
      --with-static-gmp or --with-pic-gmp is PIC or not.
    * Libtool now handles all shared library naming with version number and symbolic
      links. on linux, .so, on mac os .dylib, on windows .a, .dll.a and .dll...
    * It is now possible to choose if you want PIC-only code in the static
      library libyices.a using --with-pic.
    * The ./configure now configures and the Makefile makes; moved all the
      configuration steps that were done in the Makefile.
    * It is not required anymore to pass OPTION when using make. OPTION is now
      handled by passing for example --host=i686-w64-mingw32 when running
      ./configure, instead of 'make OPTION=mingw32',
    * It is not required anymore to pass MODE in make. MODE is now handled by
      the argument --with-mode=release (for example) when running ./configure.
    * Removed the confusing mess of build/&amp;lt;host&amp;gt;/... and configs/make.&amp;lt;host&amp;gt;.
      Build objects are simply put in build/.
    * Merged the many Makefiles that were sharing a lot of code in common.
    * Standardized the 'make' target and experience:
      - make build for building binaries and library (no need for OPTION or MODE)
      - make dist (non-standard) for showing the results of the build as it would
        be distributed
      - make distclean for removing any files created by ./configure
      - make lib if you only want the library
      - make install and uninstall for installing/uninstalling (DESTDIR supported)
    * On Windows, we can now build a static library libyices.a.
    * On Windows, shared and static libraries can be built at once. The static
      version of libyices.a can be renamed using --with-static-name.
    * DESTDIR works as expected: it will reproduce the hierarchy using the prefix
      when running 'make install DESTDIR=/path'
    * A nice summary of the configuration is now printed when running ./configure.
      It allows to check if libgmp.a is PIC or not, and helps to have a clear
      view of what is going to happen.
    * Moved version number of Yices in configure.ac
    * If the user wants to use --with-pic-gmp but his libgmp.a is not PIC, give him
      an indication of what command to run to build the with-PIC libgmp.a.
    * Moved the gmaketest into configure; warn the user if 'make' is not gnu make.
    * Parallel build is now fully supported (make -j)
    * when using --with-static-gmp (and other similar flags), try to find gmp.h
      in . and ../include automatically, and fall back with the system-wide gmp.h
    * check for gmp.h even when no --with-static-gmp-include-dir is given
    * moved all csl_* functions into autoconf/m4/csl_check_libs.m4 so that they
      can be reused somewhere else
    * compute dependencies only if not in release mode and at compile time
      instead of ahead of time. This saves time during compilation, because deps
      are not necessary if the .c or .h files are not changed (i.e., if the builder
      is the end-user). If the builder is a developer, then he will set
      --with-mode={debug,profile...} and this will trigger the deps to be computed.
    * 'make test' compiles and runs all tests in tests/unit
    * 'make test_api12' will compile and run the test tests/unit/test_api12.c
    * removed version_*.c file as it is rebuilt at make time
    * gperf is now only necessary when changing the tokens.txt or keywords.txt files,
      the end-user builder does not have to have gperf installed.

    Side notes:
    * CPPFLAGS, CFLAGS, LDFLAGS and any other makefile variable can be overwritten
      using 'make LDFLAGS=...' (was already the case with the previous version of the
      build system)

    Known issues:
    * on Mac OS X, linking executables agains non-PIC libgmp.a will throw the
      following warning:
      ld: warning: PIE disabled. Absolute addressing (perhaps -mdynamic-no-pic)
      not allowed in code signed PIE, but used in ___gmpn_mul_1 from libgmp.a(mul_1.o).
      To fix this warning, don't compile with -mdynamic-no-pic or link with -Wl,-no_pie

    Todo:
    * produce libyices.def on Windows
    * make sure the test on libgmp-10.dll is future-proof (remove the '-10')
    * fix the 'echo summary'
    * I did not test the checks made configure.ac on mcsat and libpoly; we should
      do the same tests as done on libgmp.a (for checking that it is PIC) and
      add a helping message at the end of configure.ac.
    * For the tests, I read that two kind of tests were compiled:
      - 'tests' where the tests are linked to the non-PIC libgmp.a and static libgmp.a
      - 'tests-static' where the tests are linked to the PIC libgmp.a and shared
        GMP library.
      I changed the second one: it links to the shared version of libyices and shared
      version of GMP. But building the with-PIC libyices.a would be really easy
      (and it is still possible using the flag --with-pic).
    * 'make dist' is not a staging area (for now) for 'make install'. They both
      install from built objects.
    * it is not possible to compile the tests against the shared library
      because many symbols which are used in the tests are not exported ('export'
      in the C code). They exist if we do 'nm' but they are just not usable.

    * pstore issue:

    The visibility is T (visible) in static mode and t (hidden) in shared mode. This
    is because 'abstract_values.c' has not been 'exported' (=added in `yices_api.c`).
    Here is an example of the difference (functions from )
    ```


    # nm build/lib/libyices.a | grep pstore
    0000000000000090 T _delete_pstore
    0000000000000000 T _init_pstore
    # nm build/lib/.libs/libyices.2.dylib | grep pstore
    0000000000037e10 t _delete_pstore
    0000000000037d80 t _init_pstore


    ```

    I tweaked the ltmain.sh to be able to pass different CPPFLAGS for the compilation
    of static and shared objects by the `%.o: %.c` rule. CPPFLAGS must be different
    because of Windows dlls: `-DNOYICES_DLL -D__GMP_LIBGMP_DLL=0` for example.

    I added two variables LT_STATIC_CFLAGS and LT_SHARED_CFLAGS. They allow me
    to pass CFLAGS to libtool for .c -&amp;gt; .o targets. It allows me to pass things
    like -DYICES_STATIC, -DNOYICES_DLL and -D_LIBGMP_DLL.

    ```diff
    diff --git a/ext/yices/autoconf/ltmain.sh b/ext/yices/autoconf/ltmain.sh
    index bf5d83b..7b7dd3a 100644
    --- a/ext/yices/autoconf/ltmain.sh
    +++ b/ext/yices/autoconf/ltmain.sh
    @@ -3500,10 +3500,10 @@ compiler."
           fbsd_hideous_sh_bug=$base_compile

           if test no != "$pic_mode"; then
    -       command="$base_compile $qsrcfile $pic_flag"
    +       command="$base_compile $LT_SHARED_CFLAGS $qsrcfile $pic_flag"
           else
            # Don't build PIC code
    -       command="$base_compile $qsrcfile"
    +       command="$base_compile $LT_SHARED_CFLAGS $qsrcfile"
           fi

           func_mkdir_p "$xdir$objdir"
    @@ -3552,9 +3552,9 @@ compiler."
         if test yes = "$build_old_libs"; then
           if test yes != "$pic_mode"; then
            # Don't build PIC code
    -       command="$base_compile $qsrcfile$pie_flag"
    +       command="$base_compile $LT_STATIC_CFLAGS $qsrcfile$pie_flag"
           else
    -       command="$base_compile $qsrcfile $pic_flag"
    +       command="$base_compile $LT_STATIC_CFLAGS $qsrcfile $pic_flag"
           fi
           if test yes = "$compiler_c_o"; then
            func_append command " -o $obj"


    ```
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;What a massive commit message, right?!&lt;/p&gt;

&lt;p&gt;Obviously, I still needed to use all the new features that I had just added to the Yices &lt;code&gt;./configure&lt;/code&gt;. So I proposed a second patch (&lt;a href="https://github.com/polazarus/ocamlyices2/commit/ccb5a563"&gt;ccb5a563&lt;/a&gt;) with changes to ocamlyices2's own &lt;code&gt;./configure&lt;/code&gt;: new flags that let me build a static &lt;code&gt;libyices.a&lt;/code&gt; by specifying the static version of the GMP library, &lt;code&gt;libgmp.a&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I also used the new cross-compilation capability of the Yices &lt;code&gt;./configure&lt;/code&gt; so that it would be possible to build ocamlyices2 on Windows. The compilation would rely on the POSIX-compliant &lt;a href="https://www.cygwin.com/"&gt;Cygwin&lt;/a&gt; suite and cross-compile to a native Win32 executable using &lt;a href="http://www.mingw.org/"&gt;MinGW32&lt;/a&gt; (a port of GCC).&lt;/p&gt;
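&lt;p&gt;A sketch of what such a cross-build invocation looks like (hypothetical: the exact flags I used back then may have differed):&lt;/p&gt;

```shell
# Hypothetical sketch, run from a Cygwin shell with the MinGW cross toolchain
# installed: --host selects the cross compiler, producing native Win32 binaries.
./configure --host=i686-w64-mingw32 --enable-static --disable-shared
make
```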

&lt;p&gt;Author: Maël Valais &lt;a href="mailto:mael.valais@gmail.com"&gt;mael.valais@gmail.com&lt;/a&gt;&lt;br&gt;
Date:   Fri May 5 16:18:46 2017 +0200&lt;br&gt;
Commit: ccb5a563&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ocamlyices2: only build the static stub using static libgmp.a.

The library libgmp.a will be searched in system dirs or you can use the flag
--with-static-gmp= when running ./configure for setting your own libgmp.a. If
no libgmp.a is found, the shared library is used. You can force the use of
shared gmp library with --with-shared-gmp.

If --with-shared-gmp is not given, the libgmp.a that has been found
will be included in the list of installed files. The reason is because
if we want to build a shared-gmp-free binary, zarith will sometimes pick the
shared library (with -lgmp) over the static lbirary libgmp.a.

Including libgmp.a in the distribution of ocamlyices2 is a convenience for
creating gmp-shared-free binaries.

Why do we prefer using a static version of libgmp.a?
===================================================

This is because we build a non-PIC static version of libyices.a. If
we wanted to build both static and shared stubs, we should either
- build a PIC libyices.a but it would conflict with the non-PIC one
- build a shared library libyices.so.

For now, I chose to just skip the shared stubs (dllyices2_stubs.so).

Also:

* turn on -fPIC (in configure.ac) only if non-static gmp
* added a way to link statically to libgmp.a (--with-static-gmp)
* use -package instead of -I/lib for compiling *.c in ocamlc

This option uses the change I made to the build system of libyices.
Why? Because I want the possibility of producing binaries that do
not need any dll alongside.

Guess the host system and pass it to libyices ./configure
=========================================================
It is now possible to use ./configure for mingw32 cross-compilation.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;A month passed, and I realized that GMP was not embedded at all in &lt;code&gt;libyices.a&lt;/code&gt;: the GMP symbols showed up as undefined (&lt;code&gt;U&lt;/code&gt;) when running &lt;code&gt;nm libyices.a&lt;/code&gt;. It took me a while to figure this out... back to tweaking the fragile Yices &lt;code&gt;./configure&lt;/code&gt;!&lt;/p&gt;

&lt;p&gt;Along the way, I also realized how different Linux distributions are. On Arch Linux, partial linking (&lt;code&gt;ld -r&lt;/code&gt;) fails when only the shared GMP library is installed, which meant I had to add a flag for this specific purpose in commit &lt;a href="https://github.com/polazarus/ocamlyices2/commit/38200b0a"&gt;38200b0a&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;Author: Maël Valais &lt;a href="mailto:mael.valais@gmail.com"&gt;mael.valais@gmail.com&lt;/a&gt;&lt;br&gt;
Date:   Fri Jun 16 21:18:37 2017 +0200&lt;br&gt;
Commit: 38200b0a&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;libyices: added option --without-gmp-embedded

This option will disable the partial linking that allows to embed
GMP into libyices.a (only with --enable-static).

For example, on Arch linux, the partial linking command

    ld -r -lgmp *.o -o libyices.o

would fail even if libgmp.so is correclty installed. It seems that 'ld -r'
would only work with a static libgmp.a (but the arch linux repo only installs
the shared gmp library).

Why this `ld -r`? This command is the only way I found to compile a
static libyices.a from either a shared or a static libgmp and produce
a gmp-depend-free libyices.a.

Two solutions:

1. Drop the necessity for building a libyices.a free of gmp dependency.
   In this case, I could remove the `ld -r`.
   It would then create a libyices.a that depends on libgmp.a/so.
2. Separate the gmp-dependency-free libyices.a from the normal
   gmp-dependent libyices.a. For example, I could use the option
   `--without-gmp-embedded`

So I went with solution (2).

This option disables the embedding of GMP inside libyices.a. This
'embedding' is made using partial linking (ld -r) which seems to
be failing on Arch Linux when using the shared GMP library.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Although I had already added a check (&lt;code&gt;CSL_CHECK_STATIC_GMP_HAS_PIC&lt;/code&gt; in &lt;a href="https://github.com/polazarus/ocamlyices2/commit/25f5eb15"&gt;25f5eb15&lt;/a&gt;) to make sure that the user-provided libgmp was PIC when using &lt;code&gt;--with-pic&lt;/code&gt;, I realized in &lt;a href="https://github.com/polazarus/ocamlyices2/commit/55c8e92a"&gt;55c8e92a&lt;/a&gt; that it was trickier than I thought...&lt;/p&gt;

&lt;p&gt;Author: Maël Valais &lt;a href="mailto:mael.valais@gmail.com"&gt;mael.valais@gmail.com&lt;/a&gt;&lt;br&gt;
Date:   Sat Jun 17 14:09:12 2017 +0200&lt;br&gt;
Commit: 55c8e92a&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;libyices: with --enable-static and --with-pic, enforce PIC libgmp.a

One problem I came across was that most of the time, when I was doing

    ./configure --enable-static --with-pic

the produced libyices.a would still contain non-PIC libgmp.a, although
being itself PIC. In this commit, I enforce that if --with-pic is given,
then:

1) either --with-pic-gmp has been given, in this case we use that for
   creating the PIC libyices.a;
2) or --with-pic-gmp has not been given and thus we try to simply use the
   shared gmp through -lgmp.

Reminder: we also check that the system libgmp.a or the libgmp.a given with
--with-static-gmp is PIC. If it is the case, the PIC libgmp.a will be used
for --with-pic.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now, I also had to fix ocamlyices2's &lt;code&gt;./configure&lt;/code&gt; since &lt;code&gt;ld -r&lt;/code&gt; (partial linking) was not supported on Arch Linux (see the CI failure in this &lt;a href="https://github.com/ocaml/opam-repository/pull/9086"&gt;opam-repository pull request&lt;/a&gt;). I remember waiting for hours for the CI to run on all imaginable systems: Debian, Ubuntu, Suse, Arch Linux, Alpine, CentOS, Fedora, macOS and Windows...&lt;/p&gt;

&lt;p&gt;Commits &lt;a href="https://github.com/polazarus/ocamlyices2/commit/df7c89a1"&gt;df7c89a1&lt;/a&gt; and &lt;a href="https://github.com/polazarus/ocamlyices2/commit/70dc5de5"&gt;70dc5de5&lt;/a&gt; fix the partial linking issue on Arch Linux:&lt;/p&gt;

&lt;p&gt;Author: Maël Valais &lt;a href="mailto:mael.valais@gmail.com"&gt;mael.valais@gmail.com&lt;/a&gt;&lt;br&gt;
Date:   Sat Jun 17 16:52:59 2017 +0200&lt;br&gt;
Commit: df7c89a1&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ocamlyices2: added --with-libyices and --with-libyices-include-dir

These options allow to give your own libyices.a and the include directory
where the libyices headers are.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Author: Maël Valais &lt;a href="mailto:mael.valais@gmail.com"&gt;mael.valais@gmail.com&lt;/a&gt;&lt;br&gt;
Date:   Sat Jun 17 17:01:58 2017 +0200&lt;br&gt;
Commit: 70dc5de5&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ocamlyices2: use --without-gmp-embedded by default

After giving it some thoughts, the need for having a self-contained libyices.a
(which would only need -lyices, no -lgmp needed) in ocamlyices2 is pointless
as 'zarith' will still need '-lgmp' anyway.

The Makefile will still put libgmp.a and libyices.a inside src/ so that
the static version of gmp is used (with -L.) instead of the shared version.

Rationale: disabling the partial linking fixes the build on Arch Linux, which
(I re-tested on a docker image) cannot accept partial linking with -lgmp when
only libgmp.so is available. Here is the failing command:

    ld -r *.o -lgmp -o libyices.o
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;h2&gt;
  
  
  Final result of the Yices2 build system
&lt;/h2&gt;

&lt;p&gt;The experience with the re-written &lt;code&gt;./configure&lt;/code&gt; (&lt;a href="https://github.com/polazarus/ocamlyices2/tree/master/ext/yices"&gt;here&lt;/a&gt;) is very different from the original one. When the user wants to compile the library with PIC, they get a warning if one of the dependencies is not PIC. There is much finer control over what the user wants: dynamic vs. static, PIC vs. non-PIC. There is also more control over dependencies like GMP, since the user can pass either a static or a shared version of libgmp.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;./configure&lt;/code&gt; that you can see &lt;a href="https://github.com/polazarus/ocamlyices2/tree/master/ext/yices"&gt;here&lt;/a&gt; gained many features like &lt;code&gt;--with-shared-gmp&lt;/code&gt; or &lt;code&gt;--with-pic&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;% ./configure --help
`configure' configures Yices 2.5.2 to adapt to many kinds of systems.

Usage: ./configure [OPTION]... [VAR=VALUE]...

System types:
  --build=BUILD     configure for building on BUILD [guessed]
  --host=HOST       cross-compile to build programs to run on HOST [BUILD]

Optional Features:
  --enable-shared[=PKGS]  build shared libraries [default=yes]
  --enable-static[=PKGS]  build static libraries [default=yes]
  --disable-libtool-lock  avoid locking (might break parallel builds)
  --enable-mcsat          Enable support for MCSAT. This requires the libpoly
                          library.

Optional Packages:
  --with-pic[=PKGS]       try to use only PIC/non-PIC objects [default=use
                          both]
  --with-static-gmp=
                          Full path to a static GMP library (e.g., libgmp.a)
  --with-static-gmp-include-dir=
                          Directory of include file "gmp.h" compatible with
                          static GMP library
  --with-pic-gmp=   Full path to a relocatable GMP library (e.g.,
                          libgmp.a)
  --with-pic-gmp-include-dir=
                          Directory of include file "gmp.h" compatible with
                          relocatable GMP library
  --with-static-libpoly=
                          Full path to libpoly.a
  --with-static-libpoly-include-dir=
                          Path to include files compatible with libpoly.a
                          (e.g., /usr/local/include)
  --with-pic-libpoly=
                          Full path to a relocatable libpoly.a
  --with-pic-libpoly-include-dir=
                          Path to include files compatible with the
                          relocatable libpoly.a
  --with-shared-gmp       By default, a static version of the GMP library will
                          be searched. This option forces the use of the
                          shared version. This applies for both shared and
                          static libraries.
  --without-gmp-embedded  (Only when --enable-static) By default, the static
                          library libyices.a created will be partially linked
                          (ld -r) so that the GMP library is not needed
                          afterwards (i.e., only -lyices is needed). If you
                          want to disable the partial linking (and thus -lgmp
                          and -lyices will be needed), you can use this flag.
  --with-mode=MODE        The mode used during compilation/distribution. It
                          can be one of release, debug, devel, profile, gcov,
                          valgrind, purify, quantify or gperftools. (default:
                          release)
  --with-static-name=name (Windows only) when building simultanously shared
                          and static libraries, allows you to give a different
                          name for the static version of libyices.a.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I also added a ton of diagnostic information that appears at the end of &lt;code&gt;./configure&lt;/code&gt;. That's very useful when you want to make sure that &lt;code&gt;./configure&lt;/code&gt; has picked up the right version of &lt;code&gt;libgmp&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;configure: Summary of the configuration:
EXEEXT:
SED:                        /usr/bin/sed
LN_S:                       ln -s
MKDIR_P:                    /usr/local/opt/coreutils/libexec/gnubin/mkdir -p
CC:                         gcc
LD:                         /Library/Developer/CommandLineTools/usr/bin/ld
AR:                         ar
RANLIB:                     ranlib
STRIP:                      strip
GPERF:                      gperf
NO_STACK_PROTECTOR:         -fno-stack-protector
STATIC_GMP:                 /usr/local/lib/libgmp.a
STATIC_GMP_INCLUDE_DIR:
PIC_GMP:                    /usr/local/lib/libgmp.a
PIC_GMP_INCLUDE_DIR:
ENABLE_MCSAT:               no
STATIC_LIBPOLY:
STATIC_LIBPOLY_INCLUDE_DIR:
PIC_LIBPOLY:
PIC_LIBPOLY_INCLUDE_DIR:

Version:                    Yices 2.5.2
Host type:                  x86_64-apple-darwin19.4.0
Install prefix:             /Users/mvalais/code/ocamlyices2/ext/yices
Build mode:                 release

For both static and shared library:
  CPPFLAGS:                  -DMACOSX -DNDEBUG
  CFLAGS:                    -fvisibility=hidden -Wall -Wredundant-decls -O3 -fomit-frame-pointer -fno-stack-protector
  LDFLAGS:

For static library          libyices.a:
  Enable:                   yes
  STATIC_CPPFLAGS:           -DYICES_STATIC
  STATIC_LIBS:
  Libgmp.a found:           yes
  Libgmp.a path:            /usr/local/lib/libgmp.a
  Libgmp.a is pic:          yes     (non-PIC is faster for the static library)
  PIC mode for libyices.a:  default
  Use shared gmp instead of libgmp.a:  no
  Embed gmp in libyices.a:  yes

For shared library:
  Enable:                   yes
  SHARED_CPPFLAGS:
  SHARED_LIBS:
  Libgmp.a with PIC found:  yes
  Libgmp.a path:            /usr/local/lib/libgmp.a
  Use shared gmp instead of libgmp.a: no
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A final word about the &lt;code&gt;Makefile&lt;/code&gt;: since all the build configuration is handled by &lt;code&gt;./configure&lt;/code&gt;, you don't have to pass any variables at &lt;code&gt;make&lt;/code&gt; time anymore, which really helps when you need to &lt;code&gt;make&lt;/code&gt; multiple times in a row and don't want to type the variables every single time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Inspecting static and dynamic libraries
&lt;/h2&gt;

&lt;p&gt;Here are two tips that I learned along the way. First, I very often need to know what libraries an executable is depending on:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# macOS
% otool -L /bin/ls
/bin/ls:
    /usr/lib/libutil.dylib (compatibility version 1.0.0, current version 1.0.0)
    /usr/lib/libncurses.5.4.dylib (compatibility version 5.4.0, current version 5.4.0)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1281.100.1)

# Linux (Alpine Linux 3.9)
% ldd /bin/ls
    /lib/ld-musl-x86_64.so.1 (0x7fc9b2c84000)
    libc.musl-x86_64.so.1 =&amp;gt; /lib/ld-musl-x86_64.so.1 (0x7fc9b2c84000)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I also had to dig into the symbols of libraries in order to make sure that static libraries contained all the needed symbols:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% nm ext/libyices_pic_no_gmp/lib/libyices.a
ext/libyices_pic_no_gmp/lib/libyices.a(libyices.o):
00000000000ef6c0 t _convert_rba_tree
0000000000049a20 t _convert_simple_value
00000000000c2f70 t _convert_term_to_bit
00000000000d97e0 t _convert_term_to_conditional
0000000000049700 t _convert_term_to_val
0000000000049b30 t _convert_val
00000000000028b0 T _yices_or
0000000000002fa0 T _yices_or2
0000000000002ca0 T _yices_or3
0000000000003480 T _yices_pair
000000000002ad30 t _yices_parse
0000000000006420 T _yices_parse_bvbin
00000000000064b0 T _yices_parse_bvhex
0000000000004200 T _yices_parse_float
0000000000004180 T _yices_parse_rational
000000000000b150 T _yices_parse_term
000000000000b090 T _yices_parse_type
                 U _memcpy
                 U _memset
                 U _memset_pattern16
                 U ___error
0000000000145280 S ___gmp_0
000000000017dcf0 D ___gmp_allocate_func
00000000001099e0 T ___gmp_assert_fail
0000000000109980 T ___gmp_assert_header
0000000000145460 S ___gmp_binvert_limb_table
000000000014527c S ___gmp_bits_per_limb
0000000000109a60 T ___gmp_default_allocate
0000000000109ae0 T ___gmp_default_free
0000000000109a90 T ___gmp_default_reallocate
0000000000145290 S ___gmp_digit_value_tab
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The letter before the symbol is the "symbol type" (from &lt;code&gt;man nm&lt;/code&gt;):&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Each symbol name is preceded by its value (blanks if undefined). This value is followed by one of the following characters, representing the symbol type:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;U = undefined,&lt;/li&gt;
&lt;li&gt;T (text section symbol),&lt;/li&gt;
&lt;li&gt;D (data section symbol),&lt;/li&gt;
&lt;li&gt;S (symbol in a section other than those above).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the symbol is local (non-external), the symbol's type is instead represented by the corresponding lowercase letter. A lower case u in a dynamic shared library indicates a undefined reference to a private external in another module in the same library.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For example, the symbol &lt;code&gt;_yices_parse_float&lt;/code&gt; is an external symbol (uppercase &lt;code&gt;T&lt;/code&gt;), meaning that it is visible outside of &lt;code&gt;libyices.a&lt;/code&gt;. On the other hand, &lt;code&gt;_convert_simple_value&lt;/code&gt; is local to the library (lowercase &lt;code&gt;t&lt;/code&gt;).&lt;/p&gt;
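&lt;p&gt;When checking whether a static library is complete, it helps to pull out only the undefined symbols. The &lt;code&gt;awk&lt;/code&gt; one-liner below is my own sketch (not part of the Yices tooling); it relies on the fact that undefined symbols have a blank address, so the type letter becomes the first field:&lt;/p&gt;

```shell
# A few nm(1) lines captured from the listing above:
listing='0000000000049a20 t _convert_simple_value
0000000000004200 T _yices_parse_float
                 U _memcpy
                 U _memset'

# Undefined symbols have no address, so the type letter is field 1:
printf '%s\n' "$listing" | awk '$1 ~ /^[Uu]$/ {print $2}'
# prints:
# _memcpy
# _memset
```

&lt;p&gt;Against a real archive, the same filter would be &lt;code&gt;nm libyices.a | awk '$1 ~ /^[Uu]$/ {print $2}'&lt;/code&gt;: anything it prints must be provided by another library at link time.&lt;/p&gt;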

&lt;h2&gt;
  
  
  Contributing the new Yices build system to upstream
&lt;/h2&gt;

&lt;p&gt;On 16 June 2016, I sent an email to Bruno Dutertre, one of the developers at SRI (the company behind the Yices SMT solver). I proposed all these changes with links to the various patches on GitHub. Unfortunately, it didn't work out, and the reason might be that the whole patch was enormous and very hard to review.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Hi Maël,&lt;/p&gt;

&lt;p&gt;Thanks for the message and for your efforts. We'll look into your updates.&lt;/p&gt;

&lt;p&gt;We know that the Yices build system is unconventional because most of the work is done in the Makefiles rather that in the configure script. There are historical reason for this (and it should be able to build PIC libraries without problems).&lt;/p&gt;

&lt;p&gt;By the way, Yices is now open-source (GPL) on github: &lt;a href="https://github.com/SRI-CSL/yices2"&gt;https://github.com/SRI-CSL/yices2&lt;/a&gt;. Take a look when you have time,&lt;/p&gt;

&lt;p&gt;Thanks again,&lt;/p&gt;

&lt;p&gt;Bruno&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I wish we had a unit-test framework for &lt;code&gt;autoconf&lt;/code&gt;. The &lt;code&gt;autoconf&lt;/code&gt; ecosystem generates very fragile scripts and the only way to test them is to run them with all possible flag combinations, which is pretty much impossible.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Update 31 May 2020&lt;/strong&gt;: added the ascii "dependency" diagram to give a better sense of what the challenge was.&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Github Actions with a private Terraform module</title>
      <dc:creator>Maël Valais</dc:creator>
      <pubDate>Sat, 09 May 2020 14:02:00 +0000</pubDate>
      <link>https://dev.to/maelvls/github-actions-with-a-private-terraform-module-5b85</link>
      <guid>https://dev.to/maelvls/github-actions-with-a-private-terraform-module-5b85</guid>
<description>&lt;p&gt;A common way of sharing Terraform modules is to move them into a separate repo. For companies, that means a private repo. When &lt;code&gt;terraform init&lt;/code&gt; runs, the module is fetched; if it is stored in a private Github repo, you will need to work around the authentication.&lt;/p&gt;

&lt;p&gt;Imagine that these shared modules are stored on the private Github repo "github.com/your-org/terraform-modules". Importing this module from a different repo would look something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"some_instance_of_this_module"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"git@github.com:your-org/terraform-modules.git//path/to/module?ref=master"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using &lt;code&gt;git+ssh&lt;/code&gt; as a way of fetching this private module will work great locally since you probably have a private key that Github knows about. Locally, &lt;code&gt;terraform init&lt;/code&gt; will work.&lt;/p&gt;

&lt;p&gt;But what about CI: should I create a key pair, store the private key as a secret, and have the public key known by Github (or Gitlab)?&lt;/p&gt;

&lt;p&gt;Using SSH key pairs is not ideal. The key pair is tied to an individual and can't be linked to a Github App like &lt;code&gt;github-bot&lt;/code&gt;. Plus, the key pair mechanism doesn't offer access to a specific repo, which means all your company's repositories are exposed. And finally, some companies do not have port 22 open for security reasons, which means &lt;code&gt;git+https&lt;/code&gt; is their only option.&lt;/p&gt;

&lt;p&gt;To use https instead of ssh, we start by changing the way we import these modules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt; module "some_instance_of_this_module" {
&lt;span class="gd"&gt;-   source = "git@github.com:your-org/terraform-modules.git//path/to/module?ref=master"
&lt;/span&gt;&lt;span class="gi"&gt;+   source = "git::https://github.com/your-org/terraform-modules.git//path/to/module?ref=master"
&lt;/span&gt; }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Local development &amp;amp; git over https
&lt;/h2&gt;

&lt;p&gt;Locally, you will have to make sure you can &lt;code&gt;git clone&lt;/code&gt; this private repo, for example, the following should work:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/your-org/terraform-modules.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If it doesn't work, Github has a helpful page "&lt;a href="https://help.github.com/en/github/using-git/caching-your-github-password-in-git"&gt;Caching your GitHub password in Git&lt;/a&gt;".&lt;/p&gt;
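&lt;p&gt;One quick way to verify the setup is to ask git to contact the repo while forbidding any interactive prompt; &lt;code&gt;GIT_TERMINAL_PROMPT=0&lt;/code&gt; makes git fail fast instead of asking for a password (the repo name is the example one from this article):&lt;/p&gt;

```shell
# Lists the HEAD ref of the private repo; fails immediately if the cached
# credentials are missing or wrong, instead of hanging on a password prompt.
GIT_TERMINAL_PROMPT=0 git ls-remote https://github.com/your-org/terraform-modules.git HEAD
```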

&lt;h2&gt;
  
  
  Continuous integration &amp;amp; git over HTTPS
&lt;/h2&gt;

&lt;p&gt;It is a bit trickier to get HTTPS working on the CI. In the following, I'll take the example of Github Actions but that will work for any CI provider. There are two main solutions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;using &lt;code&gt;url.insteadof&lt;/code&gt;,&lt;/li&gt;
&lt;li&gt;or using &lt;code&gt;credential.helper&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For both options, you will need a PAT (personal access token) linked to your own account. I refer to this token as &lt;code&gt;GH_TOKEN&lt;/code&gt;. To create a PAT, you can go &lt;a href="https://github.com/settings/tokens"&gt;to your Github settings&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ NOTE: around April 2020, Github decided to prevent users from using Github Secrets names that begin with &lt;code&gt;GITHUB_&lt;/code&gt;. We used to use the name &lt;code&gt;GITHUB_PAT&lt;/code&gt; frequently in Github Actions readmes, I guess we will all have to update everything!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Solution 1: &lt;code&gt;url.insteadof&lt;/code&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;actions/checkout@v2 sets an 'Authorization' header for any git command issued from the checked-out repo. But this header uses the Github Actions &lt;code&gt;GITHUB_TOKEN&lt;/code&gt;, which is limited to the current repository; since we want to access another private repo, we have to disable that.&lt;/li&gt;
&lt;li&gt;instead, you can use &lt;code&gt;git config --global url.insteadof&lt;/code&gt;. The &lt;code&gt;GH_TOKEN&lt;/code&gt; Github Secret is a &lt;a href="https://github.com/settings/tokens"&gt;Github personal token&lt;/a&gt; that has the 'repo' scope (full control of private repositories).&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: when using git over https with a token on &lt;code&gt;https://github.com&lt;/code&gt;, the username doesn't matter, that's why we put &lt;code&gt;foo&lt;/code&gt; here.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;git config --local --remove-section http."https://github.com/"&lt;/span&gt;
    &lt;span class="s"&gt;git config --global url."https://foo:${GH_TOKEN}@github.com/your-org".insteadOf "https://github.com/your-org"&lt;/span&gt;
  &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;GH_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.GH_TOKEN }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
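&lt;p&gt;To see what this rewrite actually does without contacting Github, you can ask git to expand a remote URL locally: &lt;code&gt;git ls-remote --get-url&lt;/code&gt; applies any &lt;code&gt;insteadOf&lt;/code&gt; rule and exits without talking to the remote. Here, &lt;code&gt;DUMMY&lt;/code&gt; stands in for the real token:&lt;/p&gt;

```shell
dir=$(mktemp -d)
cd "$dir"
git init -q .
# Same insteadOf rule as in the workflow step, with a placeholder token:
git config url."https://foo:DUMMY@github.com/your-org".insteadOf "https://github.com/your-org"
git remote add origin https://github.com/your-org/terraform-modules.git
# --get-url expands the URL, applying insteadOf, without contacting the remote:
git ls-remote --get-url origin
# prints https://foo:DUMMY@github.com/your-org/terraform-modules.git
```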



&lt;h3&gt;
  
  
  Solution 2: &lt;code&gt;credential.helper&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;When &lt;code&gt;git config url.insteadof&lt;/code&gt; does not work, you can try using &lt;code&gt;git credential.helper&lt;/code&gt; instead. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;cat &amp;lt;&amp;lt;EOF &amp;gt; creds&lt;/span&gt;
      &lt;span class="s"&gt;#!/bin/sh&lt;/span&gt;
      &lt;span class="s"&gt;echo protocol=https&lt;/span&gt;
      &lt;span class="s"&gt;echo host=github.com&lt;/span&gt;
      &lt;span class="s"&gt;echo username=foo&lt;/span&gt;
      &lt;span class="s"&gt;echo password=${GH_TOKEN}&lt;/span&gt;
    &lt;span class="s"&gt;EOF&lt;/span&gt;
    &lt;span class="s"&gt;sudo install creds /usr/local/bin/creds&lt;/span&gt;
    &lt;span class="s"&gt;git config --global credential.helper "creds"&lt;/span&gt;
  &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;GH_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.GH_TOKEN }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
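&lt;p&gt;Under the hood, git runs the helper as &lt;code&gt;creds get&lt;/code&gt; and parses the &lt;code&gt;key=value&lt;/code&gt; lines it prints on stdout. You can simulate that exchange by hand (&lt;code&gt;DUMMY_TOKEN&lt;/code&gt; replaces the real secret):&lt;/p&gt;

```shell
# Recreate the helper script from the workflow step above:
printf '%s\n' '#!/bin/sh' \
  'echo protocol=https' \
  'echo host=github.com' \
  'echo username=foo' \
  'echo password=DUMMY_TOKEN' > creds
chmod +x creds

# git invokes the helper with a "get" argument and reads the key=value lines back:
./creds get
```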



&lt;h3&gt;
  
  
  Solution 2 bis: &lt;code&gt;credential.helper&lt;/code&gt; with a Github Action
&lt;/h3&gt;

&lt;p&gt;Same as solution 2 but wrapped in a neat Github Action &lt;a href="https://github.com/marketplace/actions/setup-git-credentials"&gt;setup-git-credentials&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This action stores the credentials in the file &lt;code&gt;$XDG_CONFIG_HOME/git/credentials&lt;/code&gt; and configures git to use it by calling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git config &lt;span class="nt"&gt;--global&lt;/span&gt; credential.helper store/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The action also makes sure all &lt;code&gt;ssh&lt;/code&gt; URLs are rewritten to &lt;code&gt;https&lt;/code&gt;. The 'credentials' field must be of the form &lt;a href="https://foo:%24GH_TOKEN@github.com/"&gt;https://foo:$GH_TOKEN@github.com/&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fusion-engineering/setup-git-credentials@v2&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;credentials&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://foo:{{secrets.GH_TOKEN}}@github.com&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
    <item>
      <title>Learning Kubernetes Controllers</title>
      <dc:creator>Maël Valais</dc:creator>
      <pubDate>Wed, 22 Apr 2020 09:58:00 +0000</pubDate>
      <link>https://dev.to/maelvls/learning-kubernetes-controllers-496j</link>
      <guid>https://dev.to/maelvls/learning-kubernetes-controllers-496j</guid>
      <description>&lt;p&gt;Kubernetes' extensibility is probably its biggest strength. Controllers and CRDs are all over the place. But finding the right information to begin writing a controller isn't easy due to the sheer amount of tribal knowledge scattered everywhere. This post intends to help you start with controllers.&lt;/p&gt;




&lt;p&gt;Let us begin with some terminology:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;controller&lt;/strong&gt;: a single loop that watches some objects. We often refer to this loop as "controller loop" or "sync loop" or "reconcile loop".&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;controller binary&lt;/strong&gt; is a binary that runs one or multiple sync loops. We often refer to it as "controllers".&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CRD&lt;/strong&gt; (Custom Resource Definition) is a simple YAML manifest that describes a custom object; for example, &lt;a href="https://github.com/jetstack/cert-manager/blob/a04d2f0935/deploy/crds/crd-orders.yaml#L2"&gt;this CRD&lt;/a&gt; defines the acme.cert-manager.io/v1alpha3 Order resource. After applying this CRD to a Kubernetes cluster, you can apply manifests that have the kind "Order".&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: CRDs and controllers are decoupled. You can apply a CRD manifest without having any controller binary running. It works both ways: you can also have a controller binary running that doesn't require any custom objects. Traefik is a controller binary that relies on built-in Service objects.&lt;/p&gt;

&lt;p&gt;Note: the "CRD" manifest is just a schema. It doesn't carry any logic (except for the basic validation the apiserver does). The actual logic happens in the controller binary.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;operator&lt;/strong&gt;: the term "operator" is often used to mean a controller binary with its CRDs, for example the &lt;a href="https://github.com/elastic/cloud-on-k8s"&gt;elastic operator&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Here are the links that I would give to anyone interested in writing their own controller:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/kubernetes/community/blob/712590c108bd4533b80e8f2753cadaa617d9bdf2/contributors/devel/sig-api-machinery/controllers.md"&gt;sig-api-machinery/controllers.md&lt;/a&gt; gives a good intuition as to what a "controller" is:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;A Kubernetes controller is an active reconciliation process. That is, it watches some object for the world's desired state, and it watches the world's actual state, too. Then, it sends instructions to try and make the world's current state be more like the desired state.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Note: the client-go's informers and listers and workqueue are not mandatory for writing a controller: you can just rely on client-go's &lt;code&gt;Watch&lt;/code&gt; primitive to reconcile state. The informers and workqueue add important scalability and reliability features but these also come with the cost of heavy abstractions. Use client-go's &lt;code&gt;Watch&lt;/code&gt; first to have a sense of what it can offer, and then try out informers and workqueue.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://book.kubebuilder.io/quick-start.html"&gt;Kubebuilder book&lt;/a&gt; is a nice starting point. Kubebuilder uses code generation a lot and that's what most controllers use nowadays (Rancher uses a somehow forked version of controller-runtime and controller-tools, &lt;a href="https://github.com/rancher/wrangler"&gt;Wrangler&lt;/a&gt;, that also generates code but with their own "style" – for example, simple flat interfaces instead of &lt;a href="https://github.com/kubernetes/client-go"&gt;client-go&lt;/a&gt;'s deeply nested interfaces that don't feel like Go).&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md"&gt;Kubernetes API conventions&lt;/a&gt; is an amazing document. It summarizes a lot of the "tribal knowledge" around naming and how sync loops are conceived and what they mean by "level-based behaviour".&lt;/li&gt;
&lt;li&gt;Github search "&lt;a href="https://github.com/search?q=language%3Ayaml+language%3Ago+kubernetes+controllers"&gt;language:yaml language:go kubernetes controllers&lt;/a&gt;", tons of nice examples of controllers&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/jetstack/cert-manager"&gt;cert-manager&lt;/a&gt;'s codebase is a nice controller to take a look at&lt;/li&gt;
&lt;li&gt;ClusterAPI &lt;a href="https://github.com/kubernetes-sigs/cluster-api/blob/master/docs/proposals/20190610-machine-states-preboot-bootstrapping.md"&gt;proposals&lt;/a&gt; and codebase (&lt;a href="https://github.com/kubernetes-sigs/cluster-api"&gt;capi&lt;/a&gt;, &lt;a href="https://github.com/kubernetes-sigs/cluster-api-provider-aws"&gt;capa&lt;/a&gt;); we took a lot of inspiration from what they do&lt;/li&gt;
&lt;li&gt;The ClusterAPI &lt;a href="https://docs.google.com/document/d/1fQNlqsDkvEggWFi51GVxOglL2P1Bvo2JhZlMhm2d-Co/edit#"&gt;Meeting notes&lt;/a&gt; contain a ton of useful information on Machine, MachinePool... (it's crazy how much I learned from them).&lt;/li&gt;
&lt;li&gt;The Kubernetes &lt;code&gt;status&lt;/code&gt; field is tricky. You can take a look at "&lt;a href="https://maelvls.dev/kubernetes-conditions/"&gt;conditions vs. phases vs. reasons&lt;/a&gt;".&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://github.com/kubernetes/kubernetes"&gt;Kubernetes codebase&lt;/a&gt; itself is also a very nice read. It might feel overwhelming at first; I invite you to take a look at a few of the following sync loops contained in the &lt;code&gt;kube-controller-manager&lt;/code&gt;, &lt;code&gt;kube-scheduler&lt;/code&gt; and &lt;code&gt;kubelet&lt;/code&gt;. Since each sync loop reads or updates different objects, I also detail which objects are updated or created by each sync loop:&lt;/li&gt;
&lt;/ul&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;binary&lt;/th&gt;
&lt;th&gt;sync loop = component&lt;/th&gt;
&lt;th&gt;reads&lt;/th&gt;
&lt;th&gt;creates&lt;/th&gt;
&lt;th&gt;updates&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;kube-controller-manager&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/kubernetes/kubernetes/blob/5bac42bf/pkg/controller/deployment/deployment_controller.go#L560-L649"&gt;&lt;code&gt;syncDeployment&lt;/code&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Pod&lt;/td&gt;
&lt;td&gt;ReplicaSet&lt;/td&gt;
&lt;td&gt;Deployment&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;kube-controller-manager&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/kubernetes/kubernetes/blob/5bac42bf/pkg/controller/replicaset/replica_set.go#L653-L721"&gt;&lt;code&gt;syncReplicaSet&lt;/code&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Pod&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;kubelet&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/kubernetes/kubernetes/blob/5bac42bf/pkg/kubelet/status/status_manager.go#L514-L567"&gt;&lt;code&gt;syncPod&lt;/code&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Pod&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;kube-scheduler&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/kubernetes/kubernetes/blob/5bac42bf/pkg/scheduler/scheduler.go#L589-L762"&gt;&lt;code&gt;scheduleOne&lt;/code&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Pod&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;kubelet&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/kubernetes/kubernetes/blob/5bac42bff9bfb9dfe0f2ea40f1c80cac47fc12b2/pkg/kubelet/kubelet_node_status.go#L374-L391"&gt;&lt;code&gt;syncNodeStatus&lt;/code&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Node&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;ul&gt;
&lt;li&gt;The podcast episode "&lt;a href="https://changelog.com/gotime/105"&gt;Gotime #105 – Kubernetes and Cloud Native&lt;/a&gt;" (Oct. 2019) with Joe Beda (initiator of Kubernetes) and Kris Nova is very interesting and tells us more about the genesis of the project, with things like why Kubernetes is written in Go and why client-go feels like Java. For example:
&amp;gt; &lt;strong&gt;Kris Nova:&lt;/strong&gt; I think there’s a fourth role. I think there’s what we called in the book an infrastructure engineer. These are effectively the folks like Joe and myself. These are the folks who are writing software to manage and mutate infrastructure behind the scenes. Folks who are contributing to Kubernetes, folks who are writing the software for the operators, folks who are writing admission controller implementations and so forth… I think it’s this very new engineer role, that we haven’t seen until we’ve started having – effectively, as Joe likes to put it, a platform-platform.&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://github.com/operator-framework/operator-sdk"&gt;operator-sdk&lt;/a&gt; (RedHat) is a package that aims at helping dealing with the whole scafollding when writing a sync loop. It relies on &lt;a href="https://github.com/kubernetes-sigs/controller-runtime"&gt;controller-runtime&lt;/a&gt;. I don't use either of them but taking a look at these projects helps getting more understanding about the challenges (read: boilerplate) that comes when writing controllers. I personally write all the controller-related boilerplate myself (creating the queue, setting event handlers, running the loop itself...).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And a final note: CRDs are not necessary for writing a controller! You can write a tiny controller that watches the "standard" Kubernetes objects. That's exactly what ingress controllers do: they watch for Service objects.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Update 23 April 2020&lt;/strong&gt;: I added a quote from Kris Nova! 😁&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Update 2 May 2020&lt;/strong&gt;: Rephrased the "terminology" bullet points to make them clearer and added a note on CRD vs. controller binary.&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>The Client-go Transitive Hell</title>
      <dc:creator>Maël Valais</dc:creator>
      <pubDate>Wed, 15 Apr 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/maelvls/the-client-go-transitive-hell-10j1</link>
      <guid>https://dev.to/maelvls/the-client-go-transitive-hell-10j1</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;⚠️ I'm not sure this is a transitive issue. It might just not be due to transitivity at all!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Looks like the Kubernetes people chose to break the client-go API without bumping the major version (with a new import path, e.g. &lt;code&gt;k8s.io/client-go/v13&lt;/code&gt;) when adding support for the &lt;code&gt;context.Context&lt;/code&gt; argument that comes with Kubernetes v1.18. Darren Shepherd reported the &lt;a href="https://issues.k8s.io/88472"&gt;issue&lt;/a&gt; in February 2020. We &lt;a href="https://github.com/kubernetes/client-go#compatibility-your-code---client-go"&gt;were warned&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The v0.x.y tags indicate that go APIs may change in incompatible ways in different versions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At first, I thought that it would not affect us at Ori since we don’t use any extra dependency that relies on client-go — so no transitive dependencies on client-go, we are the sole user of it. We then began working on some tooling depending on the API of our main project. And the transitive hell began.&lt;/p&gt;

&lt;p&gt;Here is what the error looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# k8s.io/client-go/rest
../../../go/pkg/mod/k8s.io/client-go@v11.0.0+incompatible/rest/request.go:598:31: not enough arguments in call to watch.NewStreamWatcher
        have (*versioned.Decoder)
        want (watch.Decoder, watch.Reporter)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On top of that, you might also have a version mismatch between &lt;code&gt;k8s.io/api&lt;/code&gt; and &lt;code&gt;k8s.io/apimachinery&lt;/code&gt; with the following error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;../../../go/pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/clientcmd/api/v1/conversion.go:52:12: s.DefaultConvert undefined (type conversion.Scope has no field or method DefaultConvert)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you cannot upgrade to v0.18.* (e.g. you have a transitive dependency that still relies on v0.17.4), a workaround is to set client-go to use the latest pre-v1.18 version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt; require (
&lt;span class="gd"&gt;-    k8s.io/client-go v0.18.1
-    k8s.io/apimachinery v0.18.1
-    k8s.io/api v0.18.1 //indirect
&lt;/span&gt;&lt;span class="gi"&gt;+    k8s.io/client-go v0.17.4
+    k8s.io/apimachinery v0.17.4
+    k8s.io/api v0.17.4 //indirect
&lt;/span&gt; )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;v0.17.4&lt;/code&gt; version is the last version of apimachinery and api that stays compatible with client-go pre-v1.18.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Long-term&lt;/strong&gt;: you want to upgrade to client-go v0.18.* as soon as possible. If some of your dependencies still require a client-go with no &lt;code&gt;context.Context&lt;/code&gt; argument, ask their maintainers to upgrade (if they are open-source projects) and hopefully that will work out... but then, anyone relying on those projects will in turn have their builds broken.&lt;/p&gt;
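&lt;p&gt;To spot which dependencies are the ones still pinning an old client-go, &lt;code&gt;go mod graph&lt;/code&gt; prints every edge of the build graph as "consumer dependency@version" pairs, so a simple grep shows who requires which client-go:&lt;/p&gt;

```shell
# Run at the root of your module: each line names a module and the
# client-go version it requires, so old pins stand out immediately.
go mod graph | grep ' k8s.io/client-go@' | sort -u
```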

&lt;p&gt;The project still has v11, v12... tags, but they stopped tagging new major versions after &lt;code&gt;v11.0.0&lt;/code&gt; (there is no &lt;code&gt;v18.0.0&lt;/code&gt;) because these major versions don't work with Go modules: the client-go project doesn't follow the "&lt;a href="https://research.swtch.com/vgo-import"&gt;semantic import versioning&lt;/a&gt;" rule. You can't do:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% go get k8s.io/client-go@v12.0.0
go get k8s.io/client-go@v12.0.0: k8s.io/client-go@v12.0.0: invalid version: module contains a go.mod file, so major version must be compatible: should be v0 or v1, not v12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Instead, the Kubernetes team maintains another set of tags that begin with &lt;code&gt;v0.*&lt;/code&gt; (e.g., &lt;code&gt;v0.18.0&lt;/code&gt;). Just a clever way of escaping semantic import versioning, but all these different versions make it very confusing...&lt;/p&gt;

&lt;p&gt;Oh, and when you use the &lt;code&gt;kubernetes-1.17.4&lt;/code&gt; tag, it redirects to the &lt;code&gt;v0.17.4&lt;/code&gt; tag (my guess is that it is inferred by &lt;code&gt;go get&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% go get k8s.io/client-go@kubernetes-1.17.4
go: k8s.io/client-go kubernetes-1.17.4 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; v0.17.4
go: downloading k8s.io/client-go v0.17.4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And to make things even more confusing, these tags (kubernetes-1.17.4, v0.17.4 and v12.0.0) are not set on the master branch; instead, they all live on headless branches.&lt;/p&gt;

&lt;p&gt;This issue is a reminder that we (Kubernetes hackers who write controllers for a living) heavily rely on the “good will” of the Kubernetes team: decisions like these affect anyone relying on Kubernetes "as a platform"... 🤔&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How do packets find their way back?</title>
      <dc:creator>Maël Valais</dc:creator>
      <pubDate>Mon, 13 Apr 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/maelvls/how-do-packets-find-their-way-back-2mcm</link>
      <guid>https://dev.to/maelvls/how-do-packets-find-their-way-back-2mcm</guid>
      <description>&lt;p&gt;In a previous post "&lt;a href="https://dev.to/packets-eye-kubernetes-service/"&gt;The Packet's-Eye View of a Kubernetes Service&lt;/a&gt;", I studied how traffic flows in when using Kubernetes Services. In the last diagram of that post, I could not clearly see how traffic could make its way back to the user. In this article, I will try to understand how packets are able to flow back to the user and where stateless rewriting happens.&lt;/p&gt;

&lt;p&gt;In the following diagram, we can see a packet coming from a user, being rewritten by Google's VPC firewall, and finally arriving at the VM "node 1".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DGcUkpiY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/how-do-packets-come-back/dnat-google-vpc-how-comes-back.svg" class="article-body-image-wrapper"&gt;&lt;img alt="Packet coming from a user hitting one going through Google's VPC" src="https://res.cloudinary.com/practicaldev/image/fetch/s--DGcUkpiY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/how-do-packets-come-back/dnat-google-vpc-how-comes-back.svg" width="461" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, how come the packet can come back &lt;del&gt;and does it use conntrack&lt;/del&gt; and does Google's firewall have to remember some state?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Update 14th April:&lt;/strong&gt; I initially thought that the conntrack kernel module would not register anything when using DNAT. &lt;a href="https://twitter.com/networkop1"&gt;@networkop1&lt;/a&gt; showed me that conntrack registers the connection even for stateless DNATing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://netfilter.org/documentation/HOWTO/netfilter-hacking-HOWTO-3.html#ss3.3"&gt;conntrack&lt;/a&gt; is a part of the netfilter suite in the Linux kernel. It is in charge of remembering connections that are forwarded. The initial packet hits the iptables machinery and conntrack remembers it so that further packets don't need to go through iptables again. You can list the tracked connections using the &lt;a href="https://manpages.debian.org/testing/conntrack/conntrack.8.en.html"&gt;conntrack(8)&lt;/a&gt; tool. I mention it in "&lt;a href="https://dev.to/debugging-kubernetes-networking/"&gt;Debugging Kubernetes Networking&lt;/a&gt;".&lt;/p&gt;
&lt;/blockquote&gt;
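&lt;p&gt;As a side note, here is how you can inspect the conntrack table on a node (a sketch; it requires root and the &lt;code&gt;conntrack&lt;/code&gt; tool to be installed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% conntrack -L                    # list all tracked connections
% conntrack -L -p tcp --dport 80  # only TCP flows with destination port 80
% conntrack -E                    # watch connection events as they happen
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;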

&lt;p&gt;Let us dive a bit deeper and add the "response" packets. For the following diagram, I used the excellent &lt;a href="https://textik.com/"&gt;textik&lt;/a&gt; ASCII drawing tool.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;           D-NAT (dest-based NAT, also called port-forwarding)

+--------------------------------------------------------------------------+
|   src: 90.76.45.149:32345                       src: 35.211.248.124:80   |
|   dst: 35.211.248.124:80                        dst: 90.76.45.149:32345  |
+------------------------------90.76.45.149--------------------------------+
               |                 (user)                     |
               |                                            |
+--------------------------------------------------------------------------+
|              |                one-to-one                  |              |
|              v              port forwarding               |              |
|   src: 90.76.45.149:32345          =          - src: 10.142.0.62:80      |
| - dst: 35.211.248.124:80      no need for     + src: 35.211.248.124:80   |
| + dst: 10.142.0.62:80         conntrack to      dst: 90.76.45.149:32345  |
|              |                 remember!                  ^              |
|              |                (stateless)                 |              |
|              |                                            |              |
+--------------|--------------35.211.248.124----------------|--------------+
               |              (Google's VPC)                |
               |                                            |
               |                                            |
+--------------|--------------------------------------------|--------------+
|              v             userland process               |              |
|   src: 90.76.45.149:32345      response         src: 10.142.0.62:80      |
|   dst: 10.142.0.62:80      ----------------&amp;gt;    dst: 90.76.45.149:32345  |
|                                                                          |
+-------------------------------10.142.0.62--------------------------------+
                                (VM in VPC)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;By reading through the diagram, we can see that the packet is rewritten by Google's firewall using DNAT: the destination is replaced by a fixed IP, that of the VM.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Why do I say "packets" when what I should really say is "segments"? That's because I don't really know anyone who uses this strict terminology. Outside of the kernel and TCP/IP stack implementors, who actually cares about the per-layer names for these "units"? And I enjoy the word "packet" more than "segment"!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now, why do I care about Google's firewalls storing state? That's because if some state about each connection has to be remembered, the firewall becomes harder to distribute horizontally, which makes it harder to scale.&lt;/p&gt;

&lt;p&gt;As we can see on the diagram, the firewall does not need to remember anything: it is just a static one-to-one relation between &lt;code&gt;10.142.0.62&lt;/code&gt; and &lt;code&gt;35.211.248.124&lt;/code&gt;.&lt;/p&gt;
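&lt;p&gt;On Linux, such a one-to-one mapping could be sketched with two iptables rules (an illustration only; Google's VPC firewall is not implemented with iptables):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Rewrite the destination of incoming packets (DNAT)...
iptables -t nat -A PREROUTING  -d 35.211.248.124 -j DNAT --to-destination 10.142.0.62
# ...and the source of the replies on the way out (SNAT)
iptables -t nat -A POSTROUTING -s 10.142.0.62 -j SNAT --to-source 35.211.248.124
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;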




&lt;p&gt;Here is what I want to remember from this post:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Outgoing traffic from your broadband modem router has to be SNATed (source-based NAT). The router needs to keep track of outgoing connections using conntrack.&lt;/li&gt;
&lt;li&gt;Incoming traffic from the Internet to a Google Cloud VM has to go through the VPC firewall. The packet rewriting is very fast and very scalable since it only uses DNAT, which means no need to remember anything.&lt;/li&gt;
&lt;li&gt;Most packet forwarding in Kubernetes relies on stateless DNATing (e.g. &lt;code&gt;hostPort&lt;/code&gt; or &lt;code&gt;nodePort&lt;/code&gt;). Some parts of Kubernetes rely on stateful SNAT rewriting, for example when you use &lt;code&gt;externalTrafficPolicy: Cluster&lt;/code&gt;, which is the default policy for a Service. The following diagram shows where this rewriting happens (extracted from the last diagram in "&lt;a href="https://dev.to/packets-eye-kubernetes-service/"&gt;The Packet's-Eye View of a Kubernetes Service&lt;/a&gt;"):&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wwU_zv0l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/how-do-packets-come-back/kubernetes-snat-cluster-ip.svg" class="article-body-image-wrapper"&gt;&lt;img alt="Packet's source is rewritten (SNAT) because of the 'policy: Cluster' that is set in the Service." src="https://res.cloudinary.com/practicaldev/image/fetch/s--wwU_zv0l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/how-do-packets-come-back/kubernetes-snat-cluster-ip.svg" width="283" height="89"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The evolution of my home office from 2019 to 2022</title>
      <dc:creator>Maël Valais</dc:creator>
      <pubDate>Mon, 30 Mar 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/maelvls/my-home-office-setup-in-2020-220f</link>
      <guid>https://dev.to/maelvls/my-home-office-setup-in-2020-220f</guid>
      <description>&lt;p&gt;In this post, I document the changes to my office setup through time.&lt;/p&gt;

&lt;h2&gt;
  
  
  2022 Update 2
&lt;/h2&gt;

&lt;p&gt;I decided to move my office out of my home! I now work from a co-working space in Toulouse called "HarryCow".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CX8JOHb6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/evolution-of-my-home-office/harrycow-2022-03-03.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CX8JOHb6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/evolution-of-my-home-office/harrycow-2022-03-03.jpg" alt="" width="880" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2022 Update
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZuR-xRYx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/evolution-of-my-home-office/mael-home-office-desk-2022.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZuR-xRYx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/evolution-of-my-home-office/mael-home-office-desk-2022.jpg" alt="" width="880" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I added a second monitor. It is the same as the one I bought in 2019.&lt;/p&gt;

&lt;h2&gt;
  
  
  2021 Second Update
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HlNC9eOm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/evolution-of-my-home-office/mael-home-office-desk-2021-06-18.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HlNC9eOm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/evolution-of-my-home-office/mael-home-office-desk-2021-06-18.jpg" alt="" width="880" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see in the picture above, I went from a MacBook Pro to a custom PC I built!! I wanted a machine built for Linux with as much CPU power as I could afford. I made a &lt;a href="https://gist.github.com/maelvls/3028862dd516005403cf728f4705f4bb"&gt;gist&lt;/a&gt; with all the tips I learned while switching from macOS to Linux.&lt;/p&gt;

&lt;p&gt;Here is the part list (also available on &lt;a href="https://pcpartpicker.com/list/FcxPkX"&gt;pcpartpicker&lt;/a&gt;):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Part&lt;/th&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Price (01 May 2021)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;CPU&lt;/td&gt;
&lt;td&gt;
&lt;del&gt;AMD Ryzen 9 5900X 3.4 GHz 12-Core Processor&lt;/del&gt; (I sent it back)&lt;/td&gt;
&lt;td&gt;€ 708 (street price)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CPU&lt;/td&gt;
&lt;td&gt;AMD Ryzen 9 5950X 3.4 GHz 16-Core Processor (bought on AMD.com at MSRP)&lt;/td&gt;
&lt;td&gt;€ 795 (MSRP)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CPU Cooler&lt;/td&gt;
&lt;td&gt;Noctua NH-D15 CHROMAX.BLACK 82.52 CFM CPU Cooler&lt;/td&gt;
&lt;td&gt;€ 99&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Motherboard&lt;/td&gt;
&lt;td&gt;Gigabyte X570 AORUS MASTER ATX AM4 Motherboard&lt;/td&gt;
&lt;td&gt;€ 359&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory&lt;/td&gt;
&lt;td&gt;G.Skill Trident Z RGB 32 GB (2 x 16 GB) DDR4-3600 CL16 Memory&lt;/td&gt;
&lt;td&gt;€ 304&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage&lt;/td&gt;
&lt;td&gt;Samsung 980 Pro 1 TB M.2-2280 NVME Solid State Drive&lt;/td&gt;
&lt;td&gt;€ 219&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Video Card&lt;/td&gt;
&lt;td&gt;
&lt;del&gt;Gigabyte Radeon RX 6700 XT 12 GB EAGLE&lt;/del&gt; (I sent it back)&lt;/td&gt;
&lt;td&gt;€ 858 (street price)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Video Card&lt;/td&gt;
&lt;td&gt;AMD Radeon RX 6800 16 GB (bought on AMD.com at MSRP)&lt;/td&gt;
&lt;td&gt;€ 579 (MSRP)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Case&lt;/td&gt;
&lt;td&gt;Fractal Design Define 7 Compact ATX Mid Tower Case&lt;/td&gt;
&lt;td&gt;€ 133&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Power Supply&lt;/td&gt;
&lt;td&gt;be quiet! Dark Power Pro 11 750 W 80+ Platinum Semi-modular ATX Power Supply&lt;/td&gt;
&lt;td&gt;€ 243&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Case Fan × 4&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.bequiet.com/en/casefans/717"&gt;be quiet! Silent Wings 3 59.5 CFM 140 mm Fan PWM&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;€ 29 × 4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Total&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;€ 2847&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IDy3cQiv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/evolution-of-my-home-office/mael-home-office-desk-2021-06-18-close-up-pc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IDy3cQiv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/evolution-of-my-home-office/mael-home-office-desk-2021-06-18-close-up-pc.jpg" alt="" width="880" height="1173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2021 Update
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dhVvbe4E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/evolution-of-my-home-office/mael-home-office-desk-2021.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dhVvbe4E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/evolution-of-my-home-office/mael-home-office-desk-2021.jpg" alt="My home office in 2020" width="880" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I switched from the Apple Magic AZERTY keyboard to a QWERTY mechanical keyboard: the &lt;a href="https://www.amazon.com/DURGOD-Mechanical-Interface-Tenkeyless-Anti-Ghosting/dp/B07B8J6C3C"&gt;Durgod Taurus K320 TKL&lt;/a&gt;. I chose the Cherry MX Brown switches, and I love them. It feels so much better to type on this keyboard. Moving from AZERTY to QWERTY was the hardest part: it took about 3 months to get as comfortable as I was on the AZERTY layout. I really wanted to stop working on a "second class" keyboard layout that has poor shortcut support in most apps. The other adjustment I had to make was to go into the macOS settings and swap the Windows key with the Alt key so that the ⌘ and ⌥ keys are in the right order.&lt;/li&gt;
&lt;li&gt;Two &lt;a href="https://www.amazon.com/UTEBIT-Upgraded-Articulating-Friction-Adjustable/dp/B07H77KB7R/ref=sr_1_6?dchild=1&amp;amp;keywords=UTEBIT+Desk+Mount+Metal+Tabletop+Light+Stand+Adjustable&amp;amp;qid=1615567111&amp;amp;sr=8-6"&gt;UTEBIT articulating arms&lt;/a&gt; that hold (1) an old iPhone 7 that I use as a secondary screen using Duet and (2) a Razer Kiyo camera.&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://www.razer.com/streaming-cameras/razer-kiyo"&gt;Razer Kiyo&lt;/a&gt; camera (USB 3). I wish I had a proper DSLR camera though: the Kiyo needs a ton of light to have a good image quality.&lt;/li&gt;
&lt;li&gt;Two &lt;a href="https://www.amazon.com/UTEBIT-Shooting-Adjustable-Aluminum-Tabletop/dp/B08PYY95LJ/ref=sr_1_3?dchild=1&amp;amp;keywords=UTEBIT+Desk+Mount+Metal+Tabletop+Light+Stand+Adjustable&amp;amp;qid=1615567111&amp;amp;sr=8-3"&gt;UTEBIT tabletop arms&lt;/a&gt; that I use to hold the flood lights.&lt;/li&gt;
&lt;li&gt;Two &lt;a href="https://www.amazon.com/Neewer-Pieces-Bi-color-Video-Light/dp/B06XW3B81V"&gt;NEEWER bi-color 660 flood lights&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;A set of UTEBIT 1/4 to 3/8 screw adapters.&lt;/li&gt;
&lt;li&gt;24 panels of the &lt;a href="https://www.thomann.de/gb/the_takustik_was7_absorber_8erset.htm"&gt;t.akustik WAS-7 Absorber&lt;/a&gt;. They greatly reduced the amount of echo/reverb in the room and allow me to put the microphone a bit further away without losing sound quality.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  2020 Update
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---mht5w0j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/evolution-of-my-home-office/mael-home-office-desk-2020.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---mht5w0j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/evolution-of-my-home-office/mael-home-office-desk-2020.jpg" alt="My home office in 2020" width="880" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.bluedesigns.com/products/compass"&gt;Yeti Compass&lt;/a&gt; boom arm
&amp;gt; Note that the boom arm is meant to carry the shock mount plus the mic, and since I only mounted the Yeti mic itself, the spring mechanism is too tight and the arm tends to drift up. I loosened the screw in the base of the arm as far as it goes, but the tension is still too high.&lt;/li&gt;
&lt;li&gt;Apple Magic Keyboard. I also use a &lt;a href="https://fr.sharkoon.com/product/PureWriter%20TKL"&gt;Sharkoon PureWriter TKL&lt;/a&gt; from time to time when I want to bother my colleagues with the clank-clank sound of the red switches.&lt;/li&gt;
&lt;li&gt;USB-C hub Ugreen and AUKEY USB3 hub.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  2019 Office
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3dLFg3cQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/evolution-of-my-home-office/mael-home-office-desk-2019.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3dLFg3cQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/evolution-of-my-home-office/mael-home-office-desk-2019.jpg" alt="My home office in 2020" width="880" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.amazon.com/LG-27UL850-W-Display-DisplayHDR-Connectivity/dp/B07MKT1W65/ref=cm_cr_arp_d_product_top?ie=UTF8"&gt;LG 27UL850-W 27 inches 4K monitor&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://griffintechnology.com/products/elevator"&gt;Griffin Elevator stand&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://www.amazon.co.uk/FlexiSpot-Adjustable-Electric-Standing-Automatic/dp/B071G2NSRP/ref=sr_1_fkmr0_1?keywords=flexispot%2BE5B&amp;amp;qid=1563776981&amp;amp;s=electronics&amp;amp;sr=8-1-fkmr0&amp;amp;th=1"&gt;Flexispot standing desk E5B&lt;/a&gt; (B = black) with a €25 wood board I mounted on top.&lt;/li&gt;
&lt;li&gt;I use the &lt;a href="https://www.beatsbydre.com/headphones/solo3-wireless"&gt;Beats Solo 3&lt;/a&gt; headphones. I only use it for audio output. I never use the mic in Bluetooth mode since it greatly degrades my colleague's listening experience.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.bluedesigns.com/products/yeti"&gt;Blue Yeti&lt;/a&gt; USB microphone
&amp;gt; Why a standalone mic just for Zoom? One very important aspect of a remote-only work setup is being properly understood. I take this very seriously: the sound quality of the mic I use every day influences how effective my meetings are. I also make a number of calls to potential candidates, which means I need to sound perfect.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.amazon.com/Logitech-Wireless-Solar-Kebyoard-iPhone/dp/B007VL8Y2C"&gt;Logitech K760&lt;/a&gt; (wireless solar keyboard).&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.amazon.co.uk/Logitech-Master-Wireless-Bluetooth-Windows/dp/B00ULNAOMA"&gt;Logitech MX Master&lt;/a&gt; Just an excellent mouse&lt;/li&gt;
&lt;li&gt;AUKEY mousepad XXL.&lt;/li&gt;
&lt;li&gt;Some random chair (I wish I could afford the Herman Miller Aeron)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And finally, here is a picture from the backyard. 🙂&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LgOkcYQI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/evolution-of-my-home-office/at-maels.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LgOkcYQI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/evolution-of-my-home-office/at-maels.jpg" alt="Backyard" width="880" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Migrating from GKE to Civo's K3s</title>
      <dc:creator>Maël Valais</dc:creator>
      <pubDate>Sun, 22 Mar 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/maelvls/migrating-from-gke-to-civo-s-k3s-9b5</link>
      <guid>https://dev.to/maelvls/migrating-from-gke-to-civo-s-k3s-9b5</guid>
      <description>&lt;p&gt;To learn and play with Kubernetes, I keep a "playground" cluster to try things on it (helm files I use are &lt;a href="https://github.com/maelvls/k.maelvls.dev"&gt;here&lt;/a&gt;). Since 2019, I have been using GKE (Google's managed Kubernetes service), which works great with the one-year \$300 credit that you get initially. A few days ago, reality hit me hard with this message:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CX-smkgA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/from-gke-to-civo-k3s/2-days-free-trial.png" class="article-body-image-wrapper"&gt;&lt;img alt="Only 2 days left on my GCP 1-year trial" src="https://res.cloudinary.com/practicaldev/image/fetch/s--CX-smkgA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/from-gke-to-civo-k3s/2-days-free-trial.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I had only 2 days to find a plan and migrate everything away from GKE! My setup was only a single &lt;code&gt;n1-standard-1&lt;/code&gt; in &lt;code&gt;us-west-1&lt;/code&gt; with no network load balancer. But that was still around €15 a month, and I just didn't want to pay. I chose to migrate to Civo's managed K3s since it is in beta and I really wanted to try K3s.&lt;/p&gt;

&lt;h2&gt;
  
  
  Civo's Managed K3s
&lt;/h2&gt;

&lt;p&gt;Civo is a company that offers a public cloud as well as managed Kubernetes clusters. Their managed Kubernetes offer, named "&lt;a href="https://www.civo.com/kube100"&gt;KUBE100&lt;/a&gt;", is quite new (launched in mid-2019). Unlike most managed Kubernetes offerings like EKS or GKE, Civo went with Rancher's &lt;a href="https://k3s.io/"&gt;K3s&lt;/a&gt; Kubernetes distribution.&lt;/p&gt;

&lt;p&gt;Compared to the standard Kubernetes distribution, K3s is way lighter: a simple embedded SQLite database, a single binary instead of four, and one VM can both host the control plane and run pods. With the traditional Kubernetes distribution, you have to run the control plane on a separate VM. Note that Civo has a whole blog post on "&lt;a href="https://www.civo.com/blog/k8s-vs-k3s"&gt;Kubernetes vs. K3s&lt;/a&gt;".&lt;/p&gt;
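&lt;p&gt;If you want to try K3s outside of Civo, the official one-line installer from the K3s docs gives you a single-binary cluster on any Linux VM (a sketch):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Installs the k3s binary and starts the server (control plane + node)
% curl -sfL https://get.k3s.io | sh -
# The kubeconfig ends up in /etc/rancher/k3s/k3s.yaml
% sudo k3s kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;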

&lt;p&gt;This lightness is what brought me here, along with the fact that K3s ships by default with Traefik &amp;amp; its own tiny Service type=LoadBalancer controller, which means you don't even need an expensive network load balancer like on GKE to expose a service to the internet.&lt;/p&gt;

&lt;p&gt;Of course, K3s has drawbacks and is probably meant for IoT-based clusters, but it's also perfect for playing around with Kubernetes! So I went ahead and created a two-node cluster:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bnmSlEjZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/from-gke-to-civo-k3s/civo-k3s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bnmSlEjZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://maelvls.dev/from-gke-to-civo-k3s/civo-k3s.png" alt="My K3s cluster on Civo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that since this is K3s, pods can also be scheduled on the master, so I could have gone with a single node.&lt;/p&gt;

&lt;h2&gt;
  
  
  Migrating ExternalDNS, cert-manager and Traefik
&lt;/h2&gt;

&lt;p&gt;That's the easy part. I just had to run &lt;a href="https://github.com/maelvls/k.maelvls.dev"&gt;./helm_apply&lt;/a&gt; which runs &lt;code&gt;helm upgrade --install&lt;/code&gt; for each helm chart I use.&lt;/p&gt;

&lt;p&gt;In the process, I decided to go with &lt;code&gt;*.k.maelvls.dev&lt;/code&gt; instead of &lt;code&gt;*.kube.maelvls.dev&lt;/code&gt;. The shorter, the better! I forgot to add that I still use Google's Cloud DNS; I have not found an easy alternative yet. Maybe just Cloudflare for now?&lt;/p&gt;

&lt;p&gt;After creating Traefik, I knew that its service would automatically be populated with the &lt;code&gt;status.loadBalancer&lt;/code&gt; field thanks to K3s' &lt;a href="https://github.com/rancher/k3s/blob/master/pkg/servicelb/controller.go"&gt;servicelb&lt;/a&gt;. Let's see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; traefik get services
NAME     TYPE          CLUSTER-IP        EXTERNAL-IP      PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;
traefik  LoadBalancer  192.168.255.134   91.211.152.190   443:32164/TCP,80:32684/TCP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Great! Now, Traefik will propagate this external IP to the ingresses, and ExternalDNS will use the &lt;code&gt;status.loadBalancer&lt;/code&gt; from these ingresses in order to set &lt;code&gt;A&lt;/code&gt; records.&lt;/p&gt;
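&lt;p&gt;For instance, an Ingress like this one is enough for ExternalDNS to create the &lt;code&gt;A&lt;/code&gt; record from the host field (a sketch; the minio names are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: minio
spec:
  rules:
  - host: minio.k.maelvls.dev   # ExternalDNS creates the A record for this host
    http:
      paths:
      - backend:
          serviceName: minio
          servicePort: 9000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;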

&lt;p&gt;If you want to know more about how "servicelb" works, you can take a look at &lt;a href="https://maelvls.dev/packets-eye-kubernetes-service/"&gt;The Packet's-Eye View of a Kubernetes Service&lt;/a&gt; where I describe how Akrobateo works (K3s' servicelb has the exact same behavior).&lt;/p&gt;

&lt;h2&gt;
  
  
  Migrating MinIO from the old to the new cluster
&lt;/h2&gt;

&lt;p&gt;I use &lt;a href="https://min.io/"&gt;minio&lt;/a&gt; for various uses. It is great if you want a S3-compatible storage solution. In order to migrate, I followed &lt;a href="https://www.scaleway.com/en/docs/how-to-migrate-object-storage-buckets-with-minio"&gt;this&lt;/a&gt;. It's quite painless:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; minio run a &lt;span class="nt"&gt;--generator&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;run-pod/v1 &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Never &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;alpine

% wget https://dl.minio.io/client/mc/release/linux-amd64/mc &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;install &lt;/span&gt;mc /usr/bin
% mc config host add old https://minio.kube.maelvls.dev AKIAIOSFODNN7EXAMPLE &lt;span class="s2"&gt;"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"&lt;/span&gt; &lt;span class="nt"&gt;--api&lt;/span&gt; S3v4
% mc config host add new http://minio:9000 AKIAIOSFODNN7EXAMPLE &lt;span class="s2"&gt;"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"&lt;/span&gt; &lt;span class="nt"&gt;--api&lt;/span&gt; S3v4

% mc &lt;span class="nb"&gt;ls &lt;/span&gt;old/         &lt;span class="c"&gt;# List buckets since I had to create manually each bucket.&lt;/span&gt;
% mc mb new/bucket1  &lt;span class="c"&gt;# Then create each bucket one by one.&lt;/span&gt;
% mc &lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nt"&gt;--recursive&lt;/span&gt; old/bucket1/ new/bucket1/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I also decided to change the access key &amp;amp; secret key. Again, quite painless. As mentioned in the &lt;a href="https://github.com/minio/minio/tree/master/docs/config"&gt;documentation&lt;/a&gt;, I changed the secret stored in Kubernetes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; minio edit secret minio
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and then I temporarily added the &lt;code&gt;MINIO_ACCESS_KEY_OLD&lt;/code&gt; and &lt;code&gt;MINIO_SECRET_KEY_OLD&lt;/code&gt; to the deployment by editing it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; minio edit deployment minio
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, the pods got recreated and MinIO picked up the new secret. Note: I also had to edit the deployment once more to remove the temporary &lt;code&gt;_OLD&lt;/code&gt; environment variables.&lt;/p&gt;
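&lt;p&gt;The same rotation can be done without opening an editor by using &lt;code&gt;kubectl set env&lt;/code&gt; (a sketch; the keys are the documentation's example values):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Temporarily expose the old credentials so MinIO can rotate them
kubectl -n minio set env deployment/minio \
  MINIO_ACCESS_KEY_OLD=AKIAIOSFODNN7EXAMPLE \
  MINIO_SECRET_KEY_OLD=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# Once the pods have restarted, remove the temporary variables (trailing dash)
kubectl -n minio set env deployment/minio MINIO_ACCESS_KEY_OLD- MINIO_SECRET_KEY_OLD-
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;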




&lt;p&gt;To recap, the whole migration was painless. The only data I migrated was MinIO. Note that I didn't have any SLA to comply with, but if I had planned a bit better, I could have moved over with almost zero downtime.&lt;/p&gt;

&lt;p&gt;To achieve almost zero downtime, I would have made sure to keep the old and new MinIO instances replicated until the move was over. The only real problem with the whole migration is the DNS change: I cannot know precisely how long propagation will take. After completing the migration, if people kept hitting the old IP due to outdated DNS entries, the old and new clusters would have become out of sync. To mitigate that issue, I could have chosen to "cordon" the old cluster just to make sure that this never happens.&lt;/p&gt;
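&lt;p&gt;Keeping the two MinIO instances in sync during the DNS switch could have been done with &lt;code&gt;mc mirror&lt;/code&gt; (a sketch using the &lt;code&gt;old&lt;/code&gt; and &lt;code&gt;new&lt;/code&gt; host aliases configured earlier):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Continuously replicate changes from the old bucket to the new one
% mc mirror --watch old/bucket1 new/bucket1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;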

&lt;p&gt;The repo for my Kubernetes playground cluster (&lt;code&gt;*.k.maelvls.dev&lt;/code&gt;) is available &lt;a href="https://github.com/maelvls/k.maelvls.dev"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Update 7 May 2020&lt;/strong&gt;: better introduction explaining why I used GKE and what Civo and K3s are all about.&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
  </channel>
</rss>
