<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tim Downey</title>
    <description>The latest articles on DEV Community by Tim Downey (@downey).</description>
    <link>https://dev.to/downey</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F170552%2F797597da-2c9d-4fae-9ca6-b56639e40665.jpeg</url>
      <title>DEV Community: Tim Downey</title>
      <link>https://dev.to/downey</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/downey"/>
    <language>en</language>
    <item>
      <title>Easily Produce a Software Bill of Materials with Cloud Native Buildpacks</title>
      <dc:creator>Tim Downey</dc:creator>
      <pubDate>Sun, 05 Jun 2022 19:49:23 +0000</pubDate>
      <link>https://dev.to/downey/easily-produce-a-software-bill-of-materials-with-cloud-native-buildpacks-3nm3</link>
      <guid>https://dev.to/downey/easily-produce-a-software-bill-of-materials-with-cloud-native-buildpacks-3nm3</guid>
      <description>&lt;p&gt;&lt;em&gt;Cover Photo by &lt;a href="https://unsplash.com/@madebyvadim?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Vadim Sherbakov&lt;/a&gt;on &lt;a href="https://unsplash.com/photos/osSryggkso4"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;It seems like every week we hear about a new vulnerability in some core dependency, or that yet another package we all depend on has been compromised (maintainers, please enable two-factor auth!). The only thing we can do as developers in these cases is patch our software back to a safe version of the dependency as soon as possible. That's easier said than done, however. For many of us, even knowing which dependencies our deployed code is using is a struggle.&lt;/p&gt;

&lt;p&gt;That's where having a &lt;a href="https://www.cisa.gov/sbom"&gt;Software Bill of Materials&lt;/a&gt; comes in.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is an SBOM?
&lt;/h2&gt;

&lt;p&gt;A Software Bill of Materials, aka an SBOM, is a key component of software supply chain security. "SBOM" is really just a fancy term for a nested list of all of the dependencies that make up a piece of software. There are a variety of formats, such as &lt;a href="https://github.com/anchore/syft"&gt;Syft JSON&lt;/a&gt;, &lt;a href="https://spdx.dev/"&gt;SPDX&lt;/a&gt;, and &lt;a href="https://cyclonedx.org/specification/overview/"&gt;CycloneDX&lt;/a&gt;. Others do a &lt;a href="https://fossa.com/blog/software-bill-of-materials-formats-use-cases-tools/"&gt;much better job of explaining their content and differences&lt;/a&gt;, but at their core all SBOMs contain a list of dependencies and information about each one, such as its name, version, supplier, cryptographic hashes (checksums), and other unique identifiers. And even if you don't personally care about SBOMs (you should!), the US Federal Government sure does: there is an &lt;a href="https://www.federalregister.gov/documents/2021/06/02/2021-11592/software-bill-of-materials-elements-and-considerations"&gt;executive order&lt;/a&gt; mandating that all software sold to the US Federal Government include them. So how do we go about generating a Software Bill of Materials for our code?&lt;/p&gt;
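&lt;p&gt;To make that concrete, here's a minimal, hand-written sketch of what a single dependency entry might look like in CycloneDX JSON. The field names come from the CycloneDX spec; the package shown is just for illustration:&lt;/p&gt;

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {
      "type": "library",
      "name": "cloud.google.com/go/compute",
      "version": "v1.6.1",
      "purl": "pkg:golang/cloud.google.com/go/compute@v1.6.1"
    }
  ]
}
```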

&lt;h2&gt;
  
  
  Producing SBOMs with Buildpacks
&lt;/h2&gt;

&lt;p&gt;There are a number of utilities that can generate SBOMs for container images after the fact (such as &lt;a href="https://github.com/anchore/syft"&gt;Syft&lt;/a&gt;), but I like to get them for free without doing any additional work. 🙃 Plus, it's really nice to generate them at build time: if you're using a compiled language and producing minimal, distroless images, it can be difficult after the fact to determine what even went into the binary.&lt;/p&gt;
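&lt;p&gt;For comparison, an after-the-fact scan with Syft is essentially a one-liner. This is a sketch, assuming Syft is installed; the image name is a placeholder:&lt;/p&gt;

```shell
# Generate an SPDX JSON SBOM for an image that has already been built.
# "my-app:latest" is a placeholder image name.
OUT=my-app.sbom.spdx.json
if command -v syft >/dev/null; then
  syft my-app:latest -o spdx-json > "$OUT" || echo "syft failed (is the image present locally?)"
else
  echo "syft is not installed; see https://github.com/anchore/syft"
fi
```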

&lt;p&gt;That's where &lt;a href="https://buildpacks.io/"&gt;Cloud Native Buildpacks&lt;/a&gt; come in.&lt;/p&gt;

&lt;p&gt;I first became a fan of buildpacks after I pushed my first Rails app to Heroku nearly a decade ago, and I have continued to appreciate them throughout my time as a &lt;a href="https://www.cloudfoundry.org/"&gt;Cloud Foundry&lt;/a&gt; contributor. Cloud Native Buildpacks (CNBs) take all that is nice about buildpacks in those ecosystems and bring it to the world of containers and Kubernetes. Using the &lt;code&gt;pack&lt;/code&gt; CLI, I can quickly build a container image capable of running my source code without having to maintain a &lt;code&gt;Dockerfile&lt;/code&gt;, and it comes with a whole load of best practices already baked in. Among those: many CNBs have &lt;a href="https://buildpacks.io/docs/features/bill-of-materials/"&gt;first-class support for SBOMs&lt;/a&gt;. Simply building your app with one of these buildpacks will include a layer in the resulting image that contains Syft, SPDX, and CycloneDX formatted SBOMs, which you can easily access with the &lt;code&gt;pack sbom download&lt;/code&gt; command. By baking the SBOM into the image, you keep the two together, and you can even sign the image so you know it hasn't been tampered with and exactly which version of the code the SBOM belongs to.&lt;/p&gt;

&lt;p&gt;Sound cool? While the &lt;a href="https://buildpacks.io/docs/features/bill-of-materials/"&gt;buildpack docs&lt;/a&gt; cover how to do it, let's do a quick demonstration with a real codebase.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;I'll be using the &lt;a href="https://buildpacks.io/docs/tools/pack/"&gt;&lt;code&gt;pack&lt;/code&gt;&lt;/a&gt; CLI to build my images and the &lt;a href="https://github.com/wagoodman/dive"&gt;&lt;code&gt;dive&lt;/code&gt;&lt;/a&gt; CLI to inspect them in more depth. Both utilities were readily available for my M1 Mac via &lt;code&gt;brew&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The codebase I'm using is the Cloud Foundry &lt;a href="https://github.com/cloudfoundry/korifi"&gt;Korifi&lt;/a&gt; project -- a set of Golang Kubernetes controllers.&lt;/p&gt;

&lt;p&gt;First things first: clone the repo and &lt;code&gt;cd&lt;/code&gt; into its directory. Next (assuming you haven't already configured &lt;code&gt;pack&lt;/code&gt;), set the default builder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;pack config default-builder paketobuildpacks/builder:base
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells &lt;code&gt;pack&lt;/code&gt; that you want to use the core set of Paketo Cloud Native Buildpacks with a versatile base OS image. Since this is a Golang project I could have chosen the &lt;code&gt;tiny&lt;/code&gt; builder (similar to &lt;a href="https://github.com/GoogleContainerTools/distroless"&gt;distroless&lt;/a&gt;), but the &lt;code&gt;base&lt;/code&gt; builder is more flexible and supports other languages that I use frequently, so I like to keep it as my default.&lt;/p&gt;

&lt;p&gt;Next, build the project! Since the Korifi repo contains the code for several components, we need to set the &lt;code&gt;BP_GO_TARGETS&lt;/code&gt; environment variable to build the "controllers" binary specifically.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;pack build korifi-controllers --env BP_GO_TARGETS="./controllers"
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will take a bit of time to build, but once it's done, that's it! We've now got a runnable container image for our Kubernetes controllers with SBOMs baked in. The &lt;code&gt;pack&lt;/code&gt; CLI lets us extract those SBOMs with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;pack sbom download korifi-controllers -o /tmp/korifi-controllers-sbom
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This writes them to the temp directory I specified with &lt;code&gt;-o&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls /tmp/korifi-controllers-sbom/layers/sbom/launch/paketo-buildpacks_go-build/targets
sbom.cdx.json  sbom.spdx.json  sbom.syft.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, it produced SBOMs in the three main formats: CycloneDX, SPDX, and Syft.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="nl"&gt;"artifacts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"f81915beaeb286e0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"cloud.google.com/go/compute"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"v1.6.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"go-module"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"foundBy"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"go-module-binary-cataloger"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"locations"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"controllers"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"licenses"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"language"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"go"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"cpes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"cpe:2.3:a:go:compute:v1.6.1:*:*:*:*:*:*:*"&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"purl"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"pkg:golang/cloud.google.com/go/compute@v1.6.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"metadataType"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GolangBinMetadata"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"metadata"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"goCompiledVersion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"go1.18.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"architecture"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"amd64"&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"24aa113dfb7228c8"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"code.cloudfoundry.org/eirini-controller"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"v0.3.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"go-module"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"foundBy"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"go-module-binary-cataloger"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"locations"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"controllers"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"licenses"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"language"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"go"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"cpes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"purl"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"pkg:golang/code.cloudfoundry.org/eirini-controller@v0.3.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"metadataType"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GolangBinMetadata"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"metadata"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"goCompiledVersion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"go1.18.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"architecture"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"amd64"&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As I mentioned earlier, the SBOMs are a part of the image itself so there is no doubt what they belong to and no need to store them separately.&lt;/p&gt;

&lt;p&gt;You can see them using the &lt;code&gt;dive&lt;/code&gt; utility I mentioned earlier:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dive korifi-controllers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a really cool utility in general because it lets you view all of the files that make up each layer in a container image. Here we can use it to navigate to one of the smaller layers at the end that contains our SBOMs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rBjgKf2i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qpxxaia62y479i30vvfx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rBjgKf2i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qpxxaia62y479i30vvfx.png" alt="Inspecting the SBOM layers of an image using dive." width="880" height="649"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So that's cool. It also means that if you sign your images (always a good idea), the SBOMs are conveniently signed as well, making them tamper resistant.&lt;/p&gt;

&lt;h2&gt;
  
  
  Vulnerability Scanning our SBOMs
&lt;/h2&gt;

&lt;p&gt;So what else can we do with those SBOM files? We can scan them for vulnerabilities! For this we can use the &lt;a href="https://github.com/anchore/grype"&gt;&lt;code&gt;grype&lt;/code&gt;&lt;/a&gt; CLI (like Syft, it is also built by Anchore) and point it at the SBOM files we downloaded earlier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;grype sbom:/tmp/korifi-controllers-sbom/layers/sbom/launch/paketo-buildpacks_go-build/targets/sbom.syft.json --only-fixed
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt; ✔ Vulnerability DB        [no update available]
 ✔ Scanned image           [0 vulnerabilities]

[0000]  WARN some package(s) are missing CPEs. This may result in missing vulnerabilities. You may autogenerate these using: --add-cpes-if-none
No vulnerabilities found
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Grype is really cool and something I'd recommend running periodically in CI against your SBOMs/images. There are always going to be CVEs out there, so you can use the &lt;code&gt;--fail-on&lt;/code&gt; flag to specify the severity level you'd like to fail on and the &lt;code&gt;--only-fixed&lt;/code&gt; flag to only report CVEs that you can actually take action on.&lt;/p&gt;
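&lt;p&gt;For example, a CI-friendly invocation might look like the following. This is a sketch -- it assumes &lt;code&gt;grype&lt;/code&gt; is installed and reuses the SBOM path from the download step above:&lt;/p&gt;

```shell
# Only report fixable vulnerabilities, and exit non-zero on high severity or above.
# In a real CI pipeline you'd let that non-zero exit fail the build.
SBOM=/tmp/korifi-controllers-sbom/layers/sbom/launch/paketo-buildpacks_go-build/targets/sbom.syft.json
if command -v grype >/dev/null; then
  grype "sbom:${SBOM}" --only-fixed --fail-on high || echo "grype exited non-zero (vulnerabilities found or scan error)"
else
  echo "grype is not installed; see https://github.com/anchore/grype"
fi
```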

&lt;h2&gt;
  
  
  SBOMs are Cool
&lt;/h2&gt;

&lt;p&gt;Well, there you have it. Software Bills of Materials are all the rage when it comes to securing your software supply chain, and as we've seen, they're not too difficult to produce! I hope you learned something new about buildpacks and SBOMs from this post, and even if you didn't, thanks for sticking around this long! 😊&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>security</category>
      <category>sbom</category>
      <category>buildpacks</category>
    </item>
    <item>
      <title>Installing Folding at Home on a Raspberry Pi</title>
      <dc:creator>Tim Downey</dc:creator>
      <pubDate>Sun, 31 Jan 2021 21:30:32 +0000</pubDate>
      <link>https://dev.to/downey/installing-folding-at-home-on-a-raspberry-pi-895</link>
      <guid>https://dev.to/downey/installing-folding-at-home-on-a-raspberry-pi-895</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;tl;dr?&lt;/strong&gt; Feel free to skip ahead to the installation guide below.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Back when I was a kid I used to run &lt;a href="https://foldingathome.org/" rel="noopener noreferrer"&gt;Folding@Home&lt;/a&gt; on our old family computer -- a silver Gateway tower with a 1.8 GHz Pentium 4. I'm not sure how I found out about it, but I remember being inordinately excited by the prospect of contributing CPU cycles to simulate protein folding and help discover cures for cancer and other diseases. Yeah, I was a nerd. 🤓&lt;/p&gt;

&lt;p&gt;Anyways, I kind of forgot about the project until this past year -- thanks to Covid-19. 😒 In addition to the typical work units, Folding@Home was now distributing work to help researchers better understand the SARS-CoV-2 virus (&lt;a href="https://foldingathome.org/diseases/infectious-diseases/covid-19/" rel="noopener noreferrer"&gt;you can read more on that here!&lt;/a&gt;). So once again I did my civic duty and fired up my computers to contribute!&lt;/p&gt;

&lt;p&gt;My desktop with its GTX 1070 graphics card chewed through simulations like crazy -- leagues faster than the old Gateway ever could. It got me wondering, though, "How would my Raspberry Pis fare?" I've got a ton of these suckers sitting dormant these days, both from my &lt;a href="https://downey.io/blog/how-to-build-raspberry-pi-kubernetes-cluster/" rel="noopener noreferrer"&gt;Pi Kubernetes cluster&lt;/a&gt; and the &lt;a href="https://downey.io/blog/raspberry-pi-4-heatsinks-and-fans/" rel="noopener noreferrer"&gt;Raspberry Pi fan performance tests&lt;/a&gt; that I conducted. Rather than collect dust I thought it would be fun to put them to work (even though it's probably not an efficient use of power). After all, they're probably just as powerful as that old Gateway PC.&lt;/p&gt;

&lt;p&gt;Unfortunately, when I first looked, no folding client existed for ARM processors. After all, why would the Folding@Home developers spend time building the software for low-power ARM devices when desktop x86 processors and GPUs can run circles around them? I decided to search anyway, and &lt;a href="https://foldingforum.org/viewtopic.php?t=35998" rel="noopener noreferrer"&gt;forum posts like these&lt;/a&gt; started to give me hope. And sure enough, an ARM client now exists!&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing Folding at Home on Raspberry Pi
&lt;/h2&gt;

&lt;p&gt;Here's what it takes to start folding on a Raspberry Pi.&lt;/p&gt;

&lt;h3&gt;
  
  
  64 bit Raspberry Pi
&lt;/h3&gt;

&lt;p&gt;You'll need a Raspberry Pi with a 64-bit processor to be able to run Folding@Home. A 3B or 4 will do nicely. Folding uses around 500MB of RAM, so you'll be fine with either a 1GB or 2GB model. These days, I recommend the Raspberry Pi 4 in either its 2GB or 4GB configuration for the best bang for your buck.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install the OS
&lt;/h3&gt;

&lt;p&gt;In addition to your 64-bit Pi, you'll need a 64-bit OS to run the &lt;code&gt;arm64&lt;/code&gt; Folding@Home client. The typical Raspbian/Raspberry Pi OS image is 32-bit (until recently, Raspberry Pis did not have more than 4GB of RAM), so you'll need to download the 64-bit image specifically.&lt;/p&gt;

&lt;p&gt;You can find the latest ones here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://downloads.raspberrypi.org/raspios_arm64/images/" rel="noopener noreferrer"&gt;Raspberry Pi OS Full arm64 Images&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://downloads.raspberrypi.org/raspios_lite_arm64/images/" rel="noopener noreferrer"&gt;Raspberry Pi OS Lite arm64 Images&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I chose to go with the "lite" option since I run my Pis headless. Next install it as you normally would (&lt;a href="https://www.raspberrypi.org/documentation/installation/installing-images/" rel="noopener noreferrer"&gt;follow these steps if you need help!&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;I usually do a few more things after flashing the OS image to the SD card. Namely, I...&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a file named &lt;code&gt;ssh&lt;/code&gt; in the root directory of the card via &lt;code&gt;touch ssh&lt;/code&gt;. This enables ssh on the card for the default &lt;code&gt;pi&lt;/code&gt; user (the password is &lt;code&gt;raspberry&lt;/code&gt; -- remember to change it!).&lt;/li&gt;
&lt;li&gt;Configure WiFi for the Pi by creating a &lt;code&gt;wpa_supplicant.conf&lt;/code&gt; file in the root directory. This looks like:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;country=US
update_config=1
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev

network={
  scan_ssid=1
  ssid="your-network-SSID-here"
  psk="your-network-password-here"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This way I can quickly log on to my router, see what IP it gave the Pi, and connect to it over ssh.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh pi@192.168.1.10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
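&lt;p&gt;Once you're logged in, it's worth double-checking that you're actually on a 64-bit kernel before going any further:&lt;/p&gt;

```shell
# Print the machine architecture. A 64-bit Raspberry Pi OS reports "aarch64",
# while the standard 32-bit image reports something like "armv7l".
ARCH=$(uname -m)
echo "Detected architecture: ${ARCH}"
```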



&lt;h3&gt;
  
  
  Installing the Folding@Home Client
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fn9o88m5or2j3ertvs1hn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fn9o88m5or2j3ertvs1hn.png" alt="Screenshot of a terminal installing Folding@Home on a Raspberry Pi"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open up a terminal or &lt;code&gt;ssh&lt;/code&gt; onto the Pi and make a new directory in your home directory called &lt;code&gt;fah&lt;/code&gt; where we'll download the client (you can name this directory whatever you want).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; ~/fah &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd&lt;/span&gt; ~/fah
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, download the arm64 Folding@Home Debian package. At the time of writing this, the latest version was &lt;code&gt;7.6.21&lt;/code&gt;, but you can &lt;a href="https://download.foldingathome.org/releases/public/release/fahclient/debian-stable-arm64/" rel="noopener noreferrer"&gt;find the most recent ones here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget https://download.foldingathome.org/releases/public/release/fahclient/debian-stable-arm64/v7.6/fahclient_7.6.21_arm64.deb &lt;span class="nt"&gt;-O&lt;/span&gt; fahclient.deb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now install it! This will pop up a fancy interface for you to input your username, team number, and passkey (if you've got one).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;dpkg &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;--force-depends&lt;/span&gt; fahclient.deb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will also ask if you want to run it as a service automatically on start up. If you choose this you'll be able to see an entry for &lt;code&gt;FAHClient.service&lt;/code&gt; under your &lt;code&gt;systemd&lt;/code&gt; services. You can always change your config later by editing &lt;code&gt;/etc/fahclient/config.xml&lt;/code&gt;.&lt;/p&gt;
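&lt;p&gt;If you chose the service option, a quick way to check on it is through &lt;code&gt;systemctl&lt;/code&gt;. This is a sketch; run it on the Pi itself:&lt;/p&gt;

```shell
# Check whether the Folding@Home client service is up and running.
SERVICE=FAHClient.service
if command -v systemctl >/dev/null; then
  systemctl status "${SERVICE}" --no-pager || echo "${SERVICE} does not appear to be running"
else
  echo "systemd is not available on this machine"
fi
```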

&lt;p&gt;That's it! If you ever want to uninstall Folding@Home from your Pi, just run &lt;code&gt;sudo dpkg -P fahclient&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Folding Performance Comparison
&lt;/h2&gt;

&lt;p&gt;These things get pretty hot under heavy load, so I decided to test out three different configurations to see what kind of cooling was necessary to fold at max power without thermal throttling. I recorded temperature and clock speed stats while folding using &lt;a href="https://github.com/tcdowney/knick-knacks/blob/7cc6c9e47fd918f5d68bb938dec952dd2a323b58/raspberry-pi/raspi-metrics.sh" rel="noopener noreferrer"&gt;the same script&lt;/a&gt; that I used in my cooling experiment.&lt;/p&gt;
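&lt;p&gt;If you want to spot-check those readings yourself without a full script, the key numbers come from &lt;code&gt;vcgencmd&lt;/code&gt;, which ships with Raspberry Pi OS. Here's a small sketch that falls back to sample values when run on a non-Pi machine:&lt;/p&gt;

```shell
# Sample the SoC temperature and current ARM clock speed.
# vcgencmd output looks like "temp=48.3'C" and "frequency(48)=1500000000".
if command -v vcgencmd >/dev/null; then
  TEMP=$(vcgencmd measure_temp)
  CLOCK=$(vcgencmd measure_clock arm)
else
  TEMP="temp=48.3'C"               # sample value for illustration
  CLOCK="frequency(48)=1500000000" # sample value for illustration
fi
TEMP_C=$(echo "$TEMP" | sed "s/temp=//; s/'C//")
CLOCK_MHZ=$(( $(echo "$CLOCK" | cut -d= -f2) / 1000000 ))
echo "CPU temp: ${TEMP_C}C, clock: ${CLOCK_MHZ} MHz"
```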

&lt;p&gt;The amount and difficulty of work distributed by Folding@Home is variable so I let the Pis run for two weeks to see what they could accomplish.&lt;/p&gt;

&lt;h3&gt;
  
  
  Raspberry Pi 3B in Aluminum Flirc Enclosure
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fllfopcipwm4wnz2es3ta.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fllfopcipwm4wnz2es3ta.jpg" alt="Raspberry Pi 3B in Flirc aluminum case and a purple origami crane"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First I tried out an old Raspberry Pi 3B that I had lying around. It's in a passively cooled aluminum Flirc Case that does a pretty decent job of keeping the CPU cool under normal usage. Even at 100% CPU usage during folding the Pi only reached 67C. Not &lt;em&gt;that&lt;/em&gt; cool, but not hot enough to throttle either.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Specs:&lt;/strong&gt; Broadcom BCM2837 ARM Cortex-A53 Quad Core CPU at 1.2 GHz&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Results:&lt;/strong&gt; 9 work units (WUs) completed and 4,179 points (Credit) earned&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Raspberry Pi 4B in Aluminum Argon NEO Enclosure
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7kwz0ki89cu4pfkmpudj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7kwz0ki89cu4pfkmpudj.jpg" alt="Raspberry Pi 4 with Argon NEO Case"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next I tried one of my Raspberry Pi 4Bs, this one in an aluminum Argon NEO case that's pretty similar to the Flirc case on the 3B. Raspberry Pi 4Bs run both faster and hotter than the 3Bs, so I was curious to see if passive cooling was enough. In &lt;a href="https://downey.io/blog/raspberry-pi-4-heatsinks-and-fans/" rel="noopener noreferrer"&gt;the cooling experiments&lt;/a&gt; I had done, the 4B had to throttle its CPU in this case under heavy synthetic load. Fortunately, that was not the case for folding: it was able to keep running at a full 1.5 GHz, albeit at a toasty 72C.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Specs:&lt;/strong&gt; Broadcom BCM2711B0 ARM Cortex-A72 Quad Core CPU at 1.5 GHz&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Results:&lt;/strong&gt; 24 work units (WUs) completed and 19,648 points (Credit) earned&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Raspberry Pi 4B with Tower Cooler and 40mm Fan
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F501eemqkie093yqxy5t5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F501eemqkie093yqxy5t5.jpg" alt="Raspberry Pi 4 with RGB Cooler Tower"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My last experiment used another Raspberry Pi 4B with a ridiculous cooling tower. This one did great during the cooling experiments, so I expected it to perform decently for folding as well. Sure enough, it did. It was able to run at a steady 1.5 GHz and stay at a balmy 42C. This one could probably be overclocked to 1.6 or 1.7 GHz to eke out a tad more performance, but given the variability of work units it would be difficult to compare. It completed more work than the other 4B, but since it wasn't being throttled I'll just chalk that up to luck.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Specs:&lt;/strong&gt; Broadcom BCM2711B0 ARM Cortex-A72 Quad Core CPU at 1.5 GHz&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Results:&lt;/strong&gt; 27 work units (WUs) completed and 22,642 points (Credit) earned&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;So, to wrap it all up, a modern desktop CPU and a GPU could smoke these Pis... and that's ok. I had fun waking up these long-dormant little computers, and that was worth it to me. If you've got an unused Pi sitting around, consider putting it to use for a little bit as well. Maybe it will help cure something! 😊&lt;/p&gt;

</description>
      <category>raspberrypi</category>
      <category>foldingathome</category>
    </item>
    <item>
      <title>Capturing Network Traffic from a Kubernetes Pod with Ephemeral Debug Containers</title>
      <dc:creator>Tim Downey</dc:creator>
      <pubDate>Sun, 27 Sep 2020 17:24:21 +0000</pubDate>
      <link>https://dev.to/downey/capturing-network-traffic-from-a-kubernetes-pod-with-ephemeral-debug-containers-57md</link>
      <guid>https://dev.to/downey/capturing-network-traffic-from-a-kubernetes-pod-with-ephemeral-debug-containers-57md</guid>
      <description>&lt;h2&gt;
  
  
  Running tcpdump against a Running Pod
&lt;/h2&gt;

&lt;p&gt;The other day I had a situation where I needed to debug network traffic between an app and its Envoy sidecar proxy. Fortunately, since the app image was Ubuntu-based and it was an unimportant dev cluster, I was able to just &lt;code&gt;kubectl exec&lt;/code&gt; into a shell on the container and &lt;code&gt;apt install tcpdump&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Now that I had &lt;code&gt;tcpdump&lt;/code&gt; installed, I could run it and pipe the output to &lt;a href="https://www.wireshark.org/" rel="noopener noreferrer"&gt;Wireshark&lt;/a&gt; on my local machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;kubectl exec my-app-pod -c nginx -- tcpdump -i eth0 -w - | wireshark -k -i -
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It was pretty slick, if I do say so myself, and made me feel like a &lt;a href="https://en.wikipedia.org/wiki/Hackers_(film)" rel="noopener noreferrer"&gt;Hackers&lt;/a&gt; character. 😎&lt;/p&gt;

&lt;p&gt;There's some issues with this, though. 😳&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I had to &lt;code&gt;kubectl exec&lt;/code&gt; and install arbitrary software from the internet on a running Pod. This is fine for internet-connected dev environments, but probably not something you'd want to do (or be able to do) in production.&lt;/li&gt;
&lt;li&gt;If this app had been using a minimal &lt;a href="https://github.com/GoogleContainerTools/distroless" rel="noopener noreferrer"&gt;&lt;code&gt;distroless&lt;/code&gt;&lt;/a&gt; base image or was built with a &lt;a href="https://buildpacks.io/" rel="noopener noreferrer"&gt;buildpack&lt;/a&gt; I wouldn't have been able to &lt;code&gt;apt install&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;If I rebuilt the app container image to include &lt;code&gt;tcpdump&lt;/code&gt; that would have required the Pods to be recreated. Not ideal if the bug is tricky to reproduce.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So installing &lt;code&gt;tcpdump&lt;/code&gt; as needed isn't always an option. Why not just include it when building the initial container image for the app so that it's always available? That path leads to image bloat, and the more unnecessary packages we include in our image, the more potential attack vectors there are.&lt;/p&gt;

&lt;p&gt;So what else can we do?&lt;/p&gt;

&lt;p&gt;Fortunately for us, newer versions of Kubernetes come with some alpha features for &lt;a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container" rel="noopener noreferrer"&gt;debugging running pods&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ephemeral Debug Containers
&lt;/h2&gt;

&lt;p&gt;Kubernetes 1.16 has a new &lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/" rel="noopener noreferrer"&gt;Ephemeral Containers&lt;/a&gt; feature that is perfect for our use case. With Ephemeral Containers, we can ask for a new temporary container with the image of our choosing to run inside an existing Pod. This means we can keep the main images for our applications lightweight and then bolt on a heavy image with all of our favorite debug tools when necessary.&lt;/p&gt;

&lt;p&gt;For the following examples I'll be using my &lt;a href="https://github.com/tcdowney/mando" rel="noopener noreferrer"&gt;&lt;code&gt;mando&lt;/code&gt; app&lt;/a&gt; which is running as a Pod named &lt;code&gt;mando-655449598d-fqrvb&lt;/code&gt;. It's built with a Go buildpack (&lt;a href="https://downey.io/blog/how-to-use-kbld-with-kubernetes/" rel="noopener noreferrer"&gt;you can read more on that here&lt;/a&gt;), so it's the perfect example of an app with a minimal image.&lt;/p&gt;

&lt;p&gt;To demonstrate how this can be hard to work with, let's first try to open a shell in it the traditional way.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;kubectl exec -it mando-655449598d-fqrvb -- /bin/sh

error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "3ca55f9b6457995be6c6254a8d274706e42d89f431956b5b02ad9eade5e5f788": OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No &lt;code&gt;/bin/sh&lt;/code&gt;? That's rough. Let's provide a shell with an Ephemeral Container using the &lt;code&gt;busybox&lt;/code&gt; image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;kubectl alpha debug -it mando-655449598d-fqrvb --image=busybox --target=mando -- /bin/sh

If you don't see a command prompt, try pressing enter.
&lt;/span&gt;&lt;span class="gp"&gt;/ #&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"hello there"&lt;/span&gt;
&lt;span class="go"&gt;hello there
&lt;/span&gt;&lt;span class="gp"&gt;/ #&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;span class="go"&gt;bin   dev   etc   home  proc  root  sys   tmp   usr   var
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can do all sorts of shell-like activities!&lt;/p&gt;

&lt;p&gt;For our use case, though, we want to capture network packets. So let's use an image that's optimized for network troubleshooting: &lt;a href="https://github.com/nicolaka/netshoot" rel="noopener noreferrer"&gt;&lt;code&gt;nicolaka/netshoot&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We can use &lt;code&gt;kubectl alpha debug&lt;/code&gt; to run &lt;code&gt;tcpdump&lt;/code&gt; and pipe the output to our local Wireshark just as we'd done before!&lt;/p&gt;

&lt;p&gt;Here's a concrete example of me using &lt;code&gt;tcpdump&lt;/code&gt; to capture packets on the &lt;code&gt;eth0&lt;/code&gt; interface:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;kubectl alpha debug -i mando-655449598d-fqrvb --image=nicolaka/netshoot --target=mando -- tcpdump -i eth0 -w - | wireshark -k -i -
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F655l3rwzn1ca5rueg8zh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F655l3rwzn1ca5rueg8zh.png" alt="tcpdump of a kubernetes pod displayed in wireshark"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since I'm using &lt;a href="https://istio.io/" rel="noopener noreferrer"&gt;Istio&lt;/a&gt; as my service mesh, capturing packets from &lt;code&gt;eth0&lt;/code&gt; primarily shows traffic to and from the Envoy sidecar proxy. If we want to debug traffic between the proxy and the &lt;code&gt;mando&lt;/code&gt; app itself we can do the same thing against the &lt;code&gt;lo&lt;/code&gt; (loopback) interface:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;kubectl alpha debug -i mando-655449598d-fqrvb --image=nicolaka/netshoot --target=mando -- tcpdump -i lo -w - | wireshark -k -i -
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I've found both of these commands to be invaluable when debugging the service mesh interactions on my clusters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Caveats
&lt;/h2&gt;

&lt;p&gt;As of Kubernetes 1.19, Ephemeral Containers are still an alpha feature and are not recommended for production clusters, so chances are you won't have access to them yet in your environment. As an alpha feature, its interface and functionality are liable to change, so don't get too attached to the current implementation! It's such a useful feature, however, that I doubt they'd cut it entirely.&lt;/p&gt;

&lt;p&gt;There are still ways to get early access today, though.&lt;/p&gt;

&lt;p&gt;If you're using a managed Kubernetes service like GKE you can create an (unsupported) &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/creating-an-alpha-cluster" rel="noopener noreferrer"&gt;alpha cluster&lt;/a&gt; that will have all sorts of experimental features enabled. I'm less familiar with other managed offerings, but chances are they'll have some form of alpha release channel as well.&lt;/p&gt;

&lt;p&gt;Or if you're running a local &lt;code&gt;kind&lt;/code&gt; cluster, BenTheElder shows how you can enable ephemeral containers &lt;a href="https://github.com/kubernetes-sigs/kind/issues/1210#issuecomment-570399316" rel="noopener noreferrer"&gt;here&lt;/a&gt; with the following &lt;code&gt;kind&lt;/code&gt; config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cluster&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind.x-k8s.io/v1alpha4&lt;/span&gt;
&lt;span class="na"&gt;featureGates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;EphemeralContainers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're using a custom &lt;code&gt;kubeadm&lt;/code&gt; deployed cluster, you can &lt;a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/" rel="noopener noreferrer"&gt;configure the Kubernetes control plane components&lt;/a&gt; to run with the &lt;code&gt;--feature-gates=EphemeralContainers=true&lt;/code&gt; flag.&lt;/p&gt;
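&lt;p&gt;For example, with &lt;code&gt;kubeadm&lt;/code&gt; you can set that flag declaratively via a &lt;code&gt;ClusterConfiguration&lt;/code&gt; file passed to &lt;code&gt;kubeadm init --config&lt;/code&gt;. This is just a sketch of one possible shape (the API version and which components need the gate may vary with your Kubernetes version):&lt;/p&gt;

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
# Pass the EphemeralContainers feature gate to the control plane components
apiServer:
  extraArgs:
    feature-gates: "EphemeralContainers=true"
scheduler:
  extraArgs:
    feature-gates: "EphemeralContainers=true"
```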

&lt;h2&gt;
  
  
  Alternatives
&lt;/h2&gt;

&lt;p&gt;If for some reason you can't enable ephemeral containers and you really want to capture some packets, don't despair! 😌&lt;/p&gt;

&lt;p&gt;Check out &lt;a href="https://github.com/eldadru/ksniff" rel="noopener noreferrer"&gt;&lt;code&gt;ksniff&lt;/code&gt;&lt;/a&gt; as an alternative. It creates a privileged Pod running a container that shares the same network namespace as your target Pod, letting you capture packets that way.&lt;/p&gt;

&lt;p&gt;If you can't run privileged pods and can't add tcpdump to the container image yourself, well...&lt;/p&gt;

&lt;p&gt;I'm sure you'll figure something out.&lt;/p&gt;

&lt;p&gt;Best of luck! 🙂&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Cover photo credit: &lt;a href="https://unsplash.com/@tvick" rel="noopener noreferrer"&gt;Taylor Vick&lt;/a&gt; on &lt;a href="https://unsplash.com/" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>tutorial</category>
      <category>networking</category>
    </item>
    <item>
      <title>Public Routing to a Private Kubernetes Cluster</title>
      <dc:creator>Tim Downey</dc:creator>
      <pubDate>Tue, 25 Aug 2020 03:56:42 +0000</pubDate>
      <link>https://dev.to/downey/public-routing-to-a-private-kubernetes-cluster-ebn</link>
      <guid>https://dev.to/downey/public-routing-to-a-private-kubernetes-cluster-ebn</guid>
      <description>&lt;p&gt;For personal projects and hacking around on Kubernetes, I'm a fan of inexpensive managed clusters (like the ones offered by &lt;a href="https://www.digitalocean.com/products/kubernetes/"&gt;Digital Ocean&lt;/a&gt;). For about ten bucks a month, I can spin up a small single-node cluster and hit the ground running. Unfortunately, with these small clusters you only get about a gig of usable RAM and a single vCPU at best. If you want more, prices quickly climb.&lt;/p&gt;

&lt;p&gt;At home, I've got a pretty decent Linux box with 12 vCPU (thanks &lt;a href="https://www.amd.com/en/ryzen"&gt;Ryzen&lt;/a&gt;!) and 48G of RAM that I've lovingly named &lt;code&gt;tomorrowlan&lt;/code&gt;. A similar configuration on Digital Ocean would run about $240 a month! When I'm messing around on personal projects, I really don't care about resiliency or production worthiness. It would be awesome to use my existing hardware while retaining the ability to share what I've built with others on the wider internet.&lt;/p&gt;

&lt;p&gt;Thanks to a handful of open-source tools, I can make this dream a reality. With &lt;a href="https://docs.inlets.dev/#/"&gt;Inlets&lt;/a&gt;, &lt;a href="https://projectcontour.io/"&gt;Contour&lt;/a&gt;, &lt;a href="https://github.com/kubernetes-sigs/kind"&gt;KinD&lt;/a&gt;, and &lt;a href="https://github.com/kubernetes-sigs/external-dns"&gt;external-dns&lt;/a&gt;, I can easily spin up new clusters and get them externally routable within seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Components
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YJwiuT_g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xc62omtetx5xwm18yea1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YJwiuT_g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xc62omtetx5xwm18yea1.png" alt="Architecture diagram of cluster using Inlets, Contour, and external-dns." width="880" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The diagram above provides an abridged view of how all of these components work together. It honestly looks more complicated than it actually is. &lt;/p&gt;

&lt;p&gt;In this architecture, we have a Kubernetes cluster running on hardware at home. It sits behind a home router's &lt;a href="https://en.wikipedia.org/wiki/Network_address_translation"&gt;NAT&lt;/a&gt; and whatever networking nonsense the home ISP is doing. Typically, this cluster would be utterly unreachable without special configuration of the router. Plus, you're most likely issued a dynamic IP address from your ISP that will change over time. Besides, even if you did set this all up, you probably don't want randoms from the internet connecting directly to your home network.&lt;/p&gt;

&lt;p&gt;Instead, we can use an inexpensive cloud VM with a public IP to serve as our entry point. Thanks to all of the components mentioned earlier, we can tunnel traffic from this VM to our private cluster. Now, I'm going to dig into what each of these components is doing. Additionally, I'll demonstrate how they can be used to expose a simple app to the outside world. What I'm not going to do is go in-depth on how to install or configure them. They're all well maintained and continuously updated, so that aspect of it is best left to the official docs.&lt;/p&gt;




&lt;h2&gt;
  
  
  Inlets
&lt;/h2&gt;

&lt;p&gt;First, we'll start with &lt;a href="https://docs.inlets.dev/#/"&gt;Inlets&lt;/a&gt; since it's arguably the most important piece of all of this. We'll use it to tunnel traffic from a publicly reachable cloud instance (I use a $5-a-month Digital Ocean VM) to our internal cluster. Inlets is &lt;a href="https://en.wikipedia.org/wiki/Open-core_model"&gt;open-core&lt;/a&gt; software, so if you need Layer 4 routing or TLS termination, you'll have to pay for the "Pro" version. For hobbyist use cases like my own, you can make do with what the free version provides. Specifically, we'll be using the &lt;a href="https://github.com/inlets/inlets-operator"&gt;inlets-operator&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The inlets-operator is a Kubernetes operator that you'll install to the cluster. It continuously watches for new &lt;code&gt;LoadBalancer&lt;/code&gt; services and uses &lt;a href="https://github.com/inlets/inlets"&gt;inlets&lt;/a&gt; to provision what it calls an "exit node" VM on the cloud of your choice and a public IP to expose the &lt;code&gt;LoadBalancer&lt;/code&gt; service. In this case, it will be exposing the &lt;code&gt;LoadBalancer&lt;/code&gt; service of our Ingress Controller.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;Follow &lt;a href="https://blog.alexellis.io/ingress-for-your-local-kubernetes-cluster/"&gt;this tutorial&lt;/a&gt; for getting started with the inlets-operator.&lt;/p&gt;




&lt;h2&gt;
  
  
  external-dns
&lt;/h2&gt;

&lt;p&gt;This one is definitely a bit more of a "nice to have." Inlets will give you a public IP address that &lt;em&gt;will&lt;/em&gt; work for reaching your apps. Raw IP addresses are pretty unwieldy, however. So instead, if you've got a domain handy, you can configure DNS to map that IP to your domain. For my cluster, I tend to use &lt;code&gt;*.k8s.downey.cloud&lt;/code&gt;. This is typically a manual process, but it doesn't have to be!&lt;/p&gt;

&lt;p&gt;You can use a project called &lt;a href="https://github.com/kubernetes-sigs/external-dns"&gt;external-dns&lt;/a&gt; to configure DNS for your &lt;code&gt;LoadBalancer&lt;/code&gt; services automatically! It's pretty slick. All you've got to do is annotate the service with &lt;code&gt;external-dns.alpha.kubernetes.io/hostname&lt;/code&gt; and your desired hostname. Then the external-dns controller will do the rest and configure your DNS for you! Since I'm using a Digital Ocean VM as my Inlets node, I have external-dns set up to configure Digital Ocean's Cloud DNS.&lt;/p&gt;
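&lt;p&gt;As a sketch, an annotated &lt;code&gt;LoadBalancer&lt;/code&gt; service might look like the following (the service name, labels, and hostname here are placeholders, not values from my cluster):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # external-dns watches for this annotation and creates a DNS record
    # pointing the hostname at the service's external IP
    external-dns.alpha.kubernetes.io/hostname: my-app.k8s.downey.cloud
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```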

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;Installation of external-dns is pretty dependent on what DNS provider you're using. Follow the &lt;a href="https://github.com/kubernetes-sigs/external-dns"&gt;steps in the README&lt;/a&gt; for the cloud you're using. For Digital Ocean &lt;a href="https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/digitalocean.md"&gt;I followed these&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Contour Ingress Controller
&lt;/h2&gt;

&lt;p&gt;Using an &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/"&gt;Ingress Controller&lt;/a&gt; with Inlets isn't strictly necessary. You could create a &lt;code&gt;LoadBalancer&lt;/code&gt; service for every single app you want to expose, and Inlets will happily create VMs to route to those services. Though with that approach, costs will quickly add up.&lt;/p&gt;

&lt;p&gt;With an Ingress Controller we can expose just a single proxy and have that proxy route to all of our apps internally. Any Ingress Controller will work for this purpose, but I like using &lt;a href="https://projectcontour.io/"&gt;Contour&lt;/a&gt; for several reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It's a Cloud Native Computing Foundation (CNCF) project just like Kubernetes&lt;/li&gt;
&lt;li&gt;It uses &lt;a href="https://www.envoyproxy.io/"&gt;Envoy proxy&lt;/a&gt;, another CNCF project which has been battle-tested at scale and is pretty light on resource consumption&lt;/li&gt;
&lt;li&gt;Envoy proxy can hot-reload config changes, so there is no downtime as you add and remove backend apps&lt;/li&gt;
&lt;li&gt;It has excellent support for TLS termination out of the box (this is a moot point with the free version of Inlets, however -- you'd need the TCP routing support of inlets-pro -- but in general this is good)&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;Follow the &lt;a href="https://projectcontour.io/getting-started/"&gt;Getting Started docs&lt;/a&gt; for quickly getting Contour up and running. Contour provides the &lt;code&gt;LoadBalancer&lt;/code&gt; service YAML that Inlets will expose. To get it working well with external-dns, I use the following &lt;a href="https://get-ytt.io/"&gt;&lt;code&gt;ytt&lt;/code&gt;&lt;/a&gt; overlay&lt;sup&gt;1&lt;/sup&gt; to annotate it with my domain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;#@ load("@ytt:overlay", "overlay")&lt;/span&gt;

&lt;span class="c1"&gt;#@overlay/match by=overlay.subset({"kind": "Service", "metadata":{"name": "envoy", "namespace": "projectcontour"}, "spec":{"type":"LoadBalancer"}}),expects=1&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;#@overlay/match missing_ok=True&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;#@overlay/match missing_ok=True&lt;/span&gt;
    &lt;span class="c1"&gt;#@overlay/merge&lt;/span&gt;
    &lt;span class="na"&gt;external-dns.alpha.kubernetes.io/hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*.k8s.downey.cloud"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As an example, you can tweak the installation command slightly to do the &lt;code&gt;ytt&lt;/code&gt; templating inline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;kubectl apply -f &amp;lt;(ytt -f https://projectcontour.io/quickstart/contour.yaml -f /tmp/external-dns-overlay.yaml --ignore-unknown-comments=true)
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;sup&gt;1&lt;/sup&gt; - If you haven't heard of &lt;a href="https://get-ytt.io/"&gt;&lt;code&gt;ytt&lt;/code&gt;&lt;/a&gt; before, I recommend checking it out. It's part of the &lt;a href="https://carvel.dev/"&gt;Carvel&lt;/a&gt; suite of Kubernetes utilities and is my go-to for transforming YAML that I don't control, like this Contour installation.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  KinD
&lt;/h2&gt;

&lt;p&gt;I use KinD, or &lt;a href="https://github.com/kubernetes-sigs/kind"&gt;Kubernetes in Docker&lt;/a&gt;, to run my clusters because I find it super convenient. All it requires is the &lt;code&gt;kind&lt;/code&gt; CLI and having a Docker daemon running. It's very quick to spin up and destroy clusters when they're just running as Docker containers.&lt;/p&gt;

&lt;p&gt;That said, there's plenty of other valid options here, such as &lt;a href="https://microk8s.io/"&gt;microk8s&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/tasks/tools/install-minikube/"&gt;minikube&lt;/a&gt;, or directly on the OS with &lt;a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/"&gt;kubeadm&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;To install KinD follow their &lt;a href="https://kind.sigs.k8s.io/docs/user/quick-start/"&gt;quick start guide&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Demonstration
&lt;/h2&gt;

&lt;p&gt;So, how does this all work together? Let's view it from the perspective of deploying a single app: &lt;a href="https://httpbin.org/"&gt;&lt;code&gt;httpbin&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We can deploy &lt;code&gt;httpbin&lt;/code&gt; to the cluster with the following &lt;code&gt;Deployment&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpbin&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpbin&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpbin&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker.io/kennethreitz/httpbin&lt;/span&gt;
        &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IfNotPresent&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpbin&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next we can make it easily reachable from within the cluster by creating a &lt;code&gt;ClusterIP&lt;/code&gt; service for it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpbin&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpbin&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpbin&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's cool. We can now reach it at &lt;code&gt;httpbin.default.svc.cluster.local&lt;/code&gt; from within the cluster on port 80. This next step isn't necessary, but for fun we can also reach it locally by using the &lt;code&gt;kubectl port-forward&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;tim@tomorrowlan:~/workspace/k8s-yaml$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;kubectl port-forward service/httpbin 8000:80
&lt;span class="gp"&gt;Forwarding from 127.0.0.1:8000 -&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;80
&lt;span class="gp"&gt;Forwarding from [::1]:8000 -&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With that running you can just &lt;code&gt;curl localhost:8000&lt;/code&gt; and hit the &lt;code&gt;httpbin&lt;/code&gt; pods. So far so good!&lt;br&gt;
Go ahead and kill the &lt;code&gt;kubectl port-forward&lt;/code&gt;. It's time to make it reachable for everyone else.&lt;/p&gt;

&lt;p&gt;We can do this by creating an &lt;code&gt;Ingress&lt;/code&gt; resource to let Contour know about our &lt;code&gt;httpbin&lt;/code&gt; service and what traffic should be routed to it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;extensions/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpbin&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpbin.k8s.downey.cloud&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpbin&lt;/span&gt;
          &lt;span class="na"&gt;servicePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it! We can now access the &lt;code&gt;httpbin&lt;/code&gt; app on the internet at &lt;a href="http://httpbin.k8s.downey.cloud"&gt;http://httpbin.k8s.downey.cloud&lt;/a&gt;! 😌&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--do2i-X8n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/k49ivyveb0a6hj69qgw6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--do2i-X8n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/k49ivyveb0a6hj69qgw6.png" alt="The httpbin app running on a public domain" width="880" height="594"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Recap
&lt;/h2&gt;

&lt;p&gt;So what all just happened? Well, let's look back at the diagram from before...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YJwiuT_g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xc62omtetx5xwm18yea1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YJwiuT_g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xc62omtetx5xwm18yea1.png" alt="Architecture diagram of cluster using Inlets, Contour, and external-dns." width="880" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Contour came with a &lt;code&gt;LoadBalancer&lt;/code&gt; service that points to its Envoy proxy, which we annotated with &lt;code&gt;external-dns.alpha.kubernetes.io/hostname: "*.k8s.downey.cloud"&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;The inlets-operator saw we had a &lt;code&gt;LoadBalancer&lt;/code&gt; service with no IP, so it provisioned a cloud VM with a public IP&lt;/li&gt;
&lt;li&gt;The inlets-operator opened a tunnel between an Envoy running on the cloud VM and Contour's Envoy&lt;/li&gt;
&lt;li&gt;Since Contour's &lt;code&gt;LoadBalancer&lt;/code&gt; service has the &lt;code&gt;external-dns.alpha.kubernetes.io/hostname&lt;/code&gt; annotation, external-dns configured DNS to point to the service's public IP&lt;/li&gt;
&lt;li&gt;Contour saw the &lt;code&gt;Ingress&lt;/code&gt; resource we created for &lt;code&gt;httpbin&lt;/code&gt; and configured its Envoy to direct requests going to &lt;code&gt;httpbin.k8s.downey.cloud&lt;/code&gt; to the &lt;code&gt;httpbin&lt;/code&gt; &lt;code&gt;ClusterIP&lt;/code&gt; service&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;httpbin&lt;/code&gt; &lt;code&gt;ClusterIP&lt;/code&gt; service directed the traffic to the &lt;code&gt;httpbin&lt;/code&gt; pods&lt;/li&gt;
&lt;li&gt;This Rube Goldberg machine resulted in our pods being publicly reachable&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;What's awesome is that the steps for the &lt;code&gt;httpbin&lt;/code&gt; app itself are not actually any different than using a &lt;code&gt;Deployment&lt;/code&gt;, &lt;code&gt;Service&lt;/code&gt;, and &lt;code&gt;Ingress&lt;/code&gt; on a managed Kubernetes cluster! There was a lot that happened behind the scenes, but once it was all set up it basically just works.&lt;/p&gt;

&lt;p&gt;We've now had the chance to see one of my favorite aspects of Kubernetes -- how extensible it is! By combining a few building blocks, we're able to quickly replicate the &lt;code&gt;LoadBalancer&lt;/code&gt; service experience of a managed Kubernetes cluster on a personal dev machine. I've got 12 vCPUs and 48 gigs of RAM at my disposal, so I have enough resources to develop non-trivial workloads &lt;em&gt;and&lt;/em&gt; I can still demo the end result online. All for the five bucks a month it costs to run the Inlets exit node.&lt;/p&gt;

&lt;p&gt;I, at least, think that's pretty cool.&lt;/p&gt;

&lt;p&gt;Cheers! 🐢&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to Make Kubectl Exec Run a Command Against Multiple Pods</title>
      <dc:creator>Tim Downey</dc:creator>
      <pubDate>Tue, 21 Jul 2020 18:05:21 +0000</pubDate>
      <link>https://dev.to/downey/how-to-make-kubectl-exec-run-a-command-against-multiple-pods-3c58</link>
      <guid>https://dev.to/downey/how-to-make-kubectl-exec-run-a-command-against-multiple-pods-3c58</guid>
      <description>&lt;p&gt;I was really surprised to discover the other day that &lt;code&gt;kubectl&lt;/code&gt; &lt;a href="https://github.com/kubernetes/kubernetes/issues/8876"&gt;does not support&lt;/a&gt; running the same command against multiple &lt;code&gt;Pods&lt;/code&gt; out of the box.&lt;br&gt;
I get why that wouldn't be supported for interactive terminals, but it seems like non-interactive commands should be fine.&lt;/p&gt;

&lt;p&gt;Oh well. We can still do what we want thanks to UNIX tools like &lt;a href="https://en.wikipedia.org/wiki/Xargs"&gt;xargs&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;kubectl get pods -o name | xargs -I{} kubectl exec {} -- &amp;lt;command goes here&amp;gt;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just replace the &lt;code&gt;&amp;lt;command goes here&amp;gt;&lt;/code&gt; bit with what you want to do.&lt;/p&gt;
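If you want to see what the &lt;code&gt;-I{}&lt;/code&gt; substitution is doing before pointing it at a real cluster, you can feed &lt;code&gt;xargs&lt;/code&gt; some fake pod names (the pod names and command below are purely illustrative):

```shell
# Each input line replaces {} in the templated command; the extra echo just
# prints the command that would run against each pod instead of running it.
printf 'pod/web-1\npod/web-2\n' | xargs -I{} echo kubectl exec {} -- hostname
```

Once the generated commands look right, drop the extra &lt;code&gt;echo&lt;/code&gt; to actually run them.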

&lt;h2&gt;
  
  
  Example: Setting Log Level to Debug for All Istio IngressGateway Envoys
&lt;/h2&gt;

&lt;p&gt;Here's a real world example of when and how you might want to do this. The other day I was troubleshooting our Istio installation on a dev cluster and needed to set the log level of all of our ingress Envoy proxies to &lt;code&gt;debug&lt;/code&gt;. One way to do this is to configure it through a &lt;code&gt;POST&lt;/code&gt; request to each Envoy's admin &lt;code&gt;/logging&lt;/code&gt; endpoint (on Istio this is on port &lt;code&gt;15000&lt;/code&gt; by default). &lt;/p&gt;

&lt;p&gt;We had five Envoys and I was feeling lazy, so I cooked up the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;kubectl -n istio-system get pods -l app=istio-ingressgateway -o name | xargs -I{} kubectl -n istio-system exec {} -- curl -s localhost:15000/logging?level=debug -X POST
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I'm sure there are other UNIX incantations that could do the same, but this got the job done for me and I'm proud of it. 😊👍&lt;/p&gt;

</description>
      <category>todayisearched</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>How to Make Kubectl Jsonpath Output On Separate Lines</title>
      <dc:creator>Tim Downey</dc:creator>
      <pubDate>Mon, 20 Jul 2020 01:41:12 +0000</pubDate>
      <link>https://dev.to/downey/how-to-make-kubectl-jsonpath-output-on-separate-lines-52bm</link>
      <guid>https://dev.to/downey/how-to-make-kubectl-jsonpath-output-on-separate-lines-52bm</guid>
      <description>&lt;p&gt;Getting &lt;code&gt;kubectl&lt;/code&gt; to output &lt;a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/"&gt;jsonpath&lt;/a&gt; results on separate lines is something that I have to Google every time. 😖&lt;/p&gt;

&lt;p&gt;For example, the following command extracts the &lt;code&gt;podIP&lt;/code&gt; for every running &lt;code&gt;Pod&lt;/code&gt; across all namespaces.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;kubectl get pods -A -o jsonpath='{.items[*].status.podIP}'
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It returns something like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;10.244.0.11 10.244.0.8 10.244.0.14 10.244.0.10 10.244.0.6 10.244.0.12 10.244.0.13 10.244.0.15 10.244.0.7 10.244.0.9 10.244.0.3 10.244.0.2 10.244.0.5 172.18.0.2 172.18.0.2 172.18.0.2 172.18.0.2 172.18.0.2 172.18.0.2 10.244.0.4
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's not the friendliest output to work with, that's for sure. 🙅‍♀️&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding New Lines
&lt;/h2&gt;

&lt;p&gt;You can use the jsonpath &lt;code&gt;range&lt;/code&gt; function to iterate over the list and tack a newline onto the end of each element with &lt;code&gt;{"\n"}&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;kubectl get pods -A -o jsonpath='{range .items[*]}{.status.podIP}{"\n"}{end}'
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This outputs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;10.244.0.11
10.244.0.8
10.244.0.14
10.244.0.10
10.244.0.6
10.244.0.12
10.244.0.13
10.244.0.15
10.244.0.7
10.244.0.9
10.244.0.3
10.244.0.2
10.244.0.5
172.18.0.2
172.18.0.2
172.18.0.2
172.18.0.2
172.18.0.2
172.18.0.2
10.244.0.4
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Awesome! Now we can work with the output using all sorts of standard UNIX utilities that operate on newline-delimited input (e.g. &lt;code&gt;sort&lt;/code&gt;, &lt;code&gt;xargs&lt;/code&gt;, &lt;code&gt;uniq&lt;/code&gt;).&lt;/p&gt;
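For instance, piping the list through &lt;code&gt;sort | uniq -c&lt;/code&gt; counts how many pods report each IP (the host-network pods all show the node's IP). It's simulated below with a few of the addresses from above so it runs without a cluster; against a real one, replace the &lt;code&gt;printf&lt;/code&gt; with the &lt;code&gt;kubectl&lt;/code&gt; command:

```shell
# Count occurrences of each IP; the most-shared address sorts to the top.
printf '%s\n' 10.244.0.11 172.18.0.2 172.18.0.2 172.18.0.2 10.244.0.4 \
  | sort | uniq -c | sort -rn
```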

&lt;h2&gt;
  
  
  Bonus
&lt;/h2&gt;

&lt;p&gt;You can use other whitespace characters too. So imagine we wanted to print the &lt;code&gt;Pod&lt;/code&gt; namespaces/names along with their IPs and separate them by a comma.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;-A&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{","}{.status.podIP}{"\n"}{end}'&lt;/span&gt;
&lt;span class="go"&gt;
default/fah-cpu-7c66fc7948-582sr,10.244.0.11
default/fah-cpu-7c66fc7948-c9xb5,10.244.0.8
default/fah-cpu-7c66fc7948-dlm5z,10.244.0.14
default/fah-cpu-7c66fc7948-g25cb,10.244.0.10
default/fah-cpu-7c66fc7948-g2svf,10.244.0.6
default/fah-cpu-7c66fc7948-hxmfn,10.244.0.12
default/fah-cpu-7c66fc7948-jxkp8,10.244.0.13
default/fah-cpu-7c66fc7948-n7rvt,10.244.0.15
default/fah-cpu-7c66fc7948-txvpg,10.244.0.7
default/fah-cpu-7c66fc7948-vzpbz,10.244.0.9
default/mando-57fff9d5f5-rdxrx,10.244.0.3
kube-system/coredns-66bff467f8-r9g25,10.244.0.2
kube-system/coredns-66bff467f8-xfd5k,10.244.0.5
kube-system/etcd-kind-control-plane,172.18.0.2
kube-system/kindnet-g6jvd,172.18.0.2
kube-system/kube-apiserver-kind-control-plane,172.18.0.2
kube-system/kube-controller-manager-kind-control-plane,172.18.0.2
kube-system/kube-proxy-9t7tt,172.18.0.2
kube-system/kube-scheduler-kind-control-plane,172.18.0.2
local-path-storage/local-path-provisioner-bd4bb6b75-zdv22,10.244.0.4
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Outputting in jsonpath can be pretty handy!&lt;/p&gt;

&lt;p&gt;Though I'll still have to look up how to do it every time. 😌&lt;/p&gt;

</description>
      <category>todayisearched</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>The Great Raspberry Pi Cooling Bake-Off</title>
      <dc:creator>Tim Downey</dc:creator>
      <pubDate>Sat, 04 Jul 2020 19:28:19 +0000</pubDate>
      <link>https://dev.to/downey/the-great-raspberry-pi-cooling-bake-off-5cbm</link>
      <guid>https://dev.to/downey/the-great-raspberry-pi-cooling-bake-off-5cbm</guid>
      <description>&lt;p&gt;Two years ago, I found myself alone over the Fourth of July and a bit bored. So, of course, I did the reasonable thing and conducted a performance comparison of a few of the cooling options available for the Raspberry Pi 3B. This comparison resulted in my post &lt;a href="https://downey.io/blog/raspberry-pi-3-heat-sink-comparison/"&gt;"Raspberry Pi Heat Sink Science"&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;2020 and its pandemic have gifted me with even more isolation (&lt;em&gt;ahem social distancing&lt;/em&gt; 😷) this Fourth, so why not make the best of it and create Raspberry Pi Heat Sink Science Episode II: The 4th Awakens! This time with Raspberry Pi 4Bs! Raspberry Pi 4s have way more RAM (&lt;a href="https://www.raspberrypi.org/blog/8gb-raspberry-pi-4-on-sale-now-at-75/"&gt;up to 8GB&lt;/a&gt;) and also run faster than their predecessor -- their quadcore ARM A72 can be overclocked to upward of 2GHz. Unfortunately, they also &lt;a href="https://www.raspberrypi.org/documentation/hardware/raspberrypi/power/README.md"&gt;need more power&lt;/a&gt; and run much hotter.&lt;/p&gt;

&lt;p&gt;I'm super excited about having more RAM, so I'm planning on modernizing my &lt;a href="https://downey.io/blog/how-to-build-raspberry-pi-kubernetes-cluster/"&gt;Raspberry Pi Kubernetes Cluster&lt;/a&gt; for 2020, but want to dig into just how much hotter these things get since that might affect my build design. I also just wanted an excuse to buy a ridiculous cooling tower for a Raspberry Pi. 😅&lt;/p&gt;

&lt;h2&gt;
  
  
  Configurations and Components
&lt;/h2&gt;

&lt;p&gt;I will be testing a Raspberry Pi 4B 4GB under the following configurations. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Full disclosure: these are Amazon affiliate links, so if you buy anything from them I'll receive a small percentage as compensation. It might help me recoup some of the costs of these experiments.&lt;/em&gt; 🙏&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;No Heatsink&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://amzn.to/3gsbpEy"&gt;Heatsink&lt;/a&gt; (with &lt;a href="https://amzn.to/3gsDlrW"&gt;Thermal Paste&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://amzn.to/2ZR9W43"&gt;Heatsink + 30mm Fan&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://amzn.to/3ioFkiy"&gt;Power over Ethernet (PoE) Adapter with 25mm Fan&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://amzn.to/2BtOzO4"&gt;Argon NEO Case&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://amzn.to/3iy4U4C"&gt;GeeekPi Cooler Tower Heatsink + 40mm RGB Fan&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Because having a stable power source matters when overclocking the Raspberry Pi 4, I chose to use an 18 Watt/3.5 Amp &lt;a href="https://amzn.to/3e0yH2o"&gt;power supply by Argon&lt;/a&gt; that had pretty good reviews. For the Power over Ethernet experiment, I used a &lt;a href="https://amzn.to/3f26SIt"&gt;tp-link 4-port PoE switch&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Methodology
&lt;/h2&gt;

&lt;p&gt;I started testing with a stock Raspberry Pi 4 using the latest firmware. At 1.5 GHz it was incredibly stable, even with no heatsink. This was boring. So the first thing I did was &lt;a href="https://magpi.raspberrypi.org/articles/how-to-overclock-raspberry-pi-4#:~:text=%20How%20to%20overclock%20Raspberry%20Pi%204%20,the%20default%20CPU%20speed.%20Open%20a...%20More%20"&gt;overclock my Pi to 2.0 GHz&lt;/a&gt;. This was surprisingly easy and stable with a decent power supply.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benchmarking the Raspberry Pi 4
&lt;/h3&gt;

&lt;p&gt;In my previous post, I used the &lt;a href="https://linux.die.net/man/1/stress"&gt;&lt;code&gt;stress&lt;/code&gt; tool&lt;/a&gt; to create synthetic load on the Raspberry Pi's CPU. This time I wanted to do something slightly more real and opted to use &lt;a href="https://github.com/akopytov/sysbench"&gt;&lt;code&gt;sysbench&lt;/code&gt;&lt;/a&gt; and its prime number verification test to create CPU load.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sysbench --test=cpu --cpu-max-prime=50000 --num-threads=4 run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Measuring Temperature and CPU Clock Frequency
&lt;/h3&gt;

&lt;p&gt;While running this benchmark, I ran &lt;a href="https://github.com/tcdowney/knick-knacks/blob/7cc6c9e47fd918f5d68bb938dec952dd2a323b58/raspberry-pi/raspi-metrics.sh"&gt;this script&lt;/a&gt; in the background to output a CSV containing the temperature and CPU frequency for every second of the experiment.  Feel free to check out the script itself, but the important bits are the &lt;code&gt;vcgencmd measure_temp&lt;/code&gt; command for getting the Raspberry Pi's temperature and the &lt;code&gt;vcgencmd measure_clock arm&lt;/code&gt; command for getting the &lt;strong&gt;current&lt;/strong&gt; clock frequency of the Pi's ARM processor.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;vcgencmd measure_clock arm&lt;/code&gt; command was a new one for me. Last time I just used whatever was in the &lt;code&gt;/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq&lt;/code&gt; file, but discovered that this is more like a "desired frequency." The &lt;code&gt;vcgencmd measure_clock arm&lt;/code&gt; command was giving me the &lt;em&gt;actual&lt;/em&gt; frequency, which is useful for detecting thermal throttling&lt;sup&gt;1&lt;/sup&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ vcgencmd measure_temp
temp=34.0'C

$ vcgencmd measure_clock arm
frequency(48)=2000478464
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
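If you want the bare numbers out of those strings in a script of your own, a couple of small helpers will do it (a sketch based only on the output shapes shown above):

```shell
# Strip the label and unit from vcgencmd's output strings.
parse_temp() { printf '%s\n' "$1" | sed "s/^temp=//; s/'C\$//"; }
parse_freq() { printf '%s\n' "$1" | cut -d= -f2; }

parse_temp "temp=34.0'C"                # → 34.0
parse_freq "frequency(48)=2000478464"   # → 2000478464
```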



&lt;p&gt;To chart the results I used Python and friends: &lt;a href="https://pandas.pydata.org/"&gt;Pandas&lt;/a&gt;, &lt;a href="https://matplotlib.org/"&gt;Matplotlib&lt;/a&gt;, and &lt;a href="https://seaborn.pydata.org/"&gt;Seaborn&lt;/a&gt;. If you're curious, you can check out my (pretty rough) plotting code &lt;a href="https://github.com/tcdowney/knick-knacks/blob/master/raspberry-pi/temp-plots/plot.py"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The benchmarks typically took around three minutes to complete (completion time is indicated by a vertical dashed line on the graphs), and measurements were collected for five minutes to see how quickly the Pi cooled back down. So without further ado, let's look at some results!&lt;/p&gt;




&lt;p&gt;&lt;em&gt;&lt;sup&gt;1&lt;/sup&gt; - The Raspberry Pi 4 will thermal throttle, or &lt;a href="https://www.raspberrypi.org/documentation/hardware/raspberrypi/frequency-management.md"&gt;slow down its CPU&lt;/a&gt;, when its temperature is between 80-85°C, so this clock frequency measurement can tell us just how much!&lt;/em&gt;&lt;/p&gt;
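The firmware also reports throttle state directly as a bitmask via &lt;code&gt;vcgencmd get_throttled&lt;/code&gt;: bit 2 indicates the Pi is throttled right now, and bit 18 that throttling has occurred since boot. A small decoding sketch (the sample value here is hypothetical):

```shell
# Decode the "currently throttled" bit (0x4) from a get_throttled string.
decode_throttled() {
  local val=$(( $(printf '%s' "$1" | cut -d= -f2) ))
  if (( val & 0x4 )); then echo "currently throttled"; else echo "not throttled"; fi
}

decode_throttled "throttled=0x50000"   # throttling happened earlier, but not now
```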

&lt;h2&gt;
  
  
  Experiment 1: No Heatsink
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s5wGiErC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zy3a22w31t2lv97txx20.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s5wGiErC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zy3a22w31t2lv97txx20.jpg" alt="Raspberry Pi 4 with no heatsink temperature test" width="880" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first experiment I ran was with the Pi in its default configuration: no heatsink whatsoever. Just the processor's heat spreader. Unsurprisingly this configuration fared the worst.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rdoZRMoa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/n9v3ps3v078dbp2boysm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rdoZRMoa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/n9v3ps3v078dbp2boysm.png" alt="Raspberry Pi 4 with no heatsink temperature chart" width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;sysbench&lt;/code&gt; CPU benchmark took around 216 seconds to complete -- almost 40 seconds longer than the other configurations -- and the Pi exhibited significant thermal throttling. Notice how the CPU frequency oscillates between 2.0 and 1.3 GHz.&lt;/p&gt;

&lt;p&gt;Temperatures started out fairly high at around 50°C and quickly reached the throttle zone. Without any cooling, the Pi never returned to its base temperature during the five-minute measurement window.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🙉 Noise Levels:&lt;/strong&gt; Silent! It's not cooled at all!&lt;/p&gt;

&lt;h2&gt;
  
  
  Experiment 2: Heatsink (with Thermal Paste)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--waQLNe8w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/57t1ivqhlmuynqwacrjc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--waQLNe8w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/57t1ivqhlmuynqwacrjc.jpg" alt="Raspberry Pi 4 with aluminum heatsink temperature test" width="880" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the second experiment I used a cheap aluminum heatsink and some inexpensive Cooler Master thermal compound to affix it to the CPU.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ozWHmPyk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ybgx9mbd58z0m6h1xpsk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ozWHmPyk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ybgx9mbd58z0m6h1xpsk.png" alt="Raspberry Pi 4 with aluminum heatsink temperature chart" width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The benchmark took around 181 seconds to complete -- significantly faster than the previous experiment.&lt;/p&gt;

&lt;p&gt;Temperatures started out much lower at around 40°C. They still reached the thermal throttle zone, but it took much longer to get there. The throttling didn't begin until 150 seconds into the experiment, whereas with no heatsink throttling started just 60 seconds in. This is a significant improvement and shows that if your Pi is typically used with short, bursty workloads you can likely get by with just a simple heatsink. If your Pi is running sustained CPU-intensive workloads, however, the heatsink alone won't suffice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🙉 Noise Levels:&lt;/strong&gt; Silent! It's passively cooled.&lt;/p&gt;

&lt;h2&gt;
  
  
  Experiment 3: Heatsink + 30mm Fan
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GigE8HHx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tepqkngdb6co5di7y0fs.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GigE8HHx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tepqkngdb6co5di7y0fs.jpg" alt="Raspberry Pi 4 with aluminum heatsink and 30mm fan temperature test" width="880" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the third experiment I kept the same heatsink on and added a cheap 30mm Raspberry Pi fan that I ordered. Unfortunately, the fan came with no mounting hardware so I had to improvise with some rubber bands. It's not pretty, but it got the job done.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7ZnYwqm4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/91e55g474jnm56vszl1b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7ZnYwqm4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/91e55g474jnm56vszl1b.png" alt="Raspberry Pi 4 with aluminum heatsink and 30mm fan temperature chart" width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The benchmark took a little over 178.6 seconds to complete, a few seconds faster than with the heatsink alone and basically the same result as the remaining tests.&lt;/p&gt;

&lt;p&gt;Temperatures were much improved with just this simple fan. We started at 37°C and reached 64°C at our highest point. There was no thermal throttling and the Pi was able to rapidly cool itself back down. If I actually had a case to mount this fan on, I could probably stop here. But I don't, and we've got more experiments to run!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🙉 Noise Levels:&lt;/strong&gt; Medium. The 30mm fan was pretty quiet. There was a slight hum, but nothing too bad.&lt;/p&gt;

&lt;h2&gt;
  
  
  Experiment 4: Power over Ethernet Adapter + 25mm Fan
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FY7dDHJ3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qxjh1qxnvwahcckgxma7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FY7dDHJ3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qxjh1qxnvwahcckgxma7.jpg" alt="Raspberry Pi 4 with PoE Hat temperature test" width="880" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The fourth experiment was probably the most practical one for me. As I mentioned earlier, for my next Kubernetes cluster I'm planning on powering the Pis with Power over Ethernet (PoE) so that I don't have to mess around with a handful of expensive power supplies. The PoE adapter (aka PoE Hat) comes with a built-in 25mm fan. Since my cluster will have multiple Pis in close proximity, it's important that this adapter can keep them cool.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SCzMMOeF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nuuzenbtj60x98wkwfzg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SCzMMOeF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nuuzenbtj60x98wkwfzg.png" alt="Raspberry Pi 4 with PoE Hat temperature chart" width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Like before, the benchmark took a little over 178.6 seconds to complete. It was a few milliseconds faster than the 30mm fan, but since I only ran the benchmark once this isn't enough data to claim it was absolutely faster.&lt;/p&gt;

&lt;p&gt;Temperatures were great -- we maxed out at around 62°C with no thermal throttling. I'm pretty confident now that this will do fine in a clustered configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🙉 Noise Levels:&lt;/strong&gt; Medium-loud. The 25mm fan was fairly loud and pretty whiny, and in a cluster I could see it getting annoying. It's not that bad, but if you're using Raspberry Pis because they're quiet, or running them in a small dorm or apartment, you might want to reconsider the PoE adapter.&lt;/p&gt;

&lt;h2&gt;
  
  
  Experiment 5: Argon NEO Case
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gF6jsVj6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/q8avrygu27m3en5tgm2v.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gF6jsVj6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/q8avrygu27m3en5tgm2v.jpg" alt="Raspberry Pi 4 with Argon NEO Case temperature test" width="880" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the fifth experiment I tested a case that I purchased for its aesthetics. The Argon NEO case is made of aluminum and pretty slim. For $15 it feels really high quality and makes the Pi look like a "real" computer!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--o38FRZDl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/f824c9ht09w08pkr7npr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o38FRZDl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/f824c9ht09w08pkr7npr.png" alt="Raspberry Pi 4 with Argon NEO Case temperature chart" width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Internally, a piece of included thermal tape affixes the Argon NEO's aluminum column to the Pi's CPU, letting heat radiate throughout the entire case and effectively turning it into one massive heatsink! The results speak for themselves. We start at 34°C and make a very gradual climb as the benchmark runs. It tops out at 58°C -- our best performance yet!&lt;/p&gt;

&lt;p&gt;I thought for sure that the small size of the case would inhibit air flow and cause it to get pretty hot, but I was pleasantly surprised with how well it performed&lt;sup&gt;2&lt;/sup&gt;. For my Raspberry Pi 3 I've been using a &lt;a href="https://amzn.to/3e0zOiA"&gt;Flirc Case&lt;/a&gt; that I've been pretty happy with. This one, however, definitely gives it a run for its money.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🙉 Noise Levels:&lt;/strong&gt; Silent! It's passively cooled.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;sup&gt;2&lt;/sup&gt; - I ran a longer experiment later and the Argon NEO was &lt;strong&gt;not&lt;/strong&gt; able to keep a Pi cool when under load for hours at a time. See the update at the bottom of the post for more details.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Experiment 6: Cooling Tower + 40mm RGB Fan
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uMdPY2xX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9xevuzbfhogd8dm2hvff.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uMdPY2xX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9xevuzbfhogd8dm2hvff.jpg" alt="Raspberry Pi 4 with RGB Cooler Tower temperature test" width="880" height="550"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the sixth and final experiment I got to test out my fun new toy: a cooler tower with a 40mm RGB fan that Amazon had recommended to me (they know me too well). It was a little irritating to assemble and mount, but overall not too bad.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--suO8Et-M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6bhjnmqt058s2g4awad4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--suO8Et-M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6bhjnmqt058s2g4awad4.png" alt="Raspberry Pi 4 with RGB Cooler Tower temperature chart" width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can clearly see, this configuration performed the best. We started out at ~32°C and reached a peak of 50°C. The large heatsink tower with its (supposedly) copper heat pipes did an excellent job at drawing heat away from the CPU and the large, flashy RGB fan was able to blow all that heat away. Not only that, but it was able to cool the Pi back down extremely quickly once the benchmark had finished.&lt;/p&gt;

&lt;p&gt;The only downsides are that it's bulky (it roughly triples the height of the Pi) and, at $22, it was the most expensive option.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🙉 Noise Levels:&lt;/strong&gt; Quiet. The quietest of the three fan configurations. The fan can also be powered from the 3.3V pin for even quieter operation (at the cost of higher temperatures). Pretty good!&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;If all you care about is absolute cooling performance, go with the &lt;a href="https://amzn.to/3iy4U4C"&gt;cooler tower&lt;/a&gt;. It looks a bit ridiculous, it's expensive, and it's bulky, but it can sure keep a Pi cool.&lt;/p&gt;

&lt;p&gt;However, if you can spare a few degrees and would prefer a more protective and practical case, I strongly recommend the &lt;a href="https://amzn.to/2BtOzO4"&gt;Argon NEO&lt;/a&gt;. This case holds up well under short, bursty loads (under 10 minutes), but not under sustained CPU load. If you expect to run the Pi hard for long stretches, seek something with active cooling instead.&lt;/p&gt;

&lt;p&gt;If you want to get by on a budget, just buy an aluminum heatsink. They only cost a few cents each when purchased in bulk. If you think you're likely to have sustained CPU intensive workloads, spend a few dollars more and get a &lt;a href="https://amzn.to/2ZGJxpe"&gt;case with a small fan&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you're interested in the &lt;a href="https://amzn.to/3ioFkiy"&gt;PoE adapter&lt;/a&gt; for its Power over Ethernet capabilities, know that it will also do a fine job of cooling your Pi. Just don't buy it for the fan alone! That said, I'm satisfied that the PoE adapters will cool my future cluster well enough.&lt;/p&gt;




&lt;h3&gt;
  
  
  Important Update
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;2020-07-06&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I was impressed with the Argon NEO, so I chose to use it to run some long-running CPU-intensive tasks. I checked in on it later and just about burned myself on it! 🔥🚒&lt;/p&gt;

&lt;p&gt;It was so hot I decided to put it on a silicone coaster to protect the furniture and run some longer tests. Here are the results of it versus the Cooler Tower when run for the better part of a day.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zS72pOBq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/d87bz305z0l2dwzq55cj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zS72pOBq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/d87bz305z0l2dwzq55cj.png" alt="Raspberry Pi 4 with Argon NEO long 8000 second experiment temperature chart" width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After around 10 minutes of continuous CPU load, the Raspberry Pi in the Argon NEO case reached 80°C and eventually hit a max temperature of 86°C. During this time it experienced significant thermal throttling and was unable to cool itself back below 80°C. Once the CPU load was removed, it took the case a while to passively cool the Pi back down to reasonable levels.&lt;/p&gt;
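&lt;p&gt;If you want to check this on your own Pi, Raspberry Pi OS ships with the &lt;code&gt;vcgencmd&lt;/code&gt; utility (your exact readings will of course vary):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;$ vcgencmd measure_temp
temp=48.3'C

$ vcgencmd get_throttled
throttled=0x0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A nonzero &lt;code&gt;get_throttled&lt;/code&gt; value means the firmware has throttled or capped the CPU at some point since boot.&lt;/p&gt;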

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GNwfphCv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/h3watd3ghtnefhnacdl7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GNwfphCv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/h3watd3ghtnefhnacdl7.png" alt="Raspberry Pi 4 with Cooler Tower long 8000 second experiment temperature chart" width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Cooling Tower + 40mm fan, on the other hand, only reached a max of 32°C and was able to rapidly cool itself back to the baseline once the experiment stopped.&lt;/p&gt;

</description>
      <category>raspberrypi</category>
      <category>hardware</category>
      <category>technology</category>
    </item>
    <item>
      <title>How are you keeping in touch with your team while working from home?</title>
      <dc:creator>Tim Downey</dc:creator>
      <pubDate>Thu, 02 Jul 2020 02:04:06 +0000</pubDate>
      <link>https://dev.to/downey/how-are-you-keeping-in-touch-with-your-team-while-working-from-home-186i</link>
      <guid>https://dev.to/downey/how-are-you-keeping-in-touch-with-your-team-while-working-from-home-186i</guid>
      <description>&lt;p&gt;&lt;em&gt;Photo by &lt;a href="https://unsplash.com/@danielkcheung"&gt;Daniel Cheung&lt;/a&gt; on &lt;a href="https://unsplash.com"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Over these past few months, COVID has turned many of us into remote employees for the first time. This is quite a change for those used to working colocated with their teams from a busy office.&lt;/p&gt;

&lt;p&gt;No more group lunches with coworkers, ping pong breaks, or serendipitous chit-chat. Without these things it can be tougher to bond or build camaraderie.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;What are some techniques or activities that you or your team has started doing to make up for this?&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;If you were already working as part of a distributed team before all of this, what has been working for you?&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>watercooler</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Creating a Simple Kubernetes Debug Pod that Runs Ubuntu</title>
      <dc:creator>Tim Downey</dc:creator>
      <pubDate>Sat, 27 Jun 2020 02:15:00 +0000</pubDate>
      <link>https://dev.to/downey/creating-a-simple-kubernetes-debug-pod-that-runs-ubuntu-49jb</link>
      <guid>https://dev.to/downey/creating-a-simple-kubernetes-debug-pod-that-runs-ubuntu-49jb</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally posted as a &lt;a href="https://downey.io/notes/dev/ubuntu-sleep-pod-yaml/"&gt;note on my blog&lt;/a&gt;. Sharing this technique here in case someone may find it helpful.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Sometimes it can be helpful to deploy a simple Ubuntu container to a cluster when debugging. Say you just applied some new &lt;code&gt;NetworkPolicy&lt;/code&gt; and want to test network connectivity between namespaces. Or maybe you added a new mutating admission webhook to inject sidecar containers and you need something to test it out with. Or maybe you just want a sandbox container to deploy and play around in.&lt;/p&gt;

&lt;p&gt;One thing I like to do is deploy a &lt;code&gt;Pod&lt;/code&gt; running Ubuntu that will let me install whatever tools I want. No need to worry about thin, &lt;a href="https://github.com/GoogleContainerTools/distroless"&gt;distroless&lt;/a&gt; images that are so secure I can't do anything! With the Ubuntu image everything is just an &lt;code&gt;apt install&lt;/code&gt; away. 😌&lt;/p&gt;

&lt;p&gt;However, it's not as simple as running the &lt;code&gt;ubuntu&lt;/code&gt; image on its own. You need to make it actually &lt;em&gt;do something&lt;/em&gt; or the container will just exit immediately. Fortunately this is easy enough... just make the container &lt;code&gt;sleep&lt;/code&gt; for a long time!&lt;/p&gt;

&lt;p&gt;I do this fairly often and hate having to write the YAML from scratch every time. So this post will serve as a &lt;a href="https://downey.io/blog/leaving-breadcrumbs/"&gt;breadcrumb&lt;/a&gt; for my future self to find and copy and paste from in the future. 🤞&lt;/p&gt;

&lt;h2&gt;
  
  
  The YAML
&lt;/h2&gt;

&lt;p&gt;The following YAML will deploy a &lt;code&gt;Pod&lt;/code&gt; with a container running the &lt;a href="https://hub.docker.com/_/ubuntu/"&gt;&lt;code&gt;ubuntu&lt;/code&gt; Docker image&lt;/a&gt; that sleeps for a week. Plenty of time to do what you need!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sleep"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;604800"&lt;/span&gt;
    &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IfNotPresent&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu&lt;/span&gt;
  &lt;span class="na"&gt;restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Always&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Applying the YAML
&lt;/h2&gt;

&lt;p&gt;You can apply this by piping the YAML to &lt;code&gt;kubectl&lt;/code&gt; via stdin:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
  labels:
    app: ubuntu
spec:
  containers:
  - image: ubuntu
    command:
      - "sleep"
      - "604800"
    imagePullPolicy: IfNotPresent
    name: ubuntu
  restartPolicy: Always
EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or you can apply the raw contents of this &lt;a href="https://gist.github.com/tcdowney/b8a0297241b74f94ef1fc6627f7ea69a"&gt;Gist&lt;/a&gt; directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;kubectl apply -f https://gist.githubusercontent.com/tcdowney/b8a0297241b74f94ef1fc6627f7ea69a/raw/eaae035f5adca37ca00d4a49f1c1958fe3db89e3/ubuntu-sleep.yaml
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Using The Pod
&lt;/h2&gt;

&lt;p&gt;Start up an interactive shell in the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; ubuntu &lt;span class="nt"&gt;--&lt;/span&gt; /bin/bash
&lt;span class="go"&gt;
&lt;/span&gt;&lt;span class="gp"&gt;root@ubuntu:/#&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can install whatever you want! For example, I often install &lt;code&gt;curl&lt;/code&gt; via the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt &lt;span class="nb"&gt;install &lt;/span&gt;curl &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
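&lt;p&gt;The Pod won't exit on its own until the &lt;code&gt;sleep&lt;/code&gt; runs out, so when you're finished debugging, remember to clean it up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;$ kubectl delete pod ubuntu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;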



&lt;h2&gt;
  
  
  What About Ephemeral Debug Containers?
&lt;/h2&gt;

&lt;p&gt;If you've been following along with the latest Kubernetes releases, you may be aware of a new alpha feature in Kubernetes 1.18 known as &lt;a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container"&gt;ephemeral debug containers&lt;/a&gt;. This feature lets you take a &lt;strong&gt;running Pod&lt;/strong&gt; and attach an arbitrary "debug" container that has all of the tools you might need to debug it. This is really powerful for several reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If a Pod is misbehaving you can attach the container to it and see what's going on directly.&lt;/li&gt;
&lt;li&gt;You can continue to follow best practices and publish small container images. No need to include debug utilities "just in case."&lt;/li&gt;
&lt;li&gt;No need to look up this page to copy paste some YAML for a hacky Ubuntu sleep pod!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I'm really looking forward to them. However, Kubernetes 1.18 is still pretty bleeding edge (at least at the time of writing this post) and the feature is still in alpha. There are also some use cases for the Ubuntu pod that it doesn't cover, so this method still has some life in it yet!&lt;/p&gt;
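&lt;p&gt;For reference, attaching a debug container to a running Pod in 1.18 looks roughly like this -- though since the feature is alpha, expect the syntax to change:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;$ kubectl alpha debug -it my-pod --image=ubuntu --target=my-container
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here &lt;code&gt;my-pod&lt;/code&gt; and &lt;code&gt;my-container&lt;/code&gt; are placeholders for the Pod you're debugging and the container whose process namespace you want to target.&lt;/p&gt;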

</description>
      <category>kubernetes</category>
      <category>notes</category>
      <category>todayisearched</category>
    </item>
    <item>
      <title>Save Money and Skip the Kubernetes Load Balancer</title>
      <dc:creator>Tim Downey</dc:creator>
      <pubDate>Sun, 14 Jun 2020 19:29:13 +0000</pubDate>
      <link>https://dev.to/downey/save-money-and-skip-the-kubernetes-load-balancer-5d8l</link>
      <guid>https://dev.to/downey/save-money-and-skip-the-kubernetes-load-balancer-5d8l</guid>
      <description>&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/services-networking/#loadbalancer"&gt;&lt;code&gt;LoadBalancer&lt;/code&gt;&lt;/a&gt; Services are super convenient. Just change the type of a &lt;code&gt;NodePort&lt;/code&gt; service to &lt;code&gt;LoadBalancer&lt;/code&gt; and your cloud provider will provision a new cloud load balancer, external IP address, and firewall rules to make your workload reachable to the world. It's a fantastic user experience! It's no wonder that example installations and tutorials love to include them.&lt;/p&gt;

&lt;p&gt;Load balancers, however, come with a cost. On Digital Ocean, for example, each load balancer will run you ten dollars a month -- and for small development clusters this cost can quickly dwarf the cost of the cluster itself.&lt;/p&gt;

&lt;p&gt;In this post I'll show how you can save money by skipping the load balancer for your development clusters. We'll see how with a few minor tweaks we can route traffic with an Ingress Controller and plain old fashioned DNS.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Plan
&lt;/h2&gt;

&lt;p&gt;In place of our cloud load balancer we are going to configure DNS for our domain to round-robin between each of our cluster's worker nodes. This will result in requests to our domain going directly to a worker node. We will install an Ingress Controller on our cluster and have it listen on ports &lt;code&gt;80&lt;/code&gt; and &lt;code&gt;443&lt;/code&gt; to handle incoming HTTP(S) traffic and forward it to workloads running on our cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kU2QlBKz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6fll3j2xvpw64hxf639d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kU2QlBKz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6fll3j2xvpw64hxf639d.png" alt="Diagram of ingress using DNS instead of a load balancer" width="880" height="562"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, in short, we'll do the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install an &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/"&gt;Ingress Controller&lt;/a&gt; as a DaemonSet&lt;/li&gt;
&lt;li&gt;Use a &lt;code&gt;NodePort&lt;/code&gt; Service instead of a &lt;code&gt;LoadBalancer&lt;/code&gt; service&lt;/li&gt;
&lt;li&gt;Configure the ingress Pods to listen on &lt;code&gt;hostPort&lt;/code&gt; &lt;code&gt;80&lt;/code&gt; and &lt;code&gt;443&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Configure DNS for our domain to have &lt;code&gt;A&lt;/code&gt; records pointing to each of our node external IP addresses&lt;/li&gt;
&lt;li&gt;Double check any firewall rules that apply to the worker VMs and ensure ingress traffic is allowed for ports &lt;code&gt;80&lt;/code&gt; and &lt;code&gt;443&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Create an &lt;code&gt;Ingress&lt;/code&gt; resource to expose our app&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Installing the Ingress Controller
&lt;/h2&gt;

&lt;p&gt;I have yet to find an Ingress Controller that this doesn't work with. I've done this with Istio, Contour, and NGINX Ingress and they've all been fine. So choose an Ingress Controller and read its documentation on how to install it. To accomplish the steps outlined above, we'll need to tweak the installation YAML somewhat.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://projectcontour.io/"&gt;Contour&lt;/a&gt; can be installed with minimal changes. By default it installs itself as a DaemonSet and already configures the &lt;code&gt;hostPorts&lt;/code&gt; to do what we want. So if you're using Contour, feel free to skip ahead to the "Converting a LoadBalancer Service into a NodePort Service" section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.github.io/ingress-nginx/"&gt;NGINX Ingress Controller&lt;/a&gt; and &lt;a href="https://istio.io/"&gt;Istio&lt;/a&gt; require some extra work. We'll need to convert their ingress proxy Deployments into DaemonSets and configure the &lt;code&gt;hostPort&lt;/code&gt; properties for ports &lt;code&gt;80&lt;/code&gt; and &lt;code&gt;443&lt;/code&gt;. The steps below apply directly to NGINX Ingress, but can be adapted for Istio.&lt;/p&gt;

&lt;h3&gt;
  
  
  Converting a Deployment into a DaemonSet
&lt;/h3&gt;

&lt;p&gt;First we'll convert the &lt;code&gt;ingress-nginx-controller&lt;/code&gt; Deployment into a DaemonSet. Why use a DaemonSet?&lt;br&gt;
Well, for dev environments with a small number of nodes I prefer using a DaemonSet over a Deployment for managing the ingress Pods. This ensures that there will be an instance of our ingress proxy running on all nodes in the cluster and greatly simplifies configuring the DNS. We can just create an &lt;code&gt;A&lt;/code&gt; record for every node! Otherwise we'd have to get fancier with placement and keep track of which nodes are configured to be ingress nodes. It's the &lt;a href="https://devops.stackexchange.com/questions/653/what-is-the-definition-of-cattle-not-pets"&gt;"cattle not pets"&lt;/a&gt; philosophy.&lt;/p&gt;

&lt;p&gt;Of course, if you have a large number of nodes it's probably a waste of resources to have them all running ingress proxies -- but if you're operating at that scale you'll probably be using a real load balancer anyway.&lt;/p&gt;

&lt;p&gt;So to do this programmatically, we can use a tool called &lt;a href="https://get-ytt.io/"&gt;&lt;code&gt;ytt&lt;/code&gt;&lt;/a&gt; to apply the overlay below on top of the YAML that the NGINX Ingress Controller provides by default, transforming a Deployment named &lt;code&gt;ingress-nginx-controller&lt;/code&gt; into a valid DaemonSet.&lt;/p&gt;
&lt;h3&gt;
  
  
  Configuring Host Ports
&lt;/h3&gt;

&lt;p&gt;We can also use this same overlay to update the &lt;code&gt;containerPorts&lt;/code&gt; to include &lt;code&gt;hostPort&lt;/code&gt;s so that the container can listen on port &lt;code&gt;80&lt;/code&gt; and &lt;code&gt;443&lt;/code&gt;. This is necessary because we can't do any fancy port-forwarding with plain DNS, so the ingress proxy needs to be able to listen on the standard HTTP and HTTPS ports.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;#@ load("@ytt:overlay", "overlay")&lt;/span&gt;

&lt;span class="c1"&gt;#@overlay/match by=overlay.subset({"kind": "Deployment", "metadata":{"name":"ingress-nginx-controller"}}),expects=1&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="c1"&gt;#@overlay/replace&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DaemonSet&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;#@overlay/match missing_ok=True&lt;/span&gt;
  &lt;span class="na"&gt;updateStrategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;rollingUpdate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;maxUnavailable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;#@overlay/match by=overlay.subset({"name":"controller"}),expects=1&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;#@overlay/match by=overlay.subset({"containerPort":80}),expects=1&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
          &lt;span class="c1"&gt;#@overlay/match missing_ok=True&lt;/span&gt;
          &lt;span class="na"&gt;hostPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
        &lt;span class="c1"&gt;#@overlay/match by=overlay.subset({"containerPort":443}),expects=1&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;443&lt;/span&gt;
          &lt;span class="c1"&gt;#@overlay/match missing_ok=True&lt;/span&gt;
          &lt;span class="na"&gt;hostPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;443&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So if we save this overlay to a file -- let's call it &lt;code&gt;/tmp/nginx-deployment-to-daemonset.yaml&lt;/code&gt; -- we can apply it like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;ytt -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml \
-f /tmp/nginx-deployment-to-daemonset.yaml \
--ignore-unknown-comments=true
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As mentioned earlier, though, if you're using Contour this has already been done for you, so you can skip this part.&lt;/p&gt;

&lt;h3&gt;
  
  
  Converting a LoadBalancer Service into a NodePort Service
&lt;/h3&gt;

&lt;p&gt;This next step you'll likely need to do regardless of which Ingress Controller you chose. Any pesky &lt;code&gt;LoadBalancer&lt;/code&gt; services will need to be transformed into &lt;code&gt;NodePort&lt;/code&gt; services. Everything we've done so far will still work with a &lt;code&gt;LoadBalancer&lt;/code&gt; Service, but since the whole point is not to waste money on one, we've got to convert it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You might be wondering why we needed to configure &lt;code&gt;hostPort&lt;/code&gt;s earlier if a &lt;code&gt;NodePort&lt;/code&gt; Service can fulfill a similar role. That's an excellent question. The main reason is that by default, most clusters only allow the &lt;code&gt;NodePort&lt;/code&gt; to be selected from a small range of high-numbered ports... typically &lt;code&gt;30000-32767&lt;/code&gt;. A cluster admin would have to explicitly allow low port numbers like &lt;code&gt;80&lt;/code&gt; and &lt;code&gt;443&lt;/code&gt;, which is uncommon and probably not a very secure choice. We're converting the &lt;code&gt;LoadBalancer&lt;/code&gt; Service into a &lt;code&gt;NodePort&lt;/code&gt; mostly for any cluster-internal networking that the Ingress Controller may be using it for. All external traffic will bypass the Service and go directly to the nodes.&lt;/p&gt;
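&lt;p&gt;For reference, that allowed range is controlled by the &lt;code&gt;--service-node-port-range&lt;/code&gt; flag on the kube-apiserver, which defaults to the high range mentioned above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;kube-apiserver --service-node-port-range=30000-32767
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;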

&lt;p&gt;Anyway, the following overlay will do this for you -- let's save it in &lt;code&gt;/tmp/convert-lb-to-nodeport.yaml&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;#@ load("@ytt:overlay", "overlay")&lt;/span&gt;

&lt;span class="c1"&gt;#@overlay/match by=overlay.subset({"kind": "Service", "spec":{"type":"LoadBalancer"}}),expects=1&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;#@overlay/replace&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It can be applied the same way as before using &lt;code&gt;ytt&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; I go into more detail on &lt;a href="https://downey.io/notes/dev/convert-loadbalancer-service-to-nodeport-ytt/"&gt;how to convert a LoadBalancer Service into a NodePort Service with ytt here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Validating our Changes
&lt;/h3&gt;

&lt;p&gt;If you'd like to validate that you transformed the &lt;code&gt;Deployment&lt;/code&gt; correctly, you can use &lt;a href="https://github.com/instrumenta/kubeval"&gt;&lt;code&gt;kubeval&lt;/code&gt;&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;ytt -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml \
-f /tmp/nginx-deployment-to-daemonset.yaml \
-f /tmp/convert-lb-to-nodeport.yaml \
--ignore-unknown-comments=true | kubeval - --strict
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Installing to the Cluster
&lt;/h3&gt;

&lt;p&gt;Now let's apply both overlays together and install to the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;ytt -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml \
-f /tmp/nginx-deployment-to-daemonset.yaml \
-f /tmp/convert-lb-to-nodeport.yaml \
--ignore-unknown-comments=true | kubectl apply -f -
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configuring DNS
&lt;/h2&gt;

&lt;p&gt;You'll want to configure a &lt;a href="https://en.wikipedia.org/wiki/Wildcard_DNS_record"&gt;wildcard&lt;/a&gt; &lt;code&gt;A&lt;/code&gt; record for the domain you want to be using. How you actually do this depends a lot on your DNS provider, but it's a pretty standard thing so hopefully it's not too rough.&lt;/p&gt;

&lt;p&gt;For me, I'm using Digital Ocean to manage DNS for my domain, &lt;code&gt;downey.cloud&lt;/code&gt;. I have two nodes in my cluster, each with an external IP (&lt;code&gt;157.245.191.103&lt;/code&gt; and &lt;code&gt;138.68.53.184&lt;/code&gt;). I want to be able to host workloads under the &lt;code&gt;*.k8s.downey.cloud&lt;/code&gt; wildcard subdomain so I will need to configure two &lt;code&gt;A&lt;/code&gt; records: one for each node.&lt;/p&gt;
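&lt;p&gt;In plain zone-file syntax, those two records would look something like this (using the same 300 second TTL that shows up in the &lt;code&gt;dig&lt;/code&gt; output below):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;*.k8s.downey.cloud. 300 IN A 157.245.191.103
*.k8s.downey.cloud. 300 IN A 138.68.53.184
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;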

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DXqMpC0Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/31kepiemgh1j488835kj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DXqMpC0Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/31kepiemgh1j488835kj.png" alt="Digital Ocean DNS configuration for round robin A records" width="880" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you've done that, you can verify it all works by using a tool like &lt;code&gt;dig&lt;/code&gt; or &lt;code&gt;nslookup&lt;/code&gt; after waiting for the DNS changes to propagate. You should see something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;dig example.k8s.downey.cloud
&lt;span class="go"&gt;
&lt;/span&gt;&lt;span class="gp"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&amp;lt;&amp;lt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; DiG 9.10.6 &amp;lt;&amp;lt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; example.k8s.downey.cloud
&lt;span class="gp"&gt;;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; global options: +cmd
&lt;span class="gp"&gt;;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; Got answer:
&lt;span class="gp"&gt;;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; -&amp;gt;&amp;gt;HEADER&lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt; &lt;span class="no"&gt;opcode&lt;/span&gt;&lt;span class="sh"&gt;: QUERY, status: NOERROR, id: 40148
&lt;/span&gt;&lt;span class="gp"&gt;;&lt;/span&gt;&lt;span class="sh"&gt;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
&lt;/span&gt;&lt;span class="go"&gt;
&lt;/span&gt;&lt;span class="gp"&gt;;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; OPT PSEUDOSECTION:
&lt;span class="gp"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;EDNS: version: 0, flags:&lt;span class="p"&gt;;&lt;/span&gt; udp: 512
&lt;span class="gp"&gt;;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; QUESTION SECTION:
&lt;span class="gp"&gt;;&lt;/span&gt;example.k8s.downey.cloud.  IN  A
&lt;span class="go"&gt;
&lt;/span&gt;&lt;span class="gp"&gt;;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; ANSWER SECTION:
&lt;span class="go"&gt;example.k8s.downey.cloud. 300   IN  A   157.245.191.103
example.k8s.downey.cloud. 300   IN  A   138.68.53.184

&lt;/span&gt;&lt;span class="gp"&gt;;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; Query &lt;span class="nb"&gt;time&lt;/span&gt;: 109 msec
&lt;span class="gp"&gt;;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; SERVER: 192.168.1.1#53&lt;span class="o"&gt;(&lt;/span&gt;192.168.1.1&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="gp"&gt;;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; WHEN: Sun Jun 14 10:22:19 PDT 2020
&lt;span class="gp"&gt;;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; MSG SIZE  rcvd: 85
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configuring Firewall Rules
&lt;/h2&gt;

&lt;p&gt;Like DNS, how to do this step is also dependent on your cloud provider. Wherever you configure firewall rules that apply to the worker node VMs in your cluster, you will want to allow incoming TCP traffic for ports &lt;code&gt;80&lt;/code&gt; and &lt;code&gt;443&lt;/code&gt;. In Digital Ocean, for example, this is done on the "Firewall Rules" page under the "Networking" tab.&lt;/p&gt;
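&lt;p&gt;If your worker nodes also happen to run a host-level firewall such as &lt;code&gt;ufw&lt;/code&gt; (not the case on a stock Digital Ocean Kubernetes node, but possible on self-managed VMs), the equivalent rules would be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;$ sudo ufw allow 80/tcp
$ sudo ufw allow 443/tcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;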

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8wZoPRkA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gxwlcggltr6i1m7mrf0o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8wZoPRkA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gxwlcggltr6i1m7mrf0o.png" alt="Digital Ocean firewall configuration" width="880" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring Ingress
&lt;/h2&gt;

&lt;p&gt;Now that we have our Ingress Controller installed and DNS configured, all the pieces are in place and it's time to deploy an app. For example's sake, I'll be deploying a simple Go app called &lt;a href="https://github.com/tcdowney/mando"&gt;mando&lt;/a&gt; (as in Mandalorian).&lt;/p&gt;

&lt;p&gt;It has some example Kubernetes deployment configuration prebaked for us:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/tcdowney/mando/blob/master/deploy/example-deployment.yaml"&gt;&lt;code&gt;deploy/example-deployment.yaml&lt;/code&gt; (includes the Deployment and a ClusterIP Service)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/tcdowney/mando/blob/master/deploy/example-ingress.yaml"&gt;&lt;code&gt;deploy/example-ingress.yaml&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
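
&lt;p&gt;Concretely, from the root of the repo that's just:&lt;/p&gt;

```shell
# Apply the prebaked manifests (assumes kubectl is pointed at your cluster)
kubectl apply -f deploy/example-deployment.yaml
kubectl apply -f deploy/example-ingress.yaml
```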

&lt;p&gt;We can install it with a &lt;code&gt;kubectl apply&lt;/code&gt; for each of those files. It's all pretty run-of-the-mill, but let's take a closer look at that Ingress resource.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mando&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mando.k8s.downey.cloud&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mando&lt;/span&gt;
          &lt;span class="na"&gt;servicePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configures the Ingress Controller to direct traffic for &lt;code&gt;mando.k8s.downey.cloud&lt;/code&gt; to the Service named &lt;code&gt;mando&lt;/code&gt; on port &lt;code&gt;8080&lt;/code&gt;. So our DNS routes everything under &lt;code&gt;*.k8s.downey.cloud&lt;/code&gt; to our ingress proxies, and this bit of config sends it on to the correct backend application.&lt;/p&gt;

&lt;p&gt;With this in place, we can now reach our application! Let's try &lt;code&gt;curl&lt;/code&gt;ing it real quick.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;curl mando.k8s.downey.cloud/this &lt;span class="nt"&gt;-v&lt;/span&gt;
&lt;span class="go"&gt;
*   Trying 157.245.191.103...
* TCP_NODELAY set
&lt;/span&gt;&lt;span class="gp"&gt;* Connected to mando.k8s.downey.cloud (157.245.191.103) port 80 (#&lt;/span&gt;0&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="gp"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;GET /this HTTP/1.1
&lt;span class="gp"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;Host: mando.k8s.downey.cloud
&lt;span class="gp"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;User-Agent: curl/7.54.0
&lt;span class="gp"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;Accept: &lt;span class="k"&gt;*&lt;/span&gt;/&lt;span class="k"&gt;*&lt;/span&gt;
&lt;span class="gp"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="go"&gt;&amp;lt; HTTP/1.1 200 OK
&amp;lt; date: Sun, 14 Jun 2020 17:34:44 GMT
&amp;lt; content-length: 18
&lt;/span&gt;&lt;span class="gp"&gt;&amp;lt; content-type: text/plain;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;charset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;utf-8
&lt;span class="go"&gt;&amp;lt; x-envoy-upstream-service-time: 1
&amp;lt; server: envoy
&amp;lt;
this
is the way
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yep, the request went directly to one of our node IPs (&lt;code&gt;157.245.191.103&lt;/code&gt;) on port &lt;code&gt;80&lt;/code&gt;, and we can see it was proxied through our Ingress Controller's Envoy (Contour, in this case)!&lt;/p&gt;

&lt;p&gt;We can create additional Ingress resources for other apps we have deployed, or do fancy things like weighted routing, TLS termination, or path-based routing. That stuff is all dependent on what your Ingress Controller supports, though, and out of scope for this post. 🙃&lt;/p&gt;

&lt;h2&gt;
  
  
  That's It!
&lt;/h2&gt;

&lt;p&gt;What an achievement! We can now save tens of dollars a month! 🤑&lt;/p&gt;

&lt;p&gt;That's not much if your employer is paying for it, but for the small personal clusters that I like to experiment with it basically halves the price I pay. Again, this probably isn't something you'll want to do for running production workloads, but it's definitely a handy technique. Cheers!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>networking</category>
    </item>
    <item>
      <title>How to use kbld to Rapidly Iterate on Kubernetes Deployed Apps</title>
      <dc:creator>Tim Downey</dc:creator>
      <pubDate>Thu, 28 May 2020 01:27:55 +0000</pubDate>
      <link>https://dev.to/downey/using-kbld-to-rapidly-iterate-on-kubernetes-deployed-apps-48eh</link>
      <guid>https://dev.to/downey/using-kbld-to-rapidly-iterate-on-kubernetes-deployed-apps-48eh</guid>
      <description>&lt;p&gt;&lt;em&gt;Obligatory shipping container cover image by &lt;a href="https://unsplash.com/@bboba"&gt;Boba Jovanovic&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/container?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is kbld?
&lt;/h2&gt;

&lt;p&gt;When creating applications that extend or interact with Kubernetes, there are times when it's necessary to deploy and develop against a real K8s cluster. While Kubernetes makes it trivial to apply and roll out new changes, building and pushing new dev Docker images for your application can be a rigamarole. On top of that, you also have to remember to configure the &lt;a href="https://kubernetes.io/docs/concepts/containers/images/#updating-images"&gt;&lt;code&gt;imagePullPolicy&lt;/code&gt;&lt;/a&gt; for your containers to &lt;code&gt;Always&lt;/code&gt;. Otherwise, who knows if the &lt;code&gt;node&lt;/code&gt; your app ends up running on has the old image cached!&lt;/p&gt;

&lt;p&gt;Fortunately, there is a tool that can help solve all of these problems. The &lt;a href="https://get-kbld.io/"&gt;&lt;code&gt;kbld&lt;/code&gt;&lt;/a&gt; CLI (pronounced "k build") assists with all things around image building and pushing for Kubernetes. It's under active development, so refer to &lt;a href="https://get-kbld.io/"&gt;the kbld site&lt;/a&gt; for the most up-to-date feature set, but I'm a fan of its ability to do the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Know where the source code for my images lives&lt;/li&gt;
&lt;li&gt;Build an OCI image (using Docker or &lt;a href="https://buildpacks.io/"&gt;Cloud Native Buildpacks&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Tag the image and push it to the registry of my choice (local registry for KIND, DockerHub, GCR, etc.)&lt;/li&gt;
&lt;li&gt;Find references to the image in Kubernetes deployment YAMLs and replace vague references with image digests for deterministic deployments&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I'm less of a fan, however, of its terse documentation. So in this post, I'm going to show how I use &lt;code&gt;kbld&lt;/code&gt; to build Docker images for my projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding kbld Configuration
&lt;/h2&gt;

&lt;p&gt;Like most things in the Kubernetes ecosystem, the &lt;code&gt;kbld&lt;/code&gt; CLI is configured by YAML files. There are several options here, but the two main YAML objects I use are &lt;code&gt;Sources&lt;/code&gt; and &lt;code&gt;ImageDestinations&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sources
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kbld.k14s.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Sources&lt;/span&gt;
&lt;span class="na"&gt;sources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;image-repository/image-name&lt;/span&gt;
  &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/path/to/source/code&lt;/span&gt;
  &lt;span class="na"&gt;pack&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;builder&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;heroku/buildpacks:18&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;A &lt;code&gt;Sources&lt;/code&gt; object declares the images that kbld should be responsible for building. It includes information about the &lt;code&gt;path&lt;/code&gt; for the source code of an image as well as configuration for the image builder (&lt;code&gt;docker&lt;/code&gt; or &lt;code&gt;pack&lt;/code&gt;).&lt;/p&gt;
&lt;h3&gt;
  
  
  ImageDestinations
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kbld.k14s.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ImageDestinations&lt;/span&gt;
&lt;span class="na"&gt;destinations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;image-repository/image-name&lt;/span&gt;
  &lt;span class="na"&gt;newImage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker.io/image-repository/image-name&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;ImageDestinations&lt;/code&gt; tell kbld how it should tag and push the images that it has built. It's a pretty simple resource, and I was surprised at first that there was nothing about authentication here for private registries. That config, however, comes in either through your Docker config or through environment variables. See &lt;a href="https://github.com/k14s/kbld/blob/master/docs/auth.md"&gt;these kbld authentication docs&lt;/a&gt; for more information on that.&lt;/p&gt;
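
&lt;p&gt;For the environment variable route, the setup looks roughly like this. The variable names below come from kbld's auth docs at the time of writing (double-check them against your kbld version), and the values are placeholders:&lt;/p&gt;

```shell
# Placeholder credentials -- substitute your own registry details
export KBLD_REGISTRY_HOSTNAME=https://index.docker.io/v1/
export KBLD_REGISTRY_USERNAME=downey
export KBLD_REGISTRY_PASSWORD='not-a-real-password'
```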
&lt;h3&gt;
  
  
  Are these Kubernetes Resources?
&lt;/h3&gt;

&lt;p&gt;An astute developer might recognize that these kbld resources look suspiciously similar to Kubernetes resource objects and wonder if there are any &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/"&gt;CRDs&lt;/a&gt; involved here. That's not the case, though. The similarities are purely superficial, and these resources are used client-side directly by the &lt;code&gt;kbld&lt;/code&gt; CLI.&lt;/p&gt;

&lt;p&gt;As always, refer to &lt;a href="https://github.com/k14s/kbld/blob/master/docs/config.md"&gt;the kbld config documentation&lt;/a&gt; for the latest on what is possible.&lt;/p&gt;
&lt;h2&gt;
  
  
  How to Use kbld
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--566lAguM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/tcdowney"&gt;
        tcdowney
      &lt;/a&gt; / &lt;a href="https://github.com/tcdowney/mando"&gt;
        mando
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      this is the way
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;p&gt;&lt;a href="https://hub.docker.com/repository/docker/downey/mando" rel="nofollow"&gt;&lt;img src="https://camo.githubusercontent.com/9ece5d8f7ef3a34fe934b57f063088e8f529847e77ea27539158fa315c6f77e9/68747470733a2f2f696d672e736869656c64732e696f2f646f636b65722f696d6167652d73697a652f646f776e65792f6d616e646f" alt="Docker Image Size (latest by date)"&gt;&lt;/a&gt; &lt;a href="https://hub.docker.com/repository/docker/downey/mando" rel="nofollow"&gt;&lt;img src="https://camo.githubusercontent.com/71d425686fa3e70b67379faeee2d4e0ff71dc1a12589d7088d750eb72befaef5/68747470733a2f2f696d672e736869656c64732e696f2f646f636b65722f762f646f776e65792f6d616e646f" alt="Docker Image Version (latest by date)"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
mando&lt;/h1&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/uelA7KRLINA?t=8" rel="nofollow"&gt;this is the way&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;just a little app for testing path-based routing&lt;/p&gt;
&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;&lt;pre class="notranslate"&gt;&lt;code&gt;docker run -p 8080:8080 downey/mando
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;h2&gt;
build and deploy&lt;/h2&gt;
&lt;p&gt;can be built and deployed to Kubernetes with &lt;a href="https://get-kbld.io/" rel="nofollow"&gt;&lt;code&gt;kbld&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="highlight highlight-text-shell-session notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-c1"&gt;kbld -f build -f deploy | kubectl apply -f -&lt;/span&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/div&gt;

  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/tcdowney/mando"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;



&lt;p&gt;The following examples refer to a simple Go app called &lt;a href="https://github.com/tcdowney/mando"&gt;mando&lt;/a&gt; that will be built and deployed to Kubernetes using &lt;code&gt;kbld&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Building an app with kbld using a Dockerfile
&lt;/h3&gt;

&lt;p&gt;If you wish to follow along, you'll need the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/get-docker/"&gt;Install docker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Sign up for a free &lt;a href="https://hub.docker.com/"&gt;DockerHub account&lt;/a&gt; or have access to a different image registry&lt;/li&gt;
&lt;li&gt;&lt;a href="https://k14s.io/"&gt;Install kbld&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Have access to a Kubernetes cluster and &lt;code&gt;kubectl&lt;/code&gt; if you want to deploy&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To start, since we'll be publishing to an OCI image registry, we'll first need to &lt;a href="https://github.com/k14s/kbld/blob/master/docs/auth.md"&gt;authenticate&lt;/a&gt;. Because I'm pushing my images to DockerHub, that just means a &lt;code&gt;docker login&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For the following, I'll be working off of the &lt;a href="https://github.com/tcdowney/mando/tree/kbld-dockerfile-example"&gt;&lt;code&gt;kbld-dockerfile-example&lt;/code&gt; branch&lt;/a&gt; of my test app repo.&lt;/p&gt;
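
&lt;p&gt;To get the same code locally:&lt;/p&gt;

```shell
# Clone the test app and switch to the Dockerfile example branch
git clone https://github.com/tcdowney/mando.git
cd mando
git checkout kbld-dockerfile-example
```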

&lt;p&gt;In this repo, I have an example &lt;code&gt;Deployment&lt;/code&gt; for Kubernetes in the &lt;code&gt;deploy&lt;/code&gt; directory and the kbld files within the &lt;code&gt;build&lt;/code&gt; directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kbld.k14s.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Sources&lt;/span&gt;
&lt;span class="na"&gt;sources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;downey/mando&lt;/span&gt;
  &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here I've configured kbld to build my image, &lt;code&gt;downey/mando&lt;/code&gt;, using the code and &lt;code&gt;Dockerfile&lt;/code&gt; at the root of my repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kbld.k14s.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ImageDestinations&lt;/span&gt;
&lt;span class="na"&gt;destinations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;downey/mando&lt;/span&gt;
  &lt;span class="na"&gt;newImage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker.io/downey/mando&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This &lt;code&gt;ImageDestinations&lt;/code&gt; configuration tells kbld to tag and push my image to DockerHub at &lt;code&gt;docker.io/downey/mando&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Now to use this configuration, in the root of the app directory we can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;kbld -f build -f deploy
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will then see kbld work its magic. It will:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Build the &lt;code&gt;mando&lt;/code&gt; app using its &lt;code&gt;Dockerfile&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Push it to DockerHub&lt;/li&gt;
&lt;li&gt;Update the references to the image in our Kubernetes &lt;code&gt;Deployment&lt;/code&gt; to use the digest for the image we just built&lt;/li&gt;
&lt;li&gt;Output the Kubernetes YAML with all changes&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We can then either write this output to a file or deploy it directly to Kubernetes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;kbld -f build -f deploy | kubectl apply -f -
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
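
&lt;p&gt;If you'd rather keep the resolved manifest around for review (or to check into a GitOps repo), redirect the output to a file first. The filename here is just an example:&lt;/p&gt;

```shell
# Resolve image references once, then apply the saved result
kbld -f build -f deploy > manifest-resolved.yml
kubectl apply -f manifest-resolved.yml
```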



&lt;p&gt;It might not seem like much at first. But after dozens of cycles of &lt;code&gt;docker build&lt;/code&gt;, &lt;code&gt;docker push&lt;/code&gt;, updating Kubernetes config to point to a new tag, and deploying, kbld can end up saving a bunch of time!&lt;/p&gt;

&lt;p&gt;Where I &lt;em&gt;really&lt;/em&gt; find kbld useful, though, is with Cloud Native Buildpacks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Building an app with kbld using Buildpacks
&lt;/h3&gt;

&lt;p&gt;For this section I'll be working off of the &lt;a href="https://github.com/tcdowney/mando/tree/kbld-pack-example"&gt;&lt;code&gt;kbld-pack-example&lt;/code&gt; branch&lt;/a&gt; of my test app repo. If you're unfamiliar with the concept of buildpacks, I'd encourage you to &lt;a href="https://blog.heroku.com/buildpacks-go-cloud-native"&gt;learn more about them&lt;/a&gt; or check out my blog post on &lt;a href="https://downey.io/blog/deploying-ruby-app-kubernetes-buildpack-kapp/"&gt;deploying apps to Kubernetes with Buildpacks&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Using Buildpacks instead of a Dockerfile to build is simple with &lt;code&gt;kbld&lt;/code&gt;. The flow is pretty much the same -- instead of a Dockerfile, we will use the &lt;a href="https://buildpacks.io/docs/install-pack/"&gt;pack&lt;/a&gt; CLI (install it if you haven't already) and make some minor tweaks to our &lt;code&gt;Sources&lt;/code&gt; YAML.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kbld.k14s.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Sources&lt;/span&gt;
&lt;span class="na"&gt;sources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;downey/mando&lt;/span&gt;
  &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
  &lt;span class="na"&gt;pack&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;builder&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cloudfoundry/cnb:tiny&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we tell kbld which &lt;a href="https://buildpacks.io/docs/concepts/components/builder/"&gt;"builder"&lt;/a&gt; to use. I'm using the &lt;code&gt;cnb:tiny&lt;/code&gt; builder since it's optimized for creating "distroless" lightweight images for Go binaries. Perfect for this use case. If you're unsure which builder to use, you can always run &lt;code&gt;pack suggest-builders&lt;/code&gt; to get an up-to-date list of builders from Cloud Foundry and Heroku.&lt;/p&gt;

&lt;p&gt;Anyways, as I said earlier, the flow is the same as before. To build and deploy, just run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;kbld -f build -f deploy | kubectl apply -f -
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;kbld&lt;/code&gt; CLI will now build using &lt;code&gt;pack&lt;/code&gt; instead of &lt;code&gt;docker&lt;/code&gt;! Since I use the &lt;code&gt;pack&lt;/code&gt; CLI pretty infrequently, I'm more than happy to hand over the reins to &lt;code&gt;kbld&lt;/code&gt; and let it orchestrate the build and push.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Well that's about it. If you've gotten this far, hopefully this post has helped demystify some of the basic use cases for &lt;code&gt;kbld&lt;/code&gt; and how it can help streamline the Docker image push-build-deploy flow. If you're looking to learn more, check out the &lt;a href="https://github.com/k14s/kbld/blob/master/docs/README.md"&gt;kbld docs&lt;/a&gt; or join the &lt;code&gt;#k14s&lt;/code&gt; channel on &lt;a href="https://slack.kubernetes.io/"&gt;Kubernetes Slack&lt;/a&gt;. Good luck! 🌝&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Web Monetization Sidecar Proxy [Grant For The Web Hackathon Submission]</title>
      <dc:creator>Tim Downey</dc:creator>
      <pubDate>Mon, 25 May 2020 00:22:36 +0000</pubDate>
      <link>https://dev.to/downey/grant-for-the-web-hackathon-web-monetization-sidecar-proxy-45j6</link>
      <guid>https://dev.to/downey/grant-for-the-web-hackathon-web-monetization-sidecar-proxy-45j6</guid>
      <description>&lt;h2&gt;
  
  
  What I built
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LYbtNq0w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tr330fz79dvz9wy7vbo2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LYbtNq0w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tr330fz79dvz9wy7vbo2.png" alt="Architecture diagram. Request comes in money comes out." width="771" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A simple proof-of-concept sidecar proxy&lt;sup&gt;1&lt;/sup&gt; that injects Web Monetization &lt;code&gt;meta&lt;/code&gt; tags into HTML responses from the backend application. Offloading this responsibility to a sidecar is valuable for several reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It allows applications to benefit from Web Monetization without having to change the code of the application itself. Useful for cases where there are many applications being deployed or when the application is not modifiable (e.g. commercial off the shelf software).&lt;/li&gt;
&lt;li&gt;Platform operators could automatically deploy these sidecars across their entire cluster without application developer intervention.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While the proxy I built for this Hackathon is far from production ready, I hope that this proof-of-concept can inspire further work. One idea might be writing a &lt;a href="https://banzaicloud.com/blog/envoy-wasm-filter/"&gt;WASM filter&lt;/a&gt; for Envoy proxies that does this work.&lt;/p&gt;

&lt;p&gt;Included with the code is an example Kubernetes &lt;code&gt;Deployment&lt;/code&gt; that showcases how the proxy can easily be added to an existing &lt;code&gt;Pod&lt;/code&gt; as a sidecar container.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;sup&gt;1&lt;/sup&gt; - What is a sidecar proxy? It is a &lt;a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/sidecar"&gt;sidecar container&lt;/a&gt; that runs a proxy sitting in front of your "main" application container. Sidecars are frequently used by &lt;a href="https://buoyant.io/2017/04/25/whats-a-service-mesh-and-why-do-i-need-one/"&gt;service meshes&lt;/a&gt; to enable features like intelligent load balancing, rate-limiting, certificate / TLS management, and more. &lt;a href="https://www.envoyproxy.io/learn/service-mesh"&gt;Envoy&lt;/a&gt; is a commonly used sidecar proxy.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Submission Category:
&lt;/h3&gt;

&lt;p&gt;Foundational Technology&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Demo:&lt;/strong&gt; &lt;a href="https://monetization-proxy.k8s.downey.dev/"&gt;https://monetization-proxy.k8s.downey.dev/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bTyVWNPE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fa71mpa91fth99k6p2me.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bTyVWNPE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fa71mpa91fth99k6p2me.png" alt="Firefox Browser showing a Ghost blog with Web Monetization capabilities" width="880" height="766"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I've deployed a sample Ghost blog running with the monetization proxy sidecar to a small development Kubernetes cluster on Digital Ocean&lt;sup&gt;2&lt;/sup&gt;. The site can be accessed &lt;a href="https://monetization-proxy.k8s.downey.dev/"&gt;here&lt;/a&gt;. Additionally, you can view the Kubernetes deployment configuration &lt;a href="https://github.com/tcdowney/web-monetization-proxy/blob/master/hack/example-deployment.yaml"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;sup&gt;2&lt;/sup&gt; - This cluster costs money so I will tear it down in a few weeks.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Link to Code
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--566lAguM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/tcdowney"&gt;
        tcdowney
      &lt;/a&gt; / &lt;a href="https://github.com/tcdowney/web-monetization-proxy"&gt;
        web-monetization-proxy
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Simple Go proxy for injecting Web Monetization meta tags. Done as part of the Dev "Grant For The Web" Hackathon
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;p&gt;&lt;a href="https://hub.docker.com/repository/docker/downey/web-monetization-proxy" rel="nofollow"&gt;&lt;img src="https://camo.githubusercontent.com/f73c92680fae21f214994afd727b70d68e05b40f0f4364b04f2285a6cc7b8c9d/68747470733a2f2f696d672e736869656c64732e696f2f646f636b65722f696d6167652d73697a652f646f776e65792f7765622d6d6f6e6574697a6174696f6e2d70726f7879" alt="Docker Image Size (latest by date)"&gt;&lt;/a&gt; &lt;a href="https://hub.docker.com/repository/docker/downey/web-monetization-proxy" rel="nofollow"&gt;&lt;img src="https://camo.githubusercontent.com/e6646e7efc7bedbcba6bcf85c43b9c092ebb04be70ce30f3d196a80f50ffce29/68747470733a2f2f696d672e736869656c64732e696f2f646f636b65722f762f646f776e65792f7765622d6d6f6e6574697a6174696f6e2d70726f7879" alt="Docker Image Version (latest by date)"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
web-monetization-proxy&lt;/h1&gt;
&lt;p&gt;Docker repo: &lt;a href="https://hub.docker.com/repository/docker/downey/web-monetization-proxy" rel="nofollow"&gt;https://hub.docker.com/repository/docker/downey/web-monetization-proxy&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Simple proxy for injecting &lt;a href="https://webmonetization.org/" rel="nofollow"&gt;Web Monetization&lt;/a&gt; &lt;code&gt;meta&lt;/code&gt; tags. Intended to be deployed as a sidecar process alongside Web Monetization unaware webapps.&lt;/p&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/tcdowney/web-monetization-proxydoc/web-monetization-proxy.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZGgN0Qgn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.com/tcdowney/web-monetization-proxydoc/web-monetization-proxy.png" alt="Architecture diagram of the Web Monetization Proxy"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
disclaimer&lt;/h2&gt;
&lt;p&gt;This &lt;strong&gt;proof-of-concept&lt;/strong&gt; project was created for fun as part of the &lt;a href="https://dev.to/devteam/announcing-the-grant-for-the-web-hackathon-on-dev-3kd1" rel="nofollow"&gt;DEV "Grant For The Web" Hackathon&lt;/a&gt;. It should &lt;strong&gt;not be relied on&lt;/strong&gt; for production use cases and merely exists to demonstrate what is possible. In the future it might be worth exploring doing something similar using Envoy proxies and a &lt;a href="https://github.com/envoyproxy/envoy-wasm"&gt;WASM&lt;/a&gt; plugin that implements this functionality.&lt;/p&gt;
&lt;h2&gt;
configuration&lt;/h2&gt;
&lt;p&gt;The proxy relies on the following environment variables:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;PROXY_PORT&lt;/code&gt; -- the port the proxy listens on&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;BACKEND_PORT&lt;/code&gt; -- the port of the backend application that requests are forwarded to&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PAYMENT_POINTER&lt;/code&gt; -- an &lt;a href="https://paymentpointers.org/" rel="nofollow"&gt;Interledger Payment Pointer&lt;/a&gt; string&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Reference the &lt;a href="https://github.com/tcdowney/web-monetization-proxyhack/example-deployment.yaml"&gt;example Deployment&lt;/a&gt; to see how you might configure these in Kubernetes.&lt;/p&gt;
&lt;h2&gt;
development&lt;/h2&gt;
&lt;p&gt;This project uses &lt;a href="https://blog.golang.org/using-go-modules" rel="nofollow"&gt;Go modules&lt;/a&gt; which work best with Golang 1.13+. To run the project's…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/tcdowney/web-monetization-proxy"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;The proxy is available as a &lt;a href="https://hub.docker.com/repository/docker/downey/web-monetization-proxy"&gt;Docker image here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How I built it
&lt;/h2&gt;

&lt;p&gt;The proxy is built using Go and uses an &lt;a href="https://golang.org/pkg/net/http/httputil/#NewSingleHostReverseProxy"&gt;&lt;code&gt;httputil.NewSingleHostReverseProxy&lt;/code&gt;&lt;/a&gt; to do a lot of the heavy lifting around proxying. A response modifying function is applied to each response from the backend and does the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Checks whether the response &lt;code&gt;Content-Type&lt;/code&gt; is HTML. If it is not, we leave the response alone, since we don't want to inject &lt;code&gt;meta&lt;/code&gt; tags into JavaScript or CSS responses!&lt;/li&gt;
&lt;li&gt;If it &lt;em&gt;is&lt;/em&gt; HTML, we parse the DOM and inject the Web Monetization &lt;code&gt;meta&lt;/code&gt; tag into the document head.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For unit testing I used &lt;a href="https://github.com/sclevine/spec"&gt;https://github.com/sclevine/spec&lt;/a&gt;, which layers some lightweight BDD goodies on top of Go's built-in testing library. It was my first time using it (I have traditionally used &lt;a href="https://onsi.github.io/ginkgo/"&gt;Ginkgo&lt;/a&gt; and &lt;a href="https://onsi.github.io/gomega/"&gt;Gomega&lt;/a&gt;) and I found it easy to work with.&lt;/p&gt;

&lt;p&gt;For building the OCI images I used the Go &lt;a href="https://buildpacks.io/"&gt;Cloud Native Buildpack&lt;/a&gt; and &lt;a href="https://get-kbld.io/"&gt;&lt;code&gt;kbld&lt;/code&gt;&lt;/a&gt; to streamline the build/push/deploy to Kubernetes flow. More details on that are in the project README.&lt;/p&gt;

&lt;p&gt;To deploy the demo I set up a Kubernetes cluster on Digital Ocean. I used &lt;a href="https://github.com/kubernetes/ingress-nginx"&gt;NGINX as my Ingress controller&lt;/a&gt; and set up automatic provisioning of TLS certificates using &lt;a href="https://cert-manager.io/"&gt;Cert-Manager&lt;/a&gt; and &lt;a href="https://letsencrypt.org/"&gt;Let's Encrypt&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Resources/Info
&lt;/h2&gt;

&lt;p&gt;This was a fun little project for me. I got to write some Go, set up a new K8s cluster, and play around with some new libraries and tools.&lt;/p&gt;

&lt;p&gt;One thought I had while working on this though was "I sure wish there was just a Web Monetization HTTP header" instead of having to rely on &lt;code&gt;meta&lt;/code&gt; tags. This would have let me just use an off-the-shelf sidecar proxy like Envoy which supports header manipulation and would have opened it up to support non-HTML applications.&lt;/p&gt;

</description>
      <category>gftwhackathon</category>
      <category>webmonetization</category>
    </item>
  </channel>
</rss>
