<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nick Santos</title>
    <description>The latest articles on DEV Community by Nick Santos (@nicks).</description>
    <link>https://dev.to/nicks</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F421626%2Fbb4735d1-6c9a-4c5e-8f5e-79e77e31d5a1.jpeg</url>
      <title>DEV Community: Nick Santos</title>
      <link>https://dev.to/nicks</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nicks"/>
    <language>en</language>
    <item>
      <title>Go Rant: Buffered Channels Should be Tossed in a Fire</title>
      <dc:creator>Nick Santos</dc:creator>
      <pubDate>Fri, 21 May 2021 17:40:04 +0000</pubDate>
      <link>https://dev.to/nicks/go-rant-buffered-channels-should-be-tossed-in-a-fire-210a</link>
      <guid>https://dev.to/nicks/go-rant-buffered-channels-should-be-tossed-in-a-fire-210a</guid>
      <description>&lt;p&gt;Channels are Go’s fundamental tool for concurrent programming.&lt;/p&gt;

&lt;p&gt;Buffered channels are a small, innocent-looking feature on top of channels. I’m going to try to convince you that they’re a monstrous abomination.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Recap on What I’m Ranting About
&lt;/h2&gt;

&lt;p&gt;First, a quick recap of how channels work if you haven’t used Go in a while.&lt;/p&gt;

&lt;p&gt;A channel lets you send data on one goroutine, and receive it on another, concurrent goroutine.&lt;/p&gt;

&lt;p&gt;In a normal unbuffered channel, sending data will block until the data is received.&lt;/p&gt;

&lt;p&gt;This blocks forever:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// https://play.golang.org/p/eJGdsxiHOIg
ch := make(chan int, 0)
ch &amp;lt;- 1
fmt.Println("Impossible")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This prints “Received: 1” and “Possible”:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// https://play.golang.org/p/bBIkIiFXeF2
ch := make(chan int, 0)
go func() {
  fmt.Printf("Received: %d\n", &amp;lt;-ch)
}()
ch &amp;lt;- 1
fmt.Println("Possible")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But you can make buffered channels of arbitrary size N. With a buffer of size N, a send blocks once the buffer already holds N unreceived elements.&lt;/p&gt;

&lt;p&gt;This prints “Possible” then deadlocks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// https://play.golang.org/p/GCL8SY_8AUk
ch := make(chan int, 1)
ch &amp;lt;- 1
fmt.Println("Possible")
ch &amp;lt;- 1
fmt.Println("Impossible")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  My Assertion
&lt;/h2&gt;

&lt;p&gt;There are only 3 reasonable values of N:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;0 — the channel always blocks.&lt;/li&gt;
&lt;li&gt;1 — the channel lets you push one element at a time.&lt;/li&gt;
&lt;li&gt;Infinite — the channel lets you keep pushing elements until you run out of memory.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All other values of N will inevitably lead to bugs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why
&lt;/h2&gt;

&lt;p&gt;There are two ways to make this case: the empirical argument (why real projects blow up when they use buffered channels) and the theoretical one (why even hypothetical projects will blow up when they use buffered channels).&lt;/p&gt;

&lt;h3&gt;
  
  
  From the Empirical Argument
&lt;/h3&gt;

&lt;p&gt;Every open-source project I’ve ever seen that used an arbitrary-N buffered channel was using that channel incorrectly, and ended up with bugs.&lt;/p&gt;

&lt;p&gt;Someone would inevitably send a list of elements to that channel. But the problem is that you can only do this safely if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can guarantee that the list has fewer than N elements.&lt;/li&gt;
&lt;li&gt;You can guarantee that no one else is also sending on the channel.&lt;/li&gt;
&lt;li&gt;OR you can guarantee that another goroutine is available to receive on the channel.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are often hard guarantees to make! They’re often about other actors that you don’t control (particularly if you got your list of elements from somewhere else). And they’re guarantees that the programming language doesn’t help you enforce at all.&lt;/p&gt;

&lt;p&gt;That hints at:&lt;/p&gt;

&lt;h3&gt;
  
  
  From the Theoretical Argument
&lt;/h3&gt;

&lt;p&gt;“I can only call this function N times synchronously” is an abnormal API constraint. And it’s not a constraint that programming languages do anything to help you enforce.&lt;br&gt;
Programming languages have lots of constraints they do help you enforce!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They help you ensure that a function isn’t called (think: private functions).&lt;/li&gt;
&lt;li&gt;They help you ensure that a function is called exactly once (think: constructors, sync.Once).&lt;/li&gt;
&lt;li&gt;They help you ensure that a function is called only once at a time (think: locks/mutexes and all the tooling around them).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, I don’t want to harsh your buzz for programming language constraint checks. We live in 2021! Rust’s borrow checker can do compile-time checks that I never would have imagined!&lt;/p&gt;

&lt;p&gt;But let’s be honest: the Go-team does not have the appetite for the kind of programming language features that would help you use buffered channels well.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hope
&lt;/h2&gt;

&lt;p&gt;I liked this Go 2 proposal for an unlimited-capacity buffered channel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/golang/go/issues/20352"&gt;proposal: spec: add support for unlimited capacity channels&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But I also liked Ian Lance Taylor’s response!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If we had generic types, unlimited channels could be implemented in a library with full type safety. A library would also make it possible to improve the implementation easily over time as we learn more.&lt;/p&gt;
&lt;/blockquote&gt;
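&lt;p&gt;To give a flavor of what such a library could look like, here’s a sketch of an unlimited-capacity channel built from two plain unbuffered channels and a goroutine holding a slice. The &lt;code&gt;unbounded&lt;/code&gt; name and API are my own invention for illustration, assuming generics land roughly as proposed.&lt;/p&gt;

```go
// Sketch of an "infinite buffer" channel in a library, assuming generics.
// The unbounded name and API are invented for illustration.
package main

import "fmt"

// unbounded returns a send side and a receive side. Sends never block;
// a goroutine queues elements in a slice in between.
func unbounded[T any]() (chan<- T, <-chan T) {
	in := make(chan T)
	out := make(chan T)
	go func() {
		var queue []T
		for {
			if len(queue) == 0 {
				// Nothing buffered: wait for a send (or shutdown).
				v, ok := <-in
				if !ok {
					close(out)
					return
				}
				queue = append(queue, v)
			}
			select {
			case v, ok := <-in:
				if !ok {
					// Sender is done: drain the queue, then close.
					for _, q := range queue {
						out <- q
					}
					close(out)
					return
				}
				queue = append(queue, v)
			case out <- queue[0]:
				queue = queue[1:]
			}
		}
	}()
	return in, out
}

func main() {
	in, out := unbounded[int]()
	// 100 sends complete without a single receive having happened.
	for i := 0; i < 100; i++ {
		in <- i
	}
	close(in)
	sum := 0
	for v := range out {
		sum += v
	}
	fmt.Println("sum:", sum) // 0+1+...+99 = 4950
}
```

&lt;p&gt;Sends never block (until you run out of memory), which is exactly the “N = infinite” case from my list above.&lt;/p&gt;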

&lt;p&gt;So I’m hoping for a one-two punch:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go gets generics!&lt;/li&gt;
&lt;li&gt;We can add a channel wrapper that makes more sense!!&lt;/li&gt;
&lt;li&gt;They can remove buffered channels from the core language!!!&lt;/li&gt;
&lt;li&gt;Profit!!!!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Originally published at &lt;a href="https://nicksantos.medium.com/go-rant-buffered-channels-should-be-tossed-in-a-fire-d36dcc9dbf86"&gt;https://nicksantos.medium.com/go-rant-buffered-channels-should-be-tossed-in-a-fire-d36dcc9dbf86&lt;/a&gt;&lt;/p&gt;

</description>
      <category>go</category>
      <category>programming</category>
    </item>
    <item>
      <title>Three Ways to Run Kubernetes on CI and Which One is Right for You!</title>
      <dc:creator>Nick Santos</dc:creator>
      <pubDate>Fri, 02 Apr 2021 22:24:21 +0000</pubDate>
      <link>https://dev.to/nicks/three-ways-to-run-kubernetes-on-ci-and-which-one-is-right-for-you-33m2</link>
      <guid>https://dev.to/nicks/three-ways-to-run-kubernetes-on-ci-and-which-one-is-right-for-you-33m2</guid>
      <description>&lt;p&gt;When we first started developing Tilt, we broke ALL THE TIME.&lt;/p&gt;

&lt;p&gt;Either Kubernetes changed. Or we had a subtle misunderstanding in how the API works. Our changes would pass unit tests, but fail with a real Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;I built out an integration test suite that used the latest version of Tilt to deploy real sample projects against a real cluster.&lt;/p&gt;

&lt;p&gt;At the start, it was slow and flaky. But the tooling around running Kubernetes in CI has come a long way, especially in the last 1-2 years. Now it's less flaky than our normal unit tests 😬. Every new example repo we set up uses a one-time Kubernetes cluster to run tests against.&lt;/p&gt;

&lt;p&gt;A few of our friends have been asking us how we set it up and how to run their own clusters in CI. I've now explained it enough times that I should probably write down what we learned.&lt;/p&gt;

&lt;p&gt;Here are three ways to set it up, with the pros and cons of each!&lt;/p&gt;

&lt;h2&gt;
  
  
  Strategy #1: Local Cluster, Remote Registry
&lt;/h2&gt;

&lt;p&gt;Here's how I set up our first integration test framework.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;I created a dedicated gcr.io bucket for us to store images, and a GCP service account with permission to write to it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I added the GCP service account credentials as a secret in our CI build.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I forked &lt;a href="https://github.com/kubernetes-retired/kubeadm-dind-cluster"&gt;&lt;code&gt;kubeadm-dind-cluster&lt;/code&gt;&lt;/a&gt;, a set of Bash scripts to set up Kubernetes with Docker-in-Docker techniques.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All our test projects had Tilt build images, push them to the gcr.io bucket, then deploy servers that used these images.&lt;/p&gt;

&lt;p&gt;I barely got this working. A huge breakthrough! It caught so many subtle bugs and race conditions.&lt;/p&gt;

&lt;p&gt;I wouldn't call the Bash scripts &lt;em&gt;readable&lt;/em&gt;. But they are hackable, cut-and-pasteable. There were examples of how to run it on CircleCI and TravisCI. &lt;code&gt;kubeadm-dind-cluster&lt;/code&gt; has been deprecated in favor of more modern approaches like &lt;a href="https://kind.sigs.k8s.io"&gt;&lt;code&gt;kind&lt;/code&gt;&lt;/a&gt;. But I learned a lot from its Bash scripts, and we still use many of its techniques today.&lt;/p&gt;

&lt;p&gt;There were other downsides though:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;When drive-by contributors sent us PRs, the integration tests failed. They didn't have access to write to the gcr.io bucket. This made me so sad. Contributors felt unwelcome. I never figured out a way to make this secure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We didn't reset the gcr.io bucket between test runs. So it was hard to guarantee that images weren't leaking between tests. For example, if image pushing failed, we wanted to be sure we weren't picking up a cached image from a previous test.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Strategy #2: Local Registry On a VM
&lt;/h2&gt;

&lt;p&gt;When I revisited this, I wanted to make sure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Anyone could write to the image registry.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The image registry would reset between runs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By this time, &lt;code&gt;kind&lt;/code&gt; was taking off as the default choice for testing Kubernetes itself. &lt;code&gt;kind&lt;/code&gt; also comes with the ability to run a local registry, so you can push images to the registry on &lt;code&gt;localhost:5000&lt;/code&gt; and pull them from inside &lt;code&gt;kind&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I set up a new CI pipeline that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Creates a VM.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Installs all our dependencies, including Docker.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Creates a &lt;code&gt;kind&lt;/code&gt; cluster with a local registry, using their script.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This worked well! And because the registry was local, it was faster than pushing to a remote registry. We still use this approach to test &lt;code&gt;ctlptl&lt;/code&gt; with both &lt;code&gt;minikube&lt;/code&gt; and &lt;code&gt;kind&lt;/code&gt;. Here's &lt;a href="https://github.com/tilt-dev/ctlptl/blob/b6f808a09b05b6cf7aa0b3365e4781d2c23e4851/.circleci/config.yml#L30"&gt;the CI config&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;But I wasn't totally happy! Most of our team is more comfortable managing containers than managing VMs. VMs are slower. Upgrading dependencies is more heavyweight. We wondered: can we make this work in containers?&lt;/p&gt;

&lt;h2&gt;
  
  
  Strategy #3: Local Registry On Remote Docker
&lt;/h2&gt;

&lt;p&gt;The last approach (and the one we use in most of our projects) uses some of the tricks that &lt;code&gt;kubeadm-dind-cluster&lt;/code&gt; uses.&lt;/p&gt;

&lt;p&gt;The CI pipeline:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Creates a container with our code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sets up &lt;a href="https://circleci.com/docs/2.0/building-docker-images"&gt;a remote Docker environment&lt;/a&gt; outside the container. (This avoids the pitfalls of running Docker inside Docker.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Starts a &lt;code&gt;kind&lt;/code&gt; cluster with a local registry inside the remote Docker environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Uses &lt;code&gt;socat&lt;/code&gt; networking jujitsu to expose the remote registry and Kubernetes cluster inside the local container.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
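&lt;p&gt;As a rough sketch, those steps fit in a few lines of CircleCI config. The image and the final &lt;code&gt;make&lt;/code&gt; target here are placeholders, not our real setup:&lt;/p&gt;

```yaml
# Hypothetical CircleCI job sketching the steps above.
version: 2.1
jobs:
  e2e:
    docker:
      - image: cimg/go:1.16   # placeholder; any image with your toolchain
    steps:
      - checkout
      - setup_remote_docker    # Docker runs outside this container
      # Create a kind cluster + local registry in the remote Docker host.
      # ctlptl 0.5.0+ detects the remote environment and sets up the
      # socat forwarding automatically.
      - run: ctlptl create cluster kind --registry=ctlptl-registry
      - run: make e2e          # placeholder for your integration tests
```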

&lt;p&gt;The &lt;code&gt;socat&lt;/code&gt; element makes this a bit tricky. But if you want to fork and hack it, check out &lt;a href="https://github.com/tilt-dev/kind-local/blob/master/.circleci/with-kind-cluster.sh"&gt;this Bash script&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;But once it's set up: it's fast, robust, and easy to upgrade dependencies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Putting it Together
&lt;/h2&gt;

&lt;p&gt;Hacking this together with Bash was the hard part.&lt;/p&gt;

&lt;p&gt;Tilt-team maintains &lt;a href="https://ctlptl.dev/"&gt;&lt;code&gt;ctlptl&lt;/code&gt;&lt;/a&gt;, a CLI for declaratively setting up local Kubernetes clusters.&lt;/p&gt;

&lt;p&gt;I eventually folded all the logic in the Bash script into &lt;code&gt;ctlptl&lt;/code&gt;. As of &lt;code&gt;ctlptl&lt;/code&gt; 0.5.0, it will try to detect when you have a remote Docker environment and set up the &lt;code&gt;socat&lt;/code&gt; forwarding.&lt;/p&gt;

&lt;p&gt;The Go code in &lt;code&gt;ctlptl&lt;/code&gt; is &lt;em&gt;far&lt;/em&gt; more verbose than the Bash script, measured in lines. But it includes error handling, cleanup logic, and idempotency, which makes it more suitable for local dev. (CI environments don't need any of this because we tear them down at the end anyway.)&lt;/p&gt;

&lt;p&gt;We use image-management tools that &lt;a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry"&gt;auto-detect the registry location from the cluster&lt;/a&gt;, which helps with the configuration burden. I like the general trend of Kubernetes as a general-purpose config-sharing system so that tools can interoperate, rather than having to configure each tool individually.&lt;/p&gt;

&lt;p&gt;We currently use &lt;code&gt;ctlptl&lt;/code&gt; to set up clusters and test the services on real Kube clusters in all of &lt;a href="https://github.com/tilt-dev/tilt-example-html/blob/master/.circleci/config.yml"&gt;our example projects&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It's been a long journey! But I hope the examples here will make that journey a lot shorter for the next person 🙈.&lt;/p&gt;

&lt;h2&gt;
  
  
  Shout-outs
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://circleci.com/"&gt;CircleCI's&lt;/a&gt; remote Docker environment is good!&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Thanks to &lt;a href="https://kind.sigs.k8s.io"&gt;the &lt;code&gt;kind&lt;/code&gt; team&lt;/a&gt; for working with us on the local registry wiring!&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/kubernetes-retired/kubeadm-dind-cluster"&gt;&lt;code&gt;kubeadm-dind-cluster&lt;/code&gt;&lt;/a&gt;, we salute you as an early adventurer in this problem space!&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>testing</category>
    </item>
    <item>
      <title>Kubernetes is so Simple You Can Explore it with Curl</title>
      <dc:creator>Nick Santos</dc:creator>
      <pubDate>Thu, 18 Mar 2021 21:51:15 +0000</pubDate>
      <link>https://dev.to/nicks/kubernetes-is-so-simple-you-can-explore-it-with-curl-4lg2</link>
      <guid>https://dev.to/nicks/kubernetes-is-so-simple-you-can-explore-it-with-curl-4lg2</guid>
      <description>&lt;p&gt;A common take on Kubernetes is that it's very complicated. &lt;/p&gt;

&lt;p&gt;... and because it's complicated, the configuration is very verbose. &lt;/p&gt;

&lt;p&gt;... and because there's so much config YAML, we need big toolchains just to handle that config.&lt;/p&gt;

&lt;p&gt;I want to convince you that the arrow of blame points in the opposite direction!&lt;/p&gt;

&lt;p&gt;Kubernetes has a simple, genius idea about how to manage configuration.&lt;/p&gt;

&lt;p&gt;Because it's straightforward and consistent, we can manage more config than we ever could before! And now that we can manage oodles more config, we can build overcomplicated systems. Hooray!&lt;/p&gt;

&lt;p&gt;The configs themselves may be complicated. So in this post, I'm going to skip the configs. I'll focus purely on the API machinery and how to explore that API.&lt;/p&gt;

&lt;p&gt;Building APIs this way could benefit a lot of tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the Idea?
&lt;/h2&gt;

&lt;p&gt;To explain the simple, genius idea, let's start with the simple, genius idea of Unix:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Everything is a file.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or to be more precise, everything is a text stream. Unix programs read and write text streams. The filesystem is an API for finding text streams to read. Not all of these text streams are files!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;~/hello-world.txt&lt;/code&gt; is a text file&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/dev/null&lt;/code&gt; is an empty text stream&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/proc&lt;/code&gt; is a set of text streams for reading about processes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's take a closer look at &lt;code&gt;/proc&lt;/code&gt;. &lt;a href="https://wizardzines.com/comics/proc/"&gt;Here's a Julia Evans comic about it&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can learn about what's running on your system by looking at &lt;code&gt;/proc&lt;/code&gt;, like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How many processes are running (&lt;code&gt;ls /proc&lt;/code&gt; - List the processes)&lt;/li&gt;
&lt;li&gt;What command line started process PID (&lt;code&gt;cat /proc/PID/cmdline&lt;/code&gt; - Get the process specification)&lt;/li&gt;
&lt;li&gt;How much memory process PID is using (&lt;code&gt;cat /proc/PID/status&lt;/code&gt; - Get the process status)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is the Kubernetes API?
&lt;/h2&gt;

&lt;p&gt;The Kubernetes API is &lt;code&gt;/proc&lt;/code&gt; for distributed systems.&lt;/p&gt;

&lt;p&gt;Everything is a resource over HTTP. We can explore every Kubernetes resource with a few HTTP GET commands.&lt;/p&gt;

&lt;p&gt;To follow along, you'll need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://kind.sigs.k8s.io/"&gt;&lt;code&gt;kind&lt;/code&gt;&lt;/a&gt; - or any small, throwaway Kubernetes cluster&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;curl&lt;/code&gt; - or any CLI tool for sending HTTP requests&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;jq&lt;/code&gt; - or any CLI tool for exploring JSON&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kubectl&lt;/code&gt; - to help &lt;code&gt;curl&lt;/code&gt; authenticate&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's start by creating a cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.19.1) 🖼
 ✓ Preparing nodes 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a nice day! 👋

$ kubectl proxy &amp;amp;
Starting to serve on 127.0.0.1:8001
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;kubectl proxy&lt;/code&gt; is a server that handles certificates for us, so that we don't need to worry about auth tokens with &lt;code&gt;curl&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The Kubernetes API has more hierarchy than &lt;code&gt;/proc&lt;/code&gt;. It's split into folders by version and namespace and resource type. The API path format looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/api/[version]/namespaces/[namespace]/[resource]/[name]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
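&lt;p&gt;The path convention is simple enough to capture in a few lines of Go. This helper is purely illustrative, not part of any real client library:&lt;/p&gt;

```go
// A toy helper that builds Kubernetes API paths following the
// convention above. Illustrative only, not a real client.
package main

import "fmt"

// apiPath assembles /api/[version]/namespaces/[namespace]/[resource]/[name].
// An empty name returns the list endpoint for the resource.
func apiPath(version, namespace, resource, name string) string {
	p := fmt.Sprintf("/api/%s/namespaces/%s/%s", version, namespace, resource)
	if name != "" {
		p += "/" + name
	}
	return p
}

func main() {
	// The list endpoint we'll curl below, and a single-pod endpoint.
	fmt.Println(apiPath("v1", "kube-system", "pods", ""))
	fmt.Println(apiPath("v1", "kube-system", "pods", "kube-apiserver-kind-control-plane"))
}
```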



&lt;p&gt;On a fresh &lt;code&gt;kind&lt;/code&gt; cluster, there should be some pods already running in the &lt;code&gt;kube-system&lt;/code&gt; namespace we can look at. Let's list all the system processes in our cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -s http://localhost:8001/api/v1/namespaces/kube-system/pods | head -n 20
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces/kube-system/pods",
    "resourceVersion": "1233"
  },
  "items": [
    {
      "metadata": {
        "name": "coredns-f9fd979d6-5zxtx",
        "generateName": "coredns-f9fd979d6-",
        "namespace": "kube-system",
        "selfLink": "/api/v1/namespaces/kube-system/pods/coredns-f9fd979d6-5zxtx",
        "uid": "a30e70cc-2b53-4511-a5de-57c80e5b68ad",
        "resourceVersion": "549",
        "creationTimestamp": "2021-03-04T15:51:21Z",
        "labels": {
          "k8s-app": "kube-dns",
          "pod-template-hash": "f9fd979d6"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's a lot of text! We can use &lt;code&gt;jq&lt;/code&gt; to pull out the names of objects.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -s http://localhost:8001/api/v1/namespaces/kube-system/pods | jq '.items[].metadata.name'
"coredns-f9fd979d6-5zxtx"
"coredns-f9fd979d6-bn6jz"
"etcd-kind-control-plane"
"kindnet-fcjkd"
"kube-apiserver-kind-control-plane"
"kube-controller-manager-kind-control-plane"
"kube-proxy-sn64n"
"kube-scheduler-kind-control-plane"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;/pods&lt;/code&gt; endpoint lists out all the processes, like &lt;code&gt;ls /proc&lt;/code&gt;. If we want to look at a particular process, we can query &lt;code&gt;/pods/POD_NAME&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -s http://localhost:8001/api/v1/namespaces/kube-system/pods/kube-apiserver-kind-control-plane | head -n 10
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-apiserver-kind-control-plane",
    "namespace": "kube-system",
    "selfLink": "/api/v1/namespaces/kube-system/pods/kube-apiserver-kind-control-plane",
    "uid": "a8f893b7-1cdb-48fd-9505-87d71c81adcb",
    "resourceVersion": "458",
    "creationTimestamp": "2021-03-04T15:51:17Z",
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or, again, we can use &lt;code&gt;jq&lt;/code&gt; to fetch a particular field.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -s http://localhost:8001/api/v1/namespaces/kube-system/pods/kube-apiserver-kind-control-plane | jq '.status.phase'
"Running"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How to unpack what &lt;code&gt;kubectl&lt;/code&gt; is doing
&lt;/h2&gt;

&lt;p&gt;All of the things above can be done with &lt;code&gt;kubectl&lt;/code&gt;. &lt;code&gt;kubectl&lt;/code&gt; provides a more friendly interface. But if you're ever wondering what APIs &lt;code&gt;kubectl&lt;/code&gt; is calling, you can run it with &lt;code&gt;-v 6&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get -v 6 -n kube-system pods kube-apiserver-kind-control-plane
I0304 12:47:59.687088 3573879 loader.go:375] Config loaded from file:  /home/nick/.kube/config
I0304 12:47:59.697325 3573879 round_trippers.go:443] GET https://127.0.0.1:44291/api/v1/namespaces/kube-system/pods/kube-apiserver-kind-control-plane 200 OK in 5 milliseconds
NAME                                READY   STATUS    RESTARTS   AGE
kube-apiserver-kind-control-plane   1/1     Running   0          116m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For more advanced debugging, use &lt;code&gt;-v 8&lt;/code&gt; to see the complete response body.&lt;/p&gt;

&lt;p&gt;The point isn't that you should throw away &lt;code&gt;kubectl&lt;/code&gt; in favor of &lt;code&gt;curl&lt;/code&gt; to interact with Kubernetes. Just like you shouldn't throw away &lt;code&gt;ps&lt;/code&gt; in favor of &lt;code&gt;ls /proc&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;But I've found that dissecting Kubernetes like this helps me think of it as a process-management system built on a couple of straightforward principles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Everything is a resource over HTTP.&lt;/li&gt;
&lt;li&gt;Every object is read and written the same way.&lt;/li&gt;
&lt;li&gt;All object state is readable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are powerful ideas&lt;sup id="fnref1"&gt;1&lt;/sup&gt;. They help us build tools that fit together well.&lt;/p&gt;

&lt;p&gt;In the same way that we can pipe Unix tools together (like &lt;code&gt;jq&lt;/code&gt;), we can define new Kubernetes objects and combine them with existing ones. &lt;/p&gt;

&lt;p&gt;Sometimes they're silly! Like in this Ellen Körbes talk on &lt;a href="https://www.youtube.com/watch?v=85dKpsFFju4"&gt;how to build a Useless Machine&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In future posts, I want to talk about how to write code that uses these APIs effectively. And how we're leaning into &lt;a href="https://github.com/tilt-dev/tilt-apiserver"&gt;these ideas in Tilt&lt;/a&gt;. Stay tuned!&lt;/p&gt;

&lt;p&gt;Originally published on &lt;a href="https://blog.tilt.dev/2021/03/18/kubernetes-is-so-simple.html"&gt;the Tilt Dev Blog&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Image Credit: "Curling;--a Scottish Game, at Central Park" by John George Brown. &lt;a href="https://commons.wikimedia.org/wiki/File:John_George_Brown_-_Curling;--a_Scottish_Game,_at_Central_Park_-_Google_Art_Project.jpg"&gt;Via Wikipedia.&lt;/a&gt;&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Representational_state_transfer"&gt;REST&lt;/a&gt; is an old idea (measured on the scale of Internet Time). What I like about Kubernetes' take on REST is that it's less focused on "how do we make sure new API endpoints we define obey the REST gospel", and more focused on "how do we autogenerate APIs from data types?" ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>kubernetes</category>
      <category>rest</category>
    </item>
    <item>
      <title>The First Modern Mention of Kubernetes</title>
      <dc:creator>Nick Santos</dc:creator>
      <pubDate>Thu, 23 Jul 2020 21:14:56 +0000</pubDate>
      <link>https://dev.to/nicks/the-first-modern-mention-of-kubernetes-5995</link>
      <guid>https://dev.to/nicks/the-first-modern-mention-of-kubernetes-5995</guid>
      <description>&lt;p&gt;I found the first mention of Kubernetes in computer science!!&lt;/p&gt;

&lt;p&gt;It comes from a book. "Cybernetics: or Control and Communication in the Animal and Machine" by Norbert Wiener. Originally published in 1948. (Yes, even in 1948, non-fiction book titles abused the colon.)&lt;/p&gt;

&lt;p&gt;The book has &lt;a href="https://en.wikipedia.org/wiki/Cybernetics:_Or_Control_and_Communication_in_the_Animal_and_the_Machine"&gt;its own Wikipedia page&lt;/a&gt;. So many people read it that he published a sequel!&lt;/p&gt;

&lt;p&gt;It's surprisingly hard to find. The New York Public Library has &lt;a href="https://browse.nypl.org/iii/encore/record/C__Rb13758012__Scybernetics%20norbert%20wiener__P0%2C2__Orightresult__U__X2?lang=eng&amp;amp;suite=def"&gt;two copies&lt;/a&gt; offsite, only available on advanced request. The Brooklyn Public Library has zero copies.&lt;/p&gt;

&lt;p&gt;I like to imagine I have the only physical copy in pandemic-lockdown New York City! Because right before the lockdown, I borrowed the 1961 edition from a science historian. And she hasn't asked for it back yet 😬.&lt;/p&gt;

&lt;p&gt;In the book, Wiener tries to come up with a name for his new field.  He writes: "All the existing terminology has too heavy a bias [...] we have been forced to coin at least one artificial neo-Greek expression to fill the gap."&lt;/p&gt;

&lt;p&gt;He suggests "Cybernetics," from the Greek word κυβερνήτης, romanized as "kubernetes":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--t0USZMcS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/x0trffx6ucfhs6g89rjh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--t0USZMcS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/x0trffx6ucfhs6g89rjh.jpg" alt='A page of "Cybernetics" where the word is first described'&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Cybernetics?
&lt;/h2&gt;

&lt;p&gt;You may think that this is just a pointy-headed in-joke to make the rest of us feel dumb.&lt;/p&gt;

&lt;p&gt;But it's actually a very relevant pointy-headed in-joke that only incidentally makes the rest of us feel dumb!&lt;/p&gt;

&lt;p&gt;When Wiener published "Cybernetics," the dominant model of computing was finite automata and Turing machines. These are systems that take inputs at the start and produce outputs at the end.&lt;/p&gt;

&lt;p&gt;Wiener points out that there are two problems with this model:&lt;/p&gt;

&lt;p&gt;1) In any system with lots of closely coupled inputs, we need statistical models to handle the complexity. In chapter one, he compares astronomy versus meteorology. We can count how many stars there are, and can capture the interaction between stars with simple formulas. But in meteorology, there are simply too many particles and the interactions between them are too complex.&lt;/p&gt;

&lt;p&gt;2) Information is distributed over time. We can use the time component to learn which inputs cause which outputs, and build feedback loops.&lt;/p&gt;

&lt;p&gt;Wiener argues that if computer science does a better job embracing complex statistical systems and time-based feedback loops, we'll be able to better understand lots of non-mechanical systems, like biology and sociology.&lt;/p&gt;

&lt;p&gt;The book gets very galaxy-brained to be honest, from "How do we build better thermostats?" to "Could this help us find a cure for Parkinson's disease?"&lt;/p&gt;

&lt;h2&gt;
  
  
  What Does That Mean For How We Build Systems?
&lt;/h2&gt;

&lt;p&gt;Once you understand this parallel, you see it everywhere!&lt;/p&gt;

&lt;p&gt;Consider Kubernetes. Before Kubernetes, you may have had a deploy script. That deploy script worked like a finite automaton: look at some inputs, then deploy a server.&lt;/p&gt;

&lt;p&gt;One of the key insights of Kubernetes is that when you're working with multiple servers with close coupling, this isn't enough. You need a system with &lt;a href="https://kubernetes.io/docs/concepts/architecture/controller/"&gt;runtime feedback loops&lt;/a&gt; to handle the runtime dependencies between servers.&lt;/p&gt;

&lt;p&gt;I think this insight applies to developer tools as well.  From &lt;a href="https://en.wikipedia.org/wiki/Make_(software)"&gt;Make&lt;/a&gt; in the 70s to &lt;a href="https://bazel.build/"&gt;Bazel&lt;/a&gt; today, build systems are still stuck in a world of finite automata, mapping inputs to outputs.&lt;/p&gt;

&lt;p&gt;But the servers we're building have closely coupled runtime dependencies! Multi-service developer tools &lt;a href="https://blog.tilt.dev/2019/09/05/put-down-particle-accelerator.html"&gt;need runtime feedback loops&lt;/a&gt; as well.&lt;/p&gt;

&lt;p&gt;Originally published on &lt;a href="https://blog.tilt.dev/2020/04/28/the-first-mention-of-kubernetes.html"&gt;the Tilt Blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>history</category>
    </item>
  </channel>
</rss>
