<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hugo Martins</title>
    <description>The latest articles on DEV Community by Hugo Martins (@caramelomartins).</description>
    <link>https://dev.to/caramelomartins</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F254723%2F73211429-2070-4225-a6cb-ea57291cc9a8.jpeg</url>
      <title>DEV Community: Hugo Martins</title>
      <link>https://dev.to/caramelomartins</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/caramelomartins"/>
    <language>en</language>
    <item>
      <title>Abstractions can be expensive</title>
      <dc:creator>Hugo Martins</dc:creator>
      <pubDate>Sat, 10 Aug 2024 00:00:00 +0000</pubDate>
      <link>https://dev.to/caramelomartins/abstractions-can-be-expensive-2mj0</link>
      <guid>https://dev.to/caramelomartins/abstractions-can-be-expensive-2mj0</guid>
      <description>&lt;p&gt;Daniel Lemire wrote in &lt;em&gt;&lt;a href="https://lemire.me/blog/2024/06/22/performance-tip-avoid-unnecessary-copies/" rel="noopener noreferrer"&gt;Performance tip: avoid unnecessary copies&lt;/a&gt;&lt;/em&gt;, comparing &lt;a href="https://bun.sh/" rel="noopener noreferrer"&gt;Bun&lt;/a&gt; to &lt;a href="https://nodejs.org/en" rel="noopener noreferrer"&gt;Node.js&lt;/a&gt;, about a particular case where copying data becomes expensive:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It turns out that the copy was not happening as part of base64 decoding but in a completely separate function. There is an innocuous function in Node.js called &lt;code&gt;StringBytes::Size&lt;/code&gt; which basically must provide an upper bound on the memory needed by the &lt;code&gt;Buffer.from&lt;/code&gt; function.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As the benchmarks in the piece show, Node.js becomes much slower than Bun when computing the same thing (e.g. decoding base64). Even though both runtimes use the same underlying library, an inconspicuous copy of data deep in Node.js' code made it more expensive and slower.&lt;/p&gt;

&lt;p&gt;I’ve encountered a similar situation in Go. A service was misbehaving and using too much memory. Profiling and investigating, I narrowed it down to the fact that it was using &lt;a href="https://pkg.go.dev/io#ReadAll" rel="noopener noreferrer"&gt;io.ReadAll&lt;/a&gt; to transform a response into a slice of bytes (&lt;code&gt;[]byte&lt;/code&gt;). In itself, there is nothing wrong with this idea &lt;em&gt;but&lt;/em&gt; there’s something particular about &lt;code&gt;io.ReadAll&lt;/code&gt; (&lt;a href="https://cs.opensource.google/go/go/+/refs/tags/go1.20.14:src/io/io.go"&gt;source&lt;/a&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func ReadAll(r Reader) ([]byte, error) {
    b := make([]byte, 0, 512)
    for {
        if len(b) == cap(b) {
            // Add more capacity (let append pick how much).
            b = append(b, 0)[:len(b)]
        }
(...)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inside a &lt;code&gt;for&lt;/code&gt; loop, &lt;code&gt;ReadAll&lt;/code&gt; checks whether the buffer it created (with an initial capacity of &lt;em&gt;512 bytes&lt;/em&gt;) still has room to store the incoming data. If it doesn’t, it lets &lt;code&gt;append&lt;/code&gt; grow the buffer, roughly doubling its capacity. This happens until the buffer has enough space to store all of the data, which inherently means we may use &lt;em&gt;a lot more memory&lt;/em&gt; than the original size of the data. All of this might be unexpected.&lt;/p&gt;

&lt;p&gt;If we don’t know the size of the data &lt;em&gt;a priori&lt;/em&gt;, there’s not much we can do to reduce this (&lt;em&gt;I believe&lt;/em&gt;) but, if we do know the size, &lt;a href="https://pkg.go.dev/io#Copy" rel="noopener noreferrer"&gt;io.Copy&lt;/a&gt; might be a good alternative: we can pre-size the destination buffer before writing data to it, avoiding the memory-intensive capacity increases.&lt;/p&gt;

&lt;p&gt;This was true at least up until Go 1.20.4; in more recent versions, the implementation has changed. But that doesn’t change the underlying fact that this abstraction can be expensive memory-wise, or even contribute to a program performing poorly or misbehaving. As Lemire points out:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The story illustrates why our software is slower than it should be. We have layers of abstractions to fight against. Sometimes you win, sometimes you lose.&lt;/p&gt;

&lt;p&gt;These layers are there for a reason, but they are not free.&lt;/p&gt;

&lt;p&gt;To make matters worse… these abstraction layers often thicken over time… and the friction goes up.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It is not that the layers are unnecessary: they do good work. They allow us to avoid reimplementing the same things over and over again, to build on top of several individuals’ expertise, and they support frameworks that make us more productive… but &lt;strong&gt;they are not always free&lt;/strong&gt;; there’s a cost associated with these abstractions. And it is important to keep an eye (&lt;em&gt;and monitoring&lt;/em&gt;) on the possibility that sometimes, only sometimes, they are more expensive than we are willing to accept.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>On What Lexers Do</title>
      <dc:creator>Hugo Martins</dc:creator>
      <pubDate>Tue, 21 Feb 2023 15:31:16 +0000</pubDate>
      <link>https://dev.to/caramelomartins/on-what-lexers-do-1ide</link>
      <guid>https://dev.to/caramelomartins/on-what-lexers-do-1ide</guid>
      <description>&lt;p&gt;A couple of weeks ago, I’ve started reading &lt;a href="https://interpreterbook.com/"&gt;Writing An Interpreter In Go&lt;/a&gt;. Reading this book means I’ll be building an interpreter for a language called &lt;code&gt;monkey&lt;/code&gt; (&lt;a href="https://github.com/caramelomartins/monkeylang"&gt;you can follow my implementation&lt;/a&gt;). It is a topic that I’ve always been curious about but never actually took the plunge and decided to study it.&lt;/p&gt;

&lt;p&gt;I dabbled with it a little in university, but we never actually wrote any of the internal tools: we always used generators for the code and focused more on the concepts. Rest assured, I can’t remember most of it by now.&lt;/p&gt;

&lt;p&gt;While reading this book, I reconnected with the basic concepts of how interpreters and compilers work. I’ve been taking notes and trying to expand my knowledge with other sources. I’ve always heard that to know something, you need to be able to explain it. I’m trying to do that here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lexers transform source code into symbols (or tokens).&lt;/strong&gt; They execute what is called “lexical analysis”. From &lt;a href="https://interpreterbook.com/"&gt;Writing an Interpreter in Go&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;(…) from source code to tokens, is called “lexical analysis”, or “lexing” for short. It’s done by a lexer (…)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Wikipedia &lt;a href="https://en.wikipedia.org/wiki/Lexical_analysis"&gt;defines&lt;/a&gt; “lexical analysis” as &lt;em&gt;“the process of converting a sequence of &lt;a href="https://en.wikipedia.org/wiki/Character_(computing)"&gt;characters&lt;/a&gt; (…) into a sequence of lexical tokens (…)”&lt;/em&gt;. This aligns well with what we read in the book.&lt;/p&gt;

&lt;p&gt;So, when we execute a lexer, we transform the source code (&lt;em&gt;text&lt;/em&gt;) into a series of data structures (&lt;em&gt;tokens&lt;/em&gt;). It is an integral component of writing interpreters (or compilers) as we need to transform source code into a representation that we understand and can process in later stages (e.g. parsing).&lt;/p&gt;

&lt;p&gt;As an example, in Go, if we have the statement &lt;code&gt;str := "string"&lt;/code&gt;, we are in the presence of three tokens: an identifier token (&lt;code&gt;str&lt;/code&gt;), an assignment token (&lt;code&gt;:=&lt;/code&gt;) and a string token (&lt;code&gt;"string"&lt;/code&gt;). &lt;strong&gt;Tokens are meaningful data structures&lt;/strong&gt;, allowing us to approach source code as data, rather than dealing with raw text inside of interpreters.&lt;/p&gt;
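&lt;p&gt;A minimal sketch of what those tokens might look like in Go (the type and constant names here are illustrative, not the book’s exact definitions):&lt;/p&gt;

```go
package main

import "fmt"

// TokenType distinguishes the kinds of tokens a lexer can emit.
type TokenType string

const (
	IDENT  TokenType = "IDENT"  // identifiers, e.g. str
	ASSIGN TokenType = "ASSIGN" // the := operator
	STRING TokenType = "STRING" // string literals
)

// Token pairs a token type with the literal text it was read from.
type Token struct {
	Type    TokenType
	Literal string
}

func main() {
	// The statement str := "string" as a stream of tokens.
	tokens := []Token{
		{Type: IDENT, Literal: "str"},
		{Type: ASSIGN, Literal: ":="},
		{Type: STRING, Literal: "string"},
	}
	for _, tok := range tokens {
		fmt.Println(tok.Type, tok.Literal)
	}
}
```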

&lt;p&gt;Lexers also help us ignore elements of source code that aren’t relevant (e.g. whitespace in some languages) and focus just on the elements that actually matter to us. After lexing, all irrelevant whitespace has been stripped: in C or Go, for example, I wouldn’t expect whitespace to appear as tokens after lexing. In Python, where whitespace is significant, I expect it to be taken into account in lexing.&lt;/p&gt;

&lt;p&gt;One concept I was particularly interested in, as I read, was the fact that &lt;strong&gt;lexers need to look ahead&lt;/strong&gt;. This isn’t always done the same way; it depends on the approach used. But it is integral to lexing, as it is needed to resolve ambiguities.&lt;/p&gt;

&lt;p&gt;As an example, reading a &lt;code&gt;=&lt;/code&gt; sign might represent different things. It can be an assignment or it can be the start of a comparison (&lt;code&gt;==&lt;/code&gt;). Without looking ahead, it is impossible to resolve this ambiguity and accurately know what a given character represents in source code.&lt;/p&gt;
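&lt;p&gt;A small sketch of that look-ahead, assuming a lexer that tracks its current reading position (the helper name is mine, not the book’s):&lt;/p&gt;

```go
package main

import "fmt"

// nextOp decides whether a '=' at position pos starts an assignment
// or an equality operator by peeking one character ahead.
func nextOp(input string, pos int) string {
	// Peek: if there is a following character and it is also '=',
	// this token is the two-character operator "==".
	if pos+1 != len(input) && input[pos+1] == '=' {
		return "=="
	}
	return "="
}

func main() {
	fmt.Println(nextOp("a = b", 2))  // =
	fmt.Println(nextOp("a == b", 2)) // ==
}
```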

&lt;p&gt;Lexical analysis is but a small step in the process of interpreting source code; its results will need to be fed into a parser, which performs additional steps with the tokens.&lt;/p&gt;

&lt;p&gt;It has been a good experience reading these initial sections of &lt;a href="https://interpreterbook.com/"&gt;Writing An Interpreter In Go&lt;/a&gt; and I finally feel I’m starting to understand the basics of these concepts.&lt;/p&gt;

</description>
      <category>compilers</category>
      <category>interpreters</category>
    </item>
    <item>
      <title>On Unit Testing in Temporal</title>
      <dc:creator>Hugo Martins</dc:creator>
      <pubDate>Wed, 28 Dec 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/caramelomartins/on-unit-testing-in-temporal-2iaa</link>
      <guid>https://dev.to/caramelomartins/on-unit-testing-in-temporal-2iaa</guid>
      <description>&lt;p&gt;I’ve been using &lt;a href="https://temporal.io/"&gt;Temporal&lt;/a&gt;, “a developer-first, open source platform that ensures the successful execution of services and applications”, for troubleshooting purposes for a while now. We use it &lt;a href="https://www.youtube.com/watch?v=LxgkAoTSI8Q"&gt;quite extensively at Datadog&lt;/a&gt;. Recently, I’ve started actually implementing something on top of Temporal and was amazed at the simplicity of its testing framework.&lt;/p&gt;

&lt;p&gt;At first I was a bit confused, as I’m interested in Go and could only find documentation focusing on Java. Eventually, I found &lt;a href="https://docs.temporal.io/go/how-to-test-workflow-definitions-in-go"&gt;How to test Workflow Definitions in Go&lt;/a&gt;, a great introduction to the testing framework that Temporal offers with &lt;a href="https://pkg.go.dev/go.temporal.io/sdk@v1.17.0/testsuite"&gt;testsuite&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It all starts with a &lt;code&gt;WorkflowTestSuite&lt;/code&gt;, in which we can initialize a &lt;code&gt;TestWorkflowEnvironment&lt;/code&gt;. This &lt;code&gt;TestWorkflowEnvironment&lt;/code&gt; is powerful and we immediately get access to a few important functions, such as &lt;code&gt;AssertExpectations&lt;/code&gt; or &lt;code&gt;ExecuteWorkflow&lt;/code&gt;. These allow you to do things as diverse as checking that all expected calls (to mocks) have been made and to execute a workflow inside a test environment. We can use &lt;code&gt;RegisterActivity&lt;/code&gt; or &lt;code&gt;RegisterWorkflow&lt;/code&gt; as well, to ensure our Activities and Workflows are available in the environment we’ll execute our tests in.&lt;/p&gt;

&lt;p&gt;Unfortunately, as pointed out in Temporal’s documentation, &lt;em&gt;“unless the Activity invocations are mocked(…) the test environment will execute the actual Activity code including any calls to outside services."&lt;/em&gt; This is particularly harmful if the Activities being invoked execute code that makes calls to endpoints on a network because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it makes the tests unable to execute without connecting to the network;&lt;/li&gt;
&lt;li&gt;it makes for unreliable and inconsistent behaviour of the tests;&lt;/li&gt;
&lt;li&gt;and it doesn’t allow us to test multiple failure modes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For this reason, Temporal’s testing framework offers methods to mock Activities in a &lt;code&gt;TestWorkflowEnvironment&lt;/code&gt; such as &lt;code&gt;OnActivity&lt;/code&gt;. This works well when the Activities are implemented internally but I couldn’t get it to work when the Activities are implemented in a third-party package.&lt;/p&gt;

&lt;p&gt;I read a bit of the &lt;a href="https://github.com/temporalio/samples-go/blob/main/expense/workflow_test.go"&gt;temporalio/samples-go&lt;/a&gt; repository, which houses a collection of samples and accompanying tests, to understand what needs to be done. It &lt;em&gt;seems&lt;/em&gt; simple enough:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type UnitTestSuite struct {
    suite.Suite
    testsuite.WorkflowTestSuite
}

func (s *UnitTestSuite) TestSampleWorkflow() {
    // Initialize test environment for UnitTestSuite.
    env := s.NewTestWorkflowEnvironment()

    // Register needed Activities.
    env.RegisterActivity(externalpackage.ExternalActivity)

    // Mock Activities.
    env.OnActivity(externalpackage.ExternalActivity, mock.Anything, mock.Anything).Return(nil)

    // Execute Workflow and Assert.
    env.ExecuteWorkflow(SampleWorkflow)
    (...)
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I was surprised it was this easy. I confess I was expecting a little bit more complexity. That might be due to my lack of knowledge about the internals of Temporal, but I have to say I was quite excited about how easily I was able to make progress.&lt;/p&gt;

</description>
      <category>temporal</category>
    </item>
    <item>
      <title>Understanding 'kubectl explain'</title>
      <dc:creator>Hugo Martins</dc:creator>
      <pubDate>Tue, 29 Mar 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/caramelomartins/understanding-kubectl-explain-31n3</link>
      <guid>https://dev.to/caramelomartins/understanding-kubectl-explain-31n3</guid>
      <description>&lt;p&gt;&lt;a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#explain"&gt;kubectl explain&lt;/a&gt; can be used to get more information about which fields a given resource needs, as well as the meaning behind those fields. It can be used in mainly two ways: either for getting information about the fields of a resource (e.g. &lt;code&gt;kubectl explain &amp;lt;resource&amp;gt;&lt;/code&gt;) or by getting information about a specific field of a resource (e.g. &lt;code&gt;kubectl explain &amp;lt;resource&amp;gt;.&amp;lt;field/s&amp;gt;&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Together with &lt;code&gt;api-versions&lt;/code&gt; and &lt;code&gt;api-resources&lt;/code&gt;, it can quickly provide a lot of insight into Kubernetes and what needs to be written in a manifest to properly create or edit objects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Examples
&lt;/h2&gt;

&lt;p&gt;Getting information about root fields of Pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl explain pods
KIND: Pod
VERSION: v1

DESCRIPTION:
     Pod is a collection of containers that can run on a host. This resource is
     created by clients and scheduled onto hosts.

FIELDS:
   apiVersion &amp;lt;string&amp;gt;
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind &amp;lt;string&amp;gt;
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata &amp;lt;Object&amp;gt;
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec &amp;lt;Object&amp;gt;
     Specification of the desired behavior of the pod. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

   status &amp;lt;Object&amp;gt;
     Most recently observed status of the pod. This data may not be up to date.
     Populated by the system. Read-only. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Getting information about the &lt;code&gt;spec&lt;/code&gt; of Pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl explain pods.spec
KIND: Pod
VERSION: v1

RESOURCE: spec &amp;lt;Object&amp;gt;

DESCRIPTION:
     Specification of the desired behavior of the pod. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

     PodSpec is a description of a pod.

FIELDS:
   activeDeadlineSeconds &amp;lt;integer&amp;gt;
     Optional duration in seconds the pod may be active on the node relative to
     StartTime before the system will actively try to mark it failed and kill
     associated containers. Value must be a positive integer.

   affinity &amp;lt;Object&amp;gt;
     If specified, the pod's scheduling constraints

   automountServiceAccountToken &amp;lt;boolean&amp;gt;
     AutomountServiceAccountToken indicates whether a service account token
     should be automatically mounted.

   containers &amp;lt;[]Object&amp;gt; -required-
     List of containers belonging to the pod. Containers cannot currently be
     added or removed. There must be at least one container in a Pod. Cannot be
     updated.

   dnsConfig &amp;lt;Object&amp;gt;
     Specifies the DNS parameters of a pod. Parameters specified here will be
     merged to the generated DNS configuration based on DNSPolicy.

   dnsPolicy &amp;lt;string&amp;gt;
     Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are
     'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS
     parameters given in DNSConfig will be merged with the policy selected with
     DNSPolicy. To have DNS options set along with hostNetwork, you have to
     specify DNS policy explicitly to 'ClusterFirstWithHostNet'.

(...)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It explains what this field represents, as well as all the fields that can be configured inside this specific field, and their meanings. It can be a powerful tool to understand Kubernetes. A sort of &lt;code&gt;man&lt;/code&gt; for Kubernetes itself.&lt;/p&gt;

&lt;p&gt;It can be particularly useful to turn to &lt;code&gt;kubectl explain&lt;/code&gt; to inspect &lt;a href="https://hugomartins.io/essays/2022/02/essential-fields-in-kubernetes-manifests/"&gt;essential fields in Kubernetes manifests&lt;/a&gt;, or internal details, when you forget them.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>What Can Be Done With 'kubectl run' Command?</title>
      <dc:creator>Hugo Martins</dc:creator>
      <pubDate>Fri, 18 Mar 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/caramelomartins/what-can-be-done-with-kubectl-run-command-35f</link>
      <guid>https://dev.to/caramelomartins/what-can-be-done-with-kubectl-run-command-35f</guid>
      <description>&lt;p&gt;&lt;a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run"&gt;kubectl run&lt;/a&gt; is a command that can &lt;em&gt;“create and run a particular image in a pod”&lt;/em&gt;, which means that it will start a pod with a particular image. It can also create the pod with some particular details by using combinations of flags. We can see the code for it in &lt;a href="https://github.com/kubernetes/kubectl/blob/f48256c8eef5df2e3a9c621dd667839bdbe7c4cd/pkg/cmd/run/run.go"&gt;pkg/cmd/run/run.go&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Interesting Flags
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--image&lt;/code&gt; specifies the image to be executed.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--command&lt;/code&gt; allows us to override the container’s command with whatever arguments are sent after &lt;code&gt;--&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--env&lt;/code&gt; sets environment variables in the pod, in the form &lt;code&gt;key=value&lt;/code&gt;, one variable per &lt;code&gt;--env&lt;/code&gt; flag.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--expose&lt;/code&gt; and &lt;code&gt;--port&lt;/code&gt; create a ClusterIP Service associated with the pod.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--labels&lt;/code&gt; allows us to configure labels for the pod; all labels are passed in a single flag, separated by commas.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--dry-run&lt;/code&gt; for testing purposes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Examples
&lt;/h2&gt;

&lt;p&gt;From documentation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl run nginx --image=nginx
kubectl run hazelcast --image=hazelcast/hazelcast --env="DNS_DOMAIN=cluster" --env="POD_NAMESPACE=default"
kubectl run hazelcast --image=hazelcast/hazelcast --labels="app=hazelcast,env=prod"
kubectl run nginx --image=nginx --command -- &amp;lt;cmd&amp;gt; &amp;lt;arg1&amp;gt; ... &amp;lt;argN&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Use Case
&lt;/h2&gt;

&lt;p&gt;While I don’t see much use for &lt;code&gt;kubectl run&lt;/code&gt; in production-grade systems, it can be a great resource for testing out different pod configurations and for quickly creating pods when needed. It can also be a great introductory command when first dealing with Kubernetes, because it eases the interaction without having to deal with YAML right away.&lt;/p&gt;

&lt;p&gt;Due to its &lt;code&gt;--output&lt;/code&gt; flag, it can also be a great way to create pods from scripts, since we get stable and reproducible outputs from it. The fact that we can use a &lt;code&gt;--dry-run&lt;/code&gt; flag is also a big plus.&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Essential Fields in Kubernetes Manifests</title>
      <dc:creator>Hugo Martins</dc:creator>
      <pubDate>Tue, 15 Feb 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/caramelomartins/essential-fields-in-kubernetes-manifests-428k</link>
      <guid>https://dev.to/caramelomartins/essential-fields-in-kubernetes-manifests-428k</guid>
      <description>&lt;p&gt;I recently started a quest to &lt;a href="https://hugomartins.io/essays/2022/02/ckad-in-2022/"&gt;complete CKAD&lt;/a&gt; in the next few months, by May 2022. As I’ve explained in that previous essay, “I have spent quite some €€€ enrolling in it and I feel that it can still teach me a lot of relevant concepts about Kubernetes that will be useful” on my day-to-day.&lt;/p&gt;

&lt;p&gt;While studying for CKAD, through &lt;a href="https://www.udemy.com/course/certified-kubernetes-application-developer/"&gt;Kubernetes Certified Application Developer (CKAD) with Tests&lt;/a&gt;, I’ve come to realize the importance of understanding the syntax of manifests in Kubernetes. Subconsciously, I obviously already knew this - the same way I know how important it is to dominate the syntax of a given programming language - but it is too easy to fall into a pattern of copy-pasting-and-changing, or simply filling in the gaps in already existing manifests.&lt;/p&gt;

&lt;p&gt;Manifests in Kubernetes are the baseline of describing and defining resources, that we can then create and edit afterwards. Manifests represent the object specification describing “its desired state, as well as some basic information about the object (such as a name).” &lt;sup id="fnref:1"&gt;1&lt;/sup&gt; These manifests are most often described in &lt;code&gt;.yaml&lt;/code&gt; files.&lt;/p&gt;

&lt;p&gt;In essence, there are four essential fields in Kubernetes manifests that must be present in all manifests. These are: &lt;code&gt;apiVersion&lt;/code&gt;, &lt;code&gt;kind&lt;/code&gt;, &lt;code&gt;metadata&lt;/code&gt; and &lt;code&gt;spec&lt;/code&gt;. Each of these might have widely varying values populating them.&lt;/p&gt;

&lt;p&gt;As an example, a starting point for a Kubernetes manifest would be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion:
kind:
metadata:
spec:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  apiVersion
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;apiVersion&lt;/code&gt; allows us to define which version of the API a given resource is going to use. It can be simply &lt;code&gt;v1&lt;/code&gt;, which means it is part of the &lt;em&gt;core&lt;/em&gt; API served at &lt;code&gt;/api/v1&lt;/code&gt;. It can also be &lt;code&gt;&amp;lt;name&amp;gt;/&amp;lt;version&amp;gt;&lt;/code&gt;, for example &lt;code&gt;batch/v1&lt;/code&gt;, specifying that a resource uses an API served under &lt;code&gt;/apis/&amp;lt;name&amp;gt;&lt;/code&gt;. &lt;sup id="fnref:2"&gt;2&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;We can find out which APIs and versions exist on a given cluster by executing &lt;code&gt;kubectl api-versions&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl api-versions
admissionregistration.k8s.io/v1
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
(...)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results will differ from cluster to cluster and between Kubernetes versions: clusters can have custom APIs, disabled APIs, or newer APIs introduced in more recent Kubernetes versions.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ReplicationController&lt;/code&gt; and &lt;code&gt;ReplicaSet&lt;/code&gt; are two popular objects that clarify this, as they differ in &lt;code&gt;apiVersion&lt;/code&gt;. &lt;code&gt;ReplicationController&lt;/code&gt; is part of the &lt;em&gt;core&lt;/em&gt; API in &lt;code&gt;v1&lt;/code&gt;, so we would write &lt;code&gt;apiVersion: v1&lt;/code&gt; in its manifest. &lt;code&gt;ReplicaSet&lt;/code&gt;, which evolved from &lt;code&gt;ReplicationController&lt;/code&gt;, is part of the more recent API group served at &lt;code&gt;apps/v1&lt;/code&gt;. This sort of versioning is incredibly powerful and flexible, allowing Kubernetes to evolve while keeping a lot of backwards compatibility.&lt;/p&gt;
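&lt;p&gt;For illustration, a minimal &lt;code&gt;ReplicaSet&lt;/code&gt; manifest showing the &lt;code&gt;apps/v1&lt;/code&gt; version (the names and label values here are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;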

&lt;h2&gt;
  
  
  kind
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;kind&lt;/code&gt; represents the kind of object that is specified via a manifest. Each kind of resource will be available on a particular API. This makes it essential that the specified &lt;code&gt;kind&lt;/code&gt; and &lt;code&gt;apiVersion&lt;/code&gt; match on a specific manifest.&lt;/p&gt;

&lt;p&gt;We can inspect which &lt;code&gt;kind&lt;/code&gt; we can use for objects by executing &lt;code&gt;kubectl api-resources&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl api-resources
NAME                  SHORTNAMES   APIVERSION                  NAMESPACED   KIND
bindings                           v1                          true         Binding
componentstatuses     cs           v1                          false        ComponentStatus
configmaps            cm           v1                          true         ConfigMap
apiservices                        apiregistration.k8s.io/v1   false        APIService
controllerrevisions                apps/v1                     true         ControllerRevision
daemonsets            ds           apps/v1                     true         DaemonSet
deployments           deploy       apps/v1                     true         Deployment
replicasets           rs           apps/v1                     true         ReplicaSet
statefulsets          sts          apps/v1                     true         StatefulSet
(...)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With &lt;code&gt;kubectl api-resources&lt;/code&gt; we can also quickly see which &lt;code&gt;apiVersion&lt;/code&gt; needs to be used for a particular resource.&lt;/p&gt;

&lt;h2&gt;
  
  
  metadata
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;metadata&lt;/code&gt; describes information of an object that allows for the unique identification of that object. When creating a manifest, this field should have &lt;em&gt;at least&lt;/em&gt; an associated name. Usually we will also see a field named &lt;code&gt;labels&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  spec
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;spec&lt;/code&gt; defines the desired state of the object in Kubernetes. It will vary widely between different resources and API versions, which means that it can be tricky to figure out - or memorize - all the needed fields.&lt;/p&gt;

&lt;p&gt;I’ve found one resource that can be created without a &lt;code&gt;spec&lt;/code&gt; field: namespaces. If we write a manifest with only &lt;code&gt;apiVersion&lt;/code&gt;, &lt;code&gt;kind&lt;/code&gt; and &lt;code&gt;metadata&lt;/code&gt;, and create the namespace with &lt;code&gt;kubectl create&lt;/code&gt;, Kubernetes will accept that manifest &lt;em&gt;but&lt;/em&gt; it will create the namespace internally with an appropriate &lt;code&gt;spec&lt;/code&gt;. As an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running &lt;code&gt;kubectl create&lt;/code&gt; results in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create -f my-namespace.yaml
namespace/my-namespace created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Executing &lt;code&gt;kubectl get&lt;/code&gt; allows us to see the injected &lt;code&gt;spec&lt;/code&gt; field:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get ns my-namespace -o yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
  (...)
  selfLink: /api/v1/namespaces/my-namespace
  uid: f1b901a6-31d6-457a-aaaf-0cb6d600d52c
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;All of this information can be confirmed in Kubernetes' own documentation by reading &lt;a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields"&gt;Required Fields&lt;/a&gt;. Although this isn’t a deep exploration of manifests, having solid foundations is extremely important to understand what has been built on top of them. Reasoning about this structure also provides a glimpse at the baseline that gives Kubernetes so much flexibility, allowing it to have 50+ components &lt;em&gt;out-of-the-box&lt;/em&gt; and a lot of extensibility via custom resources.&lt;/p&gt;




&lt;ol&gt;
&lt;li id="fn:1"&gt;
&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#describing-a-kubernetes-object"&gt;Describing a Kubernetes object&lt;/a&gt; ↩︎&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:2"&gt;
&lt;p&gt;&lt;a href="https://kubernetes.io/docs/reference/using-api/"&gt;API Overview&lt;/a&gt; ↩︎&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Certified Kubernetes Application Developer (CKAD) in 2022</title>
      <dc:creator>Hugo Martins</dc:creator>
      <pubDate>Sun, 06 Feb 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/caramelomartins/certified-kubernetes-application-developer-ckad-in-2022-3kac</link>
      <guid>https://dev.to/caramelomartins/certified-kubernetes-application-developer-ckad-in-2022-3kac</guid>
      <description>&lt;p&gt;May 2021 in sunny Portugal. I &lt;em&gt;virtually&lt;/em&gt; attended &lt;a href="https://events.linuxfoundation.org/archive/2021/kubecon-cloudnativecon-europe/"&gt;KubeCon + CloudNativeCon Europe 2021&lt;/a&gt;, writing a &lt;a href="https://hugomartins.io/essays/2021/05/kubecon-europe-2021-highlights-1/"&gt;series&lt;/a&gt; &lt;a href="https://hugomartins.io/essays/2021/05/kubecon-europe-2021-highlights-2/"&gt;of&lt;/a&gt; &lt;a href="https://hugomartins.io/essays/2021/05/kubecon-europe-2021-highlights-3/"&gt;daily&lt;/a&gt; &lt;a href="https://hugomartins.io/essays/2021/05/kubecon-europe-2021-highlights-4/"&gt;highlights&lt;/a&gt; about this experience. It focused on the talks I most enjoyed, what I learned and technologies I’d keep an eye on. I even took the time to write some tweets, even though I barely use Twitter.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://twitter.com/hashtag/KubeCon?src=hash&amp;amp;ref_src=twsrc%5Etfw"&gt;#KubeCon&lt;/a&gt; setup. Starting off with LitmusChaos office hours. &lt;a href="https://t.co/c7Q24gl6Dd"&gt;pic.twitter.com/c7Q24gl6Dd&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;— Hugo Martins (&lt;a class="mentioned-user" href="https://dev.to/caramelomartins"&gt;@caramelomartins&lt;/a&gt;) &lt;a href="https://twitter.com/caramelomartins/status/1389510179021017089?ref_src=twsrc%5Etfw"&gt;May 4, 2021&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At the time, I was using Kubernetes extensively. I was working as an SRE, in what was really more of a Platform Engineer role, working on tooling in the Software Development Life Cycle ™️ domain, following the hype of &lt;a href="https://humanitec.com/blog/internal-platform-teams-what-are-they-and-do-you-need-one"&gt;Internal Platform Teams&lt;/a&gt;. Kubernetes was quickly becoming, if it wasn’t already, the &lt;em&gt;de facto&lt;/em&gt; standard framework to build these sorts of platforms on. Kubernetes serves as the underlying “OS” of sorts, while a lot of applications and customization are built on top of it.&lt;/p&gt;

&lt;p&gt;Due to the type of work I had been doing, and because it is a domain that interests me and that I want to keep working in, I made the decision to enroll in the &lt;a href="https://training.linuxfoundation.org/certification/certified-kubernetes-application-developer-ckad/"&gt;Certified Kubernetes Application Developer (CKAD)&lt;/a&gt;. I racked up a hefty discount from attending &lt;a href="https://events.linuxfoundation.org/archive/2021/kubecon-cloudnativecon-europe/"&gt;KubeCon + CloudNativeCon Europe 2021&lt;/a&gt; and had an entire year to study for, register for and pass CKAD. What could go wrong?!&lt;/p&gt;

&lt;p&gt;It was all very promising and hopeful but, after almost a year, I hadn’t managed to study for CKAD, nor had I registered to take the exam. To make matters worse, CKAD went through &lt;a href="https://training.linuxfoundation.org/ckad-program-change-2021/"&gt;a series of program changes&lt;/a&gt; just four months after I enrolled. It now includes questions about Helm, more security-oriented aspects, deployment strategies and CRDs, among other changes.&lt;/p&gt;

&lt;p&gt;Although I no longer feel the same motivation and excitement towards passing the exam, mostly because I’m not working as closely with Kubernetes anymore, I have spent quite some €€€ enrolling in it and I feel that it can still teach me a lot of relevant concepts about Kubernetes, as well as help me improve my proficiency with &lt;code&gt;kubectl&lt;/code&gt;, which I still use every day.&lt;/p&gt;

&lt;p&gt;For the reasons above, I’m going to try to complete the &lt;a href="https://training.linuxfoundation.org/certification/certified-kubernetes-application-developer-ckad/"&gt;Certified Kubernetes Application Developer (CKAD)&lt;/a&gt; in 2022, more specifically in the last few months I have until May. This means that I’ll have to dive into Kubernetes concepts and tooling while studying, and write more about it, which I’m hoping to share in the form of small snippets and notes.&lt;/p&gt;

&lt;p&gt;I don’t want it to become a reference guide to Kubernetes, but I believe that sharing notes and small essays might help me get more motivated to study for CKAD. With that in mind, I’ll probably start a small series of notes on Kubernetes. I’m not yet sure what this series will look like, in structure or in name, but I’m hopeful that it will give me the extra boost I need to successfully study for and complete CKAD.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>ckad</category>
      <category>learning</category>
    </item>
    <item>
<title>Locks, Deadlocks and Liquibase</title>
      <dc:creator>Hugo Martins</dc:creator>
      <pubDate>Thu, 30 Dec 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/caramelomartins/locks-deadlocks-and-liquidbase-38g</link>
      <guid>https://dev.to/caramelomartins/locks-deadlocks-and-liquidbase-38g</guid>
      <description>&lt;p&gt;“Long time no see”, heh? I’ve been busy with other undertakings - such as switching jobs - which means I’ve had less time to write. The emptiness in here since October is explained by that but I’m back and I bring some extraordinary &lt;em&gt;fun&lt;/em&gt; with &lt;a href="https://www.liquibase.org/"&gt;Liquidbase&lt;/a&gt; and its locking mechanism.&lt;/p&gt;

&lt;p&gt;Liquibase is an open source library designed to improve the versioning of database schema changes. You can check its &lt;a href="https://github.com/liquibase/liquibase"&gt;source code&lt;/a&gt; but, in essence, it helps control versioning across database schema changes, similarly to &lt;a href="https://flywaydb.org/"&gt;Flyway&lt;/a&gt;. It is written in Java, making it a natural fit for bootstrapping Spring applications.&lt;/p&gt;

&lt;p&gt;Recently, I’ve had a nasty encounter with Liquibase, due to its use in Spinnaker, in particular &lt;a href="https://github.com/spinnaker/clouddriver"&gt;clouddriver&lt;/a&gt;. It involves corrupt locks and the deadlocks they cause.&lt;/p&gt;

&lt;p&gt;Liquibase relies on a &lt;em&gt;changelog&lt;/em&gt;, composed of &lt;em&gt;changesets&lt;/em&gt;, to reliably and accurately know which changes have and haven’t been executed. Liquibase uses a table in the database, named &lt;code&gt;DATABASECHANGELOG&lt;/code&gt;, to track these &lt;em&gt;changesets&lt;/em&gt;: whether they have already been executed and &lt;em&gt;when&lt;/em&gt; they were executed.&lt;/p&gt;

&lt;p&gt;Because Liquibase is aware that multiple instances of the same application might boot up at the same time and read &lt;code&gt;DATABASECHANGELOG&lt;/code&gt; at the same, or conflicting, times, it also has a table named &lt;code&gt;DATABASECHANGELOGLOCK&lt;/code&gt;. Through this table, instances that are currently reading the database can share locks and manage themselves out of concurrency issues.&lt;/p&gt;

&lt;p&gt;Whenever an application with Liquibase boots up, it will read &lt;code&gt;DATABASECHANGELOGLOCK&lt;/code&gt; and check for an existing lock. If the lock has been acquired by another instance, it will wait until the lock has been freed before acquiring it and performing the necessary operations.&lt;/p&gt;

&lt;p&gt;Unfortunately, as we know, this behaviour is prone to leaving corrupt locks in the database, held by instances that no longer require the lock or are no longer running, locking all other instances out and leaving them unable to boot. In essence, it creates a deadlock: all instances wait for the existing lock to be freed, while the original holder probably terminated a long time ago.&lt;/p&gt;

&lt;p&gt;This is exactly what happens with Liquibase if an instance is unexpectedly terminated without freeing its lock. Any other instance that tries to boot up won’t be able to acquire the lock and will enter an infinite loop until the lock is finally freed.&lt;/p&gt;

&lt;p&gt;I’m assuming this is a common occurrence because it is even documented in &lt;a href="https://docs.liquibase.com/concepts/basic/databasechangeloglock-table.html?%20__hstc=94692481.33ffaa0fcf7cebaa197baf4111ff2e40.1640694727731.1640694727731.1640694727731.1&amp;amp;__%20hssc=94692481.1.1640694727731&amp;amp;__hsfp=1132227981&amp;amp;_ga=2.228451742.2022857303.1640694723-1719938394.1640694723"&gt;Liquibase’s documentation&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If Liquibase does not exit cleanly, the lock row may be left as locked. You can clear out the current lock by running &lt;code&gt;liquibase releaseLocks&lt;/code&gt; which runs &lt;code&gt;UPDATE DATABASECHANGELOGLOCK SET LOCKED=0&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Unfortunately, I didn’t know this beforehand. When I encountered this issue, it made for a fun afternoon of chasing ghosts…until I saw somewhere online that there might be a corrupt lock still sitting in the database.&lt;/p&gt;

&lt;p&gt;If you happen to encounter this error and see instances booting up and looping infinitely with &lt;code&gt;Waiting for changelog lock&lt;/code&gt;…please make sure you aren’t witnessing a deadlock due to a corrupt lock in your database.&lt;/p&gt;
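&lt;p&gt;If running &lt;code&gt;liquibase releaseLocks&lt;/code&gt; isn’t an option - for example, when the application embeds Liquibase and you only have database access - the lock can also be inspected and cleared directly. A minimal sketch, using the table and columns described in Liquibase’s documentation (adapt the syntax to your database):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Check who currently holds the lock.
SELECT ID, LOCKED, LOCKGRANTED, LOCKEDBY FROM DATABASECHANGELOGLOCK;

-- If the holder terminated long ago, release the lock manually.
UPDATE DATABASECHANGELOGLOCK SET LOCKED = 0, LOCKGRANTED = NULL, LOCKEDBY = NULL;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;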

</description>
      <category>concurrency</category>
      <category>spring</category>
      <category>liquidbase</category>
      <category>database</category>
    </item>
    <item>
      <title>Devops Principles of Feedback: Understanding What's Going On</title>
      <dc:creator>Hugo Martins</dc:creator>
      <pubDate>Mon, 04 Oct 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/caramelomartins/devops-principles-of-feedback-understanding-what-s-going-on-2mf0</link>
      <guid>https://dev.to/caramelomartins/devops-principles-of-feedback-understanding-what-s-going-on-2mf0</guid>
      <description>&lt;p&gt;A few months ago, I’ve written at length about DevOps themes, in essays such as &lt;a href="https://hugomartins.io/essays/2021/01/three-ways-of-devops/"&gt;Three Ways of DevOps&lt;/a&gt; and &lt;a href="https://hugomartins.io/essays/2021/02/devops-principles-and-practices-of-flow/"&gt;DevOps Principles of Flow: Deliver Faster&lt;/a&gt;, based on reading &lt;a href="https://www.amazon.com/DevOps-Handbook-World-Class-Reliability-Organizations/dp/1942788002"&gt;The DevOps Handbook&lt;/a&gt;. Since then, I’ve had this essay half-baked, sitting on my drafts and haven’t been able to complete it up until now.&lt;/p&gt;

&lt;p&gt;In this essay, I want to focus our attention on &lt;em&gt;The Principles of Feedback&lt;/em&gt;. I’ve previously summarized them as:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The Principles of Feedback&lt;/em&gt; are the guiding principles behind the Second Way. At their core, they enable a “fast and constant” flow of feedback from production all the way back to engineers. These principles focus on getting the necessary ambience and tools in place for engineers to monitor their work and quickly react to any adverse situations. They focus on amplifying feedback from the operational side of software to the development side of software.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In a simple analogy, if &lt;em&gt;The Principles of Flow&lt;/em&gt; focus on the &lt;em&gt;left-to-right&lt;/em&gt; flow of information (e.g. from “dev” to “ops”), &lt;em&gt;The Principles of Feedback&lt;/em&gt; guide the &lt;em&gt;right-to-left&lt;/em&gt; flow of information (e.g. from “ops” back to “dev”).&lt;/p&gt;

&lt;h2&gt;
  
  
  Principles
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;We make our system of work safer by creating fast, frequent, high quality information flow throughout our value stream and our organization, which includes feedback and feedforward loops. This allows us to detect and remediate problems while they are smaller, cheaper and easier to fix; avert problems before they cause catastrophe; and create organizational learning that we integrate into future work. &lt;em&gt;(The DevOps Handbook, pp. 27)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To achieve what the above quote proposes, &lt;em&gt;The DevOps Handbook&lt;/em&gt; outlines the following principles:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;em&gt;Working Safely within Complex Systems&lt;/em&gt;: We must create the necessary environment in which we can work in our systems “without fear, confident that any errors will be detected quickly, long before they cause catastrophic outcomes (…)”.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;See Problems as They Occur&lt;/em&gt;: We need to ensure we have telemetry to accompany our production environment and be able to effectively monitor any incidents long before they affect customers. We can also use telemetry to validate we are achieving our desired goals.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Swarm and Solve Problems to Build New Knowledge&lt;/em&gt;: Swarming, &lt;a href="https://en.wikipedia.org/wiki/Swarm_behaviour"&gt;from Wikipedia&lt;/a&gt;, “is a collective behavior exhibited by entities, particularly animals, of similar size which aggregate together, perhaps milling about the same spot or perhaps moving en masse or migrating in some direction.” In the technological sense, this means that when an issue, even the smallest of issues, occurs, all elements of a team must unite to resolve the problems as quickly as possible.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Keep Pushing Quality Closer to the Source&lt;/em&gt;: Ensure that quality is enforced by our peers, rather than top-down bureaucratic processes.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Telemetry Everywhere
&lt;/h2&gt;

&lt;p&gt;A direct, but at times complex, way of abiding by the first two principles is to ensure that telemetry is available to detect and solve problems in real-time. For this to be possible, we should have centralized telemetry infrastructure used by the entire organization. We should also ensure that applications have appropriate logging that can help troubleshoot production issues.&lt;/p&gt;

&lt;p&gt;Having this infrastructure in place lets us use telemetry to guide problem-solving, rather than relying on semi-blind troubleshooting approaches. Apart from using telemetry to solve problems, we should also push for telemetry on production metrics that are outside the scope of daily work, such as, for example, the number of successful logins. These metrics can then be used as SLIs that contribute to more effective SLOs.&lt;/p&gt;

&lt;p&gt;Another important aspect of telemetry is that it cannot be a walled garden. We must ensure that telemetry can be self-service, providing visibility to developers about the telemetry of their applications, extending this feedback loop to everyone inside the organization. This increases transparency and guarantees that everyone has a shared vision of “reality”.&lt;/p&gt;

&lt;p&gt;With telemetry available, we need to ensure that we use it to better anticipate problems and achieve our goals. This means statistical analysis, anomaly detection techniques and automated alerts.&lt;/p&gt;

&lt;p&gt;This fast feedback loop, with telemetry and infrastructure for self-service, allows deployments to be safer and lets developers follow their work, see how their applications behave, share on-call rotas and even manage their services in production - instead of having to rely on a dedicated operations team.&lt;/p&gt;

&lt;h2&gt;
  
  
  Review &amp;amp; Coordinate Close to the Source
&lt;/h2&gt;

&lt;p&gt;We should aim at creating review and coordination processes as close to the source code as possible - for example, avoiding change approval processes entirely, fearlessly cutting bureaucratic processes and instead relying on peer review, coordination and the scheduling of changes, without formal approvals. Code reviews have been proven to improve code readability and a sense of ownership, as well as code transparency and contributions. We should also &lt;em&gt;review&lt;/em&gt; the review process itself, to ensure that we are enabling effective reviews, rather than creating a process that doesn’t ensure the quality of the source code.&lt;/p&gt;

&lt;p&gt;Reviewing and coordinating closer to the source also enables teams to &lt;em&gt;swarm&lt;/em&gt; whenever an issue occurs. Swarming to troubleshoot contributes to the creation of shared knowledge that becomes distributed throughout the team, allowing teams to be more effective and efficient.&lt;/p&gt;

&lt;p&gt;Whenever possible, bottom-up coordination is much better than top-down mandates based on bureaucratic processes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;An organization with a well-set-up infrastructure, which provides self-service telemetry, aligned with the autonomy of self-sufficient teams, is much better equipped to face the challenges that modern development teams face every day.&lt;/p&gt;

&lt;p&gt;These practices - outlined throughout this essay - are the core of what enables organizations to put feedback and feedforward loops in place that allow errors to be smaller, cheaper and caught earlier.&lt;/p&gt;

&lt;p&gt;I hope to write more in the future about the last way of DevOps, &lt;em&gt;The Principles of Continual Learning&lt;/em&gt;, which are by far my favourite guidelines of the entire book because they are, at this moment, perhaps the least known and the least put into practice.&lt;/p&gt;

</description>
      <category>devops</category>
    </item>
    <item>
      <title>Another Way of Accessing Host Resources in Minikube</title>
      <dc:creator>Hugo Martins</dc:creator>
      <pubDate>Mon, 20 Sep 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/caramelomartins/another-way-of-accessing-host-resources-in-minikube-3fo5</link>
      <guid>https://dev.to/caramelomartins/another-way-of-accessing-host-resources-in-minikube-3fo5</guid>
      <description>&lt;p&gt;A few years ago, I wrote an essay about this topic titled “&lt;a href="https://hugomartins.io/essays/2019/12/access-host-resources-minikube/"&gt;How to Access Host Resources in Minikube Pods?&lt;/a&gt;”. In it, I have described how to access host resources in Minikube. Something I thought should be easy but, at the time, it really wasn’t.&lt;/p&gt;

&lt;p&gt;Back then, I described an approach, based on a GitHub &lt;a href="https://github.com/kubernetes/minikube/issues/2735"&gt;issue&lt;/a&gt;, that relied on listing kernel routing tables to fetch the IP of the host machine from inside Minikube. I described why this worked, going so far as to share a few snippets of the Kubernetes source code.&lt;/p&gt;

&lt;p&gt;While this has served me well for the last few years, I’ve recently discovered that this approach is a bit outdated - the documentation referenced in that essay doesn’t even exist anymore. I wrote it when Minikube was at v1.6.2 and it is now at v1.23.1…a lot has happened. After searching a bit more, I’ve learned that Minikube &lt;em&gt;now&lt;/em&gt; has a proper solution for this, documented in &lt;a href="https://minikube.sigs.k8s.io/docs/handbook/host-access/"&gt;Host access&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This approach applies to Minikube versions newer than v1.10 and essentially relies on using &lt;code&gt;host.minikube.internal&lt;/code&gt; to connect to the host Minikube is running on. According to the documentation, “minikube v1.10 adds a hostname entry host.minikube.internal to /etc/hosts.” I’ve tested this and it does work, while being much easier and more reliable.&lt;/p&gt;

&lt;p&gt;As with the initial approach, the same caveat applies:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The service running on your host must either be bound to all IP’s (0.0.0.0) and interfaces, or to the IP and interface your VM is bridged against. If the service is bound only to localhost (127.0.0.1), this will not work.&lt;/p&gt;
&lt;/blockquote&gt;
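&lt;p&gt;As a quick sanity check, assuming a hypothetical service listening on port 8000 on the host (and bound to 0.0.0.0, per the caveat above), we can reach it from inside the cluster through the special hostname:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# On the host: serve something on all interfaces (hypothetical example).
$ python3 -m http.server 8000 --bind 0.0.0.0

# From inside Minikube: curl the host via host.minikube.internal.
$ kubectl run curl-test --rm -it --restart=Never \
    --image=curlimages/curl -- curl -s http://host.minikube.internal:8000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;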

</description>
      <category>minikube</category>
    </item>
    <item>
      <title>KubeCon Europe 2021: Highlights  #4</title>
      <dc:creator>Hugo Martins</dc:creator>
      <pubDate>Fri, 07 May 2021 09:27:59 +0000</pubDate>
      <link>https://dev.to/caramelomartins/kubecon-europe-2021-highlights-4-10l6</link>
      <guid>https://dev.to/caramelomartins/kubecon-europe-2021-highlights-4-10l6</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LIiAqwt2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/cncf/artwork/master/other/kubecon-cloudnativecon/2021-eu-virtual/color/kubecon-eu-2021-color.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LIiAqwt2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/cncf/artwork/master/other/kubecon-cloudnativecon/2021-eu-virtual/color/kubecon-eu-2021-color.png" alt="kubecon-logo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today was the last day of KubeCon Europe 2021. As I have done in the last three days, it is only right that I write about what today was like for me. You can find my highlights for previous days in &lt;a href="https://hugomartins.io/essays/2021/05/kubecon-europe-2021-highlights-1"&gt;KubeCon Europe 2021: Highlights #1&lt;/a&gt;, &lt;a href="https://hugomartins.io/essays/2021/05/kubecon-europe-2021-highlights-2"&gt;KubeCon Europe 2021: Highlights #2&lt;/a&gt; and &lt;a href="https://hugomartins.io/essays/2021/05/kubecon-europe-2021-highlights-3"&gt;KubeCon Europe 2021: Highlights #3&lt;/a&gt;. These are a series of notes written as a &lt;em&gt;stream of consciousness&lt;/em&gt; without much editing. This year's edition of KubeCon Europe was once again completely virtual.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sessions
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When Prometheus Can’t Take the Load Anymore&lt;/strong&gt; by Liron Cohen was the first good session I watched today. This session focused on showcasing some issues with Prometheus such as scalability issues, no real high availability, no global view of information and no long term storage. Then Liron Cohen explained, in quite some detail, three potential solutions: &lt;a href="https://eng.uber.com/m3/"&gt;M3&lt;/a&gt;, &lt;a href="https://cortexmetrics.io/"&gt;Cortex&lt;/a&gt; and &lt;a href="https://thanos.io/"&gt;Thanos&lt;/a&gt;. In the end, they used Thanos because of the simplicity of its architecture when compared with M3 and Cortex. I had no idea about the complexity that is needed to have a highly available Prometheus which is why this session was so enlightening for me.&lt;/p&gt;

&lt;p&gt;On a smaller scale, &lt;strong&gt;SIG CLI: Intro and Updates&lt;/strong&gt; by Maciej Szulik, Katrina Verey and Jeff Regan, was also interesting because I wanted to know more about this SIG. Updates with better integration between kubectl and kustomize were presented, as well as some other tooling that I was unaware of such as &lt;a href="https://github.com/kubernetes-sigs/kui"&gt;kui&lt;/a&gt; and &lt;a href="https://krew.sigs.k8s.io/"&gt;krew&lt;/a&gt;. A funny thing I discovered was that they used a pattern in kubectl development, which I have seen before, of coupling commands (from Cobra) with business logic. They are now trying to go back on that decision, which is something I've also faced before because coupling the framework with the business logic ends up creating a lot of interoperability issues in the business logic.&lt;/p&gt;

&lt;p&gt;Next up was &lt;strong&gt;Battle of Black Friday: Proactively Autoscaling StockX&lt;/strong&gt; by Mario Loria and Kyle Schrade. This session told the story of going from manual scaling to proactive automated scaling. It was a wonderful session about proactive scaling, going further than Cluster Autoscaling and HPA. They’ve developed a system that can easily handle traffic spikes of more than 500% in less than 30 seconds: an endpoint consumes events from Kubernetes and, based on those events, a CronJob proactively warms up and cools down resources. A great example of how to deal with the delays of cluster autoscaling and HPA when your spikes are sudden, non-linear and unannounced - and also of keeping systems simple, with a single endpoint and a CronJob managing the proactive warm-up and cool-down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How We are Dealing with Metrics at Scale on GitLab.com&lt;/strong&gt; by Andrew Newdigate showcased a solution for mitigating low-precision, unactionable and flappy alerting. It leverages key metrics, such as Apdex, requests per second, errors per second and saturation, as SLIs, to then be able to monitor whether SLOs are being met. Metrics are centralized and shared in a metrics catalogue that defines key metrics, per-service SLOs and inter-service dependencies, and generates Prometheus and Thanos rules, as well as Grafana dashboards. Using Grafana as a launchpad to other components can also help with context during on-call.&lt;/p&gt;

&lt;p&gt;One great discovery for me today was the &lt;a href="https://gateway-api.sigs.k8s.io/"&gt;Gateway API&lt;/a&gt;, presented in &lt;strong&gt;Gateway API: A New Set of Kubernetes APIs for Advanced Traffic Routing&lt;/strong&gt; by Harry Bagdi and Rob Scott. It provides a set of resources in Kubernetes that improve on what already exists with Ingresses. It focuses on improved routing and can handle much more complex scenarios, being more expressive and extensible. It can handle canary deployments (traffic splitting) much more easily, as well as header matching and multicluster traffic. It also supports HTTP, UDP and TCP routing through simple and expressive resources. I believe this is a truly ground-breaking thing in this niche.&lt;/p&gt;
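&lt;p&gt;To illustrate the traffic splitting mentioned above, here is a rough sketch of an &lt;code&gt;HTTPRoute&lt;/code&gt; sending 90% of traffic to a stable backend and 10% to a canary. The resource names are hypothetical and the API has evolved quickly, so check the Gateway API documentation for the exact version and fields:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: my-app-split
spec:
  rules:
  - backendRefs:
    - name: my-app-stable   # hypothetical Service name
      port: 80
      weight: 90
    - name: my-app-canary   # hypothetical Service name
      port: 80
      weight: 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;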

&lt;p&gt;&lt;strong&gt;Cert-Manager Beyond Ingress – Exploring the Variety of Use Cases&lt;/strong&gt; by Matthew Bates was a nice introduction to a lot of use cases of cert-manager that I was unaware of. Apparently, you can use cert-manager at almost all layers of Kubernetes: Pod to Pod, Kubelet, kubeadm and webhooks. I had no idea cert-manager had all of this reach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High Throughput with Low Resource Usage: A Logging Journey&lt;/strong&gt; by Eduardo Silva featured a deep dive into the complexities of log collectors and processors such as Fluentd. It also went into some detail on how &lt;a href="https://fluentbit.io/"&gt;Fluent Bit&lt;/a&gt; can optimize data handling and I/O with data serialization and buffer management via chunks. A couple of slides were particularly enlightening in terms of the complexity that a log collector and processor has to handle.&lt;/p&gt;

&lt;p&gt;I still have a few more sessions on my "to watch" list, such as &lt;strong&gt;K8s Labels Everywhere! Decluttering With Node Profile Discovery.&lt;/strong&gt; by Conor Nolan, &lt;strong&gt;Isolate the Users! Supporting User Namespaces in K8s for Increased Security&lt;/strong&gt; by Mauricio Vásquez, &lt;strong&gt;Application Autoscaling Made Easy With Kubernetes Event-Driven Autoscaling&lt;/strong&gt; by Tom Kerkhove, and &lt;strong&gt;Choose Wisely: Understanding Kubernetes Selectors&lt;/strong&gt; by Christopher Hanson. Given that the sessions are available on-demand on KubeCon's platform, I will probably watch them later next week. There was just so much to see that you couldn't catch everything; otherwise, you'd be awake for 4 days straight.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cool Tech
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://keptn.sh/"&gt;ketpn&lt;/a&gt;: "Cloud-native application life-cycle orchestration".&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://eng.uber.com/m3/"&gt;M3&lt;/a&gt;: "Uber’s Open Source, Large-scale Metrics Platform for Prometheus"&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://cortexmetrics.io/"&gt;Cortex&lt;/a&gt;: "Horizontally scalable, highly available, multi-tenant, long term Prometheus".&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://thanos.io/"&gt;Thanos&lt;/a&gt;: "Open source, highly available Prometheus setup with long term storage capabilities".&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/kubernetes-sigs/kui"&gt;kui&lt;/a&gt;: "hybrid command-line/UI development experience for cloud-native development".&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://krew.sigs.k8s.io/"&gt;krew&lt;/a&gt;: " plugin manager for kubectl command-line tool".&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/kubernetes-sigs/multi-tenancy/tree/master/incubator/virtualcluster"&gt;Virtual Clusters&lt;/a&gt;: "a new architecture to address various Kubernetes control plane isolation challenges".&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/FairwindsOps/goldilocks"&gt;Goldilocks&lt;/a&gt;: "utility that can help you identify a starting point for resource requests and limits".&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://rook.io/"&gt;Rook&lt;/a&gt;: "Production ready management for File, Block and Object Storage".&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.ceph.com/en/latest/"&gt;Ceph&lt;/a&gt;: "uniquely delivers object, block, and file storage in one unified system".&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://gateway-api.sigs.k8s.io/"&gt;Gateway API&lt;/a&gt;: "a collection of resources that model service networking in Kubernetes".&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fluentbit.io/"&gt;Fluent Bit&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubecon</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>KubeCon Europe 2021: Highlights  #3</title>
      <dc:creator>Hugo Martins</dc:creator>
      <pubDate>Thu, 06 May 2021 08:02:59 +0000</pubDate>
      <link>https://dev.to/caramelomartins/kubecon-europe-2021-highlights-3-48op</link>
      <guid>https://dev.to/caramelomartins/kubecon-europe-2021-highlights-3-48op</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LIiAqwt2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/cncf/artwork/master/other/kubecon-cloudnativecon/2021-eu-virtual/color/kubecon-eu-2021-color.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LIiAqwt2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/cncf/artwork/master/other/kubecon-cloudnativecon/2021-eu-virtual/color/kubecon-eu-2021-color.png" alt="kubecon-logo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I wrote about personal highlights from KubeCon Europe 2021 &lt;a href="https://hugomartins.io/essays/2021/05/kubecon-europe-2021-highlights-2/"&gt;yesterday&lt;/a&gt; and &lt;a href="https://hugomartins.io/essays/2021/05/kubecon-europe-2021-highlights-1/"&gt;the day before&lt;/a&gt;. I think it is only right if I write about it a third time. For those that are unaware, I’m writing about my experience in KubeCon Europe 2021 and today is Day 3. These will be a series of notes written as a &lt;em&gt;stream of consciousness&lt;/em&gt; without much editing. This year’s edition of KubeCon Europe was once again completely virtual.&lt;/p&gt;

&lt;h2&gt;Sessions&lt;/h2&gt;

&lt;p&gt;I started my day with some leftover sessions from yesterday, beginning with &lt;strong&gt;Zero Pain Microservice Development and Deployment with Dapr and KubeVela&lt;/strong&gt; by Hongchao Deng, which presented a solution for improving workload deployments with &lt;a href="https://kubevela.io/"&gt;KubeVela&lt;/a&gt; and &lt;a href="https://dapr.io/"&gt;Dapr&lt;/a&gt; as a sidecar for metrics. Two things stood out to me in this session: more and more tools are being built to ease and abstract away the complexity of Kubernetes (in this case, KubeVela helps Platform Teams create “application-centric” resources in Kubernetes); and, once again, there’s a trend of using sidecars to deploy auxiliary technology alongside applications.&lt;/p&gt;

&lt;p&gt;In the same vein, &lt;strong&gt;Turning Your Cloud Native Apps Inside Out With a Service Mesh&lt;/strong&gt; by Adam Zwickey and Liam White presented another use case: using Envoy as a sidecar to transparently create service meshes that can be managed by a Platform Team, leaving developers free to focus exclusively on their own services. Have I mentioned that there’s a trend of using sidecars for auxiliary technology? Envoy seems to be front and center in this; we just saw Dapr being used for metrics, for example.&lt;/p&gt;

&lt;p&gt;One of the things I enjoy about these conferences is the introductory sessions, such as &lt;strong&gt;Introduction and Deep Dive Into Containerd&lt;/strong&gt; by Kohei Tokunaga and Akihiro Suda, where I learned a bit more about how &lt;a href="https://containerd.io/"&gt;containerd&lt;/a&gt; works internally (apparently you can easily build containerd clients, for example), or &lt;strong&gt;CoreDNS Deep Dive: Building Custom Plugins&lt;/strong&gt; by Yong Tang, where I learned that we can build plugins for &lt;a href="https://coredns.io/"&gt;CoreDNS&lt;/a&gt;, which I had no idea about. These sessions are a great way to get a quick introduction to a specific technology or to learn about extensibility in a project.&lt;/p&gt;
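
&lt;p&gt;CoreDNS’s extensibility is visible even without writing any Go: the Corefile that configures a server is essentially an ordered chain of plugins. A minimal sketch (the zone name and file path below are made up for illustration):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Each directive inside this server block is a CoreDNS plugin.
example.org:53 {
    # serve the zone from a file on disk (hypothetical path)
    file /etc/coredns/db.example.org
    # cache responses for up to 30 seconds
    cache 30
    # log queries and errors to stdout
    log
    errors
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;A custom plugin, once compiled in, would slot into exactly this chain as one more directive.&lt;/p&gt;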

&lt;p&gt;&lt;strong&gt;Why Use Managed Kubernetes?: It’s Dangerous to Go Alone!&lt;/strong&gt; by Seth McCombs is a candidate for my favourite session of this year’s KubeCon. Seth tried to dismantle the notion, which you frequently hear at these conferences, that there is “the right way” to do things. For most use cases, using managed Kubernetes services is well justified: with a team of experts at a cloud provider working day and night on these services, they can guarantee much more stability and reliability than we would otherwise achieve ourselves. It is much more important to focus on building cool things and features that contribute to the core business than to waste time managing Kubernetes. There are cases where running your own Kubernetes is justified, particularly if you need customizations, but managed services usually mean lower costs, more reliability and more time to build features. End users don’t necessarily care whether you are running Kubernetes or using AWS/GCP. Another thing I noticed, which I also mentioned in yesterday’s update, was Seth’s reluctance to manage your own etcd: it truly must be a pain. Seth ended the session with a great quote: “Build cool stuff. In whatever way works for you. With the tools that work for you”. Kubernetes should be just one more tool in your toolbox.&lt;/p&gt;

&lt;p&gt;I also had the chance to watch three good sessions on security topics. In &lt;strong&gt;Uncovering a Sophisticated Kubernetes Attack in Real-Time&lt;/strong&gt;, Jed Salazar and Natália Réka Ivánkó suggested that security could be more like SRE. They showcased a solution that leverages &lt;a href="https://ebpf.io/"&gt;eBPF&lt;/a&gt; and &lt;a href="https://cilium.io/"&gt;Cilium&lt;/a&gt; for security monitoring in Kubernetes (again, with sidecars in pods!). This solution allows teams to act proactively whenever specific actions are executed in Kubernetes clusters, moving security from something that is always on fire to something proactive and monitored in real time. &lt;strong&gt;Hacking into Kubernetes Security for Beginners&lt;/strong&gt; by Ellen Körbes and Tabitha Sable was an extremely creative session where I heard things such as: “We are made of stars but your RBAC shouldn’t be.” More than being simply creative, these sessions expanded my understanding of the attack surface of workloads running in Kubernetes: misconfigured RBAC, powerful tools configured in running pods, potential vulnerabilities in Kubernetes APIs, image vulnerabilities. There’s so much to take care of! A few suggestions were left: enable audit logs in the Kubernetes API, leverage admission controllers, and scan for vulnerabilities in CI. Finally, &lt;strong&gt;The Art of Hiding Yourself&lt;/strong&gt; by Lorenzo Fontana presented security issues in Kubernetes from the perspective of someone trying to hide inside a given system. It covered ways to hide process activity, network activity, storage activity and even pods! What fascinated me the most was how trivial it is to hide these things, as long as you know how to circumvent the common ways of monitoring them. Sometimes you need to look at things from a different angle to be able to see clearly. This session leveraged &lt;a href="https://falco.org/"&gt;Falco&lt;/a&gt; for real-time security monitoring in Kubernetes.&lt;/p&gt;
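
&lt;p&gt;To make the RBAC point concrete, here is a hypothetical illustration (the names and namespace are made up): a wildcard ClusterRole of the kind those sessions warn about, next to a narrowly scoped alternative:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Overly broad: grants every verb on every resource, cluster-wide.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: do-anything          # hypothetical name
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
# Narrowly scoped: read-only access to pods in a single namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a          # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;A compromised pod bound to the first role can do anything in the cluster; one bound to the second can only list and watch pods in its own namespace.&lt;/p&gt;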

&lt;p&gt;Finally, one of the sessions I was most excited about was &lt;strong&gt;Sidecars at Netflix: From Basic Injection to First Class Citizens&lt;/strong&gt; by Rodrigo Campos Catelin and Manas Alekar. I am a big fan of Netflix’s engineering culture, which prompted my enthusiasm. This is the story of how Netflix evolved from an architecture where EC2 VMs held application code along with auxiliary technology for logging, metrics and networking, through a solution with &lt;a href="https://netflix.github.io/titus/"&gt;Titus&lt;/a&gt; where each EC2 VM ran everything in containers, to Kubernetes today, where they are trying to advance the state of sidecars. They laid out some of the sidecar issues they currently face: no startup order guarantees, no shutdown order guarantees, no straightforward use with Jobs, and no way to use them with initContainers. Interestingly enough, the thing I was most fascinated by was &lt;a href="https://github.com/kubernetes/enhancements/issues/753"&gt;KEP 753&lt;/a&gt;. I had no idea, but it seems there have been proposals to promote sidecars to first-class citizens. The proposal didn’t go anywhere, but there are still “pre-proposal” efforts to start a new one and introduce sidecars into Kubernetes’ specification. I was amazed because there are so many instances of companies using sidecars in production (I’ve mentioned a few of them in the last couple of days), yet there is no real standard or specification around sidecars in Kubernetes.&lt;/p&gt;
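
&lt;p&gt;A plain pod spec makes the ordering problem concrete: Kubernetes treats the two containers below as equals, so nothing guarantees the sidecar is up before the application starts, or that it keeps running until the application finishes (the names and images here are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar     # hypothetical name
spec:
  containers:
  # the main workload
  - name: app
    image: example.com/app:1.0
  # a "sidecar" only by convention: no startup or
  # shutdown ordering relative to the app container
  - name: metrics-proxy
    image: example.com/metrics-proxy:1.0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;KEP 753 explored marking such containers explicitly as sidecars, so that the kubelet could start them before, and stop them after, the main containers.&lt;/p&gt;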

&lt;h2&gt;Cool Tech&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://crossplane.io/"&gt;Crossplane&lt;/a&gt;: “Compose cloud infrastructure and services into custom platform APIs”.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://kubevela.io/"&gt;KubeVela&lt;/a&gt;: “Make shipping applications more enjoyable”.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://openkruise.io/en-us/"&gt;OpenKruise&lt;/a&gt;: “A Kubernetes extended suite for application automations”.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dapr.io/"&gt;Dapr&lt;/a&gt;: “Simplify cloud-native application development”.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://kubespray.io/#/"&gt;kubespray&lt;/a&gt;: “Deploy a Production Ready Kubernetes Cluster”.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/containerd/nerdctl"&gt;nerdctl&lt;/a&gt;: “Docker-compatible CLI for containerd, with support for Compose”.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://ebpf.io/"&gt;eBPF&lt;/a&gt;: “eBPF is a revolutionary technology that can run sandboxed programs in the Linux kernel without changing kernel source code or loading kernel modules”.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://cilium.io/"&gt;Cilium&lt;/a&gt;: “eBPF-based Networking, Observability, and Security”.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.yugabyte.com/"&gt;yugabyteDB&lt;/a&gt;: “Open source, cloud native relational database for powering global, internet-scale apps”.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://netflix.github.io/titus/"&gt;Titus&lt;/a&gt;: “a container management platform that provides scalable and reliable container execution and cloud-native integration with Amazon AWS”.&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
  </channel>
</rss>
