
Hugo Martins

Originally published at hugomartins.io

KubeCon Europe 2021: Highlights #2


Following yesterday’s highlights, I’m writing about my experience of Day 2 of KubeCon Europe 2021. These are a series of notes written as a stream of consciousness, without much editing. This year’s edition of KubeCon Europe was once again completely virtual.

Sessions

I missed half of the Keynote Sessions because, to be frank, I found them quite bland. I didn’t feel that I was gaining much from watching them and I found myself constantly zoning out, so I simply left. I understand the concept behind them, and that they can be a great place to sponsor and showcase specific projects, but I much prefer the other sessions.

I started my day by watching “TechDocs: Unlocking the Potential of Engineers’ Collective Knowledge” by Emma Indal. I’ve been amazed at Spotify’s contributions to open source, specifically Backstage, which I have been following closely. Emma talked about TechDocs, Spotify’s way of managing technical documentation as part of the CI/CD process, improving documentation maintainability and easing the process of producing it. Spotify also won the Top End User Award, which is well deserved for all the work they have supported over the last few years. Even though TechDocs is a Backstage plugin, the actual question is one of approach: transforming documentation into something closely resembling “documentation as code”. With this approach, you write documentation at the same time as you write code and you update documentation at the same time you update code. Then an external system fetches and displays that documentation. Once that flow is properly working, and the culture is established, the sky is the limit for technical documentation.

“Building a Portable Kubernetes Deployment Pipeline with Argo Workflows and Events” by Thomas Meadows and Ollie Young showcased a platform for ephemeral and portable Kubernetes environments based on ArgoCD, Custom Resource Definitions (CRDs) and a bit of glue with some custom CLIs and APIs. This platform allows agility throughout the testing and release cycles, providing developers with the autonomy to generate their own Kubernetes environments. The thing that struck me the most here was that developers weren’t simply using Kubernetes through an abstraction provided by a Platform as a Service; they were using and provisioning, in a self-service manner, entire Kubernetes clusters.

“How DoD Uses K8s and Flux to Achieve Compliance and Deployment Consistency” by Michael Medellin and Gordon Tillman, from the Department of Defense. I was curious about this session because, ergh…the DoD is not particularly known for cloud native, agile capabilities. I was pleasantly surprised to see that they have managed to build an internal platform based on Kubernetes, flux and a form of Git-based Infrastructure as Code that can deliver workloads reliably, securely and in a compliant fashion across multi-network (including air-gapped networks), multi-region and hybrid cloud platforms. They have been able to slowly reduce their time to delivery from years to days by relying on open-source, battle-tested applications that help ensure compliance and auditing without compromising much on speed.

“CERN’s 1500 Drupal Websites on Kubernetes: Sailing With Operators” by Konstantinos Samaras-Tsakiris and Rajula Vineet Reddy from CERN. In this session, CERN engineers demoed their Platform as a Service that offers automated infrastructure and provisioning for 1500 Drupal websites, with different specifications, via CRDs (and operators) built specifically for this purpose. They have leveraged Kubernetes as a common API to control many kinds of resources, not only as a platform to deploy their workloads on. From what they showed, it was interesting to see how they could turn complex Kubernetes abstractions into something that physicists at CERN can use to build and deploy their websites.
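To make the CRD idea a bit more concrete, here is a minimal sketch of what a custom resource for a managed website could look like when expressed as Go API types. The `DrupalSite` kind and every field in it are hypothetical, purely for illustration; this is not CERN’s actual operator API.

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// DrupalSiteSpec describes the desired state of a managed website.
// All field names here are hypothetical, chosen only to illustrate the
// kind of interface a platform team might expose to its users.
type DrupalSiteSpec struct {
	// SiteURL is the public URL the website should be served on.
	SiteURL string `json:"siteURL"`
	// Version is the Drupal version to provision.
	Version string `json:"version"`
	// DiskSize is the persistent volume size requested for the site.
	DiskSize string `json:"diskSize,omitempty"`
}

// DrupalSiteStatus reports the observed state, filled in by the operator.
type DrupalSiteStatus struct {
	Ready   bool   `json:"ready"`
	Message string `json:"message,omitempty"`
}

// DrupalSite is the custom resource a user creates; an operator watches it
// and provisions the underlying deployments, databases and routes.
type DrupalSite struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   DrupalSiteSpec   `json:"spec,omitempty"`
	Status DrupalSiteStatus `json:"status,omitempty"`
}
```

The appeal of this model is that a physicist only has to apply a small manifest of this kind; the operator reconciles everything else behind the scenes.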

At this moment I realized, either through selection bias or because it is simply true, that there are two common themes at KubeCon this year: GitOps and internal platforms. Both ArgoCD and flux are mentioned frequently in sessions across multiple days at KubeCon, and you can see that I watched at least two sessions around these topics. At the same time, there seems to be a trend towards specialized teams responsible for developing platforms that extend Kubernetes for particular purposes and adapt it to particular use cases and business needs. I believe these trends will only accelerate.

In a different style of session, I watched a demo on Contour. “Contour, a High Performance Multitenant Ingress Controller for Kubernetes” essentially covered what Contour is, described its functionality and laid out its roadmap. I wasn’t aware of this project and it caught my eye. Essentially, Contour is an Ingress controller that acts as a control plane for Envoy, again leveraging CRDs. Contour monitors resources and then dynamically configures Envoy whenever a resource needs to be updated or configured in Envoy. It aims to expand the functionality provided by Ingresses in Kubernetes and provides features such as improved configuration for TLS, cross-namespace TLS credentials and routing, multiple service load balancing, improved service weighting and load balancing strategies, and rate limiting.
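The underlying pattern here, a control plane that watches Kubernetes resources and pushes configuration to a data plane, is worth sketching. The snippet below is not Contour’s code; it is a minimal, hypothetical illustration of the watch-and-reconfigure loop using client-go informers, with the “update Envoy” step stubbed out as a log line.

```go
package main

import (
	"fmt"
	"time"

	networkingv1 "k8s.io/api/networking/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

// reconfigureProxy stands in for pushing new routing config to the data
// plane (in Contour's case, Envoy). Here it only logs what would happen.
func reconfigureProxy(reason string, ing *networkingv1.Ingress) {
	fmt.Printf("would reconfigure proxy (%s): %s/%s\n", reason, ing.Namespace, ing.Name)
}

func main() {
	// Assumes a kubeconfig at the default path; a real controller would
	// typically use in-cluster configuration instead.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The shared informer factory caches resources and notifies us on changes.
	factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
	ingressInformer := factory.Networking().V1().Ingresses().Informer()

	ingressInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			reconfigureProxy("added", obj.(*networkingv1.Ingress))
		},
		UpdateFunc: func(_, newObj interface{}) {
			reconfigureProxy("updated", newObj.(*networkingv1.Ingress))
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	<-stop // block forever; a real controller would handle shutdown signals
}
```

Contour’s real implementation watches its own CRDs (such as HTTPProxy) as well as Ingresses and translates them into Envoy configuration, but the event-driven shape of the loop is the same.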

I’ve heard before that etcd is complex; “Lessons Learned from Operating ETCD” by Pierre Zemb showed me, in a bit more depth, just how complex it is. This session explained etcd and some metrics that should be monitored across four different layers (gRPC, Raft, the write-ahead log and bbolt). While this won’t make anyone an expert, it was interesting to go through the “tips and tricks” and understand how complex this critical piece of Kubernetes is to operate. I don’t want to be near an etcd when it breaks.
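As a small, hedged example of what watching those four layers can look like in practice, the sketch below scrapes an etcd `/metrics` endpoint and prints a few metric families that are commonly associated with each layer. The endpoint address and the exact metric names are assumptions on my part and can vary between etcd versions, so check your own `/metrics` output.

```go
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

// A rough mapping from the four layers mentioned in the talk to metric
// prefixes that etcd commonly exposes. Treat these names as assumptions.
var layerPrefixes = map[string][]string{
	"gRPC":  {"grpc_server_handled_total"},
	"raft":  {"etcd_server_proposals_committed_total", "etcd_server_proposals_pending"},
	"wal":   {"etcd_disk_wal_fsync_duration_seconds"},
	"bbolt": {"etcd_disk_backend_commit_duration_seconds"},
}

func main() {
	// Assumes an etcd member serving metrics on localhost:2379.
	resp, err := http.Get("http://127.0.0.1:2379/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	scanner := bufio.NewScanner(resp.Body)
	scanner.Buffer(make([]byte, 1024*1024), 1024*1024) // metric lines can be long
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "#") {
			continue // skip HELP/TYPE comment lines
		}
		for layer, prefixes := range layerPrefixes {
			for _, p := range prefixes {
				if strings.HasPrefix(line, p) {
					fmt.Printf("[%s] %s\n", layer, line)
				}
			}
		}
	}
	if err := scanner.Err(); err != nil {
		panic(err)
	}
}
```

In a real setup these metrics would be scraped by Prometheus and alerted on, rather than printed, but grouping them by layer is a handy way to reason about where a problem lives.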

Again on the topic of extending Kubernetes, I watched “Operationalizing Kubernetes Sidecars in Production at Salesforce” by Mayank Kumar. It covered mutating admission webhooks used to inject sidecars into Kubernetes in something called Salesforce Hyperforce. It showcased 10 sidecar use cases currently deployed within Salesforce Hyperforce and described how multiple independent teams duplicating effort, writing the same kind of code and making the same mistakes created an opportunity to build a generic sidecar injector that could be reused across teams. They’ve been able to improve monitoring and alerting for mutating webhooks by using this framework. Nonetheless, they have run into some issues around ownership, change management and visibility into what was injected. Be sure to avoid dependencies on a pod’s critical path if you want higher availability; they can be tricky to troubleshoot. This session also made me rethink how crucial a platform can be to establish a golden path for developers to follow while, at the same time, granting developers the freedom to run their own experiments and then assessing whether those experiments need to be standardized into a golden path.
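Since the talk centred on sidecar injection via mutating admission webhooks, here is a minimal sketch of what the core of such a webhook can look like. This is not Salesforce’s injector: the sidecar name, image, path and server setup are all assumptions, and a production injector would add TLS, failure policies, selectors and much more validation.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
	corev1 "k8s.io/api/core/v1"
)

// sidecarPatch builds a JSON Patch that appends a (hypothetical) logging
// sidecar to the pod's container list.
func sidecarPatch() ([]byte, error) {
	sidecar := corev1.Container{
		Name:  "log-forwarder",                 // hypothetical sidecar name
		Image: "example.com/log-forwarder:1.0", // placeholder image
	}
	patch := []map[string]interface{}{
		{"op": "add", "path": "/spec/containers/-", "value": sidecar},
	}
	return json.Marshal(patch)
}

// mutate handles AdmissionReview requests from the API server and responds
// with a patch that injects the sidecar.
func mutate(w http.ResponseWriter, r *http.Request) {
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	var review admissionv1.AdmissionReview
	if err := json.Unmarshal(body, &review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	patch, err := sidecarPatch()
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	patchType := admissionv1.PatchTypeJSONPatch
	review.Response = &admissionv1.AdmissionResponse{
		UID:       review.Request.UID, // the response must echo the request UID
		Allowed:   true,
		Patch:     patch,
		PatchType: &patchType,
	}

	w.Header().Set("Content-Type", "application/json")
	if err := json.NewEncoder(w).Encode(review); err != nil {
		fmt.Println("failed to write response:", err)
	}
}

func main() {
	http.HandleFunc("/mutate", mutate)
	// A real webhook must serve HTTPS with a certificate trusted by the API
	// server; plain HTTP here only keeps the sketch short.
	panic(http.ListenAndServe(":8443", nil))
}
```

The value of a shared injector like the one described in the talk is that this kind of code, and all its operational sharp edges, lives in one place instead of being re-implemented by every team.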

“Multi-Cluster Service Deployments with Operators and KubeCarrier” by Rastislav Szabó showcased an example of using Kubermatic, KubeCarrier and Submariner to manage multiple clusters and deploy workloads in a multi-cluster environment. It again relied on Kubernetes Operators and CRDs, which help automate the application lifecycle within a given cluster but also scale well when managing resources at a multi-cluster level.

This is another topic I’ve been seeing a lot in KubeCon sessions: multi-cluster, multi-region and multi-provider. It seems that Kubernetes has slowly started to evolve into the baseline on which people build their extensions and then advance its use cases into ever more complex scenarios. A lot of tooling has been created recently to facilitate this tendency towards complexity in “multi”.

There were a lot of talks today. I wasn’t able to watch all of the sessions I wanted, but these were today’s highlights. I’ll try to finish off the remaining sessions tomorrow and write some more highlights!

Cool Tech

  • pixie: “Open source Kubernetes observability for developers”.
  • kcp: “a prototype of a Kubernetes API server that is not a Kubernetes cluster”.
  • SchemaHero: “Modernized Database Schema Migrations”.
  • envoy: “an open source edge and service proxy, designed for cloud-native applications”.
  • flux: “a set of continuous and progressive delivery solutions for Kubernetes”.
  • Cluster API: “declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters”.
  • Contour: “High performance ingress controller for Kubernetes”.
  • Kubermatic: “Automate operations of thousands of Kubernetes clusters across multi-cloud, on-prem, and edge environments with unparalleled density and resilience”.
  • KubeFed: “allows you to coordinate the configuration of multiple Kubernetes clusters from a single set of APIs in a hosting cluster”.
  • KubeCarrier: “system for managing applications and services across multiple Kubernetes Clusters”.
  • Submariner: “enables direct networking between Pods and Services in different Kubernetes clusters”.
